WorldWideScience

Sample records for confidence interval determination

  1. Confidence intervals for experiments with background and small numbers of events

    International Nuclear Information System (INIS)

    Bruechle, W.

    2003-01-01

    Methods to find a confidence interval for Poisson distributed variables are illuminated, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count-rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected with background, and their ratios, is given. (orig.)
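
The "central" interval compared in this record can be sketched concretely. Below is an illustrative, stdlib-only Python sketch of the classical equal-tailed construction for a background-free Poisson count, found by bisection on the Poisson CDF; the function names are ours, and the paper's background-corrected method is more involved than this.

```python
import math

def pois_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

def poisson_central_ci(n, conf=0.95):
    """Equal-tailed ('central') CI for a Poisson mean after observing
    n counts, found by bisection on the Poisson CDF."""
    alpha = (1 - conf) / 2

    def bisect(pred, lo, hi):
        # pred(mu) is True for small mu and flips to False as mu grows
        for _ in range(200):
            mid = (lo + hi) / 2
            if pred(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # lower limit: mu at which P(X >= n | mu) rises to alpha
    lower = 0.0 if n == 0 else bisect(
        lambda mu: 1 - pois_cdf(n - 1, mu) < alpha, 0.0, 3 * n + 20)
    # upper limit: mu at which P(X <= n | mu) falls to alpha
    upper = bisect(lambda mu: pois_cdf(n, mu) > alpha, 0.0, 3 * n + 20)
    return lower, upper
```

For n = 3 observed events this yields roughly (0.62, 8.77), illustrating how wide the interval is in the poor-statistics regime the paper addresses.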

  2. Confidence intervals for experiments with background and small numbers of events

    International Nuclear Information System (INIS)

    Bruechle, W.

    2002-07-01

    Methods to find a confidence interval for Poisson distributed variables are illuminated, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count-rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected with background, and their ratios, is given. (orig.)

  3. Interpretation of Confidence Interval Facing the Conflict

    Science.gov (United States)

    Andrade, Luisa; Fernández, Felipe

    2016-01-01

    As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…

  4. Confidence Intervals from Normalized Data: A correction to Cousineau (2005)

    Directory of Open Access Journals (Sweden)

    Richard D. Morey

    2008-09-01

    Presenting confidence intervals around means is a common method of expressing uncertainty in data. Loftus and Masson (1994) describe confidence intervals for means in within-subjects designs. These confidence intervals are based on the ANOVA mean squared error. Cousineau (2005) presents an alternative to the Loftus and Masson method, but his method produces confidence intervals that are smaller than those of Loftus and Masson. I show why this is the case and offer a simple correction that makes the expected size of Cousineau confidence intervals the same as that of Loftus and Masson confidence intervals.
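
As a hedged sketch of the idea (not Morey's published code), the Cousineau normalization with the corrective factor sqrt(J/(J-1)) for J conditions can be written as follows; the function name and data layout are our own illustration.

```python
import math
import statistics

def morey_within_ci(data, z=1.96):
    """Within-subjects CIs via Cousineau (2005) normalization with the
    Morey (2008) correction factor sqrt(J/(J-1)).

    `data` is a list of per-subject lists, one value per condition.
    Normalization removes subject means; the correction undoes the bias
    that makes uncorrected Cousineau intervals too narrow.
    """
    n, j = len(data), len(data[0])
    grand = statistics.mean(v for row in data for v in row)
    # Cousineau normalization: subtract subject mean, add grand mean
    norm = [[v - statistics.mean(row) + grand for v in row] for row in data]
    correction = math.sqrt(j / (j - 1))
    cis = []
    for c in range(j):
        col = [row[c] for row in norm]
        se = statistics.stdev(col) / math.sqrt(n) * correction
        m = statistics.mean(col)
        cis.append((m - z * se, m + z * se))
    return cis
```

When every subject shows exactly the same condition effect, the normalized columns have zero variance and the intervals collapse to zero width, which is the point of removing between-subject variability.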

  5. Confidence Interval Approximation For Treatment Variance In ...

    African Journals Online (AJOL)

    In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

  6. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    Science.gov (United States)

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  7. Using the confidence interval confidently.

    Science.gov (United States)

    Hazra, Avijit

    2017-10-01

    Biomedical research is seldom done with entire populations but rather with samples drawn from a population. Although we work with samples, our goal is to describe and draw inferences regarding the underlying population. It is possible to use a sample statistic and estimates of error in the sample to get a fair idea of the population parameter, not as a single value, but as a range of values. This range is the confidence interval (CI), which is estimated on the basis of a desired confidence level. Calculation of the CI of a sample statistic takes the general form: CI = Point estimate ± Margin of error, where the margin of error is given by the product of a critical value (z) derived from the standard normal curve and the standard error of the point estimate. Calculation of the standard error varies depending on whether the sample statistic of interest is a mean, proportion, odds ratio (OR), and so on. The factors affecting the width of the CI include the desired confidence level, the sample size and the variability in the sample. Although the 95% CI is most often used in biomedical research, a CI can be calculated for any level of confidence. A 99% CI will be wider than a 95% CI for the same sample. Conflict between clinical importance and statistical significance is an important issue in biomedical research. Clinical importance is best inferred by looking at the effect size, that is, how large the actual change or difference is. However, statistical significance in terms of P only suggests whether there is any difference in probability terms. Use of the CI supplements the P value by providing an estimate of actual clinical effect. Of late, clinical trials are being designed specifically as superiority, non-inferiority or equivalence studies. The conclusions from these alternative trial designs are based on CI values rather than the P value from intergroup comparison.
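
The general form CI = point estimate ± margin of error described above, applied to a sample mean with normal-approximation critical values, might look like this (illustrative Python; 1.96 and 2.576 are the standard 95% and 99% critical values):

```python
import math
import statistics

def mean_ci(sample, z=1.96):
    """CI for a population mean: point estimate +/- z * SE.

    z = 1.96 gives a 95% interval; z = 2.576 gives a 99% interval,
    which is wider for the same sample, as the abstract notes.
    """
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - z * se, m + z * se

data = [4.1, 5.2, 6.3, 5.0, 4.8, 5.5, 6.0, 4.9]
lo95, hi95 = mean_ci(data)
lo99, hi99 = mean_ci(data, z=2.576)
```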

  8. Robust misinterpretation of confidence intervals

    NARCIS (Netherlands)

    Hoekstra, Rink; Morey, Richard; Rouder, Jeffrey N.; Wagenmakers, Eric-Jan

    2014-01-01

    Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more

  9. Graphing within-subjects confidence intervals using SPSS and S-Plus.

    Science.gov (United States)

    Wright, Daniel B

    2007-02-01

    Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.

  10. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    Science.gov (United States)

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.

  11. Confidence intervals for correlations when data are not normal.

    Science.gov (United States)

    Bishara, Anthony J; Hittner, James B

    2017-02-01

    With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval-for example, leading to a 95 % confidence interval that had actual coverage as low as 68 %. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
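
The Fisher z' interval that this study uses as its baseline is easy to state concretely. A minimal stdlib-Python sketch (not the authors' supplementary R code):

```python
import math

def fisher_z_ci(r, n, z_crit=1.96):
    """Standard Fisher z' CI for a correlation coefficient.

    The transform z' = atanh(r) is approximately normal with
    SE = 1/sqrt(n - 3); the interval is back-transformed with tanh.
    As the abstract warns, this interval can undercover badly when
    the data are markedly nonnormal.
    """
    zp = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(zp - z_crit * se), math.tanh(zp + z_crit * se)

lo, hi = fisher_z_ci(r=0.5, n=50)  # asymmetric around r = 0.5
```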

  12. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

    Directory of Open Access Journals (Sweden)

    Ionut Bebu

    2016-06-01

    For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates.

  13. Differentially Private Confidence Intervals for Empirical Risk Minimization

    OpenAIRE

    Wang, Yue; Kifer, Daniel; Lee, Jaewoo

    2018-01-01

    The process of data mining with differential privacy produces results that are affected by two types of noise: sampling noise due to data collection and privacy noise that is designed to prevent the reconstruction of sensitive information. In this paper, we consider the problem of designing confidence intervals for the parameters of a variety of differentially private machine learning models. The algorithms can provide confidence intervals that satisfy differential privacy (as well as the mor...

  14. Confidence interval procedures for Monte Carlo transport simulations

    International Nuclear Information System (INIS)

    Pederson, S.P.

    1997-01-01

    The problem of obtaining valid confidence intervals based on estimates from sampled distributions using Monte Carlo particle transport simulation codes such as MCNP is examined. Such intervals can cover the true parameter of interest at a lower than nominal rate if the sampled distribution is extremely right-skewed by large tallies. Modifications to the standard theory of confidence intervals are discussed and compared with some existing heuristics, including batched means normality tests. Two new types of diagnostics are introduced to assess whether the conditions of central limit theorem-type results are satisfied: the relative variance of the variance determines whether the sample size is sufficiently large, and estimators of the slope of the right tail of the distribution are used to indicate the number of moments that exist. A simulation study is conducted to quantify the relationship between various diagnostics and coverage rates and to find sample-based quantities useful in indicating when intervals are expected to be valid. Simulated tally distributions are chosen to emulate behavior seen in difficult particle transport problems. Measures of variation in the sample variance s^2 are found to be much more effective than existing methods in predicting when coverage will be near nominal rates. Batched means tests are found to be overly conservative in this regard. A simple but pathological MCNP problem is presented as an example of false convergence using existing heuristics. The new methods readily detect the false convergence and show that the results of the problem, which are a factor of 4 too small, should not be used. Recommendations are made for applying these techniques in practice, using the statistical output currently produced by MCNP.

  15. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    Science.gov (United States)

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

  16. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles demonstrated that this method was problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect and probabilistic undetermination). The demonstration includes a complete example.

  17. A nonparametric statistical method for determination of a confidence interval for the mean of a set of results obtained in a laboratory intercomparison

    International Nuclear Information System (INIS)

    Veglia, A.

    1981-08-01

    In cases where sets of data are obviously not normally distributed, the application of a nonparametric method for the estimation of a confidence interval for the mean seems to be more suitable than some other methods because such a method requires few assumptions about the population of data. A two-step statistical method is proposed which can be applied to any set of analytical results: elimination of outliers by a nonparametric method based on Tchebycheff's inequality, and determination of a confidence interval for the mean by a nonparametric method based on the binomial distribution. The method is appropriate only for samples of size n>=10.
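
The binomial-based construction mentioned in this record is usually stated in terms of order statistics, most familiarly for the median: choose the narrowest symmetric pair of order statistics whose coverage under Binomial(n, 0.5) still meets the confidence level. An illustrative sketch, assuming the median as the target parameter (the report's mean-oriented variant differs in detail):

```python
import math

def binom_cdf(k, n, p=0.5):
    """Cumulative binomial probability P(X <= k)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def median_ci(sample, conf=0.95):
    """Distribution-free CI for the median from order statistics.

    The interval [x_(j), x_(n-j+1)] covers the median with probability
    P(j <= X <= n-j) for X ~ Binomial(n, 0.5); we take the largest j
    (narrowest interval) whose coverage still meets the target.
    """
    xs = sorted(sample)
    n = len(xs)
    best = None
    for j in range(1, n // 2 + 1):
        cover = binom_cdf(n - j, n) - binom_cdf(j - 1, n)
        if cover >= conf:
            best = (xs[j - 1], xs[n - j], cover)
    return best  # (lower, upper, actual coverage)

lo, hi, cover = median_ci(list(range(1, 11)))  # n = 10, the report's minimum
```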

  18. Using an R Shiny to Enhance the Learning Experience of Confidence Intervals

    Science.gov (United States)

    Williams, Immanuel James; Williams, Kelley Kim

    2018-01-01

    Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…

  19. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

    Science.gov (United States)

    Padilla, Miguel A.; Divers, Jasmin

    2013-01-01

    The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interests were nonnormal Likert-type and binary items.…

  20. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-11-01

    MCNP's criticality methodology and some basic statistics are reviewed. Confidence intervals are discussed, as well as how to build them and their importance in the presentation of a Monte Carlo result. The combination of MCNP's three keff estimators is shown, theoretically and empirically, by statistical studies and examples, to be the best keff estimator. The method of combining estimators is based on a solid theoretical foundation, namely, the Gauss-Markov Theorem in regard to the least squares method. The confidence intervals of the combined estimator are also shown to have correct coverage rates for the examples considered.
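
The Gauss-Markov idea behind combining estimators can be illustrated with simple inverse-variance weighting. This is a deliberate simplification: MCNP's actual combined keff estimator also accounts for correlations among its three estimators, which this sketch ignores.

```python
def combine_estimators(estimates, variances):
    """Minimum-variance linear combination of independent, unbiased
    estimators (Gauss-Markov / inverse-variance weighting).

    Each estimate is weighted by 1/variance; the combined variance is
    the reciprocal of the summed weights, so it is never larger than
    the smallest individual variance.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    return combined, 1.0 / total

# three hypothetical keff estimates with equal variances
k, var = combine_estimators([1.00, 1.02, 0.98], [1e-4, 1e-4, 1e-4])
```

With equal variances the combination reduces to the plain average and the variance shrinks by the number of estimators.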

  1. Understanding Confidence Intervals With Visual Representations

    OpenAIRE

    Navruz, Bilgin; Delen, Erhan

    2014-01-01

    In the present paper, we showed how confidence intervals (CIs) are valuable and useful in research studies when they are used in the correct form with correct interpretations. The sixth edition of the APA (2010) Publication Manual strongly recommended reporting CIs in research studies, and it was described as “the best reporting strategy” (p. 34). Misconceptions and correct interpretations of CIs were presented from several textbooks. In addition, limitations of the null hypothesis statistica...

  2. The P Value Problem in Otolaryngology: Shifting to Effect Sizes and Confidence Intervals.

    Science.gov (United States)

    Vila, Peter M; Townsend, Melanie Elizabeth; Bhatt, Neel K; Kao, W Katherine; Sinha, Parul; Neely, J Gail

    2017-06-01

    There is a lack of reporting of effect sizes and confidence intervals in the current biomedical literature. The objective of this article is to present a discussion of the recent paradigm shift encouraging the reporting of effect sizes and confidence intervals. Although P values help to inform us about whether an effect could have arisen by chance, effect sizes inform us about the magnitude of the effect (clinical significance), and confidence intervals inform us about the range of plausible estimates for the general population mean (precision). Reporting effect sizes and confidence intervals is a necessary addition to the biomedical literature, and these concepts are reviewed in this article.

  3. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    Science.gov (United States)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show that there are little differences in the estimated quantiles between ML and PWM, while MOM shows distinct differences.

  4. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

    Science.gov (United States)

    Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

    2017-01-01

    Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allowing temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95 % confidence interval of (4331, 12 267) in 2010 and a load of 5543 Mg year-1 had an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
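
A generic percentile bootstrap is the simplest member of the family of resampling constructions this paper builds on. The sketch below is illustrative only; the paper's method additionally propagates model uncertainty from the mixed model and handles temporal correlation, which plain resampling of independent values does not.

```python
import random
import statistics

def percentile_bootstrap_ci(sample, stat=statistics.mean, reps=2000,
                            conf=0.95, seed=42):
    """Percentile bootstrap CI for an arbitrary statistic.

    Resamples with replacement, computes the statistic each time, and
    reads the interval off the empirical quantiles. Skewed data yield
    asymmetric intervals, as observed for the sediment loads.
    """
    rng = random.Random(seed)
    n = len(sample)
    boots = sorted(stat([rng.choice(sample) for _ in range(n)])
                   for _ in range(reps))
    alpha = (1 - conf) / 2
    return boots[int(alpha * reps)], boots[int((1 - alpha) * reps) - 1]

skewed = [1, 1, 2, 2, 3, 3, 4, 10, 15, 40]  # right-skewed toy data
lo, hi = percentile_bootstrap_ci(skewed)
```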

  5. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.

  6. Parametric change point estimation, testing and confidence interval ...

    African Journals Online (AJOL)

    In many applications like finance, industry and medicine, it is important to consider that the model parameters may undergo changes at unknown moment in time. This paper deals with estimation, testing and confidence interval of a change point for a univariate variable which is assumed to be normally distributed. To detect ...

  7. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions

    Directory of Open Access Journals (Sweden)

    Rik Crutzen

    2017-07-01

    When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.

  8. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions.

    Science.gov (United States)

    Crutzen, Rik; Peters, Gjalt-Jorn Ygram; Noijen, Judith

    2017-01-01

    When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.

  9. Confidence Intervals from Realizations of Simulated Nuclear Data

    Energy Technology Data Exchange (ETDEWEB)

    Younes, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ratkiewicz, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ressler, J. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-28

    Various statistical techniques are discussed that can be used to assign a level of confidence in the prediction of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the keff value, based on the ²³⁵U(n,f) and ²³⁹Pu(n,f) cross sections.

  10. On Bayesian treatment of systematic uncertainties in confidence interval calculation

    CERN Document Server

    Tegenfeldt, Fredrik

    2005-01-01

    In high energy physics, a widely used method to treat systematic uncertainties in confidence interval calculations is based on combining a frequentist construction of confidence belts with a Bayesian treatment of systematic uncertainties. In this note we present a study of the coverage of this method for the standard Likelihood Ratio (aka Feldman & Cousins) construction for a Poisson process with known background and Gaussian or log-Normal distributed uncertainties in the background or signal efficiency. For uncertainties in the signal efficiency of up to 40% we find over-coverage on the level of 2 to 4%, depending on the size of uncertainties and the region in signal space. Uncertainties in the background generally have a smaller effect on the coverage. A considerable smoothing of the coverage curves is observed. A software package is presented which allows fast calculation of the confidence intervals for a variety of assumptions on shape and size of systematic uncertainties for different nuisance paramete...

  11. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    Science.gov (United States)

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and that as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
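The asymmetric interval from the distribution of the product is, in practice, often approximated by Monte Carlo resampling of the two coefficients. A minimal sketch of that idea (the coefficient estimates and standard errors below are hypothetical, not taken from the article):

```python
import random

def monte_carlo_indirect_ci(a, se_a, b, se_b, alpha=0.05, draws=100_000, seed=1):
    """Monte Carlo confidence interval for the indirect effect a*b.

    Draws a and b from independent normal sampling distributions and
    takes percentiles of the products, which reproduces the asymmetry
    of the distribution of the product.
    """
    rng = random.Random(seed)
    products = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b)
                      for _ in range(draws))
    lo = products[int(alpha / 2 * draws)]
    hi = products[int((1 - alpha / 2) * draws)]
    return lo, hi

# Illustrative (hypothetical) coefficient estimates and standard errors:
lo, hi = monte_carlo_indirect_ci(a=0.4, se_a=0.1, b=0.5, se_b=0.1)
```

Because the product of two normals is skewed, the resulting interval is not symmetric about a*b = 0.2, unlike a normal-theory Wald interval.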

  12. Confidence intervals for the lognormal probability distribution

    International Nuclear Information System (INIS)

    Smith, D.L.; Naberejnev, D.G.

    2004-01-01

    The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a conjured numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication
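The mode/median/mean relationships discussed above, and the anomaly that arises for large uncertainties, are easy to verify numerically. A small sketch for a lognormal variable (parameter values chosen for illustration only):

```python
import math

def lognormal_summary(mu, sigma, z=1.0):
    """Summary values for a lognormal variable X = exp(N(mu, sigma^2)).

    Returns the mode, median, mean, and a symmetric-in-probability
    central interval exp(mu - z*sigma) .. exp(mu + z*sigma).
    """
    mode = math.exp(mu - sigma ** 2)
    median = math.exp(mu)
    mean = math.exp(mu + sigma ** 2 / 2)
    interval = (math.exp(mu - z * sigma), math.exp(mu + z * sigma))
    return mode, median, mean, interval

# With a large uncertainty (sigma = 3), the mean lies above the upper
# limit of the ~68% central interval -- quoting mean +/- standard
# deviation as "best value and error" then becomes misleading.
mode, median, mean, (lo, hi) = lognormal_summary(mu=0.0, sigma=3.0)
```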

  13. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    Science.gov (United States)

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.

  14. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    Science.gov (United States)

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith

  15. Confidence interval of intrinsic optimum temperature estimated using thermodynamic SSI model

    Institute of Scientific and Technical Information of China (English)

    Takaya Ikemoto; Issei Kurahashi; Pei-Jian Shi

    2013-01-01

    The intrinsic optimum temperature for the development of ectotherms is one of the most important factors not only for their physiological processes but also for ecological and evolutionary processes. The Sharpe-Schoolfield-Ikemoto (SSI) model succeeded in defining the temperature that can thermodynamically meet the condition that at a particular temperature the probability of an active enzyme reaching its maximum activity is realized. Previously, an algorithm was developed by Ikemoto (Tropical malaria does not mean hot environments. Journal of Medical Entomology, 45, 963-969) to estimate the model parameters, but that program was computationally very time consuming. Now, investigators can use the SSI model more easily because a fully automatic computer program was designed by Shi et al. (A modified program for estimating the parameters of the SSI model. Environmental Entomology, 40, 462-469). However, the statistical significance of the point estimate of the intrinsic optimum temperature for each ectotherm has not yet been determined. Here, we provide a new method for calculating the confidence interval of the estimated intrinsic optimum temperature by modifying the approximate bootstrap confidence intervals method. For this purpose, it was necessary to develop a new program for faster estimation of the parameters in the SSI model, which we have also done.

  16. Robust Confidence Interval for a Ratio of Standard Deviations

    Science.gov (United States)

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  17. Comparing confidence intervals for Goodman and Kruskal's gamma coefficient

    NARCIS (Netherlands)

    van der Ark, L.A.; van Aert, R.C.M.

    2015-01-01

    This study was motivated by the question of which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman-Kruskal CI, the Cliff-consistent CI, the

  18. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    Full Text Available The aim of the paper was to present the usefulness of the binomial distribution in the study of contingency tables, along with the problems of approximating the binomial distribution by the normal (its limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in terms of contingency table units, based on their mathematical expressions, reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of deriving different information from the confidence interval computed by a specified method, such as the confidence interval boundaries, the percentage of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level, was solved by implementing original algorithms in the PHP programming language. Expressions that contain two binomial variables were treated separately, and an original method of computing the confidence interval for a two-variable expression was proposed and implemented. The graphical representation of an expression of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem because most software uses interpolation in graphical representation, producing quadratic rather than triangular surface maps. Based on an original algorithm, a PHP module was implemented to represent the triangular surface plots graphically. All of the implementations described above were used in computing the confidence intervals and estimating their performance across binomial sample sizes and variables.

  19. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    Science.gov (United States)

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
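The Fisher r → Z transform mentioned above yields a closed-form confidence interval for a correlation coefficient. A minimal sketch (the sample values are illustrative, not taken from the article's dataset):

```python
import math
from statistics import NormalDist

def fisher_ci(r, n, conf=0.95):
    """Confidence interval for a correlation coefficient r from a
    sample of size n, via the Fisher r -> Z transform.

    Z = atanh(r) is approximately normal with standard error
    1/sqrt(n - 3); the limits are back-transformed with tanh.
    """
    z = math.atanh(r)                       # Fisher transform
    se = 1.0 / math.sqrt(n - 3)             # approximate SE of Z
    zcrit = NormalDist().inv_cdf(0.5 + conf / 2)
    return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

lo, hi = fisher_ci(r=0.5, n=100)   # -> roughly (0.34, 0.63)
```

Note the interval is asymmetric about r on the original scale, which is exactly why the transform is used near the boundaries ±1.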

  20. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    Science.gov (United States)

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.

  1. Effect size, confidence intervals and statistical power in psychological research.

    Directory of Open Access Journals (Sweden)

    Téllez A.

    2015-07-01

    Full Text Available Quantitative psychological research is focused on detecting the occurrence of certain population phenomena by analyzing data from a sample, and statistics is a particularly helpful mathematical tool that is used by researchers to evaluate hypotheses and make decisions to accept or reject such hypotheses. In this paper, the various statistical tools in psychological research are reviewed. The limitations of null hypothesis significance testing (NHST and the advantages of using effect size and its respective confidence intervals are explained, as the latter two measurements can provide important information about the results of a study. These measurements also can facilitate data interpretation and easily detect trivial effects, enabling researchers to make decisions in a more clinically relevant fashion. Moreover, it is recommended to establish an appropriate sample size by calculating the optimum statistical power at the moment that the research is designed. Psychological journal editors are encouraged to follow APA recommendations strictly and ask authors of original research studies to report the effect size, its confidence intervals, statistical power and, when required, any measure of clinical significance. Additionally, we must account for the teaching of statistics at the graduate level. At that level, students do not receive sufficient information concerning the importance of using different types of effect sizes and their confidence intervals according to the different types of research designs; instead, most of the information is focused on the various tools of NHST.

  2. Determination of confidence limits for experiments with low numbers of counts

    International Nuclear Information System (INIS)

    Kraft, R.P.; Burrows, D.N.; Nousek, J.A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference. 12 refs
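A minimal numerical sketch of this Bayesian construction (flat prior on the non-negative signal mean, known mean background, posterior integrated on a grid) might look like the following; the grid size and example counts are choices made here for illustration, not values from the paper:

```python
import math

def bayes_upper_limit(n_obs, b, cl=0.90, s_max=50.0, steps=100_000):
    """Bayesian upper limit on a Poisson signal mean s, given n_obs
    observed counts and a known mean background b, with a flat prior
    on s >= 0.  The posterior p(s | n) is proportional to
    exp(-(s + b)) * (s + b)**n, integrated numerically on a grid.
    """
    ds = s_max / steps
    post = []
    for i in range(steps):
        s = (i + 0.5) * ds
        post.append(math.exp(-(s + b)) * (s + b) ** n_obs)
    target = cl * sum(post)
    cum = 0.0
    for i, p in enumerate(post):
        cum += p
        if cum >= target:
            return (i + 0.5) * ds
    return s_max

ul90 = bayes_upper_limit(n_obs=0, b=0.0)   # -> ~2.30 counts
```

One attraction of this construction for low counts is that the limit can never become unphysical (negative), whatever the background.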

  3. The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-12-01

    Full Text Available With the levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers’ demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained by price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (Radial basis function neural network methods, forecast the intervals of the soybean meal and non-GMO (Genetically Modified Organism soybean continuous futures closing prices and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared from various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical estimation error method can obtain higher accuracy and better interval estimation than the non-hierarchical method in a stable system.

  4. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

    Science.gov (United States)

    Kim, Kyung Yong; Lee, Won-Chan

    2018-01-01

    Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…

  6. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Science.gov (United States)

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  7. Confidence intervals for modeling anthocyanin retention in grape pomace during nonisothermal heating.

    Science.gov (United States)

    Mishra, D K; Dolan, K D; Yang, L

    2008-01-01

    Degradation of nutraceuticals in low- and intermediate-moisture foods heated at high temperature (>100 degrees C) is difficult to model because of the nonisothermal condition. Isothermal experiments above 100 degrees C are difficult to design because they require high pressure and small sample size in sealed containers. Therefore, a nonisothermal method was developed to estimate the thermal degradation kinetic parameters of nutraceuticals and determine the confidence intervals for the parameters and the predicted Y (concentration). Grape pomace at 42% moisture content (wb) was heated in sealed 202 x 214 steel cans in a steam retort at 126.7 degrees C for > 30 min. Can center temperature was measured by thermocouple and predicted using Comsol software. Thermal conductivity (k) and specific heat (Cp) were estimated as quadratic functions of temperature using Comsol and nonlinear regression. The k and Cp functions were then used to predict temperature inside the grape pomace during retorting. Similar heating experiments were run at different time-temperature treatments from 8 to 25 min for kinetic parameter estimation. Anthocyanin concentration in the grape pomace was measured using HPLC. The degradation rate constant (k(110 degrees C)) and activation energy (Ea) were estimated using nonlinear regression. The thermophysical property estimates at 100 degrees C were k = 0.501 W/(m degrees C) and Cp = 3600 J/(kg degrees C), and the kinetic parameters were k(110 degrees C) = 0.0607/min and Ea = 65.32 kJ/mol. The 95% confidence intervals for the parameters and the confidence bands and prediction bands for anthocyanin retention were plotted. These methods are useful for thermal processing design for nutraceutical products.

  8. On a linear method in bootstrap confidence intervals

    Directory of Open Access Journals (Sweden)

    Andrea Pallini

    2007-10-01

    Full Text Available A linear method for the construction of asymptotic bootstrap confidence intervals is proposed. We approximate asymptotically pivotal and non-pivotal quantities, which are smooth functions of means of n independent and identically distributed random variables, by using a sum of n independent smooth functions of the same analytical form. Errors are of order Op(n-3/2) and Op(n-2), respectively. The linear method allows a straightforward approximation of bootstrap cumulants, by considering the set of n independent smooth functions as an original random sample to be resampled with replacement.

  9. Binomial confidence intervals for testing non-inferiority or superiority: a practitioner's dilemma.

    Science.gov (United States)

    Pradhan, Vivek; Evans, John C; Banerjee, Tathagata

    2016-08-01

    In testing for non-inferiority or superiority in a single arm study, the confidence interval of a single binomial proportion is frequently used. A number of such intervals are proposed in the literature and implemented in standard software packages. Unfortunately, use of different intervals leads to conflicting conclusions. Practitioners thus face a serious dilemma in deciding which one to depend on. Is there a way to resolve this dilemma? We address this question by investigating the performances of ten commonly used intervals of a single binomial proportion, in the light of two criteria, viz., coverage and expected length of the interval. © The Author(s) 2013.
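The practitioner's dilemma is easy to reproduce: two standard intervals for the same data can straddle a test margin differently. A sketch comparing the Wald and Wilson score intervals, two of the commonly used choices (the data and the margin are invented for illustration):

```python
import math
from statistics import NormalDist

def wald_ci(x, n, conf=0.95):
    """Simple (Wald) interval: p_hat +/- z * sqrt(p_hat(1-p_hat)/n)."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(x, n, conf=0.95):
    """Wilson score interval; always stays inside [0, 1]."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# With 8 successes out of 10 and a superiority margin of p0 = 0.5,
# the Wald lower limit exceeds 0.5 while the Wilson limit does not,
# so the two intervals lead to conflicting conclusions.
wl, wh = wald_ci(8, 10)      # ~ (0.552, 1.048) -- overshoots 1
sl, sh = wilson_ci(8, 10)    # ~ (0.490, 0.943)
```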

  10. Robust misinterpretation of confidence intervals.

    Science.gov (United States)

    Hoekstra, Rink; Morey, Richard D; Rouder, Jeffrey N; Wagenmakers, Eric-Jan

    2014-10-01

    Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students-all in the field of psychology-were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.

  11. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Full Text Available Assessment of a controlled clinical trial requires the interpretation of key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the effect of the treatment is a dichotomous variable. Defined as the difference in event rate between the treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Methods are compared using the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
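The asymptotic (Wald) interval that the abstract identifies as the common choice in the medical literature can be sketched as follows; the event counts are hypothetical:

```python
import math
from statistics import NormalDist

def arr_wald_ci(x_c, n_c, x_e, n_e, conf=0.95):
    """Asymptotic (Wald) confidence interval for the absolute risk
    reduction ARR = CER - EER, where CER and EER are the control and
    experimental event rates.  Returns (arr, lower, upper)."""
    cer, eer = x_c / n_c, x_e / n_e
    arr = cer - eer
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    se = math.sqrt(cer * (1 - cer) / n_c + eer * (1 - eer) / n_e)
    return arr, arr - z * se, arr + z * se

# Hypothetical trial: 30/100 events in control vs 20/100 under treatment.
arr, lo, hi = arr_wald_ci(30, 100, 20, 100)
nnt = 1 / arr   # number needed to treat = 10
```

Note that for these counts the interval crosses zero even though the point estimate suggests a benefit, which is exactly the kind of situation in which the choice of interval method matters.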

  12. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.

    2012-12-06

    In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system with imperfect channel phase recovery is considered. The results presented demonstrate the system performance under realistic Nakagami-m fading and an additive white Gaussian noise channel. The accuracy of the obtained results is verified by running the simulation at a 95% confidence level. We see that as the number of simulation runs N increases, the simulated error rate approaches the actual one and the confidence interval narrows. Our results are therefore expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
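The narrowing of the confidence interval with the number of runs N can be illustrated with a drastically simplified stand-in for the simulated system: a Bernoulli error event with a fixed true error probability, rather than the Nakagami-m fading channel of the paper:

```python
import math
import random

def simulated_error_rate_ci(p_true, n_runs, seed=7):
    """Estimate an error rate by Monte Carlo and attach a 95%
    normal-approximation confidence half-width.  Each 'run' is a
    Bernoulli(p_true) error event standing in for a full system
    simulation.  Returns (p_hat, half_width)."""
    rng = random.Random(seed)
    errors = sum(rng.random() < p_true for _ in range(n_runs))
    p_hat = errors / n_runs
    half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_runs)
    return p_hat, half

# The half-width shrinks roughly like 1/sqrt(N):
_, w_small = simulated_error_rate_ci(1e-2, 10_000)
_, w_large = simulated_error_rate_ci(1e-2, 1_000_000)
```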

  13. Uncertainty in population growth rates: determining confidence intervals from point estimates of parameters.

    Directory of Open Access Journals (Sweden)

    Eleanor S Devenish Nelson

    Full Text Available BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.

  14. Comparison of Bootstrap Confidence Intervals Using Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Roberto S. Flowers-Cano

    2018-02-01

    Full Text Available Design of hydraulic works requires the estimation of design hydrological events by statistical inference from a probability distribution. Using Monte Carlo simulations, we compared coverage of confidence intervals constructed with four bootstrap techniques: percentile bootstrap (BP, bias-corrected bootstrap (BC, accelerated bias-corrected bootstrap (BCA and a modified version of the standard bootstrap (MSB. Different simulation scenarios were analyzed. In some cases, the mother distribution function was fit to the random samples that were generated. In other cases, a distribution function different to the mother distribution was fit to the samples. When the fitted distribution had three parameters, and was the same as the mother distribution, the intervals constructed with the four techniques had acceptable coverage. However, the bootstrap techniques failed in several of the cases in which the fitted distribution had two parameters.
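Of the four techniques compared, the percentile bootstrap (BP) is the simplest to sketch. In the snippet below the sample is invented and the statistic is the plain mean rather than a fitted design quantile, so this is only an outline of the resampling step:

```python
import random
import statistics

def percentile_bootstrap_ci(data, stat=statistics.mean, b=5000,
                            conf=0.95, seed=42):
    """Percentile bootstrap (BP) confidence interval: resample the
    data with replacement b times, recompute the statistic on each
    resample, and take the (1-conf)/2 and (1+conf)/2 percentiles of
    the bootstrap distribution."""
    rng = random.Random(seed)
    n = len(data)
    boot = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(b))
    lo = boot[int((1 - conf) / 2 * b)]
    hi = boot[int((1 + conf) / 2 * b)]
    return lo, hi

# Hypothetical annual-maximum sample (the paper fits flood distributions):
sample = [12.1, 15.3, 9.8, 22.4, 18.0, 11.5, 14.2, 30.1, 16.7, 13.3]
lo, hi = percentile_bootstrap_ci(sample)
```

The BC and BCa variants adjust these percentiles for bias and skewness of the bootstrap distribution; only the percentile choice changes, not the resampling loop.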

  15. Methods for confidence interval estimation of a ratio parameter with application to location quotients

    Directory of Open Access Journals (Sweden)

    Beyene Joseph

    2005-10-01

    Full Text Available Abstract Background The location quotient (LQ) ratio, a measure designed to quantify and benchmark the degree of relative concentration of an activity in the analysis of area localization, has received considerable attention in the geographic and economics literature. This index can also naturally be applied in the context of population health to quantify and compare health outcomes across spatial domains. However, one commonly observed limitation of LQ is its widespread use as only a point estimate without an accompanying confidence interval. Methods In this paper we present statistical methods that can be used to construct confidence intervals for location quotients. The delta and Fieller's methods are generic approaches for a ratio parameter and the generalized linear modelling framework is a useful re-parameterization particularly helpful for generating profile-likelihood based confidence intervals for the location quotient. A simulation experiment is carried out to assess the performance of each of the analytic approaches and a health utilization data set is used for illustration. Results Both the simulation results as well as the findings from the empirical data show that the different analytical methods produce very similar confidence limits for location quotients. When incidence of outcome is not rare and sample sizes are large, the confidence limits are almost indistinguishable. The confidence limits from the generalized linear model approach might be preferable in small sample situations. Conclusion LQ is a useful measure which allows quantification and comparison of health and other outcomes across defined geographical regions. It is a very simple index to compute and has a straightforward interpretation. Reporting this estimate with appropriate confidence limits using methods presented in this paper will make the measure particularly attractive for policy and decision makers.
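The delta method mentioned in the abstract is most conveniently applied on the log scale for a ratio of two proportions such as the LQ. A sketch with hypothetical counts (local events out of a local population versus reference events out of a reference population):

```python
import math
from statistics import NormalDist

def log_delta_ratio_ci(x1, n1, x2, n2, conf=0.95):
    """Delta-method confidence interval for a ratio of two proportions
    (e.g. a location quotient: local rate over reference rate).

    Works on the log scale, where the approximate standard error of
    log(ratio) is sqrt(1/x1 - 1/n1 + 1/x2 - 1/n2), then
    back-transforms the limits with exp()."""
    ratio = (x1 / n1) / (x2 / n2)
    se_log = math.sqrt(1 / x1 - 1 / n1 + 1 / x2 - 1 / n2)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return ratio * math.exp(-z * se_log), ratio * math.exp(z * se_log)

# Hypothetical counts: 30/100 locally vs 20/100 in the reference region.
lo, hi = log_delta_ratio_ci(30, 100, 20, 100)   # ratio = 1.5
```

Working on the log scale keeps the lower limit positive, which a naive symmetric interval for the ratio does not guarantee.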

  16. Determination and Interpretation of Characteristic Limits for Radioactivity Measurements: Decision Threshold, Detection Limit and Limits of the Confidence Interval

    International Nuclear Information System (INIS)

    2017-01-01

    Since 2004, the environment programme of the IAEA has included activities aimed at developing a set of procedures for analytical measurements of radionuclides in food and the environment. Reliable, comparable and fit for purpose results are essential for any analytical measurement. Guidelines and national and international standards for laboratory practices to fulfil quality assurance requirements are extremely important when performing such measurements. The guidelines and standards should be comprehensive, clearly formulated and readily available to both the analyst and the customer. ISO 11929:2010 is the international standard on the determination of the characteristic limits (decision threshold, detection limit and limits of the confidence interval) for measuring ionizing radiation. For nuclear analytical laboratories involved in the measurement of radioactivity in food and the environment, robust determination of the characteristic limits of radioanalytical techniques is essential with regard to national and international regulations on permitted levels of radioactivity. However, characteristic limits defined in ISO 11929:2010 are complex, and the correct application of the standard in laboratories requires a full understanding of various concepts. This publication provides additional information to Member States in the understanding of the terminology, definitions and concepts in ISO 11929:2010, thus facilitating its implementation in Member State laboratories.

  17. Energy Performance Certificate of building and confidence interval in assessment: An Italian case study

    International Nuclear Information System (INIS)

    Tronchin, Lamberto; Fabbri, Kristian

    2012-01-01

    The Directive 2002/91/CE introduced the Energy Performance Certificate (EPC), an energy policy tool. The aim of the EPC is to inform building buyers about the energy performance and energy costs of buildings. The EPCs represent a specific energy policy tool to orient the building sector and real-estate markets toward higher energy efficiency buildings. The effectiveness of the EPC depends on two factors: •The accuracy of the energy performance evaluation made by independent experts. •The capability of the energy classification and of the scale of energy performance to control the energy index fluctuations. In this paper, the results of a case study located in Italy are shown. In this example, 162 independent technicians in energy performance of building evaluation have studied the same building. The results reveal which part of the confidence intervals is dependent on software misunderstanding, and that the energy classification ranges are able to tolerate the fluctuation of energy indices. The example was chosen in accordance with the legislation of the Emilia-Romagna Region on Energy Efficiency of Buildings. Following these results, some thermo-economic evaluations related to building and energy labelling are illustrated, since the EPC is an energy policy tool helping the real-estate market and building sector find a way to build or retrofit energy-efficient buildings. - Highlights: ► Evaluation of the accuracy of energy performance of buildings in relation with the knowledge of independent experts. ► Round robin test based on 162 case studies on the confidence intervals expressed by independent experts. ► Statistical considerations between the confidence intervals expressed by independent experts and energy simulation software. ► Relation between “proper class” in energy classification of buildings and confidence intervals of independent experts.

  18. The best confidence interval of the failure rate and unavailability per demand when few experimental data are available

    International Nuclear Information System (INIS)

    Goodman, J.

    1985-01-01

    Using the few available data, likelihood functions for the failure rate and unavailability per demand are constructed. These likelihood functions are used to obtain likelihood density functions for the failure rate and unavailability per demand, and the best (or shortest) confidence intervals for these quantities are provided. The failure rate and unavailability per demand are important characteristics needed for reliability and availability analysis. The methods of estimation of these characteristics when plenty of observed data are available are well known. However, on many occasions, when we deal with rare failure modes or with new equipment or components for which sufficient experience has not accumulated, we have scarce data where few or zero failures have occurred. In these cases, a technique which reflects exactly our state of knowledge is required. This technique is based on likelihood density functions or Bayesian methods, depending on the available prior distribution. To extract the maximum amount of information from the data, the best confidence interval is determined.

  19. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    Science.gov (United States)

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
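
    The same two quantities are a few lines in any language; a minimal Python sketch (the data are invented) of the standard error and the normal-approximation 95% confidence interval:

```python
import math

def mean_ci(xs, z=1.96):
    """Sample mean, standard error, and approximate 95% CI for the mean."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)   # sample variance, as in Excel's VAR.S
    se = math.sqrt(var / n)                         # standard error of the mean
    return m, se, (m - z * se, m + z * se)

m, se, (lo, hi) = mean_ci([4.1, 5.0, 4.6, 5.3, 4.8, 5.1])
```

    For small samples a t-quantile would replace z = 1.96; the structure of the calculation is unchanged.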

  20. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

    Science.gov (United States)

    Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

    2011-02-20

    A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. Bootstrap resampling: a powerful method of assessing confidence intervals for doses from experimental data

    International Nuclear Information System (INIS)

    Iwi, G.; Millard, R.K.; Palmer, A.M.; Preece, A.W.; Saunders, M.

    1999-01-01

    Bootstrap resampling provides a versatile and reliable statistical method for estimating the accuracy of quantities which are calculated from experimental data. It is an empirically based method, in which large numbers of simulated datasets are generated by computer from existing measurements, so that approximate confidence intervals of the derived quantities may be obtained by direct numerical evaluation. A simple introduction to the method is given via a detailed example of estimating 95% confidence intervals for cumulated activity in the thyroid following injection of 99mTc-sodium pertechnetate using activity-time data from 23 subjects. The application of the approach to estimating confidence limits for the self-dose to the kidney following injection of 99mTc-DTPA organ imaging agent based on uptake data from 19 subjects is also illustrated. Results are then given for estimates of doses to the foetus following administration of 99mTc-sodium pertechnetate for clinical reasons during pregnancy, averaged over 25 subjects. The bootstrap method is well suited for applications in radiation dosimetry including uncertainty, reliability and sensitivity analysis of dose coefficients in biokinetic models, but it can also be applied in a wide range of other biomedical situations. (author)
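
    The core of the technique fits in a few lines; a percentile-bootstrap sketch in Python (the uptake values, replicate count and seed are illustrative assumptions, not the paper's data):

```python
import random

def bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap CI: resample the data with replacement many times,
    recompute the statistic each time, and read off the empirical quantiles."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)]) for _ in range(n_boot))
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

def sample_mean(xs):
    return sum(xs) / len(xs)

uptake = [3.2, 2.8, 3.9, 3.1, 2.5, 3.6, 3.3, 2.9, 3.4, 3.0]  # invented uptake data
lo, hi = bootstrap_ci(uptake, sample_mean)
```

    The same loop works for any derived quantity, which is why the method carries over so directly from means to cumulated activities and doses.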

  2. Confidence intervals for the first crossing point of two hazard functions.

    Science.gov (United States)

    Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

    2009-12-01

    The phenomenon of crossing hazard rates is common in clinical trials with time to event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, there have been relatively few approaches available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazard modeling with Box-Cox transformation of the time to event, a nonparametric procedure using the kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte-Carlo simulations and applied to two clinical trial datasets.

  3. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    Science.gov (United States)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
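
    The distinction between the two intervals is easy to make concrete for a straight-line fit; a sketch in Python, where t ≈ 2 stands in for the appropriate t-quantile and the data are invented:

```python
import math
import statistics as st

def fit_line(xs, ys):
    """Least-squares fit y = a + b*x, returning what the intervals need."""
    n = len(xs)
    xbar, ybar = st.fmean(xs), st.fmean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    a = ybar - b * xbar
    s2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)  # residual variance
    return a, b, s2, xbar, sxx, n

def intervals(xs, ys, x0, t=2.0):
    """Confidence interval for the mean response and prediction interval for
    an individual observation at x0."""
    a, b, s2, xbar, sxx, n = fit_line(xs, ys)
    yhat = a + b * x0
    se_mean = math.sqrt(s2 * (1 / n + (x0 - xbar) ** 2 / sxx))      # uncertainty in the mean
    se_pred = math.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))  # adds individual variation
    return (yhat - t * se_mean, yhat + t * se_mean), (yhat - t * se_pred, yhat + t * se_pred)

xs = list(range(10))
ys = [0.1, 2.2, 3.9, 6.1, 8.0, 10.2, 11.8, 14.1, 16.0, 18.2]
ci, pi = intervals(xs, ys, 5.0)  # the prediction interval is always the wider one
```

    The extra "1 +" inside se_pred is exactly the individual-tree variation that becomes negligible when many trees are averaged.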

  4. Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion

    Directory of Open Access Journals (Sweden)

    David Shilane

    2013-01-01

    Full Text Available The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.

  5. A Note on Confidence Interval for the Power of the One Sample Test

    Directory of Open Access Journals (Sweden)

    A. Wong

    2010-01-01

    Full Text Available In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for the power of the Student's t-test that detects the difference (μ − μ0). The calculations require only the density and the cumulative distribution functions of the standard normal distribution. In addition, the methodology presented can also be applied to determine the required sample size when the effect size and the power of a size-α test of the mean are given.
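
    The known-variance power calculation the note builds on, which indeed needs only the standard normal CDF, can be sketched as follows (the notation δ for the standardized effect size is ours):

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(n, delta, z=1.959964):
    """Normal-approximation power of a two-sided, level-0.05 one-sample test of
    the mean, with standardized effect size delta = (mu - mu0) / sigma."""
    shift = delta * math.sqrt(n)
    return phi(shift - z) + phi(-shift - z)
```

    The exact t-test power would replace phi with a noncentral t distribution; the normal version is slightly optimistic for small n and improves as n grows.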

  6. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-01-01

    MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the individual estimator with the smallest variance. The importance of MCNP's batch statistics is demonstrated by an investigation of the effects of individual estimator variance bias on the combination of estimators, both heuristically with the analytical study and empirically with MCNP.

  7. Estimation and interpretation of keff confidence intervals in MCNP

    International Nuclear Information System (INIS)

    Urbatsch, T.J.

    1995-01-01

    The Monte Carlo code MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the estimator with the smallest variance. Empirically, MCNP examples for several physical systems demonstrate the three-combined estimator's superiority over each of the three individual estimators and its correct coverage rates. Additionally, the importance of MCNP's statistical checks is demonstrated.
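
    The Gauss-Markov combination behind the three-combined estimator is a generalized-least-squares weighting of correlated estimates; a sketch with an invented covariance matrix (not MCNP's actual estimators or output):

```python
import numpy as np

def combine(estimates, cov):
    """Minimum-variance unbiased linear combination of correlated estimators.

    Gauss-Markov/GLS weights: w = cov^{-1} 1 / (1^T cov^{-1} 1), and the
    variance of the combination is 1 / (1^T cov^{-1} 1).
    """
    ones = np.ones(len(estimates))
    w = np.linalg.solve(cov, ones)
    total = ones @ w
    return (w / total) @ np.asarray(estimates), 1.0 / total

# three correlated keff-like estimates (illustrative numbers only)
est = [1.002, 0.998, 1.001]
cov = np.array([[4.0, 1.0, 1.0],
                [1.0, 9.0, 2.0],
                [1.0, 2.0, 6.0]]) * 1e-6
combined, var = combine(est, cov)
```

    By construction var never exceeds the variance of the best single estimator, which is the sense in which the three-combined estimator wins.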

  8. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

    2005-01-01

    The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

  9. Number of core samples: Mean concentrations and confidence intervals

    International Nuclear Information System (INIS)

    Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

    1995-01-01

    This document provides estimates of how well the mean concentrations of analytes are known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to ''characterize'' the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unit-less measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composite samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications.
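
    The dependence of the RHW on the number of cores is just the 1/√n law; a sketch in Python, where t ≈ 2 stands in for the 97.5% t-quantile and the relative standard deviations are illustrative:

```python
import math

def relative_half_width(sd, mean, n, t=2.0):
    """RHW of an approximate 95% CI: half-width t*sd/sqrt(n) as a fraction of the mean."""
    return t * sd / math.sqrt(n) / mean

def cores_needed(rel_sd, target_rhw, t=2.0):
    """Smallest number of core samples achieving the target RHW, given the
    relative standard deviation (sd/mean) of a single core."""
    return math.ceil((t * rel_sd / target_rhw) ** 2)

# a 50% relative standard deviation per core:
n_10pct = cores_needed(0.5, 0.10)   # accuracy within 10% of the mean
n_50pct = cores_needed(0.5, 0.50)   # accuracy within 50% of the mean
```

    This reproduces the qualitative conclusion above: tightening the RHW from 50% to 10% of the mean multiplies the required number of cores by 25.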

  10. Statistical variability and confidence intervals for planar dose QA pass rates

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher; Kumaraswamy, Lalith; Podgorsak, Matthew B. [Department of Physics, State University of New York at Buffalo, Buffalo, New York 14260 (United States) and Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Department of Biostatistics, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Molecular and Cellular Biophysics and Biochemistry, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States) and Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, New York 14214 (United States)

    2011-11-15

    Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. For each simulation, %Diff/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization.

  11. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    Science.gov (United States)

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  12. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Science.gov (United States)

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
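
    For orientation only, the familiar infinite-population binomial interval (not the finite-population method the paper develops) already shows how wide allele-frequency intervals are at modest sample sizes; a Wilson score sketch with invented counts:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a proportion under a simple binomial model
    (an infinite-population simplification, unlike the paper's method)."""
    p = k / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# 12 copies of an allele among 30 sampled gene copies (invented counts)
lo, hi = wilson_ci(12, 30)
```

    At n = 30 the interval is roughly ±0.17 around a sample frequency of 0.4, consistent with the abstract's point that n > 30 is often needed for ±0.05 accuracy.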

  13. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Directory of Open Access Journals (Sweden)

    Tak Fung

    Full Text Available The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

  14. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    Science.gov (United States)

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPaK Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
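
    The resampling loop itself is small; a Python sketch of the same residual-resampling idea, using a straight-line fit in place of the paper's non-linear models (data and seed invented):

```python
import random
import statistics as st

def fit(xs, ys):
    """Ordinary least-squares intercept and slope."""
    xbar, ybar = st.fmean(xs), st.fmean(ys)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - b * xbar, b

def mc_slope_ci(xs, ys, n_sim=2000, alpha=0.05, seed=7):
    """Percentile CI for the slope from Monte Carlo resampling of residuals:
    build 'virtual' data sets from the fitted curve plus resampled residuals,
    refit each one, and take empirical quantiles of the refit slopes."""
    a, b = fit(xs, ys)
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    rng = random.Random(seed)
    slopes = sorted(
        fit(xs, [a + b * x + rng.choice(resid) for x in xs])[1]
        for _ in range(n_sim)
    )
    return slopes[int(alpha / 2 * n_sim)], slopes[int((1 - alpha / 2) * n_sim) - 1]

xs = list(range(10))                      # invented calibration data
ys = [1.1, 2.9, 5.2, 7.0, 9.1, 10.8, 13.2, 15.0, 16.9, 19.1]
lo, hi = mc_slope_ci(xs, ys)              # CI for a slope near 2
```

    With SOLVER the refit step is the expensive part; here the fit is closed-form, but the percentile logic is identical.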

  15. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

    Science.gov (United States)

    Bartley, David; Slaven, James; Harper, Martin

    2017-03-01

    The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.

  16. Learning about confidence intervals with software R

    Directory of Open Access Journals (Sweden)

    Gariela Gonçalves

    2013-08-01

    Full Text Available This work studied the feasibility of implementing a teaching method that employs software in a Computational Mathematics course, involving students and teachers through the use of the statistical software R in carrying out practical work, so as to strengthen traditional teaching. Statistical inference, namely the determination of confidence intervals, was the content selected for this experience. It was intended to show, first of all, that it is possible to promote, through the proposed methodology, the acquisition of basic skills in statistical inference and to promote positive relationships between teachers and students. It also presents a comparative study between the methodologies used and their quantitative and qualitative results in two consecutive school years, in several indicators. The data used in the study were obtained from the students' answers to the exam questions in the years 2010/2011 and 2011/2012, from the achievement of a working group in 2011/2012 and via the responses to a questionnaire (optional and anonymous) also applied in 2011/2012. In terms of results, we emphasize a better performance of students in the examination questions in 2011/2012, the year that students used the software R, and a very favorable student's perspective about
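
    A classroom-style coverage simulation, sketched here in Python although the course used R, makes the meaning of a 95% confidence level concrete: roughly 95% of intervals constructed this way cover the true mean.

```python
import math
import random

def coverage(n=20, mu=5.0, sigma=2.0, reps=2000, z=1.96, seed=3):
    """Fraction of simulated samples whose normal-theory 95% CI covers mu."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        m = sum(xs) / n
        s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
        half = z * s / math.sqrt(n)
        hits += (m - half <= mu <= m + half)
    return hits / reps

cov = coverage()  # close to, but slightly below, 0.95 since z replaces the t-quantile
```

    Letting students vary n, sigma and the quantile is exactly the kind of practical work the course describes.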

  17. Secure and Usable Bio-Passwords based on Confidence Interval

    Directory of Open Access Journals (Sweden)

    Aeyoung Kim

    2017-02-01

    Full Text Available The most popular user-authentication method is the password. Many authentication systems try to enhance their security by enforcing a strong password policy, and by using the password as the first factor, something you know, with the second factor being something you have. However, a strong password policy and a multi-factor authentication system can make it harder for a user to remember the password and log in. In this paper a bio-password-based scheme is proposed as a unique authentication method, which uses biometrics and confidence-interval sets to enhance the security of the log-in process and make it easier as well. The method offers a user-friendly solution for creating and registering strong passwords without the user having to memorize them. Here we also show the results of our experiments, which demonstrate the efficiency of this method and how it can be used to protect against a variety of malicious attacks.

  18. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend to two-stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.

  19. The confidence-accuracy relationship for eyewitness identification decisions: Effects of exposure duration, retention interval, and divided attention.

    Science.gov (United States)

    Palmer, Matthew A; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

    2013-03-01

    Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the CA relationship for eyewitness identification decisions is affected by three forensically relevant variables: exposure duration, retention interval, and divided attention at encoding. In Study 1 (N = 986), a field experiment, we examined the effects of exposure duration (5 s vs. 90 s) and retention interval (immediate testing vs. a 1-week delay) on the CA relationship. In Study 2 (N = 502), we examined the effects of attention during encoding on the CA relationship by reanalyzing data from a laboratory experiment in which participants viewed a stimulus video under full or divided attention conditions and then attempted to identify two targets from separate lineups. Across both studies, all three manipulations affected identification accuracy. The central analyses concerned the CA relationship for positive identification decisions. For the manipulations of exposure duration and retention interval, overconfidence was greater in the more difficult conditions (shorter exposure; delayed testing) than the easier conditions. Only the exposure duration manipulation influenced resolution (which was better for 5 s than 90 s), and only the retention interval manipulation affected calibration (which was better for immediate testing than delayed testing). In all experimental conditions, accuracy and diagnosticity increased with confidence, particularly at the upper end of the confidence scale. Implications for theory and forensic settings are discussed.

  20. A Note on Confidence Interval for the Power of the One Sample Test

    OpenAIRE

    A. Wong

    2010-01-01

    In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for...

  1. Rescaled Range Analysis and Detrended Fluctuation Analysis: Finite Sample Properties and Confidence Intervals

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    4/2010, č. 3 (2010), s. 236-250 ISSN 1802-4696 R&D Projects: GA ČR GD402/09/H045; GA ČR GA402/09/0965 Grant - others:GA UK(CZ) 118310 Institutional research plan: CEZ:AV0Z10750506 Keywords : rescaled range analysis * detrended fluctuation analysis * Hurst exponent * long-range dependence Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/kristoufek-rescaled range analysis and detrended fluctuation analysis finite sample properties and confidence intervals.pdf

  2. The 95% confidence intervals of error rates and discriminant coefficients

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2015-02-01

    Full Text Available Fisher proposed a linear discriminant function (Fisher’s LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop the diagnostic logic between normal and abnormal symptoms by Fisher’s LDF and a quadratic discriminant function (QDF). Our four-year research effort was inferior to the decision-tree logic developed by the medical doctor. After this experience, we discriminated many datasets and found four problems with discriminant analysis. A revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three of these problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rate and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (C.I.s) of error rates and discriminant coefficients.
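    The repeated k-fold cross-validation idea behind such confidence intervals can be sketched in Python. This is a generic illustration, not the Revised IP-OLDF itself; the toy threshold classifier and all function names are hypothetical stand-ins:

    ```python
    import math
    import random
    import statistics

    def train_threshold(xs, ys):
        """Toy classifier: predict 1 when x exceeds the training mean."""
        t = statistics.mean(xs)
        return lambda x: int(x > t)

    def kfold_error_rates(xs, ys, train_fn, k=10, rounds=5, seed=0):
        """Repeated k-fold cross-validation; returns one error rate per fold."""
        rng = random.Random(seed)
        n = len(xs)
        rates = []
        for _ in range(rounds):
            idx = list(range(n))
            rng.shuffle(idx)
            folds = [idx[i::k] for i in range(k)]  # disjoint folds
            for fold in folds:
                held_out = set(fold)
                train = [i for i in idx if i not in held_out]
                model = train_fn([xs[i] for i in train], [ys[i] for i in train])
                errors = sum(model(xs[i]) != ys[i] for i in fold)
                rates.append(errors / len(fold))
        return rates

    def ci95(rates):
        """Normal-approximation 95% CI for the mean error rate."""
        m = statistics.mean(rates)
        se = statistics.stdev(rates) / math.sqrt(len(rates))
        return m - 1.96 * se, m + 1.96 * se
    ```

    The spread of the per-fold error rates is what yields a standard error, and hence a 95% interval, for a quantity (the error rate) that has no closed-form SE.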

  3. Coverage probability of bootstrap confidence intervals in heavy-tailed frequency models, with application to precipitation data

    Czech Academy of Sciences Publication Activity Database

    Kyselý, Jan

    2010-01-01

    Roč. 101, 3-4 (2010), s. 345-361 ISSN 0177-798X R&D Projects: GA AV ČR KJB300420801 Institutional research plan: CEZ:AV0Z30420517 Keywords : bootstrap * extreme value analysis * confidence intervals * heavy-tailed distributions * precipitation amounts Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.684, year: 2010

  4. A NEW METHOD FOR CONSTRUCTING CONFIDENCE INTERVAL FOR CPM BASED ON FUZZY DATA

    Directory of Open Access Journals (Sweden)

    Bahram Sadeghpour Gildeh

    2011-06-01

    Full Text Available A measurement control system ensures that measuring equipment and measurement processes are fit for their intended use, which is important in achieving product quality objectives. In most real-life applications, the observations are fuzzy. In some cases the specification limits (SLs) are not precise numbers and are expressed in fuzzy terms, so that the classical capability indices cannot be applied. In this paper we obtain a 100(1 − α)% fuzzy confidence interval for the Cpm fuzzy process capability index, where instead of precise quality limits we have two membership functions for the specification limits.
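    For contrast with the fuzzy version described above, the classical (crisp) Taguchi index Cpm is straightforward to compute. A minimal sketch, with precise specification limits and a hypothetical function name:

    ```python
    import math
    import statistics

    def cpm(data, lsl, usl, target):
        """Crisp Taguchi capability index:
        Cpm = (USL - LSL) / (6 * sqrt(sigma^2 + (mean - target)^2))."""
        m = statistics.mean(data)
        s2 = statistics.pvariance(data)  # population variance of the sample
        return (usl - lsl) / (6 * math.sqrt(s2 + (m - target) ** 2))
    ```

    The paper's contribution replaces the crisp limits `lsl` and `usl` with membership functions; the denominator penalty `(mean - target)^2` is what distinguishes Cpm from the plain Cp index.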

  5. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

    Science.gov (United States)

    Odgaard, Eric C.; Fowler, Robert L.

    2010-01-01

    Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

  6. The Model Confidence Set

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

    The paper introduces the model confidence set (MCS) and applies it to the selection of models. An MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS …, beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine … the MCS of the best in terms of in-sample likelihood criteria. …

  7. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

    Science.gov (United States)

    Carnegie, Nicole Bohme

    2011-04-15

    The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
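    A generic percentile-bootstrap confidence interval with a simple bias correction illustrates the mechanics. Note that the paper's method is model-based (resampling from the fitted infection-and-surveillance model); this nonparametric sketch, with hypothetical function names, only approximates that idea by resampling cases:

    ```python
    import random
    import statistics

    def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05, seed=1):
        """Percentile bootstrap CI for estimator(data)."""
        rng = random.Random(seed)
        n = len(data)
        stats = sorted(
            estimator([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)
        )
        lo = stats[int(alpha / 2 * n_boot)]
        hi = stats[int((1 - alpha / 2) * n_boot) - 1]
        return lo, hi

    def bias_corrected_estimate(data, estimator, n_boot=2000, seed=1):
        """Bootstrap bias correction: 2*theta_hat minus the mean of the
        bootstrap replicates."""
        rng = random.Random(seed)
        n = len(data)
        theta = estimator(data)
        boot = [estimator([data[rng.randrange(n)] for _ in range(n)])
                for _ in range(n_boot)]
        return 2 * theta - statistics.mean(boot)
    ```

    The bias correction is the same `2*theta_hat - mean(boot)` device the paper uses to adjust the point estimate alongside the interval.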

  8. An SPSS Macro to Compute Confidence Intervals for Pearson’s Correlation

    Directory of Open Access Journals (Sweden)

    Bruce Weaver

    2014-04-01

    Full Text Available In many disciplines, including psychology, medical research, epidemiology and public health, authors are required, or at least encouraged, to report confidence intervals (CIs) along with effect size estimates. Many students and researchers in these areas use IBM-SPSS for statistical analysis. Unfortunately, the CORRELATIONS procedure in SPSS does not provide CIs in the output. Various work-around solutions have been suggested for obtaining CIs for rho with SPSS, but most of them have been sub-optimal. Since release 18, it has been possible to compute bootstrap CIs, but only if users have the optional bootstrap module. The !rhoCI macro described in this article is accessible to all SPSS users with release 14 or later. It directs output from the CORRELATIONS procedure to another dataset, restructures that dataset to have one row per correlation, computes a CI for each correlation, and displays the results in a single table. Because the macro uses the CORRELATIONS procedure, it allows users to specify a list of two or more variables to include in the correlation matrix, to choose a confidence level, and to select either listwise or pairwise deletion. Thus, it offers substantial improvements over previous solutions to the problem of how to compute CIs for rho with SPSS.
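    The standard method behind such macros is the Fisher z-transformation. A minimal Python sketch (the function names are illustrative; this is not the !rhoCI macro):

    ```python
    import math
    from statistics import NormalDist

    def pearson_r(x, y):
        """Plain Pearson correlation coefficient."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / math.sqrt(sxx * syy)

    def r_confidence_interval(r, n, conf=0.95):
        """CI for rho via the Fisher z-transformation (requires n > 3)."""
        z = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z = artanh(r)
        se = 1 / math.sqrt(n - 3)
        zcrit = NormalDist().inv_cdf(0.5 + conf / 2)
        lo, hi = z - zcrit * se, z + zcrit * se
        return math.tanh(lo), math.tanh(hi)     # back-transform to r scale
    ```

    For example, r = 0.5 with n = 100 gives an interval of roughly (0.34, 0.63); note the asymmetry around 0.5, which the z-transformation handles and a naive symmetric interval would not.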

  9. Bayesian-statistical decision threshold, detection limit, and confidence interval in nuclear radiation measurement

    International Nuclear Information System (INIS)

    Weise, K.

    1998-01-01

    When a contribution of a particular nuclear radiation is to be detected, for instance a spectral line of interest for some purpose of radiation protection, and quantities and their uncertainties must be taken into account that, like influence quantities, cannot be determined by repeated measurements or by counting nuclear radiation events, then conventional statistics of event frequencies are not sufficient for defining the decision threshold, the detection limit, and the limits of a confidence interval. These characteristic limits are therefore redefined on the basis of Bayesian statistics for a wider applicability and in such a way that the usual practice remains as far as possible unaffected. The principle of maximum entropy is applied to establish probability distributions from available information. Quantiles of these distributions are used for defining the characteristic limits. But such a distribution must not be interpreted as a distribution of event frequencies such as the Poisson distribution. It rather expresses the actual state of incomplete knowledge of a physical quantity. The different definitions and interpretations and their quantitative consequences are presented and discussed with two examples. The new approach provides a theoretical basis for the DIN 25482-10 standard presently in preparation for general applications of the characteristic limits. (orig.) [de
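    For orientation, the classical (frequentist) counterparts of these characteristic limits for a simple gross-minus-background counting setup can be sketched as follows. This is the conventional Poisson-counting formulation, not the Bayesian redefinition the paper develops, and it assumes equal detection efficiency and the simplified case alpha = beta:

    ```python
    import math

    def decision_threshold(n_b, t_b, t_g, k=1.645):
        """Decision threshold y* for the net count rate: n_b background
        counts observed over time t_b, gross measurement of duration t_g.
        Under the null hypothesis (no sample activity) the net-rate
        variance is r_b/t_g + r_b/t_b with r_b = n_b/t_b."""
        r_b = n_b / t_b
        return k * math.sqrt(r_b / t_g + r_b / t_b)

    def detection_limit(n_b, t_b, t_g, k=1.645, iterations=50):
        """Detection limit y# solving
        y# = y* + k*sqrt(y#/t_g + r_b/t_g + r_b/t_b)
        by fixed-point iteration."""
        r_b = n_b / t_b
        y_star = decision_threshold(n_b, t_b, t_g, k)
        y = y_star
        for _ in range(iterations):
            y = y_star + k * math.sqrt(y / t_g + r_b / t_g + r_b / t_b)
        return y
    ```

    The paper's point is that when influence quantities cannot be captured by such counting statistics, these event-frequency formulas no longer suffice, and the limits must instead be defined as quantiles of a Bayesian state-of-knowledge distribution.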

  10. Assessing a disaggregated energy input: using confidence intervals around translog elasticity estimates

    International Nuclear Information System (INIS)

    Hisnanick, J.J.; Kyer, B.L.

    1995-01-01

    The role of energy in the production of manufacturing output has been debated extensively in the literature, particularly its relationship with capital and labor. In an attempt to provide some clarification in this debate, a two-step methodology was used. First under the assumption of a five-factor production function specification, we distinguished between electric and non-electric energy and assessed each component's relationship with capital and labor. Second, we calculated both the Allen and price elasticities and constructed 95% confidence intervals around these values. Our approach led to the following conclusions: that the disaggregation of the energy input into electric and non-electric energy is justified; that capital and electric energy and capital and non-electric energy are substitutes, while labor and electric energy and labor and non-electric energy are complements in production; and that capital and energy are substitutes, while labor and energy are complements. (author)

  11. Determining the confidence levels of sensor outputs using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Broten, G S; Wood, H C [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Electrical Engineering

    1996-12-31

    This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with an output of a given sensor. A three-layer back-propagation neural network was trained on a subset of the sensor confidence level space, and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in

  12. Determining the confidence levels of sensor outputs using neural networks

    International Nuclear Information System (INIS)

    Broten, G.S.; Wood, H.C.

    1995-01-01

    This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with an output of a given sensor. A three-layer back-propagation neural network was trained on a subset of the sensor confidence level space, and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in

  13. User guide to the UNC1NLI1 package and three utility programs for computation of nonlinear confidence and prediction intervals using MODFLOW-2000

    DEFF Research Database (Denmark)

    Christensen, Steen; Cooley, R.L.

    a model (for example when using the Parameter-Estimation Process of MODFLOW-2000) it is advantageous to also use regression-based methods to quantify uncertainty. For this reason the UNC Process computes (1) confidence intervals for parameters of the Parameter-Estimation Process and (2) confidence...

  14. Predicting fecal coliform using the interval-to-interval approach and SWAT in the Miyun watershed, China.

    Science.gov (United States)

    Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu; Qiu, Jiali; Li, Yangyang

    2017-06-01

    Pathogens in manure can cause waterborne-disease outbreaks, serious illness, and even death in humans. Therefore, information about the transformation and transport of bacteria is crucial for determining their source. In this study, the Soil and Water Assessment Tool (SWAT) was applied to simulate fecal coliform bacteria load in the Miyun Reservoir watershed, China. The data for the fecal coliform were obtained at three sampling sites, Chenying (CY), Gubeikou (GBK), and Xiahui (XH). The calibration processes of the fecal coliform were conducted using the CY and GBK sites, and validation was conducted at the XH site. An interval-to-interval approach was designed and incorporated into the processes of fecal coliform calibration and validation. The 95% confidence interval of the predicted values and the 95% confidence interval of measured values were considered during calibration and validation in the interval-to-interval approach. Compared with the traditional point-to-point comparison, this method can improve simulation accuracy. The results indicated that the simulation of fecal coliform using the interval-to-interval approach was reasonable for the watershed. This method could provide a new research direction for future model calibration and validation studies.
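    The core of the interval-to-interval comparison, checking whether the 95% CI of a simulated value overlaps the 95% CI of the corresponding measurement rather than comparing two points, reduces to a simple test. A sketch with illustrative function names:

    ```python
    def intervals_overlap(a, b):
        """True if closed intervals a = (lo, hi) and b = (lo, hi) overlap."""
        return max(a[0], b[0]) <= min(a[1], b[1])

    def interval_match_rate(sim_cis, obs_cis):
        """Fraction of paired time steps whose simulated and observed 95%
        CIs overlap, the agreement measure in an interval-to-interval
        calibration."""
        hits = sum(intervals_overlap(a, b) for a, b in zip(sim_cis, obs_cis))
        return hits / len(sim_cis)
    ```

    Scoring overlap instead of point agreement is what makes the calibration tolerant of the large observation uncertainty typical of fecal coliform counts.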

  15. Bootstrap confidence intervals for three-way methods

    NARCIS (Netherlands)

    Kiers, Henk A.L.

    Results from exploratory three-way analysis techniques such as CANDECOMP/PARAFAC and Tucker3 analysis are usually presented without giving insight into uncertainties due to sampling. Here a bootstrap procedure is proposed that produces percentile intervals for all output parameters. Special

  16. The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals.

    Science.gov (United States)

    Brand, Andrew; Bradley, Michael T

    2016-02-01

    Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving the precision of effect size estimation.
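    The large-sample CI for Cohen's d behind such width calculations can be approximated as d ± z·SE(d). A sketch (the noncentral-t method is more exact; the function name is illustrative):

    ```python
    import math
    from statistics import NormalDist

    def cohens_d_ci(d, n1, n2, conf=0.95):
        """Normal-approximation CI for Cohen's d from two independent
        groups of sizes n1 and n2."""
        se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
        z = NormalDist().inv_cdf(0.5 + conf / 2)
        return d - z * se, d + z * se
    ```

    With 20 participants per group and d = 0.5, the interval is wider than 1 unit of d, exactly the "embarrassingly large" width the survey reports as the median; pushing the width below 0.5 takes roughly 200 per group.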

  17. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  18. Prediction of the distillation temperatures of crude oils using ¹H NMR and support vector regression with estimated confidence intervals.

    Science.gov (United States)

    Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J

    2015-09-01

    This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using ¹H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.

    Science.gov (United States)

    Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas

    2016-01-01

    While perceptual learning increases objective sensitivity, its effects on the constant interaction between the process of perception and its metacognitive evaluation have rarely been investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects, here we show that perceptual sensitivity and confidence in it increased with training. The metacognitive sensitivity, estimated from certainty ratings by a bias-free signal detection theoretic approach, in contrast, did not. Concomitant 3 Hz transcranial alternating current stimulation (tACS) was applied in accordance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications, nor decision confidence itself, determined the subjects' visual perceptual learning. Improvements in objective performance and the metacognitive confidence in it were rather determined by the perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance, nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.

  20. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    Science.gov (United States)

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority between a new treatment and a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed in which the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.

  1. Weighted profile likelihood-based confidence interval for the difference between two proportions with paired binomial data.

    Science.gov (United States)

    Pradhan, Vivek; Saha, Krishna K; Banerjee, Tathagata; Evans, John C

    2014-07-30

    Inference on the difference between two binomial proportions in the paired binomial setting is often an important problem in many biomedical investigations. Tang et al. (2010, Statistics in Medicine) discussed six methods to construct confidence intervals (henceforth, we abbreviate it as CI) for the difference between two proportions in paired binomial setting using method of variance estimates recovery. In this article, we propose weighted profile likelihood-based CIs for the difference between proportions of a paired binomial distribution. However, instead of the usual likelihood, we use weighted likelihood that is essentially making adjustments to the cell frequencies of a 2 × 2 table in the spirit of Agresti and Min (2005, Statistics in Medicine). We then conduct numerical studies to compare the performances of the proposed CIs with that of Tang et al. and Agresti and Min in terms of coverage probabilities and expected lengths. Our numerical study clearly indicates that the weighted profile likelihood-based intervals and Jeffreys interval (cf. Tang et al.) are superior in terms of achieving the nominal level, and in terms of expected lengths, they are competitive. Finally, we illustrate the use of the proposed CIs with real-life examples. Copyright © 2014 John Wiley & Sons, Ltd.
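    The Agresti-Min-style adjustment mentioned above, adding 0.5 to each cell of the paired 2×2 table before forming a Wald interval for the difference of the marginal proportions, can be sketched as follows. This is a simplified illustration of the cell-adjustment idea, not the authors' weighted profile likelihood method:

    ```python
    import math
    from statistics import NormalDist

    def paired_diff_ci(n11, n12, n21, n22, conf=0.95, add=0.5):
        """Wald CI for p1 - p2 from a paired 2x2 table, where n12 and n21
        are the discordant cells, with `add` added to every cell."""
        a, b, c, d = (x + add for x in (n11, n12, n21, n22))
        n = a + b + c + d
        diff = (b - c) / n                       # p1 - p2 depends only on
        se = math.sqrt(((b + c) / n - diff ** 2) / n)  # the discordant cells
        z = NormalDist().inv_cdf(0.5 + conf / 2)
        return diff - z * se, diff + z * se
    ```

    The adjustment keeps the interval from degenerating when a discordant cell is zero, which is exactly the sparse-data regime where unadjusted Wald intervals undercover.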

  2. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    Science.gov (United States)

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.

  3. Computing confidence and prediction intervals of industrial equipment degradation by bootstrapped support vector regression

    International Nuclear Information System (INIS)

    Lins, Isis Didier; Droguett, Enrique López; Moura, Márcio das Chagas; Zio, Enrico; Jacinto, Carlos Magno

    2015-01-01

    Data-driven learning methods for predicting the evolution of the degradation processes affecting equipment are becoming increasingly attractive in reliability and prognostics applications. Among these, we consider here Support Vector Regression (SVR), which has provided promising results in various applications. Nevertheless, the predictions provided by SVR are point estimates, whereas in order to take better informed decisions, an uncertainty assessment should also be carried out. For this, we apply bootstrap to SVR so as to obtain confidence and prediction intervals, without having to make any assumption about probability distributions and with good performance even when only a small data set is available. The bootstrapped SVR is first verified on Monte Carlo experiments and then is applied to a real case study concerning the prediction of degradation of a component from the offshore oil industry. The results obtained indicate that the bootstrapped SVR is a promising tool for providing reliable point and interval estimates, which can inform maintenance-related decisions on degrading components. - Highlights: • Bootstrap (pairs/residuals) and SVR are used as an uncertainty analysis framework. • Numerical experiments are performed to assess accuracy and coverage properties. • More bootstrap replications do not significantly improve performance. • Degradation of equipment of offshore oil wells is estimated by bootstrapped SVR. • Estimates of the scale growth rate can support maintenance-related decisions.
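
The pairs-bootstrap idea behind this record can be sketched in a few lines. The paper uses Support Vector Regression; to keep this self-contained we substitute a least-squares line fit via numpy as a stand-in regressor (an assumption of this sketch, clearly not the paper's model): the resampling logic is the same whichever regressor is plugged in.

```python
import numpy as np

def bootstrap_band(x, y, x_new, b=500, conf=0.95, seed=0):
    """Pairs-bootstrap percentile band for the fitted regression curve.

    This captures only model uncertainty (a confidence band); a full
    prediction interval would additionally resample residual noise."""
    rng = np.random.default_rng(seed)
    n = len(x)
    preds = np.empty((b, len(x_new)))
    for i in range(b):
        idx = rng.integers(0, n, n)            # resample (x, y) pairs
        coef = np.polyfit(x[idx], y[idx], 1)   # stand-in for an SVR fit
        preds[i] = np.polyval(coef, x_new)
    return np.quantile(preds, [(1 - conf) / 2, (1 + conf) / 2], axis=0)
```
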

  4. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

    Science.gov (United States)

    Harari, Gil

    2014-01-01

    Statistical significance, also known as the p-value, and the CI (confidence interval) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about the statistical probability and conclusions regarding the clinical significance of study findings. This article describes the two methodologies, compares them, assesses their suitability for the different needs of study-result analysis, and explains situations in which each method should be used.

  5. Statistical intervals a guide for practitioners

    CERN Document Server

    Hahn, Gerald J

    2011-01-01

    Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction with numerous examples and simple math on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on probability of meeting specified threshold value, and prediction intervals to include observation in a future sample. Also has an appendix containing computer subroutines for nonparametric stati
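
As a concrete instance of one interval type listed above, the standard normal-theory prediction interval for a single future observation can be written as follows (a textbook result, not quoted from this book; notation assumed):

```latex
% Two-sided prediction interval for one future observation from a
% normal population, at confidence level 1 - \alpha:
\[
  \bar{x} \;\pm\; t_{1-\alpha/2,\,n-1}\; s\,\sqrt{1 + \tfrac{1}{n}}
\]
% where \bar{x} and s are the mean and standard deviation of the n
% observed values, and t_{1-\alpha/2, n-1} is a Student-t quantile.
% The extra "1" under the square root, absent from the confidence
% interval for the mean, accounts for the noise of the new observation.
```
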

  6. Indirect methods for reference interval determination - review and recommendations.

    Science.gov (United States)

    Jones, Graham R D; Haeckel, Rainer; Loh, Tze Ping; Sikaris, Ken; Streichert, Thomas; Katayev, Alex; Barth, Julian H; Ozarda, Yesim

    2018-04-19

    Reference intervals are a vital part of the information supplied by clinical laboratories to support interpretation of numerical pathology results such as those produced in clinical chemistry and hematology laboratories. The traditional method for establishing reference intervals, known as the direct approach, is based on collecting samples from members of a preselected reference population, making the measurements and then determining the intervals. An alternative approach is to perform analysis of results generated as part of routine pathology testing, using appropriate statistical techniques to determine reference intervals. This is known as the indirect approach. This paper from a working group of the International Federation of Clinical Chemistry (IFCC) Committee on Reference Intervals and Decision Limits (C-RIDL) aims to summarize current thinking on indirect approaches to reference intervals. The indirect approach has some major potential advantages compared with direct methods. The processes are faster, cheaper and do not involve patient inconvenience, discomfort or the risks associated with generating new patient health information. Indirect methods also use the same preanalytical and analytical techniques used for patient management and can provide very large numbers for assessment. Limitations of the indirect methods include possible effects of diseased subpopulations on the derived interval. The IFCC C-RIDL aims to encourage the use of indirect methods to establish and verify reference intervals, to promote publication of such intervals with clear explanation of the process used, and also to support the development of improved statistical techniques for these studies.

  7. Factorial-based response-surface modeling with confidence intervals for optimizing thermal-optical transmission analysis of atmospheric black carbon

    International Nuclear Information System (INIS)

    Conny, J.M.; Norris, G.A.; Gould, T.R.

    2009-01-01

    Thermal-optical transmission (TOT) analysis measures black carbon (BC) in atmospheric aerosol on a fibrous filter. The method pyrolyzes organic carbon (OC) and employs laser light absorption to distinguish BC from the pyrolyzed OC; however, the instrument does not necessarily separate the two physically. In addition, a comprehensive temperature protocol for the analysis based on the Beer-Lambert Law remains elusive. Here, empirical response-surface modeling was used to show how the temperature protocol in TOT analysis can be modified to distinguish pyrolyzed OC from BC based on the Beer-Lambert Law. We determined the apparent specific absorption cross sections for pyrolyzed OC (σ_Char) and BC (σ_BC), which accounted for individual absorption enhancement effects within the filter. Response-surface models of these cross sections were derived from a three-factor central-composite factorial experimental design: temperature and duration of the high-temperature step in the helium phase, and the heating increase in the helium-oxygen phase. The response surface for σ_BC, which varied with instrument conditions, revealed a ridge indicating the correct conditions for OC pyrolysis in helium. The intersection of the σ_BC and σ_Char surfaces indicated the conditions where the cross sections were equivalent, satisfying an important assumption upon which the method relies. 95% confidence interval surfaces defined a confidence region for a range of pyrolysis conditions. Analyses of wintertime samples from Seattle, WA revealed a temperature between 830 °C and 850 °C as most suitable for the helium high-temperature step lasting 150 s. However, a temperature as low as 750 °C could not be rejected statistically.

  8. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

    A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.

  9. A spreadsheet template compatible with Microsoft Excel and iWork Numbers that returns the simultaneous confidence intervals for all pairwise differences between multiple sample means.

    Science.gov (United States)

    Brown, Angus M

    2010-04-01

    The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F, the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
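
The kind of computation the spreadsheet performs can be sketched directly. This is not the template's own method: as an illustrative assumption it uses a Bonferroni adjustment and, for self-containment, a normal quantile in place of the t quantile (a large-sample approximation), with the ANOVA mean squared error pooled across groups.

```python
from itertools import combinations
from statistics import NormalDist, mean
import math

def pairwise_cis(groups, conf=0.95):
    """Bonferroni-adjusted CIs for all pairwise differences of group means.

    `groups` is a list of lists of observations. Uses the pooled
    within-group variance (ANOVA mean squared error)."""
    k = len(groups)
    n_tot = sum(len(g) for g in groups)
    mse = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups) / (n_tot - k)
    m = k * (k - 1) // 2                          # number of comparisons
    z = NormalDist().inv_cdf(1 - (1 - conf) / (2 * m))
    out = {}
    for i, j in combinations(range(k), 2):
        d = mean(groups[i]) - mean(groups[j])
        se = math.sqrt(mse * (1 / len(groups[i]) + 1 / len(groups[j])))
        out[(i, j)] = (d - z * se, d + z * se)
    return out
```

An interval that excludes zero flags the corresponding pair of means as different.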

  10. Confidence interval estimation of the difference between two sensitivities to the early disease stage.

    Science.gov (United States)

    Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili

    2014-03-01

    Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is, normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities on early disease detection between two markers, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. 用Delta法估计多维测验合成信度的置信区间%Estimating the Confidence Interval of Composite Reliability of a Multidimensional Test With the Delta Method

    Institute of Scientific and Technical Information of China (English)

    叶宝娟; 温忠麟

    2012-01-01

    Reliability is very important in evaluating the quality of a test. Based on confirmatory factor analysis, composite reliability is a good index to estimate the test reliability for general applications. As is well known, a point estimate contains limited information about a population parameter and cannot indicate how far it can be from the population parameter. The confidence interval of the parameter can provide more information. In evaluating the quality of a test, the confidence interval of composite reliability has received attention in recent years. There are three approaches to estimating the confidence interval of composite reliability of a unidimensional test: the Bootstrap method, the Delta method, and the direct use of the standard error of a software output (e.g., LISREL). The Bootstrap method provides empirical results of the standard error, and is the most credible method. But it needs data simulation techniques, and its computation process is rather complex. The Delta method computes the standard error of composite reliability by approximate calculation. It is simpler than the Bootstrap method. The LISREL software can directly prompt the standard error, and it is the easiest among the three methods. By simulation study, it had been found that the interval estimates obtained by the Delta method and the Bootstrap method were almost identical, whereas the results obtained by LISREL and by the Bootstrap method were substantially different (Ye & Wen, 2011). The Delta method is recommended when the confidence interval of composite reliability of a unidimensional test is estimated, because the Delta method is simpler than the Bootstrap method. There was little research about how to compute the confidence interval of composite reliability of a multidimensional test. We deduced a formula by using the Delta method for computing the standard error of composite reliability of a multidimensional test. Based on the standard error, the

  12. PCA-based bootstrap confidence interval tests for gene-disease association involving multiple SNPs

    Directory of Open Access Journals (Sweden)

    Xue Fuzhong

    2010-01-01

    Background: Genetic association study is currently the primary vehicle for identification and characterization of disease-predisposing variant(s), which usually involves multiple single-nucleotide polymorphisms (SNPs). However, SNP-wise association tests raise concerns over multiple testing. Haplotype-based methods have the advantage of being able to account for correlations between neighbouring SNPs, yet assuming Hardy-Weinberg equilibrium (HWE) and a potentially large number of degrees of freedom can harm their statistical power and robustness. Approaches based on principal component analysis (PCA) are preferable in this regard, but their performance varies with the method of extracting principal components (PCs). Results: A PCA-based bootstrap confidence interval test (PCA-BCIT), which directly uses the PC scores to assess gene-disease association, was developed and evaluated for three ways of extracting PCs, i.e., cases only (CAES), controls only (COES), and cases and controls combined (CES). Extraction of PCs with COES is preferred to that with CAES and CES. Performance of the test was examined via simulations as well as analyses of data on rheumatoid arthritis and heroin addiction; the test maintains the nominal level under the null hypothesis and showed performance comparable with the permutation test. Conclusions: PCA-BCIT is a valid and powerful method for assessing gene-disease association involving multiple SNPs.

  13. Determination of post-burial interval using entomology: A review.

    Science.gov (United States)

    Singh, Rajinder; Sharma, Sahil; Sharma, Arun

    2016-08-01

    Insects and other arthropods are used in different matters pertinent to the criminal justice system, as they play a very important role in the decomposition of cadavers. They are used as evidence in criminal investigations to determine the post mortem interval (PMI). Various research and review articles are available on forensic entomology for determining the PMI in the terrestrial environment, but much less work has been reported in the context of buried bodies. Burying the carcass is one of the methods used by criminals to conceal the crime. So, to draw the attention of researchers toward this growing field and to help various investigating agencies, the present paper reviews the studies done on determination of the post-burial interval (PBI), its importance and future prospects. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  14. Perpetrator admissions and earwitness renditions: the effects of retention interval and rehearsal on accuracy of and confidence in memory for criminal accounts

    OpenAIRE

    Boydell, Carroll

    2008-01-01

    While much research has explored how well earwitnesses can identify the voice of a perpetrator, little research has examined how well they can recall details from a perpetrator’s confession. This study examines the accuracy-confidence correlation for memory for details from a perpetrator’s verbal account of a crime, as well as the effects of two variables commonly encountered in a criminal investigation (rehearsal and length of retention interval) on that correlation. Results suggest that con...

  15. nigerian students' self-confidence in responding to statements

    African Journals Online (AJOL)

    Temechegn

    Altogether the test is made up of 40 items covering students' ability to recall definition ... confidence interval within which student have confidence in their choice of the .... is mentioned these equilibrium systems come to memory of the learner.

  16. Recurrence determinism and Li-Yorke chaos for interval maps

    OpenAIRE

    Špitalský, Vladimír

    2017-01-01

    Recurrence determinism, one of the fundamental characteristics of recurrence quantification analysis, measures predictability of a trajectory of a dynamical system. It is tightly connected with the conditional probability that, given a recurrence, following states of the trajectory will be recurrences. In this paper we study recurrence determinism of interval dynamical systems. We show that recurrence determinism distinguishes three main types of $\omega$-limit sets of zero entropy maps: fini...

  17. Correct Bayesian and frequentist intervals are similar

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1986-01-01

    This paper argues that Bayesians and frequentists will normally reach numerically similar conclusions when dealing with vague data or sparse data. It is shown that both statistical methodologies can deal reasonably with vague data. With sparse data, in many important practical cases Bayesian interval estimates and frequentist confidence intervals are approximately equal, although with discrete data the frequentist intervals are somewhat longer. This is not to say that the two methodologies are equally easy to use: the construction of a frequentist confidence interval may require new theoretical development, while Bayesian methods typically require numerical integration, perhaps over many variables. Also, Bayesians can easily fall into the trap of over-optimism about their amount of prior knowledge. But in cases where both intervals are found correctly, the two intervals are usually not very different. (orig.)

  18. A note on Nonparametric Confidence Interval for a Shift Parameter ...

    African Journals Online (AJOL)

    The method is illustrated using the Cauchy distribution as a location model. The kernel-based method is found to have a shorter interval for the shift parameter between two Cauchy distributions than the one based on the Mann-Whitney test statistic. Keywords: Best Asymptotic Normal; Cauchy distribution; Kernel estimates; ...

  19. Method and system for assigning a confidence metric for automated determination of optic disc location

    Science.gov (United States)

    Karnowski, Thomas P [Knoxville, TN; Tobin, Jr., Kenneth W.; Muthusamy Govindasamy, Vijaya Priya [Knoxville, TN; Chaum, Edward [Memphis, TN

    2012-07-10

    A method for assigning a confidence metric for automated determination of optic disc location that includes analyzing a retinal image and determining at least two sets of coordinates locating an optic disc in the retinal image. The sets of coordinates can be determined using first and second image analysis techniques that are different from one another. An accuracy parameter can be calculated and compared to a primary risk cut-off value. A high confidence level can be assigned to the retinal image if the accuracy parameter is less than the primary risk cut-off value and a low confidence level can be assigned to the retinal image if the accuracy parameter is greater than the primary risk cut-off value. The primary risk cut-off value being selected to represent an acceptable risk of misdiagnosis of a disease having retinal manifestations by the automated technique.

  20. The prognostic value of the QT interval and QT interval dispersion in all-cause and cardiac mortality and morbidity in a population of Danish citizens.

    Science.gov (United States)

    Elming, H; Holm, E; Jun, L; Torp-Pedersen, C; Køber, L; Kircshoff, M; Malik, M; Camm, J

    1998-09-01

    To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality, as well as in cardiac morbidity, in a general population. The QT interval was measured in all leads from a standard 12-lead ECG in a random sample of 1658 women and 1797 men aged 30-60 years. QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads. All cause mortality over 13 years, and cardiovascular mortality as well as cardiac morbidity over 11 years, were the main outcome parameters. Subjects with a prolonged QT interval (430 ms or more) or prolonged QT interval dispersion (80 ms or more) were at higher risk of cardiovascular death and cardiac morbidity than subjects whose QT interval was less than 360 ms, or whose QT interval dispersion was less than 30 ms. Cardiovascular death relative risk ratios, adjusted for age, gender, myocardial infarct, angina pectoris, diabetes mellitus, arterial hypertension, smoking habits, serum cholesterol level, and heart rate were 2.9 for the QT interval (95% confidence interval 1.1-7.8) and 4.4 for QT interval dispersion (95% confidence interval 1.0-19.1). Fatal and non-fatal cardiac morbidity relative risk ratios were similar, at 2.7 (95% confidence interval 1.4-5.5) for the QT interval and 2.2 (95% confidence interval 1.1-4.0) for QT interval dispersion. Prolongation of the QT interval and QT interval dispersion independently affected the prognosis of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years.

  1. Asymptotics for the Fredholm Determinant of the Sine Kernel on a Union of Intervals

    OpenAIRE

    Widom, Harold

    1994-01-01

    In the bulk scaling limit of the Gaussian Unitary Ensemble of Hermitian matrices the probability that an interval of length $s$ contains no eigenvalues is the Fredholm determinant of the sine kernel $\sin(x-y)/\pi(x-y)$ over this interval. A formal asymptotic expansion for the determinant as $s$ tends to infinity was obtained by Dyson. In this paper we replace a single interval of length $s$ by $sJ$ where $J$ is a union of $m$ intervals and present a proof of the asymptotics up to second ...

  2. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Parameter estimation with confidence intervals and statistical hypothesis testing are both used in statistical analysis to draw conclusions about a population from an extracted sample. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained when using confidence intervals and hypothesis tests. While statistical hypothesis testing gives only a "yes" or "no" answer to a question of statistical estimation, confidence intervals provide more information than a test statistic: they show the high degree of uncertainty arising from small samples and from findings in the "marginally significant" or "almost significant" range (p very close to 0.05).

  3. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    Science.gov (United States)

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…

  4. AlphaCI: a computer program for computing confidence intervals around Cronbach's alpha coefficient

    Directory of Open Access Journals (Sweden)

    Rubén Ledesma

    2004-06-01

    Cronbach's alpha coefficient is the most popular way of estimating reliability in measurement scales based on Classical Test Theory. When estimating it, researchers usually omit to report confidence intervals for the coefficient, although doing so is not only recommended by experts but also explicitly required by the editorial guidelines of some journals. This situation arises because the different methods of estimating confidence intervals are not well known by researchers and are not available in the most popular statistical packages. Therefore, this paper describes a computer program integrated into the ViSta statistical system which allows computing confidence intervals based on the classical approach and using the bootstrap technique. It is hoped that this work promotes the inclusion of confidence intervals for reliability measures by increasing the availability of the required computer tools. The program is free and can be obtained by sending a request email to the author.

  5. Confidence limits for parameters of Poisson and binomial distributions

    International Nuclear Information System (INIS)

    Arnett, L.M.

    1976-04-01

    The confidence limits for the frequency in a Poisson process and for the proportion of successes in a binomial process were calculated and tabulated for the situations in which the observed values of the frequency or proportion and an a priori distribution of these parameters are available. Methods are used that produce limits with exactly the stated confidence levels. The confidence interval [a, b] is calculated so that Pr[a ≤ λ ≤ b | c, μ] equals the stated confidence level, where c is the observed value of the parameter, and μ is the a priori hypothesis of the distribution of this parameter. A Bayesian-type analysis is used. The intervals calculated are narrower and appreciably different from results, known to be conservative, that are often used in problems of this type. Pearson and Hartley recognized the characteristics of their methods and contemplated that exact methods could someday be used. The calculation of the exact intervals requires involved numerical analyses readily implemented only on digital computers, which were not available to Pearson and Hartley. A Monte Carlo experiment was conducted to verify a selected interval from those calculated. This numerical experiment confirmed the results of the analytical methods and the prediction of Pearson and Hartley that their published tables give conservative results.
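
For contrast with the Bayesian-type limits of this record, the classical conservative ("exact") frequentist interval for a Poisson mean can be computed by numerically inverting the Poisson tail probabilities. This sketch is a standard construction, not the paper's method; it uses no a priori distribution.

```python
import math

def pois_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k + 1))

def exact_poisson_ci(c, conf=0.95):
    """Conservative two-sided limits for lambda given an observed count c."""
    alpha = 1 - conf

    def solve(f, lo, hi):
        # bisection: f is True below the crossing point, False above it
        for _ in range(100):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if f(mid) else (lo, mid)
        return (lo + hi) / 2

    # lower limit solves P(X >= c | lam) = alpha/2 (defined as 0 when c == 0)
    lower = 0.0 if c == 0 else solve(
        lambda l: 1 - pois_cdf(c - 1, l) < alpha / 2, 0.0, c + 1.0)
    # upper limit solves P(X <= c | lam) = alpha/2
    upper = solve(lambda l: pois_cdf(c, l) > alpha / 2, float(c), 5.0 * c + 10.0)
    return lower, upper
```

For an observed count of 10 this reproduces the familiar tabulated 95% limits of about (4.80, 18.39), the kind of conservative values the abstract says are often used in problems of this type.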

  6. QT interval in healthy dogs: which method of correcting the QT interval in dogs is appropriate for use in small animal clinics?

    Directory of Open Access Journals (Sweden)

    Maira S. Oliveira

    2014-05-01

    The electrocardiographic (ECG) QT interval is influenced by fluctuations in heart rate (HR), which may lead to misinterpretation of its length. Considering that alterations in QT interval length reflect abnormalities of the ventricular repolarisation which predispose to the occurrence of arrhythmias, this variable must be properly evaluated. The aim of this work is to determine which method of correcting the QT interval is the most appropriate for dogs regarding different ranges of normal HR (different breeds). Healthy adult dogs (n=130; German Shepherd, Boxer, Pit Bull Terrier, and Poodle) were submitted to ECG examination and QT intervals were determined in triplicate from the bipolar limb lead II and corrected for the effects of HR through the application of three published formulae involving quadratic, cubic or linear regression. The mean corrected QT values (QTc) obtained using the diverse formulae were significantly different (p<0.05), while those derived according to the equation QTcV = QT + 0.087(1 − RR) were the most consistent (linear regression). QTcV values were strongly correlated (r=0.83) with the QT interval and showed a coefficient of variation of 8.37% and a 95% confidence interval of 0.22-0.23 s. Owing to its simplicity and reliability, QTcV was considered the most appropriate to be used for the correction of the QT interval in dogs.
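
The linear correction favoured by this study, QTcV = QT + 0.087(1 − RR), is a one-line computation. The sketch below assumes QT and RR are expressed in seconds (with RR derived from heart rate as 60/HR); function names and the example values are hypothetical.

```python
def qtc_van_de_water(qt_s, hr_bpm):
    """Linear HR correction of the QT interval: QTc = QT + 0.087*(1 - RR).

    qt_s: measured QT interval in seconds; hr_bpm: heart rate in beats/min.
    At HR = 60 bpm (RR = 1 s) the correction leaves QT unchanged."""
    rr_s = 60.0 / hr_bpm
    return qt_s + 0.087 * (1.0 - rr_s)
```
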

  7. Determinants and consequences of short birth interval in rural Bangladesh: A cross-sectional study

    NARCIS (Netherlands)

    H.R. de Jonge (Hugo); K. Azad (Kishwar); N. Seward (Nadine); A. Kuddus (Abdul); S. Shaha (Sanjit); J. Beard (James); A. Costello (Anthony); A.J. Houweling (Tanja); E. Fottrell (Edward)

    2014-01-01

    Background: Short birth intervals are known to have negative effects on pregnancy outcomes. We analysed data from a large population surveillance system in rural Bangladesh to identify predictors of short birth interval and determine consequences of short intervals on pregnancy outcomes.

  8. R package to estimate intracluster correlation coefficient with confidence interval for binary data.

    Science.gov (United States)

    Chakraborty, Hrishikesh; Hossain, Akhtar

    2018-03-01

    The Intracluster Correlation Coefficient (ICC) is a major parameter of interest in cluster randomized trials that measures the degree to which responses within the same cluster are correlated. Several types of ICC estimators and confidence intervals (CI) have been suggested in the literature for binary data. Studies have compared the relative weaknesses and advantages of ICC estimators as well as of their CIs for binary data and suggested situations where one is advantageous in practical research. The commonly used statistical computing systems currently facilitate estimation of only a very few variants of the ICC and its CI. To address the limitations of current statistical packages, we developed an R package, ICCbin, to facilitate estimating the ICC and its CI for binary responses using different methods. The ICCbin package is designed to provide estimates of the ICC in 16 different ways, including analysis of variance methods, moments based estimation, direct probabilistic methods, correlation based estimation, and a resampling method. The CI of the ICC is estimated using 5 different methods. It also generates clustered binary data with an exchangeable correlation structure. The ICCbin package provides two functions for users. The function rcbin() generates clustered binary data and the function iccbin() estimates the ICC and its CI. Users can choose the appropriate ICC and CI estimate from the wide selection of estimates in the outputs. The R package ICCbin presents flexible and easy-to-use ways to generate clustered binary data and to estimate the ICC and its CI for binary responses using different methods. The package ICCbin is freely available for use with R from the CRAN repository (https://cran.r-project.org/package=ICCbin). We believe that this package can be a very useful tool for researchers designing cluster randomized trials with binary outcomes. Copyright © 2017 Elsevier B.V. All rights reserved.
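
Although ICCbin itself is an R package, the classical one-way ANOVA estimator of the ICC (one of the estimator families the package covers) is easy to sketch in Python. This is an illustrative re-implementation under that assumption, not the package's code; `clusters` is a list of lists of 0/1 responses.

```python
def icc_anova(clusters):
    """One-way ANOVA estimator of the ICC for clustered binary data."""
    k = len(clusters)
    N = sum(len(c) for c in clusters)
    grand = sum(sum(c) for c in clusters) / N
    # between- and within-cluster mean squares
    msb = sum(len(c) * (sum(c) / len(c) - grand) ** 2 for c in clusters) / (k - 1)
    msw = sum(sum((x - sum(c) / len(c)) ** 2 for x in c) for c in clusters) / (N - k)
    # average cluster size adjusted for unequal sizes
    n0 = (N - sum(len(c) ** 2 for c in clusters) / N) / (k - 1)
    return (msb - msw) / (msb + (n0 - 1) * msw)
```

Perfectly homogeneous clusters give an ICC of 1; clusters whose members disagree maximally drive the estimate negative.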

  9. INEMO: Distributed RF-Based Indoor Location Determination with Confidence Indicator

    Directory of Open Access Journals (Sweden)

    Youxian Sun

    2007-12-01

    Full Text Available Using radio signal strength (RSS) for localization in sensor networks is an attractive method, since it provides range indication at low cost. In this paper, we present a two-tier distributed approach for RF-based indoor location determination. Our approach, namely INEMO, provides positioning accuracy of room granularity and office-cube granularity. A target can first give a room-granularity request, and the background anchor nodes cooperate to accomplish the positioning process. Anchors in the same room can give cube granularity if the target requires further accuracy. Fixed anchor nodes keep monitoring the status of nearby anchors, and local reference matching is used to support room separation. Furthermore, we utilize the RSS difference to infer the positioning confidence. The simulation results demonstrate the efficiency of the proposed RF-based indoor location determination.

  10. Confidence bands for inverse regression models

    International Nuclear Information System (INIS)

    Birke, Melanie; Bissantz, Nicolai; Holzmann, Hajo

    2010-01-01

    We construct uniform confidence bands for the regression function in inverse, homoscedastic regression models with convolution-type operators. Here, the convolution is between two non-periodic functions on the whole real line rather than between two periodic functions on a compact interval, since the former situation arguably arises more often in applications. First, following Bickel and Rosenblatt (1973 Ann. Stat. 1 1071–95) we construct asymptotic confidence bands which are based on strong approximations and on a limit theorem for the supremum of a stationary Gaussian process. Further, we propose bootstrap confidence bands based on the residual bootstrap and prove consistency of the bootstrap procedure. A simulation study shows that the bootstrap confidence bands perform reasonably well for moderate sample sizes. Finally, we apply our method to data from a gel electrophoresis experiment with genetically engineered neuronal receptor subunits incubated with rat brain extract.

  11. Asymptotics for the Fredholm determinant of the sine kernel on a union of intervals

    Science.gov (United States)

    Widom, Harold

    1995-07-01

    In the bulk scaling limit of the Gaussian Unitary Ensemble of hermitian matrices the probability that an interval of length s contains no eigenvalues is the Fredholm determinant of the sine kernel{sin (x - y)}/{π (x - y)} over this interval. A formal asymptotic expansion for the determinant as s tends to infinity was obtained by Dyson. In this paper we replace a single interval of length s by sJ, where J is a union of m intervals and present a proof of the asymptotics up to second order. The logarithmic derivative with respect to s of the determinant equals a constant (expressible in terms of hyperelliptic integrals) times s, plus a bounded oscillatory function of s (zero if m=1, periodic if m=2, and in general expressible in terms of the solution of a Jacobi inversion problem), plus o(1). Also determined are the asymptotics of the trace of the resolvent operator, which is the ratio in the same model of the probability that the set contains exactly one eigenvalue to the probability that it contains none. The proofs use ideas from orthogonal polynomial theory.
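
    For moderate s the determinant itself is easy to evaluate numerically, which makes asymptotic statements like the above concrete. The sketch below is not the paper's method; it uses Nyström discretisation of the sine kernel on Gauss–Legendre nodes (Bornemann's quadrature approach, stated here as an assumption) to approximate the gap probability det(I − K) on (0, s), which for small s behaves like 1 − s/π.

```python
import numpy as np

def gap_probability(s, n=40):
    """Probability that (0, s) contains no bulk-scaled GUE eigenvalue:
    the Fredholm determinant det(I - K) of the sine kernel on (0, s),
    approximated by Nystrom discretisation on Gauss-Legendre nodes."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * s * (nodes + 1)          # map (-1, 1) -> (0, s)
    w = 0.5 * s * weights
    d = x[:, None] - x[None, :]
    K = np.sinc(d / np.pi) / np.pi     # sin(d)/(pi*d); equals 1/pi on diagonal
    # Symmetrised discretisation: det(I - W K) = det(I - sqrt(W) K sqrt(W)).
    return np.linalg.det(np.eye(n) - np.sqrt(np.outer(w, w)) * K)

print(gap_probability(0.1), gap_probability(1.0), gap_probability(2.0))
```

    As a sanity check, the small-s expansion gives gap_probability(0.1) ≈ 1 − 0.1/π, and the probability decays monotonically as the interval grows.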

  12. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

  13. Estimating confidence intervals in predicted responses for oscillatory biological models.

    Science.gov (United States)

    St John, Peter C; Doyle, Francis J

    2013-07-29

    The dynamics of gene regulation play a crucial role in cellular control: allowing the cell to express the right proteins to meet changing needs. Some needs, such as correctly anticipating the day-night cycle, require complicated oscillatory features. In the analysis of gene regulatory networks, mathematical models are frequently used to understand how a network's structure enables it to respond appropriately to external inputs. These models typically consist of a set of ordinary differential equations, describing a network of biochemical reactions, and unknown kinetic parameters, chosen such that the model best captures experimental data. However, since a model's parameter values are uncertain, and since dynamic responses to inputs are highly parameter-dependent, it is difficult to assess the confidence associated with these in silico predictions. In particular, models with complex dynamics - such as oscillations - must be fit with computationally expensive global optimization routines, and cannot take advantage of existing measures of identifiability. Despite their difficulty to model mathematically, limit cycle oscillations play a key role in many biological processes, including cell cycling, metabolism, neuron firing, and circadian rhythms. In this study, we employ an efficient parameter estimation technique to enable a bootstrap uncertainty analysis for limit cycle models. Since the primary role of systems biology models is the insight they provide on responses to rate perturbations, we extend our uncertainty analysis to include first order sensitivity coefficients. Using a literature model of circadian rhythms, we show how predictive precision is degraded with decreasing sample points and increasing relative error. Additionally, we show how this method can be used for model discrimination by comparing the output identifiability of two candidate model structures to published literature data. Our method permits modellers of oscillatory systems to confidently ...

  14. Confidence bounds for normal and lognormal distribution coefficients of variation

    Science.gov (United States)

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

  15. Demographic and Socio-economic Determinants of Birth Interval Dynamics in Manipur: A Survival Analysis

    Directory of Open Access Journals (Sweden)

    Sanajaoba Singh N,

    2011-01-01

    Full Text Available The birth interval is a major determinant of levels of fertility in high fertility populations. A house-to-house survey of 1225 women in Manipur, a tiny state in North Eastern India, was carried out to investigate birth interval patterns and their determinants. Using survival analysis, among the nine explanatory variables of interest, only three factors – infant mortality, lactation and use of contraceptive devices – have a highly significant effect (P<0.01) on the duration of birth interval, and only three factors – age at marriage of wife, parity and sex of child – are found to be significant (P<0.05) on the duration variable.

  16. A computer program (COSTUM) to calculate confidence intervals for in situ stress measurements. V. 1

    International Nuclear Information System (INIS)

    Dzik, E.J.; Walker, J.R.; Martin, C.D.

    1989-03-01

    The state of in situ stress is one of the parameters required both for the design and analysis of underground excavations and for the evaluation of numerical models used to simulate underground conditions. To account for the variability and uncertainty of in situ stress measurements, it is desirable to apply confidence limits to measured stresses. Several measurements of the state of stress along a borehole are often made to estimate the average state of stress at a point. Since stress is a tensor, calculating the mean stress and confidence limits using scalar techniques is inappropriate as well as incorrect. A computer program has been written to calculate and present the mean principal stresses and the confidence limits for the magnitudes and directions of the mean principal stresses. This report describes the computer program, COSTUM.

  17. Understanding Consumer Confidence in the Safety of Food: Its Two-Dimensional Structure and Determinants

    NARCIS (Netherlands)

    Jonge, de J.; Trijp, van J.C.M.; Renes, R.J.; Frewer, L.J.

    2007-01-01

    Understanding of the determinants of consumer confidence in the safety of food is important if effective risk management and communication are to be developed. In the research reported here, we attempt to understand the roles of consumer trust in actors in the food chain and regulators, consumer ...

  18. Planning an Availability Demonstration Test with Consideration of Confidence Level

    Directory of Open Access Journals (Sweden)

    Frank Müller

    2017-08-01

    Full Text Available The full service life of a technical product or system is usually not completed after an initial failure. With appropriate measures, the system can be returned to a functional state. Availability is an important parameter for evaluating such repairable systems: Failure and repair behaviors are required to determine this availability. These data are usually given as mean value distributions with a certain confidence level. Consequently, the availability value also needs to be expressed with a confidence level. This paper first highlights the bootstrap Monte Carlo simulation (BMCS for availability demonstration and inference with confidence intervals based on limited failure and repair data. The BMCS enables point-, steady-state and average availability to be determined with a confidence level based on the pure samples or mean value distributions in combination with the corresponding sample size of failure and repair behavior. Furthermore, the method enables individual sample sizes to be used. A sample calculation of a system with Weibull-distributed failure behavior and a sample of repair times is presented. Based on the BMCS, an extended, new procedure is introduced: the “inverse bootstrap Monte Carlo simulation” (IBMCS to be used for availability demonstration tests with consideration of confidence levels. The IBMCS provides a test plan comprising the required number of failures and repair actions that must be observed to demonstrate a certain availability value. The concept can be applied to each type of availability and can also be applied to the pure samples or distribution functions of failure and repair behavior. It does not require special types of distribution. In other words, for example, a Weibull, a lognormal or an exponential distribution can all be considered as distribution functions of failure and repair behavior. After presenting the IBMCS, a sample calculation will be carried out and the potential of the BMCS and the IBMCS
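
    The BMCS pairs sampled failure and repair behaviour with resampling to attach a confidence level to an availability figure. The sketch below is a heavily simplified illustration of that core idea only (not the paper's BMCS or IBMCS procedures; the data and distribution parameters are invented): it bootstraps a one-sided lower confidence bound on steady-state availability from limited failure and repair samples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical field data: times to failure and repair durations (hours).
ttf = rng.weibull(1.5, size=40) * 1000.0   # Weibull-distributed failure behaviour
ttr = rng.lognormal(2.0, 0.5, size=40)     # lognormal repair times

def availability(ttf, ttr):
    """Steady-state availability = MTTF / (MTTF + MTTR)."""
    return ttf.mean() / (ttf.mean() + ttr.mean())

# Bootstrap resampling of both samples to attach a confidence level.
boot = np.array([
    availability(rng.choice(ttf, ttf.size), rng.choice(ttr, ttr.size))
    for _ in range(5000)
])
point = availability(ttf, ttr)
lower = np.quantile(boot, 0.05)   # one-sided 95% lower confidence bound
print(f"A = {point:.4f}, 95% lower confidence bound = {lower:.4f}")
```

    A demonstration test in this spirit would then ask how many failures and repairs must be observed before the lower bound clears the availability target.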

  19. Thought confidence as a determinant of persuasion: the self-validation hypothesis.

    Science.gov (United States)

    Petty, Richard E; Briñol, Pablo; Tormala, Zakary L

    2002-05-01

    Previous research in the domain of attitude change has described 2 primary dimensions of thinking that impact persuasion processes and outcomes: the extent (amount) of thinking and the direction (valence) of issue-relevant thought. The authors examined the possibility that another, more meta-cognitive aspect of thinking is also important-the degree of confidence people have in their own thoughts. Four studies test the notion that thought confidence affects the extent of persuasion. When positive thoughts dominate in response to a message, increasing confidence in those thoughts increases persuasion, but when negative thoughts dominate, increasing confidence decreases persuasion. In addition, using self-reported and manipulated thought confidence in separate studies, the authors provide evidence that the magnitude of the attitude-thought relationship depends on the confidence people have in their thoughts. Finally, the authors also show that these self-validation effects are most likely in situations that foster high amounts of information processing activity.

  20. Confidence bounds for nonlinear dose-response relationships

    DEFF Research Database (Denmark)

    Baayen, C; Hougaard, P

    2015-01-01

    An important aim of drug trials is to characterize the dose-response relationship of a new compound. Such a relationship can often be described by a parametric (nonlinear) function that is monotone in dose. If such a model is fitted, it is useful to know the uncertainty of the fitted curve...... intervals for the dose-response curve. These confidence bounds have better coverage than Wald intervals and are more precise and generally faster than bootstrap methods. Moreover, if monotonicity is assumed, the profile likelihood approach takes this automatically into account. The approach is illustrated...

  1. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper. PMID:25878958

  2. Resampling Approach for Determination of the Method for Reference Interval Calculation in Clinical Laboratory Practice▿

    Science.gov (United States)

    Pavlov, Igor Y.; Wilson, Andrew R.; Delgado, Julio C.

    2010-01-01

    Reference intervals (RI) play a key role in clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation—parametric, transformed parametric, and quantile-based bootstrapping—were applied to multiple random samples drawn from 81 values of complement factor B observations and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could be up to 20% and even more. The transformed parametric method was found to be the best method for the calculation of RI of non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For a simulated Gaussian population, parametric calculations, as expected, were the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach could help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included. PMID:20554803
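
    The size of the method-to-method differences reported above is easy to reproduce on skewed data. The sketch below (an illustration with a simulated analyte, not the study's factor B data) computes a 95% reference interval three ways and bootstraps one of the limits, the kind of resampling comparison the abstract formalises.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=0.5, size=300)   # skewed "analyte" values

# Parametric RI: assumes normality of the raw data (violated here).
lo_p, hi_p = x.mean() - 1.96 * x.std(ddof=1), x.mean() + 1.96 * x.std(ddof=1)

# Transformed parametric RI: normality after a log transform.
m, s = np.log(x).mean(), np.log(x).std(ddof=1)
lo_t, hi_t = np.exp(m - 1.96 * s), np.exp(m + 1.96 * s)

# Nonparametric RI: central 95% of the empirical distribution.
lo_n, hi_n = np.quantile(x, [0.025, 0.975])

# Bootstrap the transformed-parametric lower limit to gauge its stability.
boot_t = [np.exp(np.log(b).mean() - 1.96 * np.log(b).std(ddof=1))
          for b in (rng.choice(x, x.size) for _ in range(2000))]
print(lo_p, lo_t, lo_n, np.percentile(boot_t, [25, 75]))
```

    On data like these the plain parametric lower limit falls well below the other two, illustrating why the choice of method can shift the RI by 20% or more.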

  3. The proximate determinants of fertility and birth intervals in Egypt: An application of calendar data

    Directory of Open Access Journals (Sweden)

    Andrew Hinde

    2007-01-01

    Full Text Available In this paper we use calendar data from the 2000 Egyptian Demographic and Health Survey (DHS) to assess the determinants of birth interval length among women who are in union. We make use of the well-known model of the proximate determinants of fertility, and take advantage of the fact that the DHS calendar data provide month-by-month data on contraceptive use, breastfeeding and post-partum amenorrhoea, which are the most important proximate determinants among women in union. One aim of the analysis is to see whether the calendar data are sufficiently detailed to account for all variation among individual women in birth interval duration, in that once they are controlled, the effect of background social, economic and cultural variables is not statistically significant. The results suggest that this is indeed the case, especially after a random effect term to account for the unobserved proximate determinants is included in the model. Birth intervals are determined mainly by the use of modern methods of contraception (the IUD being more effective than the pill). Breastfeeding and post-partum amenorrhoea both inhibit conception, and the effect of breastfeeding remains even after the period of amenorrhoea has ended.

  4. Graphical interpretation of confidence curves in rankit plots

    DEFF Research Database (Denmark)

    Hyltoft Petersen, Per; Blaabjerg, Ole; Andersen, Marianne

    2004-01-01

    A well-known transformation from the bell-shaped Gaussian (normal) curve to a straight line in the rankit plot is investigated, and a tool for evaluation of the distribution of reference groups is presented. It is based on the confidence intervals for percentiles of the calculated Gaussian distri...
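
    The transformation the abstract refers to can be reproduced directly: plot the ordered observations against their rankits (expected normal order statistics); Gaussian reference values then fall on a straight line whose slope and intercept estimate the SD and mean. The sketch below uses simulated reference values and Blom's approximation for the rankits, both assumptions of this illustration rather than details taken from the paper.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(4)
x = np.sort(rng.normal(100.0, 10.0, 200))   # simulated reference-group values

# Rankits: approximate expected normal order statistics,
# Blom's formula (i - 3/8) / (n + 1/4).
n = x.size
rankits = np.array([NormalDist().inv_cdf((i - 0.375) / (n + 0.25))
                    for i in range(1, n + 1)])

# For Gaussian data the rankit plot is close to a straight line whose
# intercept and slope estimate the mean and SD; curvature flags non-normality.
slope, intercept = np.polyfit(rankits, x, 1)
straightness = np.corrcoef(rankits, x)[0, 1]
print(slope, intercept, straightness)
```

    Confidence curves for percentiles, as in the paper, would then be drawn around this fitted line.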

  5. Parents' obesity-related behavior and confidence to support behavioral change in their obese child: data from the STAR study.

    Science.gov (United States)

    Arsenault, Lisa N; Xu, Kathleen; Taveras, Elsie M; Hacker, Karen A

    2014-01-01

    Successful childhood obesity interventions frequently focus on behavioral modification and involve parents or family members. Parental confidence in supporting behavior change may be an element of successful family-based prevention efforts. We aimed to determine whether parents' own obesity-related behaviors were related to their confidence in supporting their child's achievement of obesity-related behavioral goals. Cross-sectional analyses of data collected at baseline of a randomized control trial testing a treatment intervention for obese children (n = 787) in primary care settings (n = 14). Five obesity-related behaviors (physical activity, screen time, sugar-sweetened beverage, sleep duration, fast food) were self-reported by parents for themselves and their child. Behaviors were dichotomized on the basis of achievement of behavioral goals. Five confidence questions asked how confident the parent was in helping their child achieve each goal. Logistic regression modeling high confidence was conducted with goal achievement and demographics as independent variables. Parents achieving physical activity or sleep duration goals were significantly more likely to be highly confident in supporting their child's achievement of those goals (physical activity, odds ratio 1.76; 95% confidence interval 1.19-2.60; sleep, odds ratio 1.74; 95% confidence interval 1.09-2.79) independent of sociodemographic variables and child's current behavior. Parental achievements of TV watching and fast food goals were also associated with confidence, but significance was attenuated after child's behavior was included in models. Parents' own obesity-related behaviors are factors that may affect their confidence to support their child's behavior change. Providers seeking to prevent childhood obesity should address parent/family behaviors as part of their obesity prevention strategies. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
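
    The adjusted odds ratios above come from logistic regression; for a single 2×2 table the same kind of 95% confidence interval can be computed by hand with the Woolf (log-odds) method. The counts below are invented for illustration and are not the STAR study data.

```python
import math

# Hypothetical 2x2 table: parents meeting vs not meeting an activity goal,
# cross-tabulated against high vs low confidence (counts are made up).
a, b = 50, 50    # goal met:     high confidence, low confidence
c, d = 30, 70    # goal not met: high confidence, low confidence

odds_ratio = (a * d) / (b * c)
# Woolf interval: normal approximation on the log-odds scale.
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# -> OR = 2.33, 95% CI 1.31-4.17
```

    An interval excluding 1, as here, corresponds to the significant associations reported in the abstract.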

  6. Confidence intervals for effect sizes: compliance and clinical significance in the Journal of Consulting and clinical Psychology.

    Science.gov (United States)

    Odgaard, Eric C; Fowler, Robert L

    2010-06-01

    In 2005, the Journal of Consulting and Clinical Psychology (JCCP) became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest editorial effort to improve statistical reporting practices in any APA journal in at least a decade, in this article we investigate the efficacy of that change. All intervention studies published in JCCP in 2003, 2004, 2007, and 2008 were reviewed. Each article was coded for method of clinical significance, type of ES, and type of associated CI, broken down by statistical test (F, t, chi-square, r/R(2), and multivariate modeling). By 2008, clinical significance compliance was 75% (up from 31%), with 94% of studies reporting some measure of ES (reporting improved for individual statistical tests ranging from eta(2) = .05 to .17, with reasonable CIs). Reporting of CIs for ESs also improved, although only to 40%. Also, the vast majority of reported CIs used approximations, which become progressively less accurate for smaller sample sizes and larger ESs (cf. Algina & Keselman, 2003). Changes are near asymptote for ESs and clinical significance, but CIs lag behind. As CIs for ESs are required for primary outcomes, we show how to compute CIs for the vast majority of ESs reported in JCCP, with an example of how to use CIs for ESs as a method to assess clinical significance.

  7. Confidence limits for regional cerebral blood flow values obtained with circular positron system, using krypton-77

    International Nuclear Information System (INIS)

    Meyer, E.; Yamamoto, Y.L.; Thompson, C.J.

    1978-01-01

    The 90% confidence limits have been determined for regional cerebral blood flow (rCBF) values obtained in each cm² of a cross section of the human head after inhalation of radioactive krypton-77, using the MNI circular positron emission tomography system (Positome). CBF values for small brain tissue elements are calculated by linear regression analysis on the semi-logarithmically transformed clearance curve. A computer program displays CBF values and their estimated error in numeric and gray scale forms. The following typical results have been obtained on a control subject: mean CBF in the entire cross section of the head: 54.6 ± 5 ml/min/100 g tissue; rCBF for a small area of frontal gray matter: 75.8 ± 9 ml/min/100 g tissue. Confidence intervals for individual rCBF values varied between ±13% and ±55%, except for areas pertaining to the ventricular system, where particularly poor statistics have been obtained. Knowledge of confidence limits for rCBF values improves their diagnostic significance, particularly with respect to the assessment of reduced rCBF in stroke patients. A nomogram for convenient determination of 90% confidence limits for slope values obtained in linear regression analysis has been designed, with the number of fitted points (n) and the correlation coefficient (r) as parameters. (author)
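
    The slope-fitting step is straightforward to reproduce: regress log counts on time and attach a 90% confidence interval to the slope via its standard error. The sketch below uses a simulated monoexponential clearance curve, not the paper's measurements; rCBF would then be proportional to the negative slope.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated tracer clearance, C(t) = C0 * exp(-k*t), with noise on the
# log scale; the clearance constant k is recovered as minus the slope.
k_true, t = 0.5, np.linspace(0.0, 10.0, 20)
log_c = np.log(1000.0) - k_true * t + rng.normal(0.0, 0.05, t.size)

slope, intercept = np.polyfit(t, log_c, 1)
resid = log_c - (slope * t + intercept)
s = np.sqrt((resid ** 2).sum() / (t.size - 2))          # residual SD
se_slope = s / np.sqrt(((t - t.mean()) ** 2).sum())     # SE of the slope
t_crit = 1.734   # tabulated Student's t, 0.95 quantile, 18 df (90% two-sided)
ci = (slope - t_crit * se_slope, slope + t_crit * se_slope)
print(slope, ci)
```

    The nomogram mentioned in the abstract packages exactly this interval as a function of the number of points n and the correlation coefficient r.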

  8. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making.

    Science.gov (United States)

    Aitchison, Laurence; Bang, Dan; Bahrami, Bahador; Latham, Peter E

    2015-10-01

    Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.

  9. Incidence of interval cancers in faecal immunochemical test colorectal screening programmes in Italy.

    Science.gov (United States)

    Giorgi Rossi, Paolo; Carretta, Elisa; Mangone, Lucia; Baracco, Susanna; Serraino, Diego; Zorzi, Manuel

    2018-03-01

    Objective: In Italy, colorectal screening programmes using the faecal immunochemical test from ages 50 to 69 every two years have been in place since 2005. We aimed to measure the incidence of interval cancers in the two years after a negative faecal immunochemical test, and compare this with the pre-screening incidence of colorectal cancer. Methods: Using data on colorectal cancers diagnosed in Italy from 2000 to 2008 collected by cancer registries in areas with active screening programmes, we identified cases that occurred within 24 months of negative screening tests. We used the number of tests with a negative result as a denominator, grouped by age and sex. Proportional incidence was calculated for the first and second year after screening. Results: Among 579,176 and 226,738 persons with negative test results followed up at 12 and 24 months, respectively, we identified 100 interval cancers in the first year and 70 in the second year. The proportional incidence was 13% (95% confidence interval 10-15) and 23% (95% confidence interval 18-25), respectively. The estimate for the two-year incidence is 18%, which was slightly higher in females (22%; 95% confidence interval 17-26), and for proximal colon (22%; 95% confidence interval 16-28). Conclusion: The incidence of interval cancers in the two years after a negative faecal immunochemical test in routine population-based colorectal cancer screening was less than one-fifth of the expected incidence. This is direct evidence that the faecal immunochemical test-based screening programme protocol has high sensitivity for cancers that will become symptomatic.
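
    A proportional incidence is an observed count divided by an expected count, so its confidence interval follows from an exact Poisson interval on the numerator. The sketch below back-calculates an expected count of roughly 769 from the abstract's first-year figures (100 cancers, 13%) purely for illustration, and computes the Garwood exact interval by bisection on the Poisson tail probabilities.

```python
import math

def poisson_ci(k, conf=0.95):
    """Exact (Garwood) confidence interval for a Poisson mean, given an
    observed count k, found by bisection on the tail probabilities."""
    alpha = (1 - conf) / 2

    def cdf(k, mu):                        # P(X <= k) for Poisson(mu)
        term = total = math.exp(-mu)
        for i in range(1, k + 1):
            term *= mu / i
            total += term
        return total

    def solve(f, lo, hi):                  # bisection; f increasing in mu
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) < 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    lower = 0.0 if k == 0 else solve(lambda mu: (1 - cdf(k - 1, mu)) - alpha,
                                     1e-12, float(k))
    upper = solve(lambda mu: alpha - cdf(k, mu), float(k), 10.0 * k + 20)
    return lower, upper

observed, expected = 100, 769              # expected = 100 / 0.13, illustrative
lo_count, hi_count = poisson_ci(observed)
proportional_incidence = observed / expected
ci = (lo_count / expected, hi_count / expected)
print(proportional_incidence, ci)
```

    The resulting interval of roughly 11% to 16% brackets the abstract's reported 13% (95% CI 10-15), up to rounding and the exact denominator used.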

  10. Comparison of the methods for determination of calibration and verification intervals of measuring devices

    Directory of Open Access Journals (Sweden)

    Toteva Pavlina

    2017-01-01

    Full Text Available The paper presents different determination and optimisation methods for verification intervals of technical devices for monitoring and measurement, based on the requirements of some widely used international standards, e.g. ISO 9001, ISO/IEC 17020, ISO/IEC 17025 etc., maintained by various organizations implementing measuring devices in practice. Comparative analysis of the reviewed methods is conducted in terms of opportunities for assessing the adequacy of the interval(s) for calibration of measuring devices and their optimisation as accepted by an organization – an extension or reduction depending on the obtained results. The advantages and disadvantages of the reviewed methods are discussed, and recommendations for their applicability are provided.

  11. Effects of human errors on the determination of surveillance test interval

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Koo, Bon Hyun

    1990-01-01

    This paper incorporates the effects of human error relevant to the periodic test on the unavailability of the safety system as well as the component unavailability. Two types of possible human error during the test are considered. One is the possibility that a good safety system is inadvertently left in a bad state after the test (Type A human error), and the other is the possibility that a bad safety system is undetected upon the test (Type B human error). An event tree model is developed for the steady-state unavailability of the safety system to determine the effects of human errors on the component unavailability and the test interval. We perform the reliability analysis of the safety injection system (SIS) by applying the aforementioned two types of human error to the safety injection pumps. Results of various sensitivity analyses show that: 1) the appropriate test interval decreases and steady-state unavailability increases as the probabilities of both types of human errors increase, and they are far more sensitive to Type A human error than to Type B; and 2) the SIS unavailability increases slightly as the probability of Type B human error increases, and significantly as the probability of Type A human error increases. Therefore, to avoid underestimation, the effects of human error should be incorporated in the system reliability analysis which aims at the relaxation of surveillance test intervals, and Type A human error has the more important effect on the unavailability and surveillance test interval.
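
    The trade-off the abstract describes can be caricatured in a first-order model: random failures accrue unavailability λT/2 over a test interval T, Type B misses add roughly βλT (a missed failure persists for another interval), Type A errors add a floor γ, and the test itself costs τ/T of downtime. This toy model is an assumption made for illustration, not the paper's event tree model, but it reproduces the qualitative finding that human error shortens the appropriate test interval.

```python
import numpy as np

def avg_unavailability(T, lam, tau, beta=0.0, gamma=0.0):
    """First-order mean unavailability over a test interval T (hours):
    undetected random failures (lam*T/2), Type B misses (beta*lam*T),
    Type A post-test damage (gamma), and test downtime (tau/T)."""
    return lam * T / 2 + beta * lam * T + gamma + tau / T

T = np.arange(1.0, 500.0, 0.1)          # candidate test intervals
lam, tau = 1e-3, 2.0                    # failure rate 1e-3/h, 2 h per test
T_ideal = T[np.argmin(avg_unavailability(T, lam, tau))]
T_err = T[np.argmin(avg_unavailability(T, lam, tau, beta=0.1, gamma=0.01))]
print(T_ideal, T_err)
```

    Minimising over T shows the optimum shrinking once Type B error is included, while the Type A term raises the unavailability floor regardless of T.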

  12. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

    Science.gov (United States)

    Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

    2014-01-01

    The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in qualification of biomarker platforms. In recent years, there have been several new methods proposed for construction of CIs for the CCC, but their comprehensive comparison has not been attempted. The methods consisted of the delta method and jackknifing with and without Fisher's Z-transformation, respectively, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from multivariate normal as well as heavier tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning CI to the CCC. When the data are normally distributed, the jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to the nominal in contrast to the others which yielded overly liberal coverage. Approaches based upon the delta method and Bayesian method with conjugate prior generally provided slightly narrower intervals and larger lower bounds than others, though this was offset by their poor coverage. Finally, we illustrated the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for treatment of insomnia.
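
    Of the methods compared, the jackknife on Fisher's Z scale (the "JZ" method) is simple to sketch. The code below computes Lin's CCC for two raters and a JZ-style interval from jackknife pseudo-values; the normal critical value and the simulated rater data are illustrative assumptions, not details of the study.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient for two raters."""
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def ccc_ci_jackknife_z(x, y, z_crit=1.96):
    """Jackknife CI on Fisher's Z scale, back-transformed with tanh."""
    n = len(x)
    z_all = np.arctanh(ccc(x, y))
    loo = np.array([ccc(np.delete(x, i), np.delete(y, i)) for i in range(n)])
    pseudo = n * z_all - (n - 1) * np.arctanh(loo)   # jackknife pseudo-values
    m, se = pseudo.mean(), pseudo.std(ddof=1) / np.sqrt(n)
    return np.tanh(m), np.tanh(m - z_crit * se), np.tanh(m + z_crit * se)

rng = np.random.default_rng(2)
x = rng.normal(10.0, 2.0, 60)
y = x + rng.normal(0.0, 1.0, 60)        # rater 2 = rater 1 plus noise
est, lo, hi = ccc_ci_jackknife_z(x, y)
print(est, lo, hi)
```

    The tanh back-transform keeps the interval inside (-1, 1), one reason the Z-scale variants showed better coverage in the simulation study.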

  13. Reference intervals for selected serum biochemistry analytes in cheetahs (Acinonyx jubatus).

    Science.gov (United States)

    Hudson-Lamb, Gavin C; Schoeman, Johan P; Hooijberg, Emma H; Heinrich, Sonja K; Tordiffe, Adrian S W

    2016-02-26

    Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs and even more for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity-status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L - 166 mmol/L), potassium (3.9 mmol/L - 5.2 mmol/L), magnesium (0.8 mmol/L - 1.2 mmol/L), chloride (97 mmol/L - 130 mmol/L), urea (8.2 mmol/L - 25.1 mmol/L) and creatinine (88 µmol/L - 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.

  14. The integrated model of sport confidence: a canonical correlation and mediational analysis.

    Science.gov (United States)

    Koehn, Stefan; Pearce, Alan J; Morris, Tony

    2013-12-01

    The main purpose of the study was to examine crucial parts of Vealey's (2001) integrated framework hypothesizing that sport confidence is a mediating variable between sources of sport confidence (including achievement, self-regulation, and social climate) and athletes' affect in competition. The sample consisted of 386 athletes, who completed the Sources of Sport Confidence Questionnaire, Trait Sport Confidence Inventory, and Dispositional Flow Scale-2. Canonical correlation analysis revealed a confidence-achievement dimension underlying flow. Bias-corrected bootstrap confidence intervals in AMOS 20.0 were used in examining mediation effects between source domains and dispositional flow. Results showed that sport confidence partially mediated the relationship between achievement and self-regulation domains and flow, whereas no significant mediation was found for social climate. On a subscale level, full mediation models emerged for achievement and flow dimensions of challenge-skills balance, clear goals, and concentration on the task at hand.
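The bias-corrected bootstrap interval for an indirect (mediation) effect can be sketched as follows. The data are simulated and the product-of-coefficients regression approach is a generic stand-in for the AMOS analysis, not the authors' exact model.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)
nd = NormalDist()

# Simulated data (invented for illustration): source-of-confidence score X,
# sport confidence M as the mediator, and dispositional flow Y as the outcome.
n = 386
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)              # a-path: X -> M
y = 0.4 * m + 0.1 * x + rng.normal(size=n)    # b-path: M -> Y, plus a direct effect

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect a*b from two least-squares fits."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]
    return a * b

est = indirect_effect(x, m, y)
boot = np.array([indirect_effect(x[i], m[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(2000))])

# Bias-corrected percentile interval: shift the percentile cut-points by z0.
z0 = nd.inv_cdf((boot < est).mean())
lo_p, hi_p = nd.cdf(2 * z0 - 1.96), nd.cdf(2 * z0 + 1.96)
ci = np.percentile(boot, [100 * lo_p, 100 * hi_p])
mediation_supported = ci[0] > 0   # interval excluding zero supports mediation
```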

  15. Reference intervals for selected serum biochemistry analytes in cheetahs (Acinonyx jubatus).

    Directory of Open Access Journals (Sweden)

    Gavin C. Hudson-Lamb

    2016-02-01

    Full Text Available Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs and even more so for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L – 166 mmol/L), potassium (3.9 mmol/L – 5.2 mmol/L), magnesium (0.8 mmol/L – 1.2 mmol/L), chloride (97 mmol/L – 130 mmol/L), urea (8.2 mmol/L – 25.1 mmol/L) and creatinine (88 µmol/L – 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.

  16. Development of free statistical software enabling researchers to calculate confidence levels, clinical significance curves and risk-benefit contours

    International Nuclear Information System (INIS)

    Shakespeare, T.P.; Mukherjee, R.K.; Gebski, V.J.

    2003-01-01

    Confidence levels, clinical significance curves, and risk-benefit contours are tools that improve the analysis of clinical studies and minimize misinterpretation of published results; however, no software has been available for their calculation. The objective was to develop software to help clinicians utilize these tools. Excel 2000 spreadsheets were designed using only built-in functions, without macros. The workbook was protected and encrypted so that users can modify only input cells. The workbook has four spreadsheets for use in studies comparing two patient groups. Sheet 1 comprises instructions and graphic examples for use. Sheet 2 allows the user to input the main study results (e.g. survival rates) into a 2-by-2 table. Confidence intervals (95%), the p-value and the confidence level for Treatment A being better than Treatment B are automatically generated. An additional input cell allows the user to determine the confidence associated with a specified level of benefit. For example, if the user wishes to know the confidence that Treatment A is at least 10% better than B, 10% is entered. Sheet 2 automatically displays clinical significance curves, graphically illustrating confidence levels for all possible benefits of one treatment over the other. Sheet 3 allows input of toxicity data, and calculates the confidence that one treatment is more toxic than the other. It also determines the confidence that the relative toxicity of the most effective arm does not exceed user-defined tolerability. Sheet 4 automatically calculates risk-benefit contours, displaying the confidence associated with a specified scenario of minimum benefit and maximum risk of one treatment arm over the other. The spreadsheet is freely downloadable at www.ontumor.com/professional/statistics.htm. A simple, self-explanatory, freely available spreadsheet calculator was developed using Excel 2000. The incorporated decision-making tools can be used for data analysis and improve the reporting of results of any
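The core of the Sheet 2 calculation can be sketched as below. This is a hypothetical reimplementation using the normal approximation for a difference in proportions, not the authors' spreadsheet formulas, and the counts are invented.

```python
import math

# Hypothetical trial results: successes / totals in two treatment arms.
a_success, a_total = 60, 100   # Treatment A
b_success, b_total = 45, 100   # Treatment B

pa, pb = a_success / a_total, b_success / b_total
diff = pa - pb
se = math.sqrt(pa * (1 - pa) / a_total + pb * (1 - pb) / b_total)

# 95% confidence interval for the difference (normal approximation).
ci = (diff - 1.96 * se, diff + 1.96 * se)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Confidence level that Treatment A is better than Treatment B.
conf_a_better = phi(diff / se)

# Confidence that A is at least 10% better, as with the extra input cell.
conf_10pct_better = phi((diff - 0.10) / se)
```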

  17. Methodology for building confidence measures

    Science.gov (United States)

    Bramson, Aaron L.

    2004-04-01

    This paper presents a generalized methodology for propagating known or estimated levels of individual source document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements for determining the probability of truth for the whole. The result is a measure of confidence in system output based on establishing links among the truth values of inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.

  18. Self-Confidence in the Hospitality Industry

    Directory of Open Access Journals (Sweden)

    Michael Oshins

    2014-02-01

    Full Text Available Few industries rely on self-confidence to the extent that the hospitality industry does, because guests must feel welcome and in capable hands. This article examines the results of hundreds of student interviews with industry professionals at all levels to determine where the majority of hospitality professionals get their self-confidence.

  19. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

    Science.gov (United States)

    Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
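The SNR-CI procedure can be sketched roughly as below. This is an illustrative reconstruction on simulated single-subject data; the SNR definition (signal-window RMS over baseline RMS) and the criterion value are assumptions, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-subject ERP data: 80 trials x 300 time points, with a small
# evoked response buried in large trial-to-trial noise.
n_trials, n_times = 80, 300
signal = np.zeros(n_times)
signal[100:200] = 2.0 * np.sin(np.linspace(0, np.pi, 100))  # evoked component
trials = signal + rng.normal(scale=10.0, size=(n_trials, n_times))

def snr(avg):
    """SNR of an averaged waveform: RMS in the signal window over baseline RMS."""
    return np.sqrt(np.mean(avg[100:200] ** 2)) / np.sqrt(np.mean(avg[:100] ** 2))

# Bootstrap: resample trials with replacement, average, recompute the SNR.
boot_snr = np.array([
    snr(trials[rng.integers(0, n_trials, n_trials)].mean(axis=0))
    for _ in range(2000)
])

# Two-sided 95% CI; its lower bound (SNR_LB) drives the exclusion decision.
snr_lb, snr_ub = np.percentile(boot_snr, [2.5, 97.5])
keep_subject = snr_lb > 1.0  # e.g. require the waveform to exceed the noise floor
```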

  20. Biomass Thermogravimetric Analysis: Uncertainty Determination Methodology and Sampling Maps Generation

    Science.gov (United States)

    Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín

    2010-01-01

    The objective of this study was to develop a methodology for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG), including moisture, volatile matter, fixed carbon and ash content. The sampling procedure of the TG analysis was of particular interest and was conducted with care. The results of the present study were compared to those of a prompt analysis, and a correlation between the mean values and maximum sampling errors of the two methods was not observed. In general, low and acceptable levels of uncertainty and error were obtained, demonstrating that the properties evaluated by TG analysis were representative of the overall fuel composition. The accurate determination of the thermal properties of biomass with precise confidence intervals is of particular interest in energetic biomass applications. PMID:20717532
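For a single analyte, the "maximum sampling error" of a mean can be taken as the half-width of its confidence interval. A stdlib-only sketch with invented replicate ash measurements (the study's actual methodology and data are more involved):

```python
import math
import statistics

# Hypothetical replicate ash-content measurements (%) from TG analysis of one lot.
ash = [4.8, 5.1, 4.9, 5.0, 5.2, 4.7, 5.0, 4.9]

n = len(ash)
mean = statistics.mean(ash)
s = statistics.stdev(ash)          # sample standard deviation

# 95% CI for the mean; t critical value for n - 1 = 7 degrees of freedom.
t_crit = 2.365                     # t_{0.975, 7}, from standard tables
half_width = t_crit * s / math.sqrt(n)   # the "maximum sampling error"
ci = (mean - half_width, mean + half_width)
```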

  1. Microvascular anastomosis simulation using a chicken thigh model: Interval versus massed training.

    Science.gov (United States)

    Schoeff, Stephen; Hernandez, Brian; Robinson, Derek J; Jameson, Mark J; Shonka, David C

    2017-11-01

    To compare the effectiveness of massed versus interval training when teaching otolaryngology residents microvascular suturing on a validated microsurgical model. Otolaryngology residents were placed into interval (n = 7) or massed (n = 7) training groups. The interval group performed three separate 30-minute practice sessions separated by at least 1 week, and the massed group performed a single 90-minute practice session. Both groups viewed a video demonstration and recorded a pretest prior to the first training session. A post-test was administered following the last practice session. At an academic medical center, 14 otolaryngology residents were assigned using stratified randomization to interval or massed training. Blinded evaluators graded performance using a validated microvascular Objective Structured Assessment of Technical Skill tool. The tool is comprised of two major components: task-specific score (TSS) and global rating scale (GRS). Participants also received pre- and poststudy surveys to compare subjective confidence in multiple aspects of microvascular skill acquisition. Overall, all residents showed increased TSS and GRS on post- versus pretest. After completion of training, the interval group had a statistically significant increase in both TSS and GRS, whereas the massed group's increase was not significant. Residents in both groups reported significantly increased levels of confidence after completion of the study. Self-directed learning using a chicken thigh artery model may benefit microsurgical skills, competence, and confidence for resident surgeons. Interval training results in significant improvement in early development of microvascular anastomosis skills, whereas massed training does not. NA. Laryngoscope, 127:2490-2494, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  2. Determining diabetic retinopathy screening interval based on time from no retinopathy to laser therapy.

    Science.gov (United States)

    Hughes, Daniel; Nair, Sunil; Harvey, John N

    2017-12-01

    Objectives To determine the necessary screening interval for retinopathy in diabetic patients with no retinopathy, based on time to laser therapy, and to assess long-term visual outcome following screening. Methods In a population-based community screening programme in North Wales, 2917 patients were followed until death or for approximately 12 years. At screening, 2493 had no retinopathy; 424 had mostly minor degrees of non-proliferative retinopathy. Data on timing of first laser therapy and visual outcome following screening were obtained from local hospitals and ophthalmology units. Results Survival analysis showed that very few of the no-retinopathy-at-screening group required laser therapy in the early years compared with the non-proliferative retinopathy group. At three years the cumulative rate of laser therapy in the no-retinopathy group was 0.2%, a lower rate of treatment than has been suggested by analyses of sight-threatening retinopathy determined photographically. At follow-up (mean 7.8 ± 4.6 years), mild to moderate visual impairment in one or both eyes due to diabetic retinopathy was more common in those with retinopathy at screening (26% vs. 5%), and blindness from diabetes occurred in only 1 in 1000. Conclusions Optimum screening intervals should be determined from time to active treatment. Based on requirement for laser therapy, the screening interval for diabetic patients with no retinopathy can be extended to two to three years. Patients who attend for retinal screening and treatment who have no or non-proliferative retinopathy now have a very low risk of eventual blindness from diabetes.

  3. Level of confidence in venepuncture and knowledge in determining causes of blood sample haemolysis among clinical staff and phlebotomists.

    Science.gov (United States)

    Makhumula-Nkhoma, Nellie; Whittaker, Vicki; McSherry, Robert

    2015-02-01

    To investigate the association between confidence level in venepuncture and knowledge in determining causes of blood sample haemolysis among clinical staff and phlebotomists. Various collection methods are used to perform venepuncture, also called phlebotomy, the act of drawing blood from a patient using a needle. The collection method used has an impact on preanalytical blood sample haemolysis. Haemolysis is the breakdown of red blood cells, which makes the sample unsuitable. Despite available evidence on the common causes, an extensive literature search showed a lack of published evidence on the association of haemolysis with staff confidence and knowledge. A quantitative primary research design using the survey method was adopted. A purposive sample of 290 clinical staff and phlebotomists conducting venepuncture in one North England hospital participated in this quantitative survey. A three-section web-based questionnaire comprising demographic profile, confidence and competence levels, and knowledge sections was used to collect data in 2012. The chi-squared test for independence was used to compare the distribution of responses for categorical data. ANOVA was used to determine mean differences in the knowledge scores of staff with different confidence levels. Almost 25% of clinical staff and phlebotomists participated in the survey. There was an increase in confidence at the last venepuncture among staff of all categories. While doctors' scores were higher compared with healthcare assistants', p ≤ 0·001, nurses' scores were of wide range and lowest. There was no statistically significant difference (at the 5% level) in the total knowledge scores by confidence level at the last venepuncture, F(2, 4·690) = 1·67, p = 0·31, among staff of all categories. Evidence-based measures are required to boost the staff knowledge base of preanalytical blood sample haemolysis for a standardised and quality service. Monitoring and evaluation of the training, conducting and monitoring haemolysis rate are

  4. Prolonged corrected QT interval is predictive of future stroke events even in subjects without ECG-diagnosed left ventricular hypertrophy.

    Science.gov (United States)

    Ishikawa, Joji; Ishikawa, Shizukiyo; Kario, Kazuomi

    2015-03-01

    We attempted to evaluate whether subjects who exhibit prolonged corrected QT (QTc) interval (≥440 ms in men and ≥460 ms in women) on ECG, with and without ECG-diagnosed left ventricular hypertrophy (ECG-LVH; Cornell product, ≥244 mV×ms), are at increased risk of stroke. Among the 10 643 subjects, there were a total of 375 stroke events during the follow-up period (128.7±28.1 months; 114 142 person-years). The subjects with prolonged QTc interval (hazard ratio, 2.13; 95% confidence interval, 1.22-3.73) had an increased risk of stroke even after adjustment for ECG-LVH (hazard ratio, 1.71; 95% confidence interval, 1.22-2.40). When we stratified the subjects into those with neither a prolonged QTc interval nor ECG-LVH, those with a prolonged QTc interval but without ECG-LVH, and those with ECG-LVH, multivariate-adjusted Cox proportional hazards analysis demonstrated that the subjects with prolonged QTc intervals but not ECG-LVH (1.2% of all subjects; incidence, 10.7%; hazard ratio, 2.70, 95% confidence interval, 1.48-4.94) and those with ECG-LVH (incidence, 7.9%; hazard ratio, 1.83; 95% confidence interval, 1.31-2.57) had an increased risk of stroke events, compared with those with neither a prolonged QTc interval nor ECG-LVH. In conclusion, prolonged QTc interval was associated with stroke risk even among patients without ECG-LVH in the general population. © 2014 American Heart Association, Inc.

  5. The statistical significance of error probability as determined from decoding simulations for long codes

    Science.gov (United States)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
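The situation described, a handful of observed decoding errors in a very large number of trials, is exactly where exact Poisson intervals replace normal approximations. Below is a stdlib-only sketch of the standard exact (Garwood) Poisson interval; it illustrates the general idea, not the paper's extended notion of a confidence interval.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    term = math.exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def poisson_ci(k, conf=0.95):
    """Exact two-sided (Garwood) confidence interval for a Poisson mean,
    given k observed events, found by bisection on the Poisson CDF."""
    alpha = 1.0 - conf

    def solve(target, k_cdf, lo, hi):
        # poisson_cdf(k_cdf, lam) decreases in lam; find lam where it hits target.
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if poisson_cdf(k_cdf, mid) > target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    lower = 0.0 if k == 0 else solve(1 - alpha / 2, k - 1, 0.0, 5.0 * k + 10.0)
    upper = solve(alpha / 2, k, 0.0, 5.0 * k + 20.0)
    return lower, upper

# Two decoding errors observed in 1e7 trials: the CI for the error probability
# is the CI for the expected error count, scaled by the number of trials.
k, n_trials = 2, 10_000_000
lo_count, hi_count = poisson_ci(k)
p_lo, p_hi = lo_count / n_trials, hi_count / n_trials
```

Note how wide the interval is: even with two observed errors, the upper bound on the expected count is above 7, more than three times the point estimate.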

  6. The radiographic acromiohumeral interval is affected by arm and radiographic beam position

    Energy Technology Data Exchange (ETDEWEB)

    Fehringer, Edward V.; Rosipal, Charles E.; Rhodes, David A.; Lauder, Anthony J.; Feschuk, Connie A.; Mormino, Matthew A.; Hartigan, David E. [University of Nebraska Medical Center, Department of Orthopaedic Surgery and Rehabilitation, Omaha, NE (United States); Puumala, Susan E. [Nebraska Medical Center, Department of Preventive and Societal Medicine, Omaha, NE (United States)

    2008-06-15

    The objective was to determine whether arm and radiographic beam positional changes affect the acromiohumeral interval (AHI) in radiographs of healthy shoulders. Controlling for participant height and position as well as radiographic beam height and angle, four antero-posterior (AP) radiographic views in defined arm positions were obtained from each of 30 right shoulders of right-handed males without shoulder problems. Three independent, blinded physicians measured the AHI to the nearest millimeter in 120 randomized radiographs. Mean differences between measurements were calculated, along with a 95% confidence interval. Controlling for observer effect, there was a significant difference between AHI measurements on different views (p<0.01). All pair-wise differences were statistically significant after adjusting for multiple comparisons (all p values <0.01). Even in healthy shoulders, small changes in arm position and radiographic beam orientation affect the AHI in radiographs. (orig.)

  7. Predictor sort sampling and one-sided confidence bounds on quantiles

    Science.gov (United States)

    Steve Verrill; Victoria L. Herian; David W. Green

    2002-01-01

    Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable that we are investigating. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non- random...

  8. Five-band microwave radiometer system for noninvasive brain temperature measurement in newborn babies: Phantom experiment and confidence interval

    Science.gov (United States)

    Sugiura, T.; Hirata, H.; Hand, J. W.; van Leeuwen, J. M. J.; Mizushina, S.

    2011-10-01

    Clinical trials of hypothermic brain treatment for newborn babies are currently hindered by the difficulty in measuring deep brain temperatures. One possible method for noninvasive and continuous temperature monitoring that is completely passive and inherently safe is passive microwave radiometry (MWR). We have developed a five-band microwave radiometer system with a single dual-polarized, rectangular waveguide antenna operating within the 1-4 GHz range and a method for retrieving the temperature profile from five radiometric brightness temperatures. This paper addresses (1) the temperature calibration for five microwave receivers, (2) the measurement experiment using a phantom model that mimics the temperature profile in a newborn baby, and (3) the feasibility of noninvasive monitoring of deep brain temperatures. Temperature resolutions were 0.103, 0.129, 0.138, 0.105 and 0.111 K for the 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively. The precision of temperature estimation (2σ confidence interval) was about 0.7°C at a 5-cm depth from the phantom surface. Accuracy, the difference between the temperature estimated by this system and that measured by a thermocouple at a depth of 5 cm, was about 2°C. The current result is not satisfactory for clinical application, because the clinical requirement at a depth of 5 cm is better than 1°C for both precision and accuracy. Since a couple of possible causes for this inaccuracy have been identified, we believe that the system can take a step closer to the clinical application of MWR for hypothermic rescue treatment.

  9. Post choice information integration as a causal determinant of confidence: Novel data and a computational account.

    Science.gov (United States)

    Moran, Rani; Teodorescu, Andrei R; Usher, Marius

    2015-05-01

    Confidence judgments are pivotal in the performance of daily tasks and in many domains of scientific research, including the behavioral sciences, psychology and neuroscience. Positive resolution, i.e., the positive correlation between choice correctness and choice confidence, is a critical property of confidence judgments, which justifies their ubiquity. In the current paper, we study the mechanism underlying confidence judgments and their resolution by investigating the source of the inputs for the confidence calculation. We focus on the intriguing debate between two families of confidence theories. According to single-stage theories, confidence is based on the same information that underlies the decision (or on some other aspect of the decision process), whereas according to dual-stage theories, confidence is affected by novel information that is collected after the decision was made. In three experiments, we support the case for dual-stage theories by showing that post-choice perceptual availability manipulations exert a causal effect on confidence resolution in the decision-followed-by-confidence paradigm. These findings establish the role of RT2, the duration of the post-choice information-integration stage, as a prime dependent variable that theories of confidence should account for. We then present a novel list of robust empirical patterns ('hurdles') involving RT2 to guide further theorizing about confidence judgments. Finally, we present a unified computational dual-stage model for choice, confidence and their latencies, namely the collapsing confidence boundary model (CCB). According to CCB, a diffusion-process choice is followed by a second evidence-integration stage towards a stochastic collapsing confidence boundary. Despite its simplicity, CCB clears the entire list of hurdles. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Communication confidence in persons with aphasia.

    Science.gov (United States)

    Babbitt, Edna M; Cherney, Leora R

    2010-01-01

    Communication confidence is a construct that has not been explored in the aphasia literature. Recently, national and international organizations have endorsed broader assessment methods that address quality of life and include participation, activity, and impairment domains as well as psychosocial areas. Individuals with aphasia encounter difficulties in all these areas on a daily basis in living with a communication disorder. Improvements are often reflected in narratives that are not typically included in standard assessments. This article illustrates how a new instrument measuring communication confidence might fit into a broad assessment framework and discusses the interaction of communication confidence, autonomy, and self-determination for individuals living with aphasia.

  11. Human error considerations and annunciator effects in determining optimal test intervals for periodically inspected standby systems

    International Nuclear Information System (INIS)

    McWilliams, T.P.; Martz, H.F.

    1981-01-01

    This paper incorporates the effects of four types of human error into a model for determining the optimal time between periodic inspections that maximizes the steady-state availability of standby safety systems. Such safety systems are characteristic of nuclear power plant operations. The system is modeled by means of an infinite state-space Markov chain. The purpose of the paper is to demonstrate techniques for computing the steady-state availability A and the optimal periodic inspection interval tau* for the system. The model can be used to investigate the effects of human error probabilities on optimal availability, study the benefits of annunciating the standby system, and determine optimal inspection intervals. Several examples that are representative of nuclear power plant applications are presented.
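A first-order textbook approximation (not the paper's infinite-state Markov model) illustrates how an optimal inspection interval arises from the trade-off between undetected failures and test downtime; all parameter values are assumed.

```python
import math

# Illustrative standby-system parameters (assumed, not taken from the paper).
lam = 1e-4    # failure rate per hour
t_test = 2.0  # hours the system is unavailable during each inspection
q_he = 1e-3   # probability that human error leaves the system disabled after a test

def unavailability(tau):
    """First-order average unavailability of a periodically tested standby
    component: undetected-failure term + test-downtime term + human-error term."""
    return lam * tau / 2.0 + t_test / tau + q_he

# Ignoring the constant human-error term, the optimum is tau* = sqrt(2 t_test / lam).
tau_star = math.sqrt(2.0 * t_test / lam)

# Numerical check on a grid of daily multiples.
best_tau = min((unavailability(t), t) for t in range(24, 20000, 24))[1]
```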

  12. Interpregnancy intervals: impact of postpartum contraceptive effectiveness and coverage.

    Science.gov (United States)

    Thiel de Bocanegra, Heike; Chang, Richard; Howell, Mike; Darney, Philip

    2014-04-01

    The purpose of this study was to determine the use of contraceptive methods, defined by effectiveness and length of coverage, and their association with short interpregnancy intervals, controlling for provider type and client demographics. We identified a cohort of 117,644 women from the 2008 California Birth Statistical Master file with a second or higher order birth and at least 1 Medicaid (Family Planning, Access, Care, and Treatment [Family PACT] program or Medi-Cal) claim within 18 months after the index birth. We explored the effect of contraceptive method provision on the odds of having an optimal interpregnancy interval, controlling for covariates. The average length of contraceptive coverage was 3.81 months (SD = 4.84). Most women received user-dependent hormonal contraceptives as their most effective contraceptive method (55%; n = 65,103 women) and one-third (33%; n = 39,090 women) had no contraceptive claim. Women who used long-acting reversible contraceptive methods had 3.89 times the odds and women who used user-dependent hormonal methods had 1.89 times the odds of achieving an optimal birth interval compared with women who used barrier methods only; women with no method had 0.66 times the odds. When user-dependent methods are considered, the odds of having an optimal birth interval increased by 8% for each additional month of contraceptive coverage (odds ratio, 1.08; 95% confidence interval, 1.08-1.09). Women who were seen by Family PACT or by both Family PACT and Medi-Cal providers had significantly higher odds of optimal birth intervals compared with women who were served by Medi-Cal only. To achieve optimal birth spacing and ultimately to improve birth outcomes, attention should be given to contraceptive counseling and access to contraceptive methods in the postpartum period. Copyright © 2014 Mosby, Inc. All rights reserved.
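Odds ratios with 95% confidence intervals like those reported can be computed from a 2-by-2 table with the standard log-odds method; the counts here are invented for illustration and are not the study's data.

```python
import math

# Hypothetical 2-by-2 table (invented counts): optimal birth interval by method.
#                       optimal   not optimal
# LARC methods:           180          70
# barrier methods only:    60          90
a, b, c, d = 180, 70, 60, 90

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)

log_or = math.log(odds_ratio)
ci = (math.exp(log_or - 1.96 * se_log_or),
      math.exp(log_or + 1.96 * se_log_or))
```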

  13. Determinants of waterpipe use amongst adolescents in Northern Sweden: a survey of use pattern, risk perception, and environmental factors.

    Science.gov (United States)

    Ramji, Rathi; Arnetz, Judy; Nilsson, Maria; Jamil, Hikmet; Norström, Fredrik; Maziak, Wasim; Wiklund, Ywonne; Arnetz, Bengt

    2015-09-15

    Determinants of waterpipe use in adolescents are believed to differ from those for other tobacco products, but there is a lack of studies of possible social, cultural, or psychological aspects of waterpipe use in this population. This study applied a socioecological model to explore waterpipe use, and its relationship to other tobacco use in Swedish adolescents. A total of 106 adolescents who attended an urban high-school in northern Sweden responded to an anonymous questionnaire. Prevalence rates for waterpipe use were examined in relation to socio-demographics, peer pressure, sensation seeking behavior, harm perception, environmental factors, and depression. Thirty-three percent reported ever having smoked waterpipe (ever use), with 30% having done so during the last 30 days (current use). Among waterpipe ever users, 60% had ever smoked cigarettes in comparison to 32% of non-waterpipe smokers (95% confidence interval 1.4-7.9). The odds of having ever smoked waterpipe were three times higher among male high school seniors as well as students with lower grades. Waterpipe ever users had three times higher odds of having higher levels of sensation-seeking (95% confidence interval 1.2-9.5) and scored high on the depression scales (95% confidence interval 1.6-6.8) than non-users. The odds of waterpipe ever use were four times higher for those who perceived waterpipe products to have pleasant smell compared to cigarettes (95% confidence interval 1.7-9.8). Waterpipe ever users were twice as likely to have seen waterpipe use on television compared to non-users (95% confidence interval 1.1-5.7). The odds of having friends who smoked regularly was eight times higher for waterpipe ever users than non-users (95% confidence interval 2.1-31.2). The current study reports a high use of waterpipe in a select group of students in northern Sweden. 
The study adds the importance of looking at socioecological determinants of use, including peer pressure and exposure to media marketing

  14. Regional Competition for Confidence: Features of Formation

    Directory of Open Access Journals (Sweden)

    Irina Svyatoslavovna Vazhenina

    2016-09-01

    Full Text Available The increase in economic independence of the regions inevitably leads to an increase in the quality requirements of regional economic policy. The key to successful regional policy, both during its development and implementation, is the understanding of the necessity of gaining confidence (at all levels) and the inevitable participation in the competition for confidence. The importance of confidence in the region is determined by its value as a competitive advantage in the struggle for partners, resources and tourists, and in attracting investments. In today's environment the focus of governments, regions and companies on long-term cooperation is clearly expressed, which is impossible without a high level of confidence between partners. Therefore, the most important competitive advantages of territories are intangible assets such as an attractive image and a good reputation, which build up the confidence of the population and partners. The higher the confidence in the region is, the broader is the range of potential partners, the larger is the planning horizon of long-term concerted action, the better are the chances of acquiring investment, and the higher is the level of competitive immunity of the territories. The article defines competition for confidence as purposeful behavior of a market participant in an economic environment, aimed at acquiring a specific intangible competitive advantage – the confidence of the largest possible number of other market actors. The article also highlights the specifics of confidence as a competitive goal, presents factors contributing to the destruction of confidence, proposes a strategy for the fight for confidence as a program of four steps, and considers the factors which integrate regional confidence, offering several recommendations for the establishment of effective regional competition for confidence.

  15. Chosen interval methods for solving linear interval systems with special type of matrix

    Science.gov (United States)

    Szyszka, Barbara

    2013-10-01

The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite-difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central-difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; the presented linear interval systems therefore contain elements that determine the errors of the difference method. Direct algorithms were chosen for solving the linear systems because they introduce no method error of their own. All calculations were performed in floating-point interval arithmetic.
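As a sketch of the idea, naive interval arithmetic can be implemented in a few lines and used to enclose the solution of a small interval linear system by interval Gaussian elimination. This is only an illustration under simplifying assumptions (no outward rounding, and a 2×2 system rather than the paper's band matrices):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, o):
        assert o.lo > 0 or o.hi < 0, "divisor must not contain zero"
        return self * Interval(1 / o.hi, 1 / o.lo)

def solve2(a11, a12, a21, a22, b1, b2):
    """Enclose the solution of a 2x2 interval system by Gaussian elimination."""
    m = a21 / a11                 # elimination multiplier (an interval)
    a22p = a22 - m * a12          # updated pivot
    b2p = b2 - m * b1             # updated right-hand side
    x2 = b2p / a22p               # back substitution
    x1 = (b1 - a12 * x2) / a11
    return x1, x2
```

With degenerate (point) intervals the enclosure collapses to the ordinary solution; widening any coefficient widens the enclosing result intervals, which is exactly how the method errors mentioned above are carried through the computation.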

  16. Vaccination Confidence and Parental Refusal/Delay of Early Childhood Vaccines.

    Directory of Open Access Journals (Sweden)

    Melissa B Gilkey

Full Text Available To support efforts to address parental hesitancy towards early childhood vaccination, we sought to validate the Vaccination Confidence Scale using data from a large, population-based sample of U.S. parents. We used weighted data from 9,354 parents who completed the 2011 National Immunization Survey. Parents reported on the immunization history of a 19- to 35-month-old child in their households. Healthcare providers then verified children's vaccination status for vaccines including measles, mumps, and rubella (MMR), varicella, and seasonal flu. We used separate multivariable logistic regression models to assess associations between parents' mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. A substantial minority of parents reported a history of vaccine refusal (15%) or delay (27%). Vaccination confidence was negatively associated with refusal of any vaccine (odds ratio [OR] = 0.58; 95% confidence interval [CI], 0.54-0.63) as well as refusal of MMR, varicella, and flu vaccines specifically. Negative associations between vaccination confidence and measures of vaccine delay were more moderate, including delay of any vaccine (OR = 0.81; 95% CI, 0.76-0.86). Vaccination confidence was positively associated with having received vaccines, including MMR (OR = 1.53; 95% CI, 1.40-1.68), varicella (OR = 1.54; 95% CI, 1.42-1.66), and flu vaccines (OR = 1.32; 95% CI, 1.23-1.42). Vaccination confidence was consistently associated with early childhood vaccination behavior across multiple vaccine types. Our findings support expanding the application of the Vaccination Confidence Scale to measure vaccination beliefs among parents of young children.
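The odds ratios and 95% confidence intervals quoted above come from multivariable logistic models; for intuition, the same Wald-type interval can be computed by hand for a single 2×2 table as exp(ln OR ± 1.96·SE), with SE = sqrt(1/a + 1/b + 1/c + 1/d). A minimal sketch with made-up counts (not the survey's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Illustrative counts only:
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
```

An OR below 1 with a CI excluding 1 (as for refusal above) indicates a statistically reliable negative association.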

  17. Determinants of willingness-to-pay for water pollution abatement: a point and interval data payment card application.

    Science.gov (United States)

    Mahieu, Pierre-Alexandre; Riera, Pere; Giergiczny, Marek

    2012-10-15

    This paper shows a contingent valuation exercise of pollution abatement in remote lakes. In addition to estimating the usual interval data model, it applies a point and interval statistical approach allowing for uncensored data, left-censored data, right-censored data and left- and right-censored data to explore the determinants of willingness-to-pay in a payment card survey. Results suggest that the estimations between models may diverge under certain conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. The Confidence-Accuracy Relationship for Eyewitness Identification Decisions: Effects of Exposure Duration, Retention Interval, and Divided Attention

    Science.gov (United States)

    Palmer, Matthew A.; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

    2013-01-01

    Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the…

  19. Analysis of methods to determine the latency of online movement adjustments

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Brenner, E.; Smeets, J.B.J.

    2014-01-01

    When studying online movement adjustments, one of the interesting parameters is their latency. We set out to compare three different methods of determining the latency: the threshold, confidence interval, and extrapolation methods. We simulated sets of movements with different movement times and

  20. The Development of Confidence Limits for Fatigue Strength Data

    International Nuclear Information System (INIS)

    SUTHERLAND, HERBERT J.; VEERS, PAUL S.

    1999-01-01

Over the past several years, extensive databases have been developed for the S-N behavior of various materials used in wind turbine blades, primarily fiberglass composites. These data are typically presented both in their raw form and curve fit to define their average properties. For design, confidence limits must be placed on these descriptions. In particular, most designs call for the 95/95 design values; namely, with a 95% level of confidence, the designer is assured that 95% of the material will meet or exceed the design value. For such material properties as the ultimate strength, the procedures for estimating the value at a particular confidence level are well defined if the measured values follow a normal or a log-normal distribution. Namely, based upon the number of sample points and their standard deviation, a commonly available table may be used to determine the survival percentage at a particular confidence level with respect to the mean value. The same is true for fatigue data at a constant stress level (the number of cycles to failure N at stress level S(sub 1)). However, when the stress level is allowed to vary, as with a typical S-N fatigue curve, the procedures for determining confidence limits are not as well defined. This paper outlines techniques for determining confidence limits of fatigue data. Different approaches to estimating the 95/95 level are compared. Data from the MSU/DOE and the FACT fatigue databases are used to illustrate typical results.
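For normally distributed strengths, the 95/95 one-sided design value described above is mean − k·s, where the tolerance factor k comes from the noncentral t distribution. A sketch of that standard normal-theory calculation (using SciPy; not necessarily the exact procedure applied to the MSU/DOE or FACT data, and the strength values below are hypothetical):

```python
import numpy as np
from scipy.stats import norm, nct

def one_sided_tolerance_factor(n, coverage=0.95, confidence=0.95):
    """Factor k such that mean - k*s bounds, with the given confidence,
    the value exceeded by the `coverage` fraction of the population."""
    z_p = norm.ppf(coverage)
    return nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)

# Example: 95/95 design value from a small (hypothetical) strength sample, MPa
strengths = np.array([412.0, 398.5, 405.2, 420.1, 391.8,
                      408.7, 415.3, 401.9, 395.6, 410.4])
k = one_sided_tolerance_factor(len(strengths))
design_value = strengths.mean() - k * strengths.std(ddof=1)
```

For n = 10 this reproduces the tabulated one-sided 95/95 factor k ≈ 2.911, which is why small samples are penalized so heavily relative to the mean curve.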

  1. Determination of fat content in chicken hamburgers using NIR spectroscopy and the Successive Projections Algorithm for interval selection in PLS regression (iSPA-PLS)

    Science.gov (United States)

    Krepper, Gabriela; Romeo, Florencia; Fernandes, David Douglas de Sousa; Diniz, Paulo Henrique Gonçalves Dias; de Araújo, Mário César Ugulino; Di Nezio, María Susana; Pistonesi, Marcelo Fabián; Centurión, María Eugenia

    2018-01-01

Determining fat content in hamburgers is very important to minimize or control the negative effects of fat on human health, such as cardiovascular diseases and obesity, which are caused by the high consumption of saturated fatty acids and cholesterol. This study proposed an alternative analytical method based on Near Infrared Spectroscopy (NIR) and the Successive Projections Algorithm for interval selection in Partial Least Squares regression (iSPA-PLS) for fat content determination in commercial chicken hamburgers. For this, 70 hamburger samples with a fat content ranging from 14.27 to 32.12 mg kg-1 were prepared based on the upper limit recommended by the Argentinean Food Codex, which is 20% (w w-1). NIR spectra were recorded and then preprocessed by applying different approaches: baseline correction, SNV, MSC, and Savitzky-Golay smoothing. For comparison, full-spectrum PLS and interval PLS were also used. The best performance for the prediction set was obtained with first-derivative Savitzky-Golay smoothing with a second-order polynomial and a window size of 19 points, achieving a coefficient of correlation of 0.94, an RMSEP of 1.59 mg kg-1, a REP of 7.69%, and an RPD of 3.02. The proposed methodology represents an excellent alternative to the conventional Soxhlet extraction method, since waste generation is avoided and no chemical reagents or solvents are used, in line with the primary principles of Green Chemistry. The new method was successfully applied to chicken hamburger analysis, and the results agreed with reference values at a 95% confidence level, making it very attractive for routine analysis.

  2. Effects of Training and Feedback on Accuracy of Predicting Rectosigmoid Neoplastic Lesions and Selection of Surveillance Intervals by Endoscopists Performing Optical Diagnosis of Diminutive Polyps.

    Science.gov (United States)

    Vleugels, Jasper L A; Dijkgraaf, Marcel G W; Hazewinkel, Yark; Wanders, Linda K; Fockens, Paul; Dekker, Evelien

    2018-05-01

    Real-time differentiation of diminutive polyps (1-5 mm) during endoscopy could replace histopathology analysis. According to guidelines, implementation of optical diagnosis into routine practice would require it to identify rectosigmoid neoplastic lesions with a negative predictive value (NPV) of more than 90%, using histologic findings as a reference, and agreement with histology-based surveillance intervals for more than 90% of cases. We performed a prospective study with 39 endoscopists accredited to perform colonoscopies on participants with positive results from fecal immunochemical tests in the Bowel Cancer Screening Program at 13 centers in the Netherlands. Endoscopists were trained in optical diagnosis using a validated module (Workgroup serrAted polypS and Polyposis). After meeting predefined performance thresholds in the training program, the endoscopists started a 1-year program (continuation phase) in which they performed narrow band imaging analyses during colonoscopies of participants in the screening program and predicted histological findings with confidence levels. The endoscopists were randomly assigned to groups that received feedback or no feedback on the accuracy of their predictions. Primary outcome measures were endoscopists' abilities to identify rectosigmoid neoplastic lesions (using histology as a reference) with NPVs of 90% or more, and selecting surveillance intervals that agreed with those determined by histology for at least 90% of cases. Of 39 endoscopists initially trained, 27 (69%) completed the training program. During the continuation phase, these 27 endoscopists performed 3144 colonoscopies in which 4504 diminutive polyps were removed. The endoscopists identified neoplastic lesions with a pooled NPV of 90.8% (95% confidence interval 88.6-92.6); their proposed surveillance intervals agreed with those determined by histologic analysis for 95.4% of cases (95% confidence interval 94.0-96.6). 
Findings did not differ between the group

  3. Application of Interval Arithmetic in the Evaluation of Transfer Capabilities by Considering the Sources of Uncertainty

    Directory of Open Access Journals (Sweden)

    Prabha Umapathy

    2009-01-01

Full Text Available Total transfer capability (TTC) is an important index in a power system with large volumes of inter-area power exchanges. This paper proposes a novel technique to determine the TTC and its confidence intervals in the system by considering the uncertainties in the load and line parameters. The optimal power flow (OPF) method is used to obtain the TTC. Variations in the load and line parameters are incorporated using the interval arithmetic (IA) method. The IEEE 30-bus test system is used to illustrate the proposed methodology. Various uncertainties in the line, the load, and both line and load are incorporated in the evaluation of total transfer capability. From the results, it is observed that the solutions obtained through the proposed method provide much wider information in terms of closed interval form, which is more useful in ensuring secure operation of the interconnected system in the presence of uncertainties in load and line parameters.

  4. Dependency of magnetocardiographically determined fetal cardiac time intervals on gestational age, gender and postnatal biometrics in healthy pregnancies

    Directory of Open Access Journals (Sweden)

    Geue Daniel

    2004-04-01

Full Text Available Background: Magnetocardiography enables the precise determination of fetal cardiac time intervals (CTI) as early as the second trimester of pregnancy. It has been shown that fetal CTI change in the course of gestation. The aim of this work was to investigate the dependency of fetal CTI on gestational age, gender and postnatal biometric data in a substantial sample of subjects during normal pregnancy. Methods: A total of 230 fetal magnetocardiograms were obtained in 47 healthy fetuses between the 15th and 42nd week of gestation. In each recording, after subtraction of the maternal cardiac artifact and the identification of fetal beats, fetal PQRST courses were signal averaged. On the basis of the wave onsets and ends detected therein, the following CTI were determined: P wave, PR interval, PQ interval, QRS complex, ST segment, T wave, QT and QTc interval. Using regression analysis, the dependency of the CTI on gestational age, gender and postnatal biometric data was examined. Results: Atrioventricular conduction and ventricular depolarization times could be determined dependably, whereas the T wave was often difficult to detect. Linear and nonlinear regression analysis established a strong dependency on age for the P wave and QRS complex (r2 = 0.67 and r2 = 0.66, respectively), with weaker dependencies for the other intervals (r2 = 0.21; r2 = 0.13). Conclusion: We conclude that (1) from approximately the 18th week to term, fetal CTI which quantify depolarization times can be reliably determined using magnetocardiography, (2) the P wave and QRS complex duration show a high dependency on age which to a large part reflects fetal growth, and (3) fetal gender plays a role in QRS complex duration in the third trimester. Fetal development is thus in part reflected in the CTI and may be useful in the identification of intrauterine growth retardation.

  5. Interpregnancy interval and risk of autistic disorder.

    Science.gov (United States)

    Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla

    2013-11-01

A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined interpregnancy interval as the time from birth of the first-born child to conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with the shortest interpregnancy intervals, the proportion of second-born children with autistic disorder was higher than the 0.13% observed in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with a previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.

  6. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    Kuzio, S.

    2001-01-01

The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term "flowing interval spacing" as opposed to fracture spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but does not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled "Probability Distribution for Flowing Interval Spacing" (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) "Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses" (CRWMS M and O 1999a) and (2) "Incorporation of Heterogeneity in SZ Flow and Transport Analyses" (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. Because each flowing interval probably has more than one fracture contributing to it, the true flowing interval spacing could be

  7. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    Science.gov (United States)

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial
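The power figures above can be approximated without special software: for a two-sided two-sample t test at α = .05, a normal approximation gives power ≈ Φ(d·√(n/2) − z.975), where d is Cohen's standardized effect size and n is the per-group sample size. A rough sketch (the survey's own calculations may have used exact noncentral t methods):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sided two-sample t test at alpha = .05
    (normal approximation to the noncentral t)."""
    z_crit = 1.959963984540054          # 97.5th percentile of the standard normal
    ncp = d * math.sqrt(n_per_group / 2)  # approximate noncentrality
    return phi(ncp - z_crit) + phi(-ncp - z_crit)
```

For example, with a medium effect (d = 0.5) and 64 participants per group, this gives roughly the conventional 80% power; the same n yields only about 20% power for a small effect (d = 0.2), which is consistent with the low median power for small effects reported above.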

  8. Reference Interval and Subject Variation in Excretion of Urinary Metabolites of Nicotine from Non-Smoking Healthy Subjects in Denmark

    DEFF Research Database (Denmark)

    Hansen, Å. M.; Garde, A. H.; Christensen, J. M.

    2001-01-01

BACKGROUND: Passive smoking has been found to be a respiratory health hazard in humans. The present study describes the calculation of a reference interval for urinary nicotine metabolites calculated as cotinine equivalents on the basis of 72 non-smokers exposed to tobacco smoke less than 25... A method comparison for determination of cotinine was carried out on 27 samples from non-smokers and smokers. Results obtained from the RIA method were 2.84 [confidence interval (CI): 2.50; 3.18] times higher than those from the GC-MS method, and a linear correlation between the two methods was demonstrated (rho=0.96). CONCLUSION: A parametric reference interval for excretion of nicotine metabolites in urine from non-smokers was established according to the International Union of Pure and Applied Chemistry (IUPAC) and the International Federation for Clinical Chemistry (IFCC) for use in risk assessment of exposure to tobacco smoke...

  9. Optimal preparation-to-colonoscopy interval in split-dose PEG bowel preparation determines satisfactory bowel preparation quality: an observational prospective study.

    Science.gov (United States)

    Seo, Eun Hee; Kim, Tae Oh; Park, Min Jae; Joo, Hee Rin; Heo, Nae Yun; Park, Jongha; Park, Seung Ha; Yang, Sung Yeon; Moon, Young Soo

    2012-03-01

Several factors influence bowel preparation quality. Recent studies have indicated that the time interval between bowel preparation and the start of colonoscopy is also important in determining bowel preparation quality. To evaluate the influence of the preparation-to-colonoscopy (PC) interval (the interval between the last polyethylene glycol dose ingestion and the start of the colonoscopy) on bowel preparation quality in the split-dose method for colonoscopy. Prospective observational study. University medical center. A total of 366 consecutive outpatients undergoing colonoscopy. Split-dose bowel preparation and colonoscopy. The quality of bowel preparation was assessed by using the Ottawa Bowel Preparation Scale according to the PC interval, and other factors that might influence bowel preparation quality were analyzed. Colonoscopies with a PC interval of 3 to 5 hours had the best bowel preparation quality score in the whole, right, mid, and rectosigmoid colon according to the Ottawa Bowel Preparation Scale. In multivariate analysis, the PC interval (odds ratio [OR] 1.85; 95% CI, 1.18-2.86), the amount of PEG ingested (OR 4.34; 95% CI, 1.08-16.66), and compliance with diet instructions (OR 2.22; 95% CI, 1.33-3.70) were significant contributors to satisfactory bowel preparation. Nonrandomized controlled, single-center trial. The optimal time interval between the last dose of the agent and the start of colonoscopy is one of the important factors determining satisfactory bowel preparation quality in split-dose polyethylene glycol bowel preparation. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.

  10. CIMP status of interval colon cancers: another piece to the puzzle.

    Science.gov (United States)

    Arain, Mustafa A; Sawhney, Mandeep; Sheikh, Shehla; Anway, Ruth; Thyagarajan, Bharat; Bond, John H; Shaukat, Aasma

    2010-05-01

Colon cancers diagnosed in the interval after a complete colonoscopy may occur due to limitations of colonoscopy or due to the development of new tumors, possibly reflecting molecular and environmental differences in tumorigenesis resulting in rapid tumor growth. In a previous study from our group, interval cancers (colon cancers diagnosed within 5 years of a complete colonoscopy) were almost four times more likely to demonstrate microsatellite instability (MSI) than non-interval cancers. In this study we extended our molecular analysis to compare the CpG island methylator phenotype (CIMP) status of interval and non-interval colorectal cancers and investigate the relationship between the CIMP and MSI pathways in the pathogenesis of interval cancers. We searched our institution's cancer registry for interval cancers, defined as colon cancers that developed within 5 years of a complete colonoscopy. These were frequency matched in a 1:2 ratio by age and sex to patients with non-interval cancers (defined as colon cancers diagnosed on a patient's first recorded colonoscopy). Archived cancer specimens for all subjects were retrieved and tested for CIMP gene markers. The MSI status of subjects identified between 1989 and 2004 was known from our previous study. Tissue specimens of newly identified cases and controls (between 2005 and 2006) were tested for MSI. There were 1,323 cases of colon cancer diagnosed over the 17-year study period, of which 63 were identified as having interval cancer and matched to 131 subjects with non-interval cancer. Study subjects were almost all Caucasian men. CIMP was present in 57% of interval cancers compared to 33% of non-interval cancers (P=0.004). As shown previously, interval cancers were more likely than non-interval cancers to occur in the proximal colon (63% vs. 39%; P=0.002) and to have MSI (29% vs. 11%, P=0.004). In a multivariable logistic regression model, proximal location (odds ratio (OR) 1.85; 95% confidence interval (CI) 1

  11. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

    Science.gov (United States)

    Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

    2010-08-06

Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution of the product methods, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended. In contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
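The bootstrapped percentile interval recommended above is straightforward to implement: estimate the indirect effect a·b from two regressions, resample cases with replacement, and take the 2.5th and 97.5th percentiles of the bootstrap distribution. A minimal sketch on simulated data (NumPy; the data and effect sizes are illustrative, not from Dunn et al., 2007):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)      # mediator model: a = 0.5
y = 0.4 * m + rng.normal(size=n)      # outcome model:  b = 0.4 (true indirect effect 0.2)

def coefs(y, X):
    """OLS coefficients with an intercept prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def indirect_effect(x, m, y):
    a = coefs(m, x[:, None])[1]                  # slope of m on x
    b = coefs(y, np.column_stack([m, x]))[1]     # slope of y on m, adjusting for x
    return a * b

boot = np.empty(2000)
for i in range(boot.size):
    s = rng.integers(0, n, size=n)               # resample cases with replacement
    boot[i] = indirect_effect(x[s], m[s], y[s])
lo, hi = np.percentile(boot, [2.5, 97.5])        # percentile confidence interval
```

If the interval (lo, hi) excludes zero, the null hypothesis of no indirect effect is rejected; the percentile interval makes no normality assumption about the distribution of a·b, which is why it holds up better than the Sobel test.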

  12. Frontline nurse managers' confidence and self-efficacy.

    Science.gov (United States)

    Van Dyk, Jennifer; Siedlecki, Sandra L; Fitzpatrick, Joyce J

    2016-05-01

    This study was focused on determining relationships between confidence levels and self-efficacy among nurse managers. Frontline nurse managers have a pivotal role in delivering high-quality patient care while managing the associated costs and resources. The competency and skill of nurse managers affect every aspect of patient care and staff well-being as nurse managers are largely responsible for creating work environments in which clinical nurses are able to provide high-quality, patient-centred, holistic care. A descriptive, correlational survey design was used; 85 nurse managers participated. Years in a formal leadership role and confidence scores were found to be significant predictors of self-efficacy scores. Experience as a nurse manager is an important component of confidence and self-efficacy. There is a need to develop educational programmes for nurse managers to enhance their self-confidence and self-efficacy, and to maintain experienced nurse managers in the role. © 2016 John Wiley & Sons Ltd.

  13. Determination of the postmortem interval by Laser Induced Breakdown Spectroscopy using swine skeletal muscles

    International Nuclear Information System (INIS)

    Marín-Roldan, A.; Manzoor, S.; Moncayo, S.; Navarro-Villoslada, F.; Izquierdo-Hornillos, R.C.; Caceres, J.O.

    2013-01-01

    Skin and muscle samples are useful to discriminate individuals as well as their postmortem interval (PMI) in crime scenes and natural or caused disasters. In this study, a simple and fast method based on Laser Induced Breakdown Spectroscopy (LIBS) has been developed to estimate PMI using swine skeletal muscle samples. Environmental conditions (moisture, temperature, fauna, etc.) having strong influence on the PMI determination were considered. Time-dependent changes in the emission intensity ratio for Mg, Na, Hα and K were observed, as a result of the variations in their concentration due to chemical reactions in tissues and were correlated with PMI. This relationship, which has not been reported previously in the forensic literature, offers a simple and potentially valuable means of estimating the PMI. - Highlights: • LIBS has been applied for Postmortem Interval estimation. • Environmental and sample storage conditions have been considered. • Significant correlation of elemental emission intensity with PMI has been observed. • Pig skeletal muscle samples have been used

  14. Determination of the postmortem interval by Laser Induced Breakdown Spectroscopy using swine skeletal muscles

    Energy Technology Data Exchange (ETDEWEB)

    Marín-Roldan, A.; Manzoor, S.; Moncayo, S.; Navarro-Villoslada, F.; Izquierdo-Hornillos, R.C.; Caceres, J.O., E-mail: jcaceres@quim.ucm.es

    2013-10-01

    Skin and muscle samples are useful to discriminate individuals as well as their postmortem interval (PMI) in crime scenes and natural or caused disasters. In this study, a simple and fast method based on Laser Induced Breakdown Spectroscopy (LIBS) has been developed to estimate PMI using swine skeletal muscle samples. Environmental conditions (moisture, temperature, fauna, etc.) having strong influence on the PMI determination were considered. Time-dependent changes in the emission intensity ratio for Mg, Na, Hα and K were observed, as a result of the variations in their concentration due to chemical reactions in tissues and were correlated with PMI. This relationship, which has not been reported previously in the forensic literature, offers a simple and potentially valuable means of estimating the PMI. - Highlights: • LIBS has been applied for Postmortem Interval estimation. • Environmental and sample storage conditions have been considered. • Significant correlation of elemental emission intensity with PMI has been observed. • Pig skeletal muscle samples have been used.

  15. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    Science.gov (United States)

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

    The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC, but all are relatively complicated to implement, and many perform poorly for large AUC values or small sample sizes. Since the AUC is itself a probability, we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error level much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data are not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
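The abstract does not give the exact form of the proposed modification, so the sketch below shows only the underlying idea: treating the estimated AUC as a single proportion and applying a plain Wald interval on a pocket-calculator scale. The function name and the example values are illustrative assumptions, not the paper's formula.

```python
import math

def wald_ci_auc(auc, n, z=1.96):
    """Plain Wald interval treating the estimated AUC as a proportion.

    auc -- estimated area under the ROC curve
    n   -- total sample size (cases plus controls)
    z   -- standard-normal quantile (1.96 for a 95% interval)
    """
    se = math.sqrt(auc * (1.0 - auc) / n)   # binomial standard error
    lower = max(0.0, auc - z * se)          # clip to the [0, 1] range
    upper = min(1.0, auc + z * se)
    return lower, upper

lo, hi = wald_ci_auc(auc=0.85, n=80)
```

A continuity-corrected variant would widen each limit by 1/(2n), mirroring the correction the authors recommend for small samples.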

  16. Determination and identification of naturally occurring decay series using milli-second order pulse time interval analysis (TIA)

    International Nuclear Information System (INIS)

    Hashimoto, T.; Sanada, Y.; Uezu, Y.

    2003-01-01

    A delayed coincidence method, called the time interval analysis (TIA) method, has been successfully applied to the selective determination of correlated α-α decay events with millisecond-order lifetimes. The main decay process amenable to TIA treatment is 220Rn → 216Po (T1/2: 145 ms) → ... in the Th series. TIA is fundamentally based on the difference in the time-interval distribution between correlated decay events and other events, such as background or random events, when the time-interval data are compiled within a fixed time window (for example, a tenth of the half-lives concerned). The sensitivity of the TIA analysis of correlated α-α decay events was subsequently improved with respect to background elimination by using the pulse shape discrimination technique (PSD with a PERALS counter) to reject β/γ pulses, purging the extractive scintillator with nitrogen gas, and applying solvent extraction of Ra. (author)
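The pair-compilation step of TIA can be illustrated in a few lines: flag successive events whose spacing falls within a fixed multiple of the 216Po half-life (145 ms). This is only the compilation step; a full TIA analysis would also subtract the interval distribution expected from random coincidences. The window choice and names below are illustrative assumptions.

```python
def correlated_pairs(timestamps_ms, half_life_ms=145.0, n_half_lives=5):
    """Collect successive events close enough in time to be candidate
    220Rn -> 216Po correlated alpha-alpha pairs.

    timestamps_ms -- sorted event arrival times in milliseconds
    """
    window = n_half_lives * half_life_ms  # 725 ms search window
    return [(t0, t1)
            for t0, t1 in zip(timestamps_ms, timestamps_ms[1:])
            if t1 - t0 <= window]

# Two short gaps (100 ms and 60 ms) among otherwise sparse events:
events = [0.0, 100.0, 5000.0, 5060.0, 20000.0]
pairs = correlated_pairs(events)
```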

  17. Interpretando correctamente en salud pública estimaciones puntuales, intervalos de confianza y contrastes de hipótesis Accurate interpretation of point estimates, confidence intervals, and hypothesis tests in public health

    Directory of Open Access Journals (Sweden)

    Manuel G Scotto

    2003-12-01

    This essay reviews some statistical concepts frequently used in public health research that are commonly misinterpreted: point estimates, confidence intervals, and hypothesis tests. By drawing a parallel among these three concepts, their most important differences in interpretation become apparent, from both the classical and the Bayesian perspectives.

  18. Confidence building - is science the only approach

    International Nuclear Information System (INIS)

    Bragg, K.

    1990-01-01

    The Atomic Energy Control Board (AECB) has begun to develop some simplified methods to determine if it is possible to provide confidence that dose, risk and environmental criteria can be respected without undue reliance on detailed scientific models. The progress to date will be outlined and the merits of this new approach will be compared to the more complex, traditional approach. Stress will be given to generating confidence in both technical and non-technical communities as well as the need to enhance communication between them. 3 refs., 1 tab

  19. Confidant Relations in Italy

    Directory of Open Access Journals (Sweden)

    Jenny Isaacs

    2015-02-01

    Confidants are often described as the individuals with whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant, and if so, to describe various features of the relationship. The vast majority of participants (91%) reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture.

  20. Method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations

    CSIR Research Space (South Africa)

    Kirton, A

    2010-08-01

    This report presents a method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations (A Kirton, B Scholes, S Archibald, CSIR Ecosystem Processes and Dynamics, Natural Resources and the Environment, P.O. Box 395, Pretoria, 0001, South Africa). It shows how prediction intervals (confidence intervals for predicted values) for allometric estimates can be obtained, using the example of estimating tree biomass from stem diameter, and explains how to deal with relationships in the power function form - a common form...
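The report's central calculation, a prediction interval for a power-function relationship fitted as a log-log regression and back-transformed, can be sketched as follows. This is a standard textbook construction, not the report's exact procedure; the function name and example data are illustrative.

```python
import numpy as np
from scipy import stats

def power_law_prediction_interval(diameter, biomass, d_new, level=0.95):
    """Prediction interval for biomass at a new stem diameter, from a
    power-function relationship fitted as ln(B) = a + b*ln(D).

    Returns (point estimate, lower, upper), back-transformed to the
    original scale by exponentiation.
    """
    x, y = np.log(diameter), np.log(biomass)
    n = len(x)
    b, a = np.polyfit(x, y, 1)                 # slope, intercept in log space
    resid = y - (a + b * x)
    s2 = np.sum(resid ** 2) / (n - 2)          # residual variance
    xbar = x.mean()
    sxx = np.sum((x - xbar) ** 2)
    x0 = np.log(d_new)
    se = np.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))
    t = stats.t.ppf(0.5 + level / 2, df=n - 2)
    yhat = a + b * x0
    return np.exp(yhat), np.exp(yhat - t * se), np.exp(yhat + t * se)

# Exact power-law data B = 0.1 * D^2.5, so the fit is perfect:
diameter = np.array([10.0, 20.0, 30.0, 40.0])
biomass = 0.1 * diameter ** 2.5
est, lo, hi = power_law_prediction_interval(diameter, biomass, 25.0)
```

Note the back-transformed interval is asymmetric around the estimate, which is one of the points such reports make about power-function allometry.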

  1. Haematological and biochemical reference intervals for free-ranging brown bears (Ursus arctos) in Sweden

    DEFF Research Database (Denmark)

    Græsli, Anne Randi; Fahlman, Åsa; Evans, Alina L.

    2014-01-01

    Background: Establishment of haematological and biochemical reference intervals is important for assessing the health of animals at both the individual and population level. Reference intervals for 13 haematological and 34 biochemical variables were established based on 88 apparently healthy free-ranging brown bears... and marking for ecological studies. For each of the variables, the reference interval was described based on the 95% confidence interval, and differences due to the host characteristics sex and age were included if detected. To our knowledge, this is the first report of reference intervals for free-ranging brown... and the differences due to the host factors age and gender can be useful for evaluation of health status in free-ranging European brown bears.

  2. Identifying the bad guy in a lineup using confidence judgments under deadline pressure.

    Science.gov (United States)

    Brewer, Neil; Weber, Nathan; Wootton, David; Lindsay, D Stephen

    2012-10-01

    Eyewitness-identification tests often culminate in witnesses not picking the culprit or identifying innocent suspects. We tested a radical alternative to the traditional lineup procedure used in such tests. Rather than making a positive identification, witnesses made confidence judgments under a short deadline about whether each lineup member was the culprit. We compared this deadline procedure with the traditional sequential-lineup procedure in three experiments with retention intervals ranging from 5 min to 1 week. A classification algorithm that identified confidence criteria that optimally discriminated accurate from inaccurate decisions revealed that decision accuracy was 24% to 66% higher under the deadline procedure than under the traditional procedure. Confidence profiles across lineup stimuli were more informative than were identification decisions about the likelihood that an individual witness recognized the culprit or correctly recognized that the culprit was not present. Large differences between the maximum and the next-highest confidence value signaled very high accuracy. Future support for this procedure across varied conditions would highlight a viable alternative to the problematic lineup procedures that have traditionally been used by law enforcement.
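The classification step described above, finding a confidence criterion that best separates accurate from inaccurate decisions, can be sketched with a simple grid search. This is a stand-in for the paper's algorithm, not its implementation; the data and names are hypothetical.

```python
def best_confidence_criterion(confidences, correct):
    """Return the confidence cut-off that maximises classification
    accuracy when decisions at or above the cut-off are called
    'accurate' and those below it 'inaccurate'."""
    best_c, best_acc = None, -1.0
    for c in sorted(set(confidences)):
        hits = sum((conf >= c) == ok
                   for conf, ok in zip(confidences, correct))
        acc = hits / len(correct)
        if acc > best_acc:
            best_c, best_acc = c, acc
    return best_c, best_acc

# Toy data: the high-confidence decisions happen to be the accurate ones.
confidences = [90, 80, 30, 20, 85, 10]
correct = [True, True, False, False, True, False]
criterion, accuracy = best_confidence_criterion(confidences, correct)
```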

  3. "Normality of Residuals Is a Continuous Variable, and Does Seem to Influence the Trustworthiness of Confidence Intervals: A Response to, and Appreciation of, Williams, Grajales, and Kurkiewicz (2013)"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2013-09-01

    Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed response or predictor variables. They go on to discuss estimate bias and provide a helpful summary of the assumptions of multiple regression when using ordinary least squares. While we were not as precise as we could have been when discussing assumptions of normality, the critical issue of the 2002 paper remains: researchers often do not check on or report on the assumptions of their statistical methods. This response expands on the points made by Williams, advocates a thorough examination of data prior to reporting results, and provides an example of how incremental improvements in meeting the assumption of normality of residuals incrementally improve the accuracy of confidence intervals.

  4. Determinants of birth interval in a rural Mediterranean population (La Alpujarra, Spain).

    Science.gov (United States)

    Polo, V; Luna, F; Fuster, V

    2000-10-01

    The fertility pattern, in terms of birth intervals, in a rural population not practicing contraception belonging to La Alta Alpujarra Oriental (southeast Spain) is analyzed. During the first half of the 20th century, this population experienced a considerable degree of geographical and cultural isolation. Because of this population's high variability in fertility and therefore in birth intervals, the analysis was limited to a homogenous subsample of 154 families, each with at least five pregnancies. This limitation allowed us to analyze, among and within families, effects of a set of variables on the interbirth pattern, and to avoid possible problems of pseudoreplication. Information on birth date of the mother, age at marriage, children's birth date and death date, birth order, and frequency of miscarriages was collected. Our results indicate that interbirth intervals depend on an exponential effect of maternal age, especially significant after the age of 35. This effect is probably related to the biological degenerative processes of female fertility with age. A linear increase of birth intervals with birth order within families was found as well as a reduction of intervals among families experiencing an infant death. Our sample size was insufficient to detect a possible replacement behavior in the case of infant death. High natality and mortality rates, a secular decrease of natality rates, a log-normal birth interval, and family-size distributions suggest that La Alpujarra has been a natural fertility population following a demographic transition process.

  5. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    Science.gov (United States)

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories.
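A pointwise percentile-bootstrap band for a mean 1D trajectory can be computed in a few lines. Note that, as the paper argues, pointwise bands do not control error over the whole trajectory the way an RFT or other 1D correction does; this sketch shows only the pointwise construction, and its shapes and names are illustrative.

```python
import numpy as np

def pointwise_bootstrap_band(trajectories, n_boot=2000, alpha=0.05, seed=0):
    """Pointwise percentile-bootstrap CI for the mean trajectory.

    trajectories -- array of shape (n_subjects, n_timepoints)
    Returns (lower, upper), each of length n_timepoints.
    """
    rng = np.random.default_rng(seed)
    n = trajectories.shape[0]
    idx = rng.integers(0, n, size=(n_boot, n))   # resample subjects
    boot_means = trajectories[idx].mean(axis=1)  # (n_boot, n_timepoints)
    lower = np.percentile(boot_means, 100 * alpha / 2, axis=0)
    upper = np.percentile(boot_means, 100 * (1 - alpha / 2), axis=0)
    return lower, upper

rng = np.random.default_rng(1)
data = rng.normal(size=(30, 50))                 # 30 subjects, 50 time points
lower, upper = pointwise_bootstrap_band(data)
```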

  6. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    Science.gov (United States)

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
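The percentile bootstrap that performed well in this study is straightforward for an observed-variable mediation model (the study itself used latent variables and SEM software, so this is a simplified sketch with illustrative names and simulated data):

```python
import numpy as np

def percentile_boot_indirect(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the indirect effect a*b in a simple
    observed-variable mediation model.

    a -- slope of M on X;  b -- partial slope of Y on M, controlling X.
    """
    rng = np.random.default_rng(seed)
    n = len(x)

    def indirect(i):
        a = np.polyfit(x[i], m[i], 1)[0]
        design = np.column_stack([np.ones(n), m[i], x[i]])
        beta, *_ = np.linalg.lstsq(design, y[i], rcond=None)
        return a * beta[1]

    boots = np.array([indirect(rng.integers(0, n, n))
                      for _ in range(n_boot)])
    return (np.percentile(boots, 100 * alpha / 2),
            np.percentile(boots, 100 * (1 - alpha / 2)))

# Simulated data with true a = b = 0.5, so true indirect effect = 0.25:
rng = np.random.default_rng(42)
x = rng.normal(size=200)
m = 0.5 * x + 0.1 * rng.normal(size=200)
y = 0.5 * m + 0.1 * rng.normal(size=200)
lo, hi = percentile_boot_indirect(x, m, y)
```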

  7. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
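The flavour of interval statistics is easiest to see for the mean: when each measurement is known only to lie in an interval, the set of achievable sample means is itself an interval, bounded by the means of the endpoints. (Other statistics, such as the variance, are much harder to bound and are the subject of the algorithms the report describes.) A minimal sketch:

```python
def interval_mean(data):
    """Bounds on the sample mean of interval-valued measurements.

    data -- iterable of (low, high) pairs
    Returns [mean of lows, mean of highs], which contains every mean
    achievable by point values consistent with the data.
    """
    lows, highs = zip(*data)
    n = len(lows)
    return sum(lows) / n, sum(highs) / n

bounds = interval_mean([(1.0, 1.2), (0.8, 1.1), (0.9, 1.0)])
```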

  8. The INTERVAL trial to determine whether intervals between blood donations can be safely and acceptably decreased to optimise blood supply: study protocol for a randomised controlled trial.

    Science.gov (United States)

    Moore, Carmel; Sambrook, Jennifer; Walker, Matthew; Tolkien, Zoe; Kaptoge, Stephen; Allen, David; Mehenny, Susan; Mant, Jonathan; Di Angelantonio, Emanuele; Thompson, Simon G; Ouwehand, Willem; Roberts, David J; Danesh, John

    2014-09-17

    Ageing populations may demand more blood transfusions, but the blood supply could be limited by difficulties in attracting and retaining a decreasing pool of younger donors. One approach to increase blood supply is to collect blood more frequently from existing donors. If more donations could be safely collected in this manner at marginal cost, then it would be of considerable benefit to blood services. National Health Service (NHS) Blood and Transplant in England currently allows men to donate up to every 12 weeks and women to donate up to every 16 weeks. In contrast, some other European countries allow donations as frequently as every 8 weeks for men and every 10 weeks for women. The primary aim of the INTERVAL trial is to determine whether donation intervals can be safely and acceptably decreased to optimise blood supply whilst maintaining the health of donors. INTERVAL is a randomised trial of whole blood donors enrolled from all 25 static centres of NHS Blood and Transplant. Recruitment of about 50,000 male and female donors started in June 2012 and was completed in June 2014. Men have been randomly assigned to standard 12-week versus 10-week versus 8-week inter-donation intervals, while women have been assigned to standard 16-week versus 14-week versus 12-week inter-donation intervals. Sex-specific comparisons will be made by intention-to-treat analysis of outcomes assessed after two years of intervention. The primary outcome is the number of blood donations made. A key secondary outcome is donor quality of life, assessed using the Short Form Health Survey. Additional secondary endpoints include the number of 'deferrals' due to low haemoglobin (and other factors), iron status, cognitive function, physical activity, and donor attitudes. A comprehensive health economic analysis will be undertaken. The INTERVAL trial should yield novel information about the effect of inter-donation intervals on blood supply, acceptability, and donors' physical and mental well

  9. Spreadsheet design and validation for characteristic limits determination in gross alpha and beta measurement

    International Nuclear Information System (INIS)

    Prado, Rodrigo G.P. do; Dalmazio, Ilza

    2013-01-01

    The identification and detection of ionizing radiation are essential requisites of radiation protection. Gross alpha and beta measurements are widely applied as a screening method in radiological characterization, environmental monitoring and industrial applications. As in any other analytical technique, test performance depends on the quality of instrumental measurements and reliability of calculations. Characteristic limits refer to three specific statistics, namely, decision threshold, detection limit and confidence interval, which are fundamental to ensuring the quality of determinations. This work describes a way to calculate characteristic limits for measurements of gross alpha and beta activity applying spreadsheets. The approach used for determination of decision threshold, detection limit and limits of the confidence interval, the mathematical expressions of measurands and uncertainty followed standards guidelines. A succinct overview of this approach and examples are presented and spreadsheets were validated using specific software. Furthermore, these spreadsheets could be used as tool to instruct beginner users of methods for ionizing radiation measurements. (author)
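For a plain gross-counting measurement with Poisson statistics, the decision threshold and detection limit that such spreadsheets compute can be sketched as below. This is a simplified ISO 11929-style calculation that ignores calibration-factor uncertainty; the variable names and example counting times are illustrative, not the authors' spreadsheet.

```python
import math

def characteristic_limits(n_b, t_b, t_g, k=1.645):
    """Decision threshold and detection limit for a net count rate.

    n_b -- background counts observed during background time t_b (s)
    t_g -- counting time of the gross (sample) measurement (s)
    k   -- quantile for alpha = beta (1.645 for 5% error rates)
    """
    r_b = n_b / t_b                            # background rate (cps)
    u0 = math.sqrt(r_b * (1 / t_g + 1 / t_b))  # uncertainty at zero activity
    decision_threshold = k * u0
    # Closed form for the detection limit when alpha = beta:
    detection_limit = 2 * decision_threshold + k ** 2 / t_g
    return decision_threshold, detection_limit

dt, dl = characteristic_limits(n_b=400, t_b=4000.0, t_g=1000.0)
```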

  10. Critical analysis of consecutive unilateral cleft lip repairs: determining ideal sample size.

    Science.gov (United States)

    Power, Stephanie M; Matic, Damir B

    2013-03-01

    Objective: Cleft surgeons often show 10 consecutive lip repairs to reduce presentation bias, however the validity remains unknown. The purpose of this study is to determine the number of consecutive cases that represent average outcomes. Secondary objectives are to determine if outcomes correlate with cleft severity and to calculate interrater reliability. Design: Consecutive preoperative and 2-year postoperative photographs of the unilateral cleft lip-nose complex were randomized and evaluated by cleft surgeons. Parametric analysis was performed according to chronologic, consecutive order. The mean standard deviation over all raters enabled calculation of expected 95% confidence intervals around a mean tested for various sample sizes. Setting: Meeting of the American Cleft Palate-Craniofacial Association in 2009. Patients, Participants: Ten senior cleft surgeons evaluated 39 consecutive lip repairs. Main Outcome Measures: Preoperative severity and postoperative outcomes were evaluated using descriptive and quantitative scales. Results: Intraclass correlation coefficients for cleft severity and postoperative evaluations were 0.65 and 0.21, respectively. Outcomes did not correlate with cleft severity (P = .28). Calculations for 10 consecutive cases demonstrated wide 95% confidence intervals, spanning two points on both postoperative grading scales. Ninety-five percent confidence intervals narrowed within one qualitative grade (±0.30) and one point (±0.50) on the 10-point scale for 27 consecutive cases. Conclusions: Larger numbers of consecutive cases (n > 27) are increasingly representative of average results, but less practical in presentation format. Ten consecutive cases lack statistical support. Cleft surgeons showed low interrater reliability for postoperative assessments, which may reflect personal bias when evaluating another surgeon's results.
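The study's core calculation, how many consecutive cases make the 95% confidence interval around a mean rating acceptably narrow, can be sketched with the normal approximation. The rater SD below is a made-up value for illustration, not the study's figure.

```python
import math

def n_for_halfwidth(sd, halfwidth, z=1.96):
    """Smallest sample size whose normal-approximation 95% CI for a
    mean has at most the requested half-width: z * sd / sqrt(n)."""
    return math.ceil((z * sd / halfwidth) ** 2)

# Hypothetical rater SD of 1.3 points on the 10-point scale, targeting
# a +/-0.50-point half-width as in the abstract:
n_needed = n_for_halfwidth(sd=1.3, halfwidth=0.50)
```

A Student-t quantile instead of z = 1.96 would give a slightly larger n at small sample sizes.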

  11. Rapid determination of long-lived artificial alpha radionuclides using time interval analysis

    International Nuclear Information System (INIS)

    Uezu, Yasuhiro; Koarashi, Jun; Sanada, Yukihisa; Hashimoto, Tetsuo

    2003-01-01

    It is important to monitor long-lived alpha radionuclides such as plutonium (238Pu, 239+240Pu) in the working areas and environment of nuclear fuel cycle facilities, because the potential cancer risk from alpha radiation is well known to be higher than that from gamma radiation. Such monitoring therefore requires high sensitivity, high resolution and rapid determination in order to measure very low-level concentrations of plutonium isotopes. In such high-sensitivity monitoring, natural radionuclides, including radon (222Rn or 220Rn) and their progenies, should be eliminated as far as possible. For this purpose, a sophisticated discrimination method between Pu and the progenies of 222Rn or 220Rn using time interval analysis (TIA), which subtracts short-lived radionuclides by calculating the time-interval distributions of successive alpha and beta decay events on the millisecond or microsecond scale, was designed and developed. In this system, alpha rays from 214Po, 216Po and 212Po are extractable. The TIA measuring system comprises a Silicon Surface Barrier Detector (SSD), an amplifier, an Analog to Digital Converter (ADC), a Multi-Channel Analyzer (MCA), a high-resolution timer (TIMER), a multi-parameter collector and a personal computer. From the ADC, coincident alpha and beta pulses are sent to the MCA and the TIMER simultaneously, and the pulses are synthesized by the multi-parameter collector. After measurement, natural radionuclides are subtracted. Airborne particles were collected on a membrane filter for 60 minutes at 100 L/min, and small Pu particles were added to its surface. Alpha and beta rays were measured and natural radionuclides were subtracted by TIA within 5 times 145 ms. As a result, Pu hidden in the natural background could be recognized clearly. The lower limit of determination of 239Pu is calculated as 6x10^-9 Bq/cm^3, which satisfies the derived air concentration (DAC) for 239Pu (8x10^-9 Bq/cm^3).

  12. Primary care physicians' perceptions about and confidence in deciding which patients to refer for total joint arthroplasty of the hip and knee.

    Science.gov (United States)

    Waugh, E J; Badley, E M; Borkhoff, C M; Croxford, R; Davis, A M; Dunn, S; Gignac, M A; Jaglal, S B; Sale, J; Hawker, G A

    2016-03-01

    The purpose of this study is to examine the perceptions of primary care physicians (PCPs) regarding indications, contraindications, risks and benefits of total joint arthroplasty (TJA) and their confidence in selecting patients for referral for TJA. PCPs were recruited from among those providing care to participants in an established community cohort with hip or knee osteoarthritis (OA). Self-completed questionnaires were used to collect demographic and practice characteristics and perceptions about TJA. Confidence in referring appropriate patients for TJA was measured on a scale from 1 to 10; respondents scoring in the lowest tertile were considered to have 'low confidence'. Descriptive analyses were conducted and multiple logistic regression was used to determine key predictors of low confidence. 212 PCPs participated (58% response rate) (65% aged 50+ years, 45% female, 77% >15 years of practice). Perceptions about TJA were highly variable but on average, PCPs perceived that a typical surgical candidate would have moderate pain and disability, identified few absolute contraindications to TJA, and overestimated both the effectiveness and risks of TJA. On average, PCPs indicated moderate confidence in deciding who to refer. Independent predictors of low confidence were female physicians (OR = 2.18, 95% confidence interval (CI): 1.06-4.46) and reporting a 'lack of clarity about surgical indications' (OR = 3.54, 95% CI: 1.87-6.66). Variability in perceptions and lack of clarity about surgical indications underscore the need for decision support tools to inform PCP-patient decision making regarding referral for TJA.

  13. Magnetic Resonance Imaging in the measurement of whole body muscle mass: A comparison of interval gap methods

    International Nuclear Information System (INIS)

    Hellmanns, K.; McBean, K.; Thoirs, K.

    2015-01-01

    Purpose: Magnetic Resonance Imaging (MRI) is commonly used in body composition research to measure whole body skeletal muscle mass (SM). MRI calculation methods of SM can vary by analysing the images at different slice intervals (or interval gaps) along the length of the body. This study compared SM measurements made from MRI images of apparently healthy individuals using different interval gap methods to determine the error associated with each technique. It was anticipated that the results would inform researchers of optimum interval gap measurements to detect a predetermined minimum change in SM. Methods: A method comparison study was used to compare eight interval gap methods (interval gaps of 40, 50, 60, 70, 80, 100, 120 and 140 mm) against a reference 10 mm interval gap method for measuring SM from twenty MRI image sets acquired from apparently healthy participants. Pearson product-moment correlation analysis was used to determine the association between methods. Total error was calculated as the sum of the bias (systematic error) and the random error (limits of agreement) of the mean differences. Percentage error was used to demonstrate proportional error. Results: Pearson product-moment correlation analysis between the reference method and all interval gap methods demonstrated strong and significant associations (r > 0.99, p < 0.0001). The 40 mm interval gap method was comparable with the 10 mm interval reference method and had a low error (total error 0.95 kg, −3.4%). Analysis methods using wider interval gap techniques demonstrated larger errors than reported for dual-energy x-ray absorptiometry (DXA), a technique which is more available, less expensive, and less time consuming than MRI analysis of SM. Conclusions: Researchers using MRI to measure SM can be confident in using a 40 mm interval gap technique when analysing the images to detect minimum changes less than 1 kg. The use of wider intervals will introduce error that is no better than that of DXA.
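The slice-based calculation behind these interval gap methods is a Cavalieri-type estimate: sum the muscle cross-sectional areas measured on the analysed slices and multiply by the gap. The density value and the numbers below are illustrative assumptions, not the study's data.

```python
def muscle_mass_kg(slice_areas_cm2, gap_cm, density_g_cm3=1.04):
    """Skeletal muscle mass from MRI slice areas sampled at a fixed
    interval gap (1.04 g/cm3 is a commonly assumed muscle density)."""
    volume_cm3 = sum(slice_areas_cm2) * gap_cm   # Cavalieri volume estimate
    return volume_cm3 * density_g_cm3 / 1000.0   # grams -> kilograms

# Four slices of muscle area measured at a 40 mm (4 cm) gap:
mass = muscle_mass_kg([150.0, 160.0, 155.0, 140.0], gap_cm=4.0)
```

Widening the gap means fewer, sparser area samples enter the sum, which is why the wider-gap methods in the study accumulate larger errors.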

  14. Prevalence, determinants and prognosis of pulmonary hypertension among hemodialysis patients

    Science.gov (United States)

    Agarwal, Rajiv

    2012-01-01

    Background The prevalence, determinants and prognosis of pulmonary hypertension among long-term hemodialysis patients in the USA are poorly understood. Methods A cross-sectional survey of prevalence and determinants of pulmonary hypertension was performed, followed by longitudinal follow-up for all-cause mortality. Pulmonary hypertension was defined as an estimated systolic pulmonary artery pressure of >35 mmHg using echocardiograms performed within an hour after the end of dialysis. Results Prevalent in 110/288 patients (38%), the independent determinants of pulmonary hypertension were the following: left atrial diameter (odds ratio 10.1 per cm/m2, P pulmonary hypertension (53%, CMR 168.9/1000 patient-years) and 39 among 178 without pulmonary hypertension (22%, CMR 52.5/1000 patient-years) [unadjusted hazard ratio (HR) for death 2.12 (95% confidence interval 1.41–3.19), P pulmonary hypertension remained an independent predictor for all-cause mortality [HR 2.17 (95% confidence interval 1.31–3.61), P pulmonary hypertension is common and is strongly associated with an enlarged left atrium and poor long-term survival. Reducing left atrial size such as through volume control may be an attractive target to improve pulmonary hypertension. Improving pulmonary hypertension in this group of patients may improve the dismal outcomes. PMID:22290987

  15. Reference interval determination of hemoglobin fractions in umbilical cord and placental blood by capillary electrophoresis.

    Science.gov (United States)

    Bó, Suzane Dal; de Oliveira Lemos, Fabiane Kreutz; Pedrazzani, Fabiane Spagnol; Cagliari, Cláudia Rosa; Scotti, Luciana

    2016-04-01

    Umbilical cord and placental blood (UCPB) is a rich source of hematopoietic stem cells widely used to treat diseases that did not have effective treatments until recently. Umbilical cord and placental blood banks (UCPBBs) need to be created to store UCPB. UCPB is collected immediately after birth, processed, and frozen until infusion. Detection of abnormal hemoglobins is one of the UCPB screening tests available. The objective of the present study was to determine the reference interval for HbA, HbF, and HbA2 in UCPB using capillary electrophoresis. Methods: An observational retrospective study of UCPB samples undergoing hemoglobin electrophoresis was performed between April 2012 and May 2013. We analyzed 273 UCPB samples. All cords met the criteria of BrasilCORD. We found 19.9% (10.5–36.7%) for HbA, 80.1% (62.7–89.4%) for HbF, and 0.1% (0.0–0.6%) for HbA2. Data were expressed as median (P2.5–P97.5). Establishing specific reference intervals is the best option for most tests because such ranges reflect the status of the population in which the tests will be applied. The use of appropriate reference intervals ensures that clinical labs provide reliable information, thus enabling clinicians to correctly interpret results and choose the best approach for the target population.
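
    The median (P2.5–P97.5) summary used above can be sketched with a small nonparametric routine. This is a generic illustration on toy values, not the study's hemoglobin data; the `percentile` helper here uses simple linear interpolation between order statistics.

```python
# Nonparametric reference interval: median with P2.5-P97.5 limits.
# Toy data, illustrative only.

def percentile(sorted_vals, p):
    """Linear-interpolation percentile, 0 <= p <= 100, on sorted data."""
    k = (len(sorted_vals) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (k - lo) * (sorted_vals[hi] - sorted_vals[lo])

def reference_interval(values):
    """Return (median, P2.5, P97.5) for a sample of lab values."""
    s = sorted(values)
    return percentile(s, 50), percentile(s, 2.5), percentile(s, 97.5)

values = [float(v) for v in range(1, 101)]  # toy sample: 1.0 .. 100.0
med, p025, p975 = reference_interval(values)
```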

  16. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positive biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
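
    The coverage analysis the study argues for can be illustrated with a toy simulation: draw many samples, build a nominal 95% interval each time, and count how often it captures the true value. This sketch uses a simple z-interval for a normal mean with known sigma, which is far simpler than the ordinal CFA interval estimates above.

```python
import random
import statistics

# Toy coverage check: simulate samples, form a nominal 95% z-interval
# for the mean each time, and count how often it contains the truth.

def coverage(n=30, reps=2000, mu=0.0, sigma=1.0, z=1.96, seed=7):
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        half = z * sigma / n ** 0.5          # known-sigma half-width
        m = statistics.mean(sample)
        hits += (m - half) <= mu <= (m + half)
    return hits / reps

cov = coverage()  # should land near the nominal 0.95
```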

  17. Optimal Testing Intervals in the Squatting Test to Determine Baroreflex Sensitivity

    OpenAIRE

    Ishitsuka, S.; Kusuyama, N.; Tanaka, M.

    2014-01-01

    The recently introduced “squatting test” (ST) utilizes a simple postural change to perturb the blood pressure and to assess baroreflex sensitivity (BRS). In our study, we estimated the reproducibility of and the optimal testing interval between the STs in healthy volunteers. Thirty-four subjects free of cardiovascular disorders and taking no medication were instructed to perform the repeated ST at 30-sec, 1-min, and 3-min intervals in duplicate in a random sequence, while the systolic blood p...

  18. Food skills confidence and household gatekeepers' dietary practices.

    Science.gov (United States)

    Burton, Melissa; Reid, Mike; Worsley, Anthony; Mavondo, Felix

    2017-01-01

    Household food gatekeepers have the potential to influence the food attitudes and behaviours of family members, as they are mainly responsible for food-related tasks in the home. The aim of this study was to determine the role of gatekeepers' confidence in food-related skills and nutrition knowledge on food practices in the home. An online survey was completed by 1059 Australian dietary gatekeepers selected from the Global Market Insite (GMI) research database. Participants responded to questions about food acquisition and preparation behaviours, the home eating environment, perceptions and attitudes towards food, and demographics. Two-step cluster analysis was used to identify groups based on confidence regarding food skills and nutrition knowledge. Chi-square tests and one-way ANOVAs were used to compare the groups on the dependent variables. Three groups were identified: low confidence, moderate confidence and high confidence. Gatekeepers in the highest confidence group were significantly more likely to report lower body mass index (BMI), and indicate higher importance of fresh food products, vegetable prominence in meals, product information use, meal planning, perceived behavioural control and overall diet satisfaction. Gatekeepers in the lowest confidence group were significantly more likely to indicate more perceived barriers to healthy eating, report more time constraints and more impulse purchasing practices, and higher convenience ingredient use. Other smaller associations were also found. Household food gatekeepers with high food skills confidence were more likely to engage in several healthy food practices, while those with low food skills confidence were more likely to engage in unhealthy food practices. Food education strategies aimed at building food-skills and nutrition knowledge will enable current and future gatekeepers to make healthier food decisions for themselves and for their families. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Transverse micro-erosion meter measurements; determining minimum sample size

    Science.gov (United States)

    Trenhaile, Alan S.; Lakhan, V. Chris

    2011-11-01

    Two transverse micro-erosion meter (TMEM) stations were installed in each of four rock slabs, a slate/shale, basalt, phyllite/schist, and sandstone. One station was sprayed each day with fresh water and the other with a synthetic sea water solution (salt water). To record changes in surface elevation (usually downwearing but with some swelling), 100 measurements (the pilot survey), the maximum for the TMEM used in this study, were made at each station in February 2010, and then at two-monthly intervals until February 2011. The data were normalized using Box-Cox transformations and analyzed to determine the minimum number of measurements needed to obtain station means that fall within a range of confidence limits of the population means, and the means of the pilot survey. The effect on the confidence limits of reducing an already small number of measurements (say 15 or less) is much greater than that of reducing a much larger number of measurements (say more than 50) by the same amount. There was a tendency for the number of measurements, for the same confidence limits, to increase with the rate of downwearing, although it was also dependent on whether the surface was treated with fresh or salt water. About 10 measurements often provided fairly reasonable estimates of rates of surface change but with fairly high percentage confidence intervals in slowly eroding rocks; however, many more measurements were generally needed to derive means within 10% of the population means. The results were tabulated and graphed to provide an indication of the approximate number of measurements required for given confidence limits, and the confidence limits that might be attained for a given number of measurements.
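
    The underlying question — how many measurements are needed for a given confidence limit — follows the standard precision-based sample-size formula n ≥ (z·s/d)², where d is the acceptable half-width of the interval. A hedged sketch with invented downwearing values, not the study's data:

```python
import math

# Smallest n such that the 95% CI half-width z*sd/sqrt(n) stays within
# rel_error * |mean|. Values are illustrative, not TMEM measurements.

def n_for_precision(mean, sd, rel_error=0.10, z=1.96):
    half_width = rel_error * abs(mean)
    return math.ceil((z * sd / half_width) ** 2)

# A slowly eroding surface with high relative scatter needs many points:
n_slow = n_for_precision(mean=0.05, sd=0.04)   # mm per interval
# A faster, steadier surface needs far fewer:
n_fast = n_for_precision(mean=0.50, sd=0.10)
```

    This reproduces the pattern reported above: for the same relative confidence limits, slowly eroding (noisy) surfaces demand many more measurements than rapidly eroding ones.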

  20. We will be champions: Leaders' confidence in 'us' inspires team members' team confidence and performance.

    Science.gov (United States)

    Fransen, K; Steffens, N K; Haslam, S A; Vanbeselaere, N; Vande Broek, G; Boen, F

    2016-12-01

    The present research examines the impact of leaders' confidence in their team on the team confidence and performance of their teammates. In an experiment involving newly assembled soccer teams, we manipulated the team confidence expressed by the team leader (high vs neutral vs low) and assessed team members' responses and performance as they unfolded during a competition (i.e., in a first baseline session and a second test session). Our findings pointed to team confidence contagion such that when the leader had expressed high (rather than neutral or low) team confidence, team members perceived their team to be more efficacious and were more confident in the team's ability to win. Moreover, leaders' team confidence affected individual and team performance such that teams led by a highly confident leader performed better than those led by a less confident leader. Finally, the results supported a hypothesized mediational model in showing that the effect of leaders' confidence on team members' team confidence and performance was mediated by the leader's perceived identity leadership and members' team identification. In conclusion, the findings of this experiment suggest that leaders' team confidence can enhance members' team confidence and performance by fostering members' identification with the team. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Interval selection with machine-dependent intervals

    OpenAIRE

    Bohmova K.; Disser Y.; Mihalak M.; Widmayer P.

    2013-01-01

    We study an offline interval scheduling problem where every job has exactly one associated interval on every machine. To schedule a set of jobs, exactly one of the intervals associated with each job must be selected, and the intervals selected on the same machine must not intersect. We show that deciding whether all jobs can be scheduled is NP-complete already in various simple cases. In particular, by showing the NP-completeness for the case when all the intervals associated with the same job...

  2. Cardiopulmonary resuscitation; use, training and self-confidence in skills. A self-report study among hospital personnel

    Directory of Open Access Journals (Sweden)

    Hopstock Laila A

    2008-12-01

    Full Text Available Abstract Background Immediate start of basic cardiopulmonary resuscitation (CPR) and early defibrillation have been highlighted as crucial for survival from cardiac arrest, but despite new knowledge, new technology and massive personnel training the survival rates from in-hospital cardiac arrest are still low. National guidelines recommend regular intervals of CPR training to make all hospital personnel able to perform basic CPR till advanced care is available. This study investigates CPR training, resuscitation experience and self-confidence in skills among hospital personnel outside critical care areas. Methods A cross-sectional study was performed at three Norwegian hospitals. Data on CPR training and CPR use were collected by self-reports from 361 hospital personnel. Results A total of 89% reported training in CPR, but only 11% had updated their skills in accordance with the time interval recommended by national guidelines. Real resuscitation experience was reported by one third of the respondents. Both training intervals and use of skills in resuscitation situations differed among the professions. Self-reported confidence decreased only after more than two years since last CPR training. Conclusion There is a gap between recommendations and reality in CPR training among hospital personnel working outside critical care areas.

  3. Exact nonparametric confidence bands for the survivor function.

    Science.gov (United States)

    Matthews, David

    2013-10-12

    A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the Van Wijngaarden-Decker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.
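
    As a simple point of comparison — not the exact likelihood-based band the paper derives — the Dvoretzky–Kiefer–Wolfowitz inequality gives a closed-form, conservative simultaneous band for the empirical CDF: F̂(t) ± √(ln(2/α)/(2n)), clipped to [0, 1].

```python
import math

# DKW simultaneous confidence band for the empirical CDF (uncensored
# data). Conservative and closed-form; exact bands as in the paper
# require inverting a nonparametric likelihood test instead.

def dkw_band(sorted_sample, alpha=0.05):
    n = len(sorted_sample)
    eps = math.sqrt(math.log(2.0 / alpha) / (2.0 * n))
    ecdf = [(i + 1) / n for i in range(n)]
    lower = [max(0.0, f - eps) for f in ecdf]
    upper = [min(1.0, f + eps) for f in ecdf]
    return eps, lower, upper

sample = sorted([0.3, 0.7, 1.2, 1.9, 2.4, 3.1, 3.8, 4.4, 5.0, 5.7])
eps, lo, hi = dkw_band(sample)
```

    With only n = 10 observations the half-width is large (about 0.43), which is exactly why exact, sample-configuration-specific bands are attractive for small censored samples.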

  4. Raising Confident Kids

    Science.gov (United States)

    KidsHealth / For Parents / Raising Confident Kids

  5. An appraisal of statistical procedures used in derivation of reference intervals.

    Science.gov (United States)

    Ichihara, Kiyoshi; Boyd, James C

    2010-11-01

    When conducting studies to derive reference intervals (RIs), various statistical procedures are commonly applied at each step, from the planning stages to final computation of RIs. Determination of the necessary sample size is an important consideration, and evaluation of at least 400 individuals in each subgroup has been recommended to establish reliable common RIs in multicenter studies. Multiple regression analysis allows identification of the most important factors contributing to variation in test results, while accounting for possible confounding relationships among these factors. Of the various approaches proposed for judging the necessity of partitioning reference values, nested analysis of variance (ANOVA) is the likely method of choice owing to its ability to handle multiple groups and being able to adjust for multiple factors. Box-Cox power transformation often has been used to transform data to a Gaussian distribution for parametric computation of RIs. However, this transformation occasionally fails. Therefore, the non-parametric method based on determination of the 2.5 and 97.5 percentiles following sorting of the data, has been recommended for general use. The performance of the Box-Cox transformation can be improved by introducing an additional parameter representing the origin of transformation. In simulations, the confidence intervals (CIs) of reference limits (RLs) calculated by the parametric method were narrower than those calculated by the non-parametric approach. However, the margin of difference was rather small owing to additional variability in parametrically-determined RLs introduced by estimation of parameters for the Box-Cox transformation. The parametric calculation method may have an advantage over the non-parametric method in allowing identification and exclusion of extreme values during RI computation.
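
    The parametric route described above can be sketched as: Box-Cox transform the data toward a Gaussian shape, take mean ± 1.96 SD on the transformed scale, then back-transform the limits. Here λ is fixed for illustration (λ = 0 is the log transform); in practice it is estimated from the data, e.g. by maximum likelihood, and the two-parameter variant adds an origin shift.

```python
import math
import statistics

# Parametric reference interval via a fixed-lambda Box-Cox transform.
# Toy log-normal-ish data; lambda is assumed, not estimated.

def boxcox(x, lam):
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def inv_boxcox(y, lam):
    return math.exp(y) if lam == 0 else (lam * y + 1.0) ** (1.0 / lam)

def parametric_ri(values, lam):
    t = [boxcox(v, lam) for v in values]
    m, sd = statistics.mean(t), statistics.stdev(t)
    return inv_boxcox(m - 1.96 * sd, lam), inv_boxcox(m + 1.96 * sd, lam)

data = [math.exp(0.1 * k) for k in range(1, 41)]  # skewed toy sample
low, high = parametric_ri(data, lam=0.0)          # lam=0: log transform
```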

  6. Determining optimal preventive maintenance interval for component of Well Barrier Element in an Oil & Gas Company

    Science.gov (United States)

    Siswanto, A.; Kurniati, N.

    2018-04-01

    An oil and gas company has 2,268 oil and gas wells. Well Barrier Element (WBE) is installed in a well to protect human, prevent asset damage and minimize harm to the environment. The primary WBE component is Surface Controlled Subsurface Safety Valve (SCSSV). The secondary WBE component is Christmas Tree Valves that consist of four valves i.e. Lower Master Valve (LMV), Upper Master Valve (UMV), Swab Valve (SV) and Wing Valve (WV). Current practice on WBE Preventive Maintenance (PM) program is conducted by considering the suggested schedule as stated on manual. Corrective Maintenance (CM) program is conducted when the component fails unexpectedly. Both PM and CM need cost and may cause production loss. This paper attempts to analyze the failure data and reliability based on historical data. Optimal PM interval is determined in order to minimize the total cost of maintenance per unit time. The optimal PM interval for SCSSV is 730 days, LMV is 985 days, UMV is 910 days, SV is 900 days and WV is 780 days. In average of all components, the cost reduction by implementing the suggested interval is 52%, while the reliability is improved by 4% and the availability is increased by 5%.
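
    The "minimize total cost of maintenance per unit time" criterion is usually formalized as the age-replacement cost-rate model: C(T) = (Cp·R(T) + Cf·(1 − R(T))) / ∫₀ᵀ R(t) dt, minimized over the PM interval T. A hedged sketch with a Weibull reliability function and invented costs — the paper's fitted parameters are not given:

```python
import math

# Age-replacement model: preventive cost cp, failure cost cf,
# Weibull reliability R(t). Parameters below are illustrative.

def weibull_R(t, beta, eta):
    return math.exp(-((t / eta) ** beta))

def cost_rate(T, beta, eta, cp, cf, steps=1000):
    # Trapezoidal integral of R(t) over [0, T] = expected cycle length.
    h = T / steps
    area = sum(
        0.5 * h * (weibull_R(i * h, beta, eta) + weibull_R((i + 1) * h, beta, eta))
        for i in range(steps)
    )
    R = weibull_R(T, beta, eta)
    return (cp * R + cf * (1.0 - R)) / area

def optimal_interval(beta, eta, cp, cf, grid):
    return min(grid, key=lambda T: cost_rate(T, beta, eta, cp, cf))

# Wear-out failures (beta > 1) make a finite PM interval worthwhile.
grid = range(100, 2001, 50)   # candidate intervals in days
T_star = optimal_interval(beta=2.5, eta=1500.0, cp=1.0, cf=10.0, grid=grid)
```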

  7. Assessing QT interval prolongation and its associated risks with antipsychotics

    DEFF Research Database (Denmark)

    Nielsen, Jimmi; Graff, Claus; Kanters, Jørgen K.

    2011-01-01

    markers for TdP have been developed but none of them is clinically implemented yet and QT interval prolongation is still considered the most valid surrogate marker. Although automated QT interval determination may offer some assistance, QT interval determination is best performed by a cardiologist skilled...

  8. Maximum-confidence discrimination among symmetric qudit states

    International Nuclear Information System (INIS)

    Jimenez, O.; Solis-Prosser, M. A.; Delgado, A.; Neves, L.

    2011-01-01

    We study the maximum-confidence (MC) measurement strategy for discriminating among nonorthogonal symmetric qudit states. Restricting to linearly dependent and equally likely pure states, we find the optimal positive operator valued measure (POVM) that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results. The physical realization of this POVM is completely determined and it is shown that after an inconclusive outcome, the input states may be mapped into a new set of equiprobable symmetric states, restricted, however, to a subspace of the original qudit Hilbert space. By applying the MC measurement again onto this new set, we can still gain some information about the input states, although with less confidence than before. This leads us to introduce the concept of sequential maximum-confidence (SMC) measurements, where the optimized MC strategy is iterated in as many stages as allowed by the input set, until no further information can be extracted from an inconclusive result. Within each stage of this measurement our confidence in identifying the input states is the highest possible, although it decreases from one stage to the next. In addition, the more stages we accomplish within the maximum allowed, the higher will be the probability of correct identification. We will discuss an explicit example of the optimal SMC measurement applied in the discrimination among four symmetric qutrit states and propose an optical network to implement it.

  9. Determining frequentist confidence limits using a directed parameter space search

    International Nuclear Information System (INIS)

    Daniel, Scott F.; Connolly, Andrew J.; Schneider, Jeff

    2014-01-01

    We consider the problem of inferring constraints on a high-dimensional parameter space with a computationally expensive likelihood function. We propose a machine learning algorithm that maps out the Frequentist confidence limit on parameter space by intelligently targeting likelihood evaluations so as to quickly and accurately characterize the likelihood surface in both low- and high-likelihood regions. We compare our algorithm to Bayesian credible limits derived by the well-tested Markov Chain Monte Carlo (MCMC) algorithm using both multi-modal toy likelihood functions and the seven yr Wilkinson Microwave Anisotropy Probe cosmic microwave background likelihood function. We find that our algorithm correctly identifies the location, general size, and general shape of high-likelihood regions in parameter space while being more robust against multi-modality than MCMC.
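
    For a single parameter, the frequentist confidence region such a search maps out is the set of values whose likelihood-ratio statistic stays below a chi-square threshold (3.84 for 95%, 1 degree of freedom). A one-dimensional toy version with a Gaussian likelihood, where the region can be checked analytically:

```python
import math

# 95% confidence region: { mu : -2 ln[L(mu)/L(mu_hat)] <= 3.84 }.
# Gaussian likelihood with known sigma; data are invented.

def gaussian_nll(mu, data, sigma=1.0):
    return sum(0.5 * ((x - mu) / sigma) ** 2 for x in data)

def confidence_region(data, grid, threshold=3.84):
    mu_hat = sum(data) / len(data)            # MLE of the mean
    nll_min = gaussian_nll(mu_hat, data)
    return [mu for mu in grid
            if 2.0 * (gaussian_nll(mu, data) - nll_min) <= threshold]

data = [4.8, 5.1, 5.3, 4.9, 5.2, 5.0, 5.1, 4.6]
grid = [i / 1000.0 for i in range(4000, 6001)]  # scan mu in [4.0, 6.0]
region = confidence_region(data, grid)
lo, hi = region[0], region[-1]                  # mean +/- 1.96*sigma/sqrt(n)
```

    The brute-force grid stands in for the directed search the paper proposes; the point of their algorithm is to spend likelihood evaluations only where they refine this boundary.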

  10. Degradation of Insecticides in Poultry Manure: Determining the Insecticidal Treatment Interval for Managing House Fly (Diptera: Muscidae) Populations in Poultry Farms.

    Science.gov (United States)

    Ong, Song-Quan; Ab Majid, Abdul Hafiz; Ahmad, Hamdan

    2016-04-01

    It is crucial to understand the degradation pattern of insecticides when designing a sustainable control program for the house fly, Musca domestica (L.), on poultry farms. The aim of this study was to determine the half-life and degradation rates of cyromazine, chlorpyrifos, and cypermethrin by spiking these insecticides into poultry manure, and then quantitatively analyzing the insecticide residue using ultra-performance liquid chromatography. The insecticides were later tested in the field in order to study the appropriate insecticidal treatment intervals. Manure samples were bio-assayed at 3, 7, 10, and 15 d for efficacy against susceptible house fly larvae. Degradation analysis demonstrated that cyromazine has the shortest half-life (3.01 d) compared with chlorpyrifos (4.36 d) and cypermethrin (3.75 d). Cyromazine also had a significantly greater degradation rate compared with chlorpyrifos and cypermethrin. For the field insecticidal treatment interval study, 10 d was the interval that had been determined for cyromazine due to its significantly lower residue; for ChCy (a mixture of chlorpyrifos and cypermethrin), the suggested interval was 7 d. Future work should focus on the effects of insecticide metabolites on targeted pests and the poultry manure environment.
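
    The reported half-lives follow from the standard first-order decay model: fit ln(residue) = ln(C₀) − k·t by least squares, then t½ = ln 2 / k. A sketch on synthetic residue data with a true half-life of 3 days — not the paper's chromatography measurements:

```python
import math

# First-order decay: half-life from a log-linear least-squares fit.

def half_life(days, residues):
    logs = [math.log(r) for r in residues]
    n = len(days)
    mt = sum(days) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(days, logs))
             / sum((t - mt) ** 2 for t in days))
    k = -slope                      # decay rate constant (per day)
    return math.log(2.0) / k

days = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
residues = [100.0 * 0.5 ** (t / 3.0) for t in days]  # exact 3-day half-life
t_half = half_life(days, residues)
```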

  11. Applying Fuzzy Logic and Data Mining Techniques in Wireless Sensor Network for Determination Residential Fire Confidence

    Directory of Open Access Journals (Sweden)

    Mirjana Maksimović

    2014-09-01

    Full Text Available The main goal of soft computing technologies (fuzzy logic, neural networks, fuzzy rule-based systems, data mining techniques…) is to find and describe the structural patterns in the data in order to try to explain connections between data and on their basis create predictive or descriptive models. Integration of these technologies in sensor nodes seems to be a good idea because it can lead to significant network performance improvements, above all to reduce the energy consumption and enhance the lifetime of the network. The purpose of this paper is to analyze different algorithms in the case of fire confidence determination in order to see which of the methods and parameter values work best for the given problem. Hence, an analysis between different classification algorithms in a case of nominal and numerical data…

  12. Establishment of reference intervals for plasma protein electrophoresis in Indo-Pacific green sea turtles, Chelonia mydas.

    Science.gov (United States)

    Flint, Mark; Matthews, Beren J; Limpus, Colin J; Mills, Paul C

    2015-01-01

    Biochemical and haematological parameters are increasingly used to diagnose disease in green sea turtles. Specific clinical pathology tools, such as plasma protein electrophoresis analysis, are now being used more frequently to improve our ability to diagnose disease in the live animal. Plasma protein reference intervals were calculated from 55 clinically healthy green sea turtles using pulsed field electrophoresis to determine pre-albumin, albumin, α-, β- and γ-globulin concentrations. The estimated reference intervals were then compared with data profiles from clinically unhealthy turtles admitted to a local wildlife hospital to assess the validity of the derived intervals and identify the clinically useful plasma protein fractions. Eighty-six per cent {19 of 22 [95% confidence interval (CI) 65-97]} of clinically unhealthy turtles had values outside the derived reference intervals, including the following: total protein [six of 22 turtles or 27% (95% CI 11-50%)], pre-albumin [two of five, 40% (95% CI 5-85%)], albumin [13 of 22, 59% (95% CI 36-79%)], total albumin [13 of 22, 59% (95% CI 36-79%)], α- [10 of 22, 45% (95% CI 24-68%)], β- [two of 10, 20% (95% CI 3-56%)], γ- [one of 10, 10% (95% CI 0.3-45%)] and β-γ-globulin [one of 12, 8% (95% CI 0.2-38%)] and total globulin [five of 22, 23% (8-45%)]. Plasma protein electrophoresis shows promise as an accurate adjunct tool to identify a disease state in marine turtles. This study presents the first reference interval for plasma protein electrophoresis in the Indo-Pacific green sea turtle.
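
    The bracketed ranges above (e.g. 19 of 22, 95% CI 65–97%) are binomial proportion intervals. The study's exact construction is not stated; a Wilson score interval, one common choice for small samples, can be sketched as:

```python
import math

# Wilson score interval for a binomial proportion (one common choice;
# Clopper-Pearson exact intervals are another).

def wilson_interval(successes, n, z=1.96):
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# 19 of 22 clinically unhealthy turtles flagged by the intervals:
lo, hi = wilson_interval(19, 22)   # roughly 0.67 to 0.95
```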

  13. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  14. Dynamic visual noise reduces confidence in short-term memory for visual information.

    Science.gov (United States)

    Kemps, Eva; Andrade, Jackie

    2012-05-01

    Previous research has shown effects of the visual interference technique, dynamic visual noise (DVN), on visual imagery, but not on visual short-term memory, unless retention of precise visual detail is required. This study tested the prediction that DVN does also affect retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention interval interference conditions (DVN, static visual noise and no interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.

  15. Five-Year Risk of Interval-Invasive Second Breast Cancer

    Science.gov (United States)

    Buist, Diana S. M.; Houssami, Nehmat; Dowling, Emily C.; Halpern, Elkan F.; Gazelle, G. Scott; Lehman, Constance D.; Henderson, Louise M.; Hubbard, Rebecca A.

    2015-01-01

    Background: Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. Methods: We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. Results: The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. Conclusions: PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. PMID:25904721
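
    The odds ratios above come with 95% CIs built the standard way: exp(ln OR ± 1.96·SE(ln OR)). For a plain 2×2 table the Woolf standard error is √(1/a + 1/b + 1/c + 1/d); the study's discrete-time survival models generalize this, but the interval logic is the same. A sketch with invented counts:

```python
import math

# Odds ratio with a Woolf (log-scale) 95% confidence interval from a
# 2x2 table: a,b = exposed with/without event; c,d = unexposed.

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(or_)
    return or_, math.exp(log_or - z * se), math.exp(log_or + z * se)

or_, lo, hi = odds_ratio_ci(a=20, b=80, c=10, d=90)  # toy counts
```

    A CI whose lower limit hugs 1.0, as here, is exactly the borderline case where "independent predictor" claims hinge on the adjustment model.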

  16. Exclusive breastfeeding duration and determinants among Brazilian children under two years of age

    Directory of Open Access Journals (Sweden)

    Sarah Warkentin

    2013-06-01

    Full Text Available OBJECTIVE: The present study described the duration and identified the determinants of exclusive breastfeeding. METHODS: The study used data from the Pesquisa Nacional de Demografia e Saúde da Criança e da Mulher 2006 (National Demographic and Health Survey on Women and Children 2006). Data were collected using questionnaires administered by trained professionals and refer to a subsample of 1,704 children aged less than 24 months. The estimated durations of exclusive breastfeeding are presented according to socioeconomic, demographic and epidemiological variables. Kaplan Meier estimator curves were used to produce valid estimates of breastfeeding duration and the Cox's proportional hazards model was fitted to identify risks. RESULTS: The median estimated duration of exclusive breastfeeding was 60 days. The final Cox model consisted of mother's age <20 years (hazard ratio=1.53, 95% confidence interval=1.11-1.48), use of pacifier (hazard ratio=1.53, 95% confidence interval=1.37-1.71), not residing in the country's southeast region (hazard ratio=1.22, 95% confidence interval=1.07-1.40) and socioeconomic status (hazard ratio=1.28, 95% confidence interval=1.06-1.55). CONCLUSION: The Kaplan Meier estimator corrected the underestimated duration of breastfeeding in the country when calculated by the current status methodology. Despite the national efforts done in the last decades to promote breastfeeding, the results indicate that the duration of exclusive breastfeeding is still half of that recommended for this dietary practice to promote health. Ways to revert this situation would be ongoing educational activities involving the educational and health systems, associated with advertising campaigns on television and radio mainly targeting young mothers with low education level and low income, identified as those at high risk of weaning their children early.
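
    The Kaplan-Meier estimator used above multiplies, at each event time, the running survival by (1 − d/n), where d events occur among the n subjects still at risk; censored subjects leave the risk set without an event. A minimal sketch on invented follow-up data (days until weaning, 0 = censored):

```python
# Kaplan-Meier product-limit estimator. Toy data, not the survey's.

def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns [(time, survival)] at each distinct event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = n_t = 0
        while i < len(order) and times[order[i]] == t:   # gather ties
            n_t += 1
            d += events[order[i]]
            i += 1
        if d:                                   # step only at event times
            surv *= 1.0 - d / at_risk
            curve.append((t, surv))
        at_risk -= n_t
    return curve

times = [30, 45, 60, 60, 75, 90, 90, 120]
events = [1, 0, 1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)
```

    Unlike the "current status" approach the abstract criticizes, the estimator uses censored children correctly, which is why it corrects the underestimated duration.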

  17. Diverse interpretations of confidence building

    International Nuclear Information System (INIS)

    Macintosh, J.

    1998-01-01

    This paper explores the variety of operational understandings associated with the term 'confidence building'. Collectively, these understandings constitute what should be thought of as a 'family' of confidence building approaches. This unacknowledged and generally unappreciated proliferation of operational understandings that function under the rubric of confidence building appears to be an impediment to effective policy. The paper's objective is to analyze these different understandings, stressing the important differences in their underlying assumptions. In the process, the paper underlines the need for the international community to clarify its collective thinking about what it means when it speaks of 'confidence building'. Without enhanced clarity, it will be unnecessarily difficult to employ the confidence building approach effectively due to the lack of consistent objectives and common operating assumptions. Although it is not the intention of this paper to promote a particular account of confidence building, dissecting existing operational understandings should help to identify whether there are fundamental elements that define what might be termed 'authentic' confidence building. Implicit here is the view that some operational understandings of confidence building may diverge too far from consensus models to count as meaningful members of the confidence building family. (author)

  18. Chinese Management Research Needs Self-Confidence but not Over-confidence

    DEFF Research Database (Denmark)

    Li, Xin; Ma, Li

    2018-01-01

    Chinese management research aims to contribute to global management knowledge by offering rigorous and innovative theories and practical recommendations both for managing in China and outside. However, two seemingly opposite directions that researchers are taking could prove detrimental to the healthy development of Chinese management research. We argue that the two directions share a common ground that lies in the mindset regarding the confidence in the work on and from China. One direction of simply following the American mainstream on academic rigor demonstrates a lack of self-confidence, limiting theoretical innovation and practical relevance. Yet going in the other direction of overly indigenous research reflects over-confidence, often isolating Chinese management research from mainstream academia and, at times, even becoming anti-science. A more integrated approach of conducting...

  19. Advanced Interval Management: A Benefit Analysis

    Science.gov (United States)

    Timer, Sebastian; Peters, Mark

    2016-01-01

    This document is the final report for the NASA Langley Research Center (LaRC)- sponsored task order 'Possible Benefits for Advanced Interval Management Operations.' Under this research project, Architecture Technology Corporation performed an analysis to determine the maximum potential benefit to be gained if specific Advanced Interval Management (AIM) operations were implemented in the National Airspace System (NAS). The motivation for this research is to guide NASA decision-making on which Interval Management (IM) applications offer the most potential benefit and warrant further research.

  20. Bootstrap confidence intervals for principal response curves

    NARCIS (Netherlands)

    Timmerman, Marieke E.; Ter Braak, Cajo J. F.

    2008-01-01

    The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the

  1. Bootstrap Confidence Intervals for Principal Response Curves

    NARCIS (Netherlands)

    Timmerman, M.E.; Braak, ter C.J.F.

    2008-01-01

    The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the
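
    The resampling idea behind bootstrap confidence intervals, which the PRC papers above apply to whole curves, can be illustrated for a scalar statistic. A minimal percentile-bootstrap sketch with invented data:

```python
# Percentile bootstrap CI for a mean (the PRC papers bootstrap whole curves;
# the resampling logic is the same). Data and seed are invented.
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
    rng = random.Random(seed)
    # resample with replacement, recompute the statistic each time
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

data = [4.1, 5.0, 3.8, 4.7, 5.2, 4.4, 4.9, 4.3]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(data, mean)
print(lo, hi)  # interval brackets the sample mean
```

    For functional estimates such as PRCs, the same loop is run and the pointwise (or simultaneous) quantiles of the resampled curves form the confidence band.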

  2. Autoimmune antibodies and recurrence-free interval in melanoma patients treated with adjuvant interferon

    DEFF Research Database (Denmark)

    Bouwhuis, Marna G; Suciu, Stefan; Collette, Sandra

    2009-01-01

    relapse-free interval in both trials (EORTC 18952, hazard ratio [HR] = 0.41, 95% confidence interval [CI] = 0.25 to 0.68, P P ... (model 2: EORTC 18952, HR = 0.81, 95% CI = 0.46 to 1.40, P = .44; and Nordic IFN, HR = 0.85, 95% CI = 0.55 to 1.30, P = .45; model 3: EORTC 18952, HR = 1.05, 95% CI = 0.59 to 1.87, P = .88; and Nordic IFN, HR = 0.78, 95% CI = 0.49 to 1.24, P = .30). CONCLUSIONS: In two randomized trials of IFN...

  3. Probability Distribution for Flowing Interval Spacing

    International Nuclear Information System (INIS)

    S. Kuzio

    2004-01-01

    Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow as identified through borehole flow meter surveys have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the ''Saturated Zone Flow and Transport Model Abstraction'' (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be

  4. Correlation of prolonged QT interval and severity of cirrhosis of liver

    International Nuclear Information System (INIS)

    Tarique, S.; Sarwar, S.

    2011-01-01

    To determine the correlation between prolonged QT interval and severity of disease in patients with cirrhosis of the liver. Study Design: Descriptive cross-sectional study. Patients and Methods: One hundred and seventeen patients with cirrhosis were included. Baseline haematological and biochemical parameters were determined. The Model for End-Stage Liver Disease (MELD) score was determined for all patients to document the stage of liver disease. Corrected QT interval was determined from the electrocardiogram of each patient using the QT cirrhosis formula. The correlation between QT interval and MELD score was assessed using Pearson correlation and a Receiver Operating Characteristic (ROC) curve. Results: The 117 included patients had a mean age of 53.58 (± 12.11) years, and the male to female ratio was 1.78/1 (75/42). Mean MELD score was 17.08 (± 6.54), varying between 6 and 37, while the mean corrected QT interval was 0.44 (± 0.06) seconds. Pearson correlation revealed no significant relation between severity of liver disease, as determined by MELD score, and prolonged QT interval (p = 0.18). The area under the ROC curve for the correlation between prolonged QT interval and severity of liver disease was 0.42. Conclusion: Prolonged QT interval is not an indicator of severity of disease in cirrhosis of the liver. (author)
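
    For illustration of heart-rate correction in general: the study's cirrhosis-specific formula is not reproduced in the abstract, so the sketch below uses the widely known Bazett correction instead. This is an assumption for demonstration, not the study's method:

```python
# Bazett heart-rate correction (illustrative stand-in; the study used a
# cirrhosis-specific QT formula not given in the abstract).
import math

def qtc_bazett(qt_sec, rr_sec):
    """Corrected QT: QTc = QT / sqrt(RR), with QT and RR in seconds."""
    return qt_sec / math.sqrt(rr_sec)

# 0.40 s measured QT at 75 bpm (RR = 60/75 = 0.8 s)
qtc = qtc_bazett(0.40, 60 / 75)
print(round(qtc, 3))
```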

  5. An Interval Estimation Method of Patent Keyword Data for Sustainable Technology Forecasting

    Directory of Open Access Journals (Sweden)

    Daiho Uhm

    2017-11-01

    Full Text Available Technology forecasting (TF is forecasting the future state of a technology. It is exciting to know the future of technologies, because technology changes the way we live and enhances the quality of our lives. In particular, TF is an important area in the management of technology (MOT for R&D strategy and new product development. Consequently, there are many studies on TF. Patent analysis is one method of TF because patents contain substantial information regarding developed technology. The conventional methods of patent analysis are based on quantitative approaches such as statistics and machine learning. The most traditional TF methods based on patent analysis have a common problem. It is the sparsity of patent keyword data structured from collected patent documents. After preprocessing with text mining techniques, most frequencies of technological keywords in patent data have values of zero. This problem creates a disadvantage for the performance of TF, and we have trouble analyzing patent keyword data. To solve this problem, we propose an interval estimation method (IEM. Using an adjusted Wald confidence interval called the Agresti–Coull confidence interval, we construct our IEM for efficient TF. In addition, we apply the proposed method to forecast the technology of an innovative company. To show how our work can be applied in the real domain, we conduct a case study using Apple technology.
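
    The Agresti-Coull interval named in the abstract has a closed form: inflate the sample by z² pseudo-observations, then apply the Wald formula to the adjusted proportion. A short sketch; the zero-count case mirrors the keyword-sparsity problem the paper targets:

```python
# Agresti-Coull (adjusted Wald) interval for a binomial proportion x/n.
import math

def agresti_coull(x, n, z=1.96):
    n_tilde = n + z * z                 # add z^2 pseudo-observations
    p_tilde = (x + z * z / 2) / n_tilde # shift the point estimate
    half = z * math.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
    return max(0.0, p_tilde - half), min(1.0, p_tilde + half)

# a keyword observed in 0 of 50 patent documents still gets a usable interval
lo, hi = agresti_coull(0, 50)
print(lo, hi)
```

    Unlike the plain Wald interval, which collapses to width zero at x = 0, the adjusted interval retains positive width, which is why it suits sparse keyword counts.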

  6. Serum prolactin revisited: parametric reference intervals and cross platform evaluation of polyethylene glycol precipitation-based methods for discrimination between hyperprolactinemia and macroprolactinemia.

    Science.gov (United States)

    Overgaard, Martin; Pedersen, Susanne Møller

    2017-10-26

    Hyperprolactinemia diagnosis and treatment is often compromised by the presence of biologically inactive and clinically irrelevant higher-molecular-weight complexes of prolactin, macroprolactin. The objective of this study was to evaluate the performance of two macroprolactin screening regimes across commonly used automated immunoassay platforms. Parametric total and monomeric gender-specific reference intervals were determined for six immunoassay methods using female (n=96) and male sera (n=127) from healthy donors. The reference intervals were validated using 27 hyperprolactinemic and macroprolactinemic sera, whose presence of monomeric and macroforms of prolactin were determined using gel filtration chromatography (GFC). Normative data for six prolactin assays included the range of values (2.5th-97.5th percentiles). Validation sera (hyperprolactinemic and macroprolactinemic; n=27) showed higher discordant classification [mean=2.8; 95% confidence interval (CI) 1.2-4.4] for the monomer reference interval method compared to the post-polyethylene glycol (PEG) recovery cutoff method (mean=1.8; 95% CI 0.8-2.8). The two monomer/macroprolactin discrimination methods did not differ significantly (p=0.089). Among macroprolactinemic sera evaluated by both discrimination methods, the Cobas and Architect/Kryptor prolactin assays showed the lowest and the highest number of misclassifications, respectively. Current automated immunoassays for prolactin testing require macroprolactin screening methods based on PEG precipitation in order to discriminate truly from falsely elevated serum prolactin. While the recovery cutoff and monomeric reference interval macroprolactin screening methods demonstrate similar discriminative ability, the latter method also provides the clinician with an easy interpretable monomeric prolactin concentration along with a monomeric reference interval.
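
    A parametric reference interval of the kind reported (central 2.5th-97.5th percentiles under a normality assumption) reduces to mean ± 1.96 SD. A minimal sketch with hypothetical donor values; real laboratory practice would first verify normality or transform the data:

```python
# Parametric 95% reference interval sketch. Donor values are hypothetical.
import statistics

def parametric_reference_interval(values, z=1.96):
    """Central 95% interval assuming normally distributed values."""
    m = statistics.mean(values)
    s = statistics.stdev(values)  # sample standard deviation
    return m - z * s, m + z * s

# hypothetical monomeric prolactin results (mIU/L) from healthy donors
values = [150, 210, 180, 240, 200, 170, 220, 190, 205, 185]
lo, hi = parametric_reference_interval(values)
print(round(lo, 1), round(hi, 1))
```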

  7. Change in Breast Cancer Screening Intervals Since the 2009 USPSTF Guideline.

    Science.gov (United States)

    Wernli, Karen J; Arao, Robert F; Hubbard, Rebecca A; Sprague, Brian L; Alford-Teaster, Jennifer; Haas, Jennifer S; Henderson, Louise; Hill, Deidre; Lee, Christoph I; Tosteson, Anna N A; Onega, Tracy

    2017-08-01

    In 2009, the U.S. Preventive Services Task Force (USPSTF) recommended biennial mammography for women aged 50-74 years and shared decision-making for women aged 40-49 years for breast cancer screening. We evaluated changes in mammography screening interval after the 2009 recommendations. We conducted a prospective cohort study of women aged 40-74 years who received 821,052 screening mammograms between 2006 and 2012 using data from the Breast Cancer Surveillance Consortium. We compared changes in screening intervals and stratified intervals based on whether the mammogram at the end of the interval occurred before or after the 2009 recommendation. Differences in mean interval length by woman-level characteristics were compared using linear regression. The mean interval (in months) minimally decreased after the 2009 USPSTF recommendations. Among women aged 40-49 years, the mean interval decreased from 17.2 months to 17.1 months (difference -0.16%, 95% confidence interval [CI] -0.30 to -0.01). Similar small reductions were seen for most age groups. The largest change in interval length in the post-USPSTF period was declines among women with a first-degree family history of breast cancer (difference -0.68%, 95% CI -0.82 to -0.54) or a 5-year breast cancer risk ≥2.5% (difference -0.58%, 95% CI -0.73 to -0.44). The 2009 USPSTF recommendation did not lengthen the average mammography interval among women routinely participating in mammography screening. Future studies should evaluate whether breast cancer screening intervals lengthen toward biennial intervals following new national 2016 breast cancer screening recommendations, particularly among women less than 50 years of age.

  8. Leadership by Confidence in Teams

    OpenAIRE

    Kobayashi, Hajime; Suehiro, Hideo

    2008-01-01

    We study endogenous signaling by analyzing a team production problem with endogenous timing. Each agent of the team is privately endowed with some level of confidence about team productivity. Each of them must then commit a level of effort in one of two periods. At the end of each period, each agent observes his partner's move in this period. Both agents are rewarded by a team output determined by team productivity and total invested effort. Each agent must personally incur the cost of effor...

  9. The idiosyncratic nature of confidence.

    Science.gov (United States)

    Navajas, Joaquin; Hindocha, Chandni; Foda, Hebah; Keramati, Mehdi; Latham, Peter E; Bahrami, Bahador

    2017-11-01

    Confidence is the 'feeling of knowing' that accompanies decision making. Bayesian theory proposes that confidence is a function solely of the perceived probability of being correct. Empirical research has suggested, however, that different individuals may perform different computations to estimate confidence from uncertain evidence. To test this hypothesis, we collected confidence reports in a task where subjects made categorical decisions about the mean of a sequence. We found that for most individuals, confidence did indeed reflect the perceived probability of being correct. However, in approximately half of them, confidence also reflected a different probabilistic quantity: the perceived uncertainty in the estimated variable. We found that the contribution of both quantities was stable over weeks. We also observed that the influence of the perceived probability of being correct was stable across two tasks, one perceptual and one cognitive. Overall, our findings provide a computational interpretation of individual differences in human confidence.

  10. Sequential Interval Estimation of a Location Parameter with Fixed Width in the Nonregular Case

    OpenAIRE

    Koike, Ken-ichi

    2007-01-01

    For a location-scale parameter family of distributions with a finite support, a sequential confidence interval with a fixed width is obtained for the location parameter, and its asymptotic consistency and efficiency are shown. Some comparisons with the Chow-Robbins procedure are also done.

  11. Calculation of solar irradiation prediction intervals combining volatility and kernel density estimates

    International Nuclear Information System (INIS)

    Trapero, Juan R.

    2016-01-01

    In order to integrate solar energy into the grid it is important to predict solar radiation accurately, as forecast errors can lead to significant costs. Recently, the growing number of statistical approaches addressing this problem has produced a prolific literature. In general terms, the main research discussion is centred on selecting the “best” forecasting technique in terms of accuracy. However, users of such forecasts also require, apart from point forecasts, information about forecast variability in order to compute prediction intervals. In this work, we analyze kernel density estimation approaches, volatility forecasting models, and combinations of the two in order to improve prediction interval performance. The results show that an optimal combination, in terms of prediction interval statistical tests, can achieve the desired confidence level with a lower average interval width. Data from a facility located in Spain are used to illustrate the methodology. - Highlights: • This work explores uncertainty forecasting models to build prediction intervals. • Kernel density estimators, exponential smoothing and GARCH models are compared. • An optimal combination of methods provides the best results. • A good compromise between coverage and average interval width is shown.
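
    One simple way to turn point forecasts into prediction intervals, loosely in the spirit of the paper's density-based approach (its actual KDE/GARCH combination is more elaborate), is to add empirical quantiles of past forecast errors to the point forecast. All numbers below are invented:

```python
# Empirical-quantile prediction interval sketch (invented irradiance data).

def empirical_interval(point_forecast, past_errors, alpha=0.10):
    """90% interval by default: shift the point forecast by the
    alpha/2 and 1 - alpha/2 empirical quantiles of past errors."""
    errs = sorted(past_errors)
    n = len(errs)
    lo_q = errs[int((alpha / 2) * n)]
    hi_q = errs[min(n - 1, int((1 - alpha / 2) * n))]
    return point_forecast + lo_q, point_forecast + hi_q

past_errors = [-120, -80, -60, -30, -10, 5, 20, 45, 70, 110]  # W/m^2
lo, hi = empirical_interval(500.0, past_errors)
print(lo, hi)
```

    A volatility model such as GARCH would additionally rescale these quantiles by the forecast conditional standard deviation, narrowing the interval in calm periods and widening it in volatile ones.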

  12. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions to along with general guidance on the CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
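
    Among the non-parametric approaches compared, the Mann-Whitney construction of the AUC is the classical starting point. The sketch below pairs it with the Hanley-McNeil (1982) variance approximation; this is one textbook choice, not necessarily the method the paper recommends, and its roughness at small samples is exactly the paper's concern:

```python
# AUC via the Mann-Whitney statistic, with a Hanley-McNeil approximate CI.
# Marker values below are invented.
import math

def auc_mann_whitney(pos, neg):
    """P(positive score > negative score), counting ties as 1/2."""
    n1, n2 = len(pos), len(neg)
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (n1 * n2)

def hanley_mcneil_ci(auc, n1, n2, z=1.96):
    """Approximate CI; known to be rough when samples are small."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc * auc / (1 + auc)
    var = (auc * (1 - auc) + (n1 - 1) * (q1 - auc ** 2)
           + (n2 - 1) * (q2 - auc ** 2)) / (n1 * n2)
    se = math.sqrt(var)
    return max(0.0, auc - z * se), min(1.0, auc + z * se)

pos = [0.9, 0.8, 0.7, 0.65, 0.6]   # marker values, diseased group
neg = [0.5, 0.45, 0.6, 0.3, 0.2]   # marker values, healthy group
auc = auc_mann_whitney(pos, neg)
print(auc, hanley_mcneil_ci(auc, len(pos), len(neg)))
```

    Note how the upper limit is truncated at 1.0 here; boundary effects like this at high AUC and small n are among the issues the 29-method comparison addresses.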

  13. Long-term maintenance of immediate or delayed extinction is determined by the extinction-test interval

    OpenAIRE

    Johnson, Justin S.; Escobar, Martha; Kimble, Whitney L.

    2010-01-01

    Short acquisition-extinction intervals (immediate extinction) can lead to either more or less spontaneous recovery than long acquisition-extinction intervals (delayed extinction). Using rat subjects, we observed less spontaneous recovery following immediate than delayed extinction (Experiment 1). However, this was the case only if a relatively long extinction-test interval was used; a relatively short extinction-test interval yielded the opposite result (Experiment 2). Previous data appear co...

  14. Geometric Least Square Models for Deriving [0,1]-Valued Interval Weights from Interval Fuzzy Preference Relations Based on Multiplicative Transitivity

    Directory of Open Access Journals (Sweden)

    Xuan Yang

    2015-01-01

    Full Text Available This paper presents a geometric least square framework for deriving [0,1]-valued interval weights from interval fuzzy preference relations. By analyzing the relationship among [0,1]-valued interval weights, multiplicatively consistent interval judgments, and planes, a geometric least square model is developed to derive a normalized [0,1]-valued interval weight vector from an interval fuzzy preference relation. Based on the difference ratio between two interval fuzzy preference relations, a geometric average difference ratio between one interval fuzzy preference relation and the others is defined and employed to determine the relative importance weights for individual interval fuzzy preference relations. A geometric least square based approach is further put forward for solving group decision making problems. An individual decision numerical example and a group decision making problem with the selection of enterprise resource planning software products are furnished to illustrate the effectiveness and applicability of the proposed models.

  15. Reducing public communication apprehension by boosting self confidence on communication competence

    Directory of Open Access Journals (Sweden)

    Eva Rachmi

    2012-07-01

    A medical doctor should be competent in communicating with others. Some students at the medical faculty of Universitas Mulawarman tend to be silent during public communication training, and this is thought to be influenced by communication anxiety. This study aimed to analyze whether self-confidence in communication competence and communication skills are risk factors for communication apprehension. Methods: This study was conducted on 55 students at the medical faculty of Universitas Mulawarman. Public communication apprehension was measured using the Personal Report of Communication Apprehension (PRCA-24). Confidence in communication competence was determined by the Self Perceived Communication Competence scale (SPCC). Communication skills were based on the instructor's score during the communication training program. Data were analyzed by linear regression to identify dominant factors using STATA 9.0. Results: The study showed a negative association between public communication apprehension and students' self-confidence in communication competence [regression coefficient (CR) = -0.13; p = 0.000; 95% confidence interval (CI) = -0.20; -0.52]. However, it was not related to communication skills (p = 0.936). Among twelve traits of self-confidence in communication competence, students who had the confidence to talk to a group of strangers had lower public communication apprehension (adjusted CR = -0.13; CI = -0.21; 0.05; p = 0.002). Conclusions: Increased confidence in their communication competence reduces students' degree of public communication apprehension. Therefore, the faculty should provide more opportunities for students to practice public communication, in particular talking to a group of strangers more frequently. (Health Science Indones 2010; 1: 37-42)

  16. Lack of an Effect of Standard and Supratherapeutic Doses of Linezolid on QTc Interval Prolongation

    Science.gov (United States)

    Damle, Bharat; LaBadie, Robert R.; Cuozzo, Cheryl; Alvey, Christine; Choo, Heng Wee; Riley, Steve; Kirby, Deborah

    2011-01-01

    A double-blind, placebo-controlled, four-way crossover study was conducted in 40 subjects to assess the effect of linezolid on corrected QT (QTc) interval prolongation. Time-matched, placebo-corrected QT intervals were determined predose and at 0.5, 1 (end of infusion), 2, 4, 8, 12, and 24 h after intravenous dosing of linezolid 600 and 1,200 mg. Oral moxifloxacin at 400 mg was used as an active control. The pharmacokinetic profile of linezolid was also evaluated. At each time point, the upper bound of the 90% confidence interval (CI) for placebo-corrected QTcF values (i.e., QTc values adjusted for ventricular rate using the correction methods of Fridericia) for linezolid 600 and 1,200-mg doses were 5 ms, indicating that the study was adequately sensitive to assess QTc prolongation. The pharmacokinetic profile of linezolid at 600 mg was consistent with previous observations. Systemic exposure to linezolid increased in a slightly more than dose-proportional manner at supratherapeutic doses, but the degree of nonlinearity was small. At a supratherapeutic single dose of 1,200 mg of linezolid, no treatment-related increase in adverse events was seen compared to 600 mg of linezolid, and no clinically meaningful effects on vital signs and safety laboratory evaluations were noted. PMID:21709083

  17. Technical Report: Algorithm and Implementation for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    Energy Technology Data Exchange (ETDEWEB)

    McLoughlin, Kevin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-11

    This report describes the design and implementation of an algorithm for estimating relative microbial abundances, together with confidence limits, using data from metagenomic DNA sequencing. For the background behind this project and a detailed discussion of our modeling approach for metagenomic data, we refer the reader to our earlier technical report, dated March 4, 2014. Briefly, we described a fully Bayesian generative model for paired-end sequence read data, incorporating the effects of the relative abundances, the distribution of sequence fragment lengths, fragment position bias, sequencing errors and variations between the sampled genomes and the nearest reference genomes. A distinctive feature of our modeling approach is the use of a Chinese restaurant process (CRP) to describe the selection of genomes to be sampled, and thus the relative abundances. The CRP component is desirable for fitting abundances to reads that may map ambiguously to multiple targets, because it naturally leads to sparse solutions that select the best representative from each set of nearly equivalent genomes.
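
    The Chinese restaurant process mentioned in the report can be simulated in a few lines: each new item joins an existing cluster with probability proportional to that cluster's size, or opens a new cluster with probability proportional to a concentration parameter alpha. A toy sketch; alpha and the seed are arbitrary choices, not values from the report:

```python
# Toy Chinese restaurant process: illustrates the sparse, rich-get-richer
# cluster assignments a CRP prior induces. Parameters are illustrative.
import random

def crp_assignments(n_items, alpha=1.0, seed=7):
    rng = random.Random(seed)
    tables = []  # tables[k] = number of items assigned to cluster k
    for i in range(n_items):
        # existing cluster k has weight tables[k]; a new cluster has weight alpha
        weights = tables + [alpha]
        r = rng.uniform(0, i + alpha)  # total weight so far is i + alpha
        k, acc = 0, 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(tables):
            tables.append(1)   # open a new cluster
        else:
            tables[k] += 1     # join an existing one
    return tables

tables = crp_assignments(100)
print(len(tables), tables)  # typically few clusters hold most items
```

    In the abundance model this concentration of mass on a few "tables" is what selects one representative from each set of nearly equivalent reference genomes.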

  18. Distribution of keratometry and its determinants in a general population of 6- to 12-year-old children.

    Science.gov (United States)

    Hashemi, Hassan; Saatchi, Mohammad; Khabazkhoob, Mehdi; Emamian, Mohammad Hassan; Yekta, Abbasali; Fotouhi, Akbar

    2018-03-01

    To determine the distribution of keratometry and its determinants in Iranian school children. The present cross-sectional study was conducted in 2015 in Shahroud, in the north of Iran. The entire rural population of elementary school children was invited to the study. In urban areas, cluster sampling was conducted. A Pentacam HR (Oculus Inc., Lynnwood, WA) was used to measure the flat meridian, the steep meridian, and the mean keratometry. Linear regression was used to determine the variables associated with mean keratometry. Of the 5620 children who participated in the study, 5559 were analyzed after applying the exclusion criteria. Mean keratometry was 43.56 ± 1.96 diopters (D) (95% confidence interval = 43.48-43.64) in the total sample, 43.18 ± 2.23 D (95% confidence interval = 43.09-43.26) in boys, and 44.01 ± 1.46 D (95% confidence interval = 43.95-44.07) in girls (p < 0.001). The highest and lowest mean keratometry values were 43.28 ± 1.66 D (95% confidence interval = 43.00-43.55) and 42.89 ± 2.70 D (95% confidence interval = 42.68-43.11) in 6-year-old and 10-year-old children, respectively (p = 0.031). The results of multiple linear regression showed that mean keratometry in girls was 0.82 D higher than in boys (p < 0.001), and in groups older than 9 years it decreased significantly. Mean keratometry in myopic children was 0.62 D higher than in emmetropic children (p < 0.001). This study provided valuable findings on the status of keratometry in Iranian children. In line with other studies, corneal power was higher in girls than in boys, and the cornea becomes flatter with age in children.

  19. A systematic review of maternal confidence for physiologic birth: characteristics of prenatal care and confidence measurement.

    Science.gov (United States)

    Avery, Melissa D; Saftner, Melissa A; Larson, Bridget; Weinfurter, Elizabeth V

    2014-01-01

    Because a focus on physiologic labor and birth has reemerged in recent years, care providers have the opportunity in the prenatal period to help women increase confidence in their ability to give birth without unnecessary interventions. However, most research has only examined support for women during labor. The purpose of this systematic review was to examine the research literature for information about prenatal care approaches that increase women's confidence for physiologic labor and birth and tools to measure that confidence. Studies were reviewed that explored any element of a pregnant woman's interaction with her prenatal care provider that helped build confidence in her ability to labor and give birth. Timing of interaction with pregnant women included during pregnancy, labor and birth, and the postpartum period. In addition, we looked for studies that developed a measure of women's confidence related to labor and birth. Outcome measures included confidence or similar concepts, descriptions of components of prenatal care contributing to maternal confidence for birth, and reliability and validity of tools measuring confidence. The search of MEDLINE, CINAHL, PsycINFO, and Scopus databases provided a total of 893 citations. After removing duplicates and articles that did not meet inclusion criteria, 6 articles were included in the review. Three relate to women's confidence for labor during the prenatal period, and 3 describe tools to measure women's confidence for birth. Research about enhancing women's confidence for labor and birth was limited to qualitative studies. Results suggest that women desire information during pregnancy and want to use that information to participate in care decisions in a relationship with a trusted provider. Further research is needed to develop interventions to help midwives and physicians enhance women's confidence in their ability to give birth and to develop a tool to measure confidence for use during prenatal care. 

  20. Flexible regression models for estimating postmortem interval (PMI) in forensic medicine.

    Science.gov (United States)

    Muñoz Barús, José Ignacio; Febrero-Bande, Manuel; Cadarso-Suárez, Carmen

    2008-10-30

    Correct determination of time of death is an important goal in forensic medicine. Numerous methods have been described for estimating postmortem interval (PMI), but most are imprecise, poorly reproducible and/or have not been validated with real data. In recent years, however, some progress in PMI estimation has been made, notably through the use of new biochemical methods for quantifying relevant indicator compounds in the vitreous humour. The best, but unverified, results have been obtained with [K+] and hypoxanthine [Hx], using simple linear regression (LR) models. The main aim of this paper is to offer more flexible alternatives to LR, such as generalized additive models (GAMs) and support vector machines (SVMs) in order to obtain improved PMI estimates. The present study, based on detailed analysis of [K+] and [Hx] in more than 200 vitreous humour samples from subjects with known PMI, compared classical LR methodology with GAM and SVM methodologies. Both proved better than LR for estimation of PMI. SVM showed somewhat greater precision than GAM, but GAM offers a readily interpretable graphical output, facilitating understanding of findings by legal professionals; there are thus arguments for using both types of models. R code for these methods is available from the authors, permitting accurate prediction of PMI from vitreous humour [K+], [Hx] and [U], with confidence intervals and graphical output provided.
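
    The classical linear-regression baseline the paper improves on can be sketched as ordinary least squares of PMI on vitreous potassium. The data and coefficients below are invented for illustration and are not the paper's calibration:

```python
# OLS sketch of the classical LR baseline: PMI (hours) regressed on
# vitreous [K+] (mmol/L). All values are invented for illustration.

def ols(xs, ys):
    """Slope and intercept of the least-squares line y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return a, b

k = [6.0, 7.5, 9.0, 10.5, 12.0]      # vitreous potassium, mmol/L
pmi = [5.0, 14.0, 22.0, 31.0, 40.0]  # known postmortem interval, hours
a, b = ols(k, pmi)
print(round(a, 2), round(b, 2))  # predicted PMI = a + b * [K+]
```

    GAMs and SVMs replace the single straight line with flexible functions of [K+] and [Hx], which is where the paper's precision gains come from.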

  1. New interval forecast for stationary autoregressive models ...

    African Journals Online (AJOL)

    In this paper, we proposed a new forecasting interval for stationary autoregressive, AR(p), models using the Akaike information criterion (AIC) function. Ordinarily, the AIC function is used to determine the order of an AR(p) process. In this study, however, the AIC forecast interval compared favorably with the theoretical forecast ...

  2. Monitoring molecular interactions using photon arrival-time interval distribution analysis

    Science.gov (United States)

    Laurence, Ted A [Livermore, CA]; Weiss, Shimon [Los Angeles, CA]

    2009-10-06

    A method for analyzing/monitoring the properties of species that are labeled with fluorophores. A detector is used to detect photons emitted from species that are labeled with one or more fluorophores and located in a confocal detection volume. The arrival time of each of the photons is determined. The interval of time between various photon pairs is then determined to provide photon pair intervals. The number of photons that have arrival times within the photon pair intervals is also determined. The photon pair intervals are then used in combination with the corresponding counts of intervening photons to analyze properties and interactions of the molecules including brightness, concentration, coincidence and transit time. The method can be used for analyzing single photon streams and multiple photon streams.
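
    The pairwise bookkeeping described above (interval between each photon pair, plus the count of photons arriving within that interval) can be sketched in a few lines. The function name and the arrival times below are invented for illustration; the patent does not specify an implementation.

```python
def photon_pair_stats(arrival_times):
    """For every ordered photon pair (i, j), i < j, return the tuple
    (interval between the pair, number of photons strictly between them).

    arrival_times must be sorted ascending; units are arbitrary.
    """
    stats = []
    n = len(arrival_times)
    for i in range(n):
        for j in range(i + 1, n):
            interval = arrival_times[j] - arrival_times[i]
            intervening = j - i - 1  # photons with arrival times inside the pair interval
            stats.append((interval, intervening))
    return stats

# toy stream of five photon arrival times (made-up values)
times = [0.0, 1.5, 2.0, 4.5, 5.0]
stats = photon_pair_stats(times)
```

    For a stream of n photons this produces n(n-1)/2 pairs; histograms of the intervals, stratified by intervening-photon count, are what the method analyzes for brightness, concentration, and transit time.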

  3. Sources of sport confidence, imagery type and performance among competitive athletes: the mediating role of sports confidence.

    Science.gov (United States)

    Levy, A R; Perry, J; Nicholls, A R; Larkin, D; Davies, J

    2015-01-01

    This study explored the mediating role of sport confidence upon (1) the sources of sport confidence-performance relationship and (2) the imagery-performance relationship. Participants were 157 competitive athletes who completed state measures of confidence level/sources, imagery type and performance within one hour after competition. Among the current sample, confirmatory factor analysis revealed appropriate support for the nine-factor SSCQ and the five-factor SIQ. Mediational analysis revealed that sport confidence had a mediating influence upon the achievement source of confidence-performance relationship. In addition, both cognitive and motivational imagery types were found to be important sources of confidence, as sport confidence mediated the imagery type-performance relationship. Findings indicated that athletes who construe confidence from their own achievements and report multiple images on a more frequent basis are likely to benefit from enhanced levels of state sport confidence and subsequent performance.

  4. Determination of hematology and plasma chemistry reference intervals for 3 populations of captive Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus).

    Science.gov (United States)

    Matsche, Mark A; Arnold, Jill; Jenkins, Erin; Townsend, Howard; Rosemary, Kevin

    2014-09-01

    The imperiled status of Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus), a large, long-lived, anadromous fish found along the Atlantic coast of North America, has prompted efforts at captive propagation for research and stock enhancement. The purpose of this study was to establish hematology and plasma chemistry reference intervals of captive Atlantic sturgeon maintained under different culture conditions. Blood specimens were collected from a total of 119 fish at 3 hatcheries: Lamar, PA (n = 36, ages 10-14 years); Chalk Point, MD (n = 40, siblings of Lamar); and Horn Point, Cambridge, MD (n = 43, mixed population from Chesapeake Bay). Reference intervals (using robust techniques), median, mean, and standard deviations were determined for WBC, RBC, thrombocytes, PCV, HGB, MCV, MCH, MCHC, and absolute counts for lymphocytes (L), neutrophils (N), monocytes, and eosinophils. Chemistry analytes included concentrations of total proteins, albumin, glucose, urea, calcium, phosphate, sodium, potassium, chloride, and globulins, AST, CK, and LDH activities, and osmolality. Mean concentrations of total proteins, albumin, and glucose were at or below the analytic range. Statistical comparisons showed significant differences among hatcheries for each remaining plasma chemistry analyte and for PCV, RBC, MCHC, MCH, eosinophil and monocyte counts, and N:L ratio throughout all 3 groups. Therefore, reference intervals were calculated separately for each population. Reference intervals for fish maintained under differing conditions should be established per population. © 2014 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.
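
    A reference interval of the familiar parametric kind can be computed as mean ± 1.96 SD; note this is a textbook simplification, since the study above used robust techniques, which are less sensitive to outliers and skew. The PCV values below are invented, not the study's data.

```python
import statistics

def reference_interval(values, z=1.96):
    """Parametric 95% reference interval: mean +/- z * SD.

    A simplification of the robust methods used in the study.
    """
    m = statistics.mean(values)
    sd = statistics.stdev(values)  # sample SD, n - 1 denominator
    return m - z * sd, m + z * sd

# hypothetical PCV values (%) from 12 fish
pcv = [24, 26, 27, 25, 28, 23, 26, 27, 25, 24, 29, 26]
lo, hi = reference_interval(pcv)
```

    Because the interval bounds shift with the population sampled, computing them separately per hatchery, as the authors recommend, is a matter of calling the function once per group.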

  5. Confidence-Based Learning in Investment Analysis

    Science.gov (United States)

    Serradell-Lopez, Enric; Lara-Navarra, Pablo; Castillo-Merino, David; González-González, Inés

    The aim of this study is to determine the effectiveness of using multiple-choice tests in subjects related to business administration and management. To this end we used a multiple-choice test with specific questions to verify the extent of knowledge gained and the confidence and trust in the answers. The tests were performed on a group of 200 students in the bachelor's degree in Business Administration and Management. The analysis was carried out in one subject within the scope of investment analysis, measuring the level of knowledge gained and the degree of trust and security in the responses at two different times of the course. The measurements took into account different levels of difficulty in the questions asked and the time spent by students to complete the test. The results confirm that students are generally able to gain knowledge along the way and to increase their degree of trust and confidence in the answers. They also confirm that the difficulty levels of the questions, set a priori by those responsible for the subjects, are related to the levels of security and confidence in the answers. It is estimated that the improvement in the skills learned is viewed favourably by businesses and is especially important for the job placement of students.

  6. Five-year risk of interval-invasive second breast cancer.

    Science.gov (United States)

    Lee, Janie M; Buist, Diana S M; Houssami, Nehmat; Dowling, Emily C; Halpern, Elkan F; Gazelle, G Scott; Lehman, Constance D; Henderson, Louise M; Hubbard, Rebecca A

    2015-07-01

    Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI = 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. © The Author 2015. Published by Oxford University Press. All rights reserved.
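
    The discrete-time survival framing used above implies the standard cumulative-incidence identity: risk over k intervals is one minus the product of the per-interval survival probabilities. A minimal sketch with invented per-year probabilities (not the study's estimates):

```python
def cumulative_risk(interval_probs):
    """Cumulative incidence over consecutive time intervals:
    1 - product of (1 - p_t), the discrete-time survival identity."""
    survival = 1.0
    for p in interval_probs:
        survival *= 1.0 - p
    return 1.0 - survival

# hypothetical per-year probabilities of an interval-invasive second cancer
risk5 = cumulative_risk([0.002] * 5)
```

    With a constant 0.2% annual probability this gives a five-year risk just under 1%, close to the multiplicative approximation 5 × 0.002 because the probabilities are small.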

  7. Evaluating a Computer Flash-Card Sight-Word Recognition Intervention with Self-Determined Response Intervals in Elementary Students with Intellectual Disability

    Science.gov (United States)

    Cazzell, Samantha; Skinner, Christopher H.; Ciancio, Dennis; Aspiranti, Kathleen; Watson, Tiffany; Taylor, Kala; McCurdy, Merilee; Skinner, Amy

    2017-01-01

    A concurrent multiple-baseline across-tasks design was used to evaluate the effectiveness of a computer flash-card sight-word recognition intervention with elementary-school students with intellectual disability. This intervention allowed the participants to self-determine each response interval and resulted in both participants acquiring…

  8. Determination of strong ion gap in healthy dogs.

    Science.gov (United States)

    Fettig, Pamela K; Bailey, Dennis B; Gannon, Kristi M

    2012-08-01

    To determine and compare reference intervals of the strong ion gap (SIG) in a group of healthy dogs determined with 2 different equations. Prospective observational study. Tertiary referral and teaching hospital. Fifty-four healthy dogs. None. Serum biochemistry and blood gas analyses were performed for each dog. From these values, SIG was calculated using 2 different equations: SIG1 = SIDa {[Na+] + [K+] − [Cl−] + [2 × Ca2+] + [2 × Mg2+] − [L-lactate]} − SIDe {TCO2 + A−} and SIG2 = [albumin] × 4.9 − anion gap. Reference intervals were established for each SIG equation using the mean ± 1.96 × standard deviation (SD). For SIG1, the median was 7.13 mEq/L (range, 1.05-11.30 mEq/L) and the derived reference interval was 1.85-10.61 mEq/L. Median SIG2 was −0.22 mEq/L (range, −5.34 to 6.61 mEq/L) and the mean SIG2 was −0.09 mEq/L (95% confidence interval for the mean, −0.82 to 0.65 mEq/L). The derived reference interval was −5.36 to 5.18 mEq/L. The results of the 2 SIG calculations were significantly different; the equations cannot be used interchangeably. The authors believe SIG2 to be a more accurate reflection of acid-base status in healthy dogs, and recommend that this calculation be used for future studies. © Veterinary Emergency and Critical Care Society 2012.
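
    The two SIG equations translate directly into functions. The input values below are hypothetical canine values, not measurements from the study; the factor of 2 on calcium and magnesium converts the divalent ions from mmol/L to mEq/L, as in the printed formula.

```python
def sig1(na, k, cl, ca, mg, lactate, tco2, a_minus):
    """SIG1 = SIDa - SIDe. Electrolytes and lactate in mEq/L;
    Ca2+ and Mg2+ in mmol/L, doubled to convert to mEq/L."""
    sid_a = na + k - cl + 2 * ca + 2 * mg - lactate
    sid_e = tco2 + a_minus
    return sid_a - sid_e

def sig2(albumin_g_dl, anion_gap):
    """Simplified SIG: albumin (g/dL) x 4.9 minus the anion gap."""
    return albumin_g_dl * 4.9 - anion_gap

# hypothetical canine values
s1 = sig1(na=145, k=4.0, cl=110, ca=1.3, mg=0.7, lactate=1.5, tco2=21, a_minus=12.5)
s2 = sig2(albumin_g_dl=3.0, anion_gap=15.0)
```

    Even on the same invented inputs the two formulas land in different ranges, which is consistent with the paper's conclusion that they cannot be used interchangeably.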

  9. PMICALC: an R code-based software for estimating post-mortem interval (PMI) compatible with Windows, Mac and Linux operating systems.

    Science.gov (United States)

    Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel

    2010-01-30

    In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with Linear Regression, but the results are questionable. In this paper we present PMICALC, an R code-based freeware package which estimates PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humor using two different regression models: Additive Models (AM) and Support Vector Machine (SVM), which offer more flexibility than the previously used Linear Regression. The results from both models are better than those published to date and can give numerical expression of PMI with confidence intervals and graphic support within 20 min. The program also takes into account the cause of death. 2009 Elsevier Ireland Ltd. All rights reserved.
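
    For context, the linear-regression baseline that PMICALC's additive and SVM models improve on can be as simple as a single published equation. The constants below are those commonly quoted for Sturner's classical vitreous-potassium fit, shown purely for illustration; the input value is invented, and the paper's point is precisely that such fixed linear fits are imprecise.

```python
def pmi_sturner(k_mmol_l):
    """Classical single-predictor linear estimate of post-mortem interval
    (hours) from vitreous potassium, often quoted as PMI = 7.14*[K+] - 39.1.
    Illustrates the linear-regression baseline, not the PMICALC models."""
    return 7.14 * k_mmol_l - 39.1

hours = pmi_sturner(8.0)  # hypothetical vitreous [K+] of 8 mmol/L
```

    A fixed two-constant equation like this carries no case-specific confidence interval, which is one motivation for the flexible regression models and interval output that PMICALC provides.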

  10. Point and interval estimation of pollinator importance: a study using pollination data of Silene caroliniana.

    Science.gov (United States)

    Reynolds, Richard J; Fenster, Charles B

    2008-05-01

    Pollinator importance, the product of visitation rate and pollinator effectiveness, is a descriptive parameter of the ecology and evolution of plant-pollinator interactions. Naturally, sources of its variation should be investigated, but the SE of pollinator importance has never been properly reported. Here, a Monte Carlo simulation study and a result from mathematical statistics on the variance of the product of two random variables are used to estimate the mean and confidence limits of pollinator importance for three visitor species of the wildflower, Silene caroliniana. Both methods provided similar estimates of mean pollinator importance and its interval if the sample size of the visitation and effectiveness datasets were comparatively large. These approaches allowed us to determine that bumblebee importance was significantly greater than clearwing hawkmoth, which was significantly greater than beefly. The methods could be used to statistically quantify temporal and spatial variation in pollinator importance of particular visitor species. The approaches may be extended for estimating the variance of more than two random variables. However, unless the distribution function of the resulting statistic is known, the simulation approach is preferable for calculating the parameter's confidence limits.
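
    A Monte Carlo estimate of the mean and confidence limits of a product of two random variables, as described above, can be sketched as follows. The normality assumption, the parameter values, and the function name are all illustrative; the paper's datasets would supply the actual distributions.

```python
import random
import statistics

def product_ci(mean_x, sd_x, mean_y, sd_y, n=20000, alpha=0.05, seed=1):
    """Monte Carlo mean and percentile confidence limits for the product of
    two independent normal variables, e.g. visitation rate (X) and
    per-visit effectiveness (Y)."""
    rng = random.Random(seed)
    products = sorted(rng.gauss(mean_x, sd_x) * rng.gauss(mean_y, sd_y)
                      for _ in range(n))
    lo = products[int(n * alpha / 2)]           # empirical 2.5th percentile
    hi = products[int(n * (1 - alpha / 2)) - 1] # empirical 97.5th percentile
    return statistics.mean(products), (lo, hi)

# hypothetical visitor: 10 visits/h (SD 2), effectiveness 0.5 (SD 0.1)
mean_p, (lo, hi) = product_ci(mean_x=10.0, sd_x=2.0, mean_y=0.5, sd_y=0.1)
```

    Comparing such intervals across visitor species is how one can conclude, as the authors did, that one pollinator's importance significantly exceeds another's.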

  11. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

    Science.gov (United States)

    Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations for when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

  12. Experiential Education Builds Student Self-Confidence in Delivering Medication Therapy Management

    Directory of Open Access Journals (Sweden)

    Wendy M. Parker

    2017-07-01

    Full Text Available To determine the impact of advanced pharmacy practice experiences (APPE) on student self-confidence related to medication therapy management (MTM), fourth-year pharmacy students were surveyed pre/post APPE to: identify exposure to MTM learning opportunities, assess knowledge of the MTM core components, and assess self-confidence performing MTM services. An anonymous electronic questionnaire administered pre/post APPE captured demographics, factors predicted to impact student self-confidence (grade point average (GPA), work experience, exposure to MTM learning opportunities), MTM knowledge, and self-confidence conducting MTM using a 5-point Likert scale (1 = Not at all Confident; 5 = Extremely Confident). Sixty-two students (26% response rate) responded to the pre-APPE questionnaire and n = 44 (18%) to the post-APPE. Over 90% demonstrated MTM knowledge and 68.2% completed MTM learning activities. APPE experiences significantly improved students' overall self-confidence (pre-APPE = 3.27 (0.85 SD), post-APPE = 4.02 (0.88), p < 0.001). Students engaging in MTM learning opportunities had higher self-confidence post-APPE (4.20 (0.71)) vs. those not reporting MTM learning opportunities (3.64 (1.08), p = 0.05). Post-APPE, fewer students reported MTM was patient-centric or anticipated engaging in MTM post-graduation. APPE learning opportunities increased student self-confidence to provide MTM services. However, the reduction in anticipated engagement in MTM post-graduation, and in sensing the patient-centric nature of MTM practice, may reveal a gap between practice expectations and reality.

  13. Self-confidence and metacognitive processes

    Directory of Open Access Journals (Sweden)

    Kleitman Sabina

    2005-01-01

    Full Text Available This paper examines the status of the Self-confidence trait. Two studies strongly suggest that Self-confidence is a component of metacognition. In the first study, participants (N=132) were administered measures of Self-concept, a newly devised Memory and Reasoning Competence Inventory (MARCI), and a Verbal Reasoning Test (VRT). The results indicate a significant relationship between confidence ratings on the VRT and the Reasoning component of MARCI. The second study (N=296) employed an extensive battery of cognitive tests and several metacognitive measures. Results indicate the presence of robust Self-confidence and Metacognitive Awareness factors, and a significant correlation between them. Self-confidence taps not only processes linked to performance on items that have correct answers, but also beliefs about events that may never occur.

  14. Graduating general surgery resident operative confidence: perspective from a national survey.

    Science.gov (United States)

    Fonseca, Annabelle L; Reddy, Vikram; Longo, Walter E; Gusberg, Richard J

    2014-08-01

    General surgical training has changed significantly over the last decade with work hour restrictions, increasing subspecialization, the expanding use of minimally invasive techniques, and nonoperative management for solid organ trauma. Given these changes, this study was undertaken to assess the confidence of graduating general surgery residents in performing open surgical operations and to determine factors associated with increased confidence. A survey was developed and sent to general surgery residents nationally. We queried them regarding demographics and program characteristics, asked them to rate their confidence (rated 1-5 on a Likert scale) in performing open surgical procedures, and compared those who indicated confidence with those who did not. We received 653 responses from fifth-year (postgraduate year 5) surgical residents: 69% male, 68% from university programs, and 51% from programs affiliated with a Veterans Affairs hospital; 22% from small programs, 34% from medium programs, and 44% from large programs. Anticipated postresidency operative confidence was 72%. More than 25% of residents reported a lack of confidence in performing eight of the 13 operations they were queried about. Training at a university program, a large program, dedicated research years, future fellowship plans, and training at a program that performed a large percentage of operations laparoscopically were associated with decreased confidence in performing a number of open surgical procedures. Increased surgical volume was associated with increased operative confidence. Confidence in performing open surgery also varied regionally. Graduating surgical residents indicated a significant lack of confidence in performing a variety of open surgical procedures. This decreased confidence was associated with age, operative volume as well as type, and location of training program. Analyzing and addressing this confidence deficit merits further study. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Increasing Product Confidence-Shifting Paradigms.

    Science.gov (United States)

    Phillips, Marla; Kashyap, Vishal; Cheung, Mee-Shew

    2015-01-01

    Leaders in the pharmaceutical, medical device, and food industries expressed a unilateral concern over product confidence throughout the total product lifecycle, an unsettling fact for these leaders to manage given that their products affect the lives of millions of people each year. Fueled by the heparin incident of intentional adulteration in 2008, initial efforts for increasing product confidence were focused on improving the confidence of incoming materials, with a belief that supplier performance must be the root cause. As in the heparin case, concern over supplier performance extended deep into the supply chain to include suppliers of the suppliers-which is often a blind spot for pharmaceutical, device, and food manufacturers. Resolved to address the perceived lack of supplier performance, these U.S. Food and Drug Administration (FDA)-regulated industries began to adopt the supplier relationship management strategy, developed by the automotive industry, that emphasizes "management" of suppliers for the betterment of the manufacturers. Current product and supplier management strategies, however, have not led to a significant improvement in product confidence. As a result of the enduring concern by industry leaders over the lack of product confidence, Xavier University launched the Integrity of Supply Initiative in 2012 with a team of industry leaders and FDA officials. Through a methodical research approach, data generated by the pharmaceutical, medical device, and food manufacturers surprisingly pointed to themselves as a source of the lack of product confidence, and revealed that manufacturers either unknowingly increase the potential for error or can control/prevent many aspects of product confidence failure. It is only through this paradigm shift that manufacturers can work collaboratively with their suppliers as equal partners, instead of viewing their suppliers as "lesser" entities needing to be controlled. The basis of this shift provides manufacturers ...

  16. Teachers and Counselors: Building Math Confidence in Schools

    Directory of Open Access Journals (Sweden)

    Joseph M. Furner

    2017-08-01

    Full Text Available Mathematics teachers need to take on the role of counselors in addressing the math anxious in today's math classrooms. This paper looks at the impact math anxiety has on the future of young adults in our high-tech society. Teachers and professional school counselors are encouraged to work together to prevent and reduce math anxiety. It is important that all students feel confident in their ability to do mathematics in an age that relies so heavily on problem solving, technology, science, and mathematics. It really is a school's obligation to see that their students value and feel confident in their ability to do math, because ultimately a child's life, all the decisions they will make and their career choices, may be determined by their disposition toward mathematics. This paper raises some interesting questions and provides some strategies (see Appendix A) for teachers and counselors for addressing the issue of math anxiety while discussing the importance of developing mathematically confident young people for a high-tech world of STEM.

  17. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    Science.gov (United States)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of precision intervals.
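
    The two precision intervals named above differ in what they bound: a confidence interval covers the mean response at a point, while a prediction interval covers a single new observation there. A minimal sketch for the linear-regression part only (toy data; a normal quantile stands in for Student's t, so intervals are slightly narrow for small samples):

```python
import math
from statistics import NormalDist, mean

def ols_intervals(xs, ys, x0, alpha=0.05):
    """Confidence interval (mean response) and prediction interval
    (new observation) of a simple linear regression, evaluated at x0."""
    n = len(xs)
    xbar, ybar = mean(xs), mean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    b0 = ybar - b1 * xbar
    # residual standard error with n - 2 degrees of freedom
    s = math.sqrt(sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2))
    z = NormalDist().inv_cdf(1 - alpha / 2)
    fit = b0 + b1 * x0
    se_mean = s * math.sqrt(1 / n + (x0 - xbar) ** 2 / sxx)
    se_pred = s * math.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)
    return fit, (fit - z * se_mean, fit + z * se_mean), (fit - z * se_pred, fit + z * se_pred)

# toy near-linear response data
xs = [0, 1, 2, 3, 4, 5]
ys = [0.1, 1.9, 4.2, 5.8, 8.1, 9.9]
fit, ci, pi = ols_intervals(xs, ys, x0=2.5)
```

    The prediction interval is always the wider of the two, since it adds the irreducible observation noise (the leading 1 under the square root) to the uncertainty in the fitted mean.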

  18. Intact interval timing in circadian CLOCK mutants.

    Science.gov (United States)

    Cordes, Sara; Gallistel, C R

    2008-08-28

    While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval-timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/- and -/- mutant male mice in a peak-interval procedure with 10 and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing.

  19. Errors and Predictors of Confidence in Condom Use amongst Young Australians Attending a Music Festival

    OpenAIRE

    Hall, Karina M.; Brieger, Daniel G.; De Silva, Sukhita H.; Pfister, Benjamin F.; Youlden, Daniel J.; John-Leader, Franklin; Pit, Sabrina W.

    2016-01-01

    Objectives. To determine the confidence and ability to use condoms correctly and consistently and the predictors of confidence in young Australians attending a festival. Methods. 288 young people aged 18 to 29 attending a mixed-genre music festival completed a survey measuring demographics, self-reported confidence using condoms, ability to use condoms, and issues experienced when using condoms in the past 12 months. Results. Self-reported confidence using condoms was high (77%). Multivariate...

  20. Low back pain: what determines functional outcome at six months? An observational study

    Directory of Open Access Journals (Sweden)

    Peers Charles E

    2010-10-01

    Full Text Available Abstract Background The rise in disability due to back pain has been exponential with escalating medical and societal costs. The relative contribution of individual prognostic indicators to the pattern of recovery remains unclear. The objective of this study was to determine the prognostic value of demographic, psychosocial, employment and clinical factors on outcome in patients with low back pain Methods A prospective cohort study with six-month follow-up was undertaken at a multidisciplinary back pain clinic in central London employing physiotherapists, osteopaths, clinical psychologists and physicians, receiving referrals from 123 general practitioners. Over a twelve-month period, 593 consecutive patients referred from general practice with simple low back pain were recruited. A baseline questionnaire was developed to elicit information on potential prognostic variables. The primary outcome measures were change in 24-item Roland Morris disability questionnaire score at six months as a measure of low back related functional disability and the physical functioning scale of the SF-36, adjusted for baseline scores. Results Roland Morris scores improved by 3.8 index points (95% confidence interval 3.23 to 4.32 at six months and SF-36 physical functioning score by 10.7 points (95% confidence interval 8.36 to 12.95. Ten factors were linked to outcome yet in a multiple regression model only two remained predictive. Those with episodic rather than continuous pain were more likely to have recovered at six months (odds ratio 2.64 confidence interval 1.25 to 5.60, while those that classified themselves as non-white were less likely to have recovered (0.41 confidence interval 0.18 to 0.96. Conclusions Analysis controlling for confounding variables, demonstrated that participants showed greater improvement if their episodes of pain during the previous year were short-lived while those with Middle Eastern, North African and Chinese ethnicity demonstrated

  1. Caregiver Confidence: Does It Predict Changes in Disability among Elderly Home Care Recipients?

    Science.gov (United States)

    Li, Lydia W.; McLaughlin, Sara J.

    2012-01-01

    Purpose of the study: The primary aim of this investigation was to determine whether caregiver confidence in their care recipients' functional capabilities predicts changes in the performance of activities of daily living (ADL) among elderly home care recipients. A secondary aim was to explore how caregiver confidence and care recipient functional…

  2. High intensity aerobic interval training improves peak oxygen consumption in patients with metabolic syndrome: CAT

    Directory of Open Access Journals (Sweden)

    Alexis Espinoza Salinas

    2014-06-01

    Full Text Available Introduction A number of cardiovascular risk factors characterize the metabolic syndrome: insulin resistance (IR), low HDL cholesterol and high triglycerides. The aforementioned risk factors lead to elevated levels of abdominal adipose tissue, resulting in oxygen consumption deficiency. Purpose To verify the validity and applicability of using high intensity interval training (HIIT) in subjects with metabolic syndrome and to answer the following question: can HIIT improve peak oxygen consumption? Method The systematic review "Effects of aerobic interval training on exercise capacity and metabolic risk factors in individuals with cardiometabolic disorders" was analyzed. Results Data suggest high intensity aerobic interval training increases peak oxygen consumption by a standardized mean difference of 3.60 mL·kg⁻¹·min⁻¹ (95% confidence interval, 0.28-4.91). Conclusion In spite of the methodological shortcomings of the primary studies included in the systematic review, we reasonably conclude that implementation of high intensity aerobic interval training in subjects with metabolic syndrome leads to increases in peak oxygen consumption.
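
    The pooled effect above is reported as a standardized mean difference with a 95% confidence interval. A minimal sketch of how such an effect and its interval are computed for a single pair of groups follows; all numbers are invented, and the standard error is the common large-sample approximation, not the exact formula a meta-analysis package would use.

```python
import math
from statistics import NormalDist

def smd_ci(m1, sd1, n1, m2, sd2, n2, alpha=0.05):
    """Standardized mean difference (Cohen's d with pooled SD) between two
    groups, with an approximate normal confidence interval."""
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # large-sample approximation to the standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return d, (d - z * se, d + z * se)

# hypothetical peak VO2 (mL/kg/min): HIIT group vs control, n = 20 each
d, (lo, hi) = smd_ci(m1=38.0, sd1=4.0, n1=20, m2=33.0, sd2=4.5, n2=20)
```

    An interval whose lower bound stays above zero, as in the review's 0.28 to 4.91, is what licenses the conclusion that the training effect is positive.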

  3. The design and analysis of salmonid tagging studies in the Columbia River. Volume 7: Monte-Carlo comparison of confidence interval procedures for estimating survival in a release-recapture study, with applications to Snake River salmonids

    International Nuclear Information System (INIS)

    Lowther, A.B.; Skalski, J.

    1996-06-01

    Confidence intervals for survival probabilities between hydroelectric facilities of migrating juvenile salmonids can be computed from the output of the SURPH software developed at the Center for Quantitative Science at the University of Washington. These intervals have been constructed using the estimate of the survival probability, its associated standard error, and assuming the estimate is normally distributed. In order to test the validity and performance of this procedure, two additional confidence interval procedures for estimating survival probabilities were tested and compared using simulated mark-recapture data. Intervals were constructed using normal probability theory, using a percentile-based empirical bootstrap algorithm, and using the profile likelihood concept. Performance of each method was assessed for a variety of initial conditions (release sizes, survival probabilities, detection probabilities). These initial conditions were chosen to encompass the range of parameter values seen in the 1993 and 1994 Snake River juvenile salmonid survival studies. The comparisons among the three estimation methods included average interval width, interval symmetry, and interval coverage
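
    Of the three interval procedures compared above, the percentile bootstrap is the simplest to sketch. The resampling scheme below treats each released fish as an independent survive/not-survive outcome, which is a simplification of a real release-recapture likelihood; the data and function are illustrative only.

```python
import random
from statistics import mean

def bootstrap_ci(data, stat=mean, n_boot=5000, alpha=0.05, seed=7):
    """Percentile-bootstrap confidence interval: resample the data with
    replacement, recompute the statistic, and take empirical quantiles."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                   for _ in range(n_boot))
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2)) - 1]

# toy release-recapture outcomes: 1 = detected downstream (survived), 0 = not
outcomes = [1] * 80 + [0] * 20
lo, hi = bootstrap_ci(outcomes)  # interval for the survival probability
```

    Unlike the normal-theory interval built from an estimate and its standard error, the percentile interval needs no distributional assumption and can be asymmetric, which is one of the properties the Monte Carlo comparison assesses.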

  4. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence

    OpenAIRE

    Chua, Elizabeth F.; Hannula, Deborah E.; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one’s memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors which contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movem...

  5. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    Science.gov (United States)

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
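A minimal sketch of the residual-bootstrap prediction-interval idea for anomaly detection, using a toy k-nearest-neighbour regressor in place of the paper's model; the data, neighbourhood size, and test point are invented for illustration:

```python
import random

def knn_predict(xs, ys, x0, k=5):
    """Simple nonparametric regressor: mean of the k nearest neighbours."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    return sum(ys[i] for i in nearest) / k

def bootstrap_prediction_interval(xs, ys, x0, reps=1000, alpha=0.05, seed=0):
    """Percentile bootstrap PI: resample in-sample residuals and add them
    to the fitted value at x0, with no distributional assumption on noise."""
    random.seed(seed)
    fit = knn_predict(xs, ys, x0)
    residuals = [y - knn_predict(xs, ys, x) for x, y in zip(xs, ys)]
    sims = sorted(fit + random.choice(residuals) for _ in range(reps))
    return sims[int(reps * alpha / 2)], sims[int(reps * (1 - alpha / 2)) - 1]

# Noisy linear data; an observation outside the PI is flagged as anomalous
xs = [i / 10 for i in range(100)]
random.seed(42)
ys = [2 * x + random.gauss(0, 0.3) for x in xs]
lo, hi = bootstrap_prediction_interval(xs, ys, 5.0)
print(lo, hi)                  # interval around the fitted value near 10
print(not (lo <= 14.0 <= hi))  # True: y = 14 observed at x = 5 is anomalous
```

The in-sample residuals here are slightly optimistic because each point contributes to its own fit; a more careful version would use leave-one-out residuals.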

  6. Interval Solution for Nonlinear Programming of Maximizing the Fatigue Life of V-Belt under Polymorphic Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Zhong Wan

    2013-01-01

    Full Text Available In accord with the practical engineering design conditions, a nonlinear programming model is constructed for maximizing the fatigue life of V-belt drive in which some polymorphic uncertainties are incorporated. For a given satisfaction level and a confidence level, an equivalent formulation of this uncertain optimization model is obtained where only interval parameters are involved. Based on the concepts of maximal and minimal range inequalities for describing interval inequality, the interval parameter model is decomposed into two standard nonlinear programming problems, and an algorithm, called two-step based sampling algorithm, is developed to find an interval optimal solution for the original problem. Case study is employed to demonstrate the validity and practicability of the constructed model and the algorithm.

  7. A comparison of Probability Of Detection (POD) data determined using different statistical methods

    Science.gov (United States)

    Fahr, A.; Forsyth, D.; Bullock, M.

    1993-12-01

    Different statistical methods have been suggested for determining probability of detection (POD) data for nondestructive inspection (NDI) techniques. A comparative assessment of various methods of determining POD was conducted using results of three NDI methods obtained by inspecting actual aircraft engine compressor disks which contained service induced cracks. The study found that the POD and 95 percent confidence curves as a function of crack size as well as the 90/95 percent crack length vary depending on the statistical method used and the type of data. The distribution function as well as the parameter estimation procedure used for determining POD and the confidence bound must be included when referencing information such as the 90/95 percent crack length. The POD curves and confidence bounds determined using the range interval method are very dependent on information that is not from the inspection data. The maximum likelihood estimators (MLE) method does not require such information and the POD results are more reasonable. The log-logistic function appears to model POD of hit/miss data relatively well and is easy to implement. The log-normal distribution using MLE provides more realistic POD results and is the preferred method. Although it is more complicated and slower to calculate, it can be implemented on a common spreadsheet program.
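The log-logistic POD model named above has a simple closed form once the parameters are fitted; a sketch with hypothetical parameter values (a real 90/95 crack length would additionally require the covariance of the maximum likelihood estimators):

```python
import math

def pod_loglogistic(a, mu, sigma):
    """Log-logistic POD model for hit/miss data:
    POD(a) = 1 / (1 + exp(-(ln a - mu) / sigma))."""
    return 1.0 / (1.0 + math.exp(-(math.log(a) - mu) / sigma))

def a90(mu, sigma):
    """Crack length detected with 90% probability (the point estimate,
    not the 90/95 value, which needs a confidence bound on the curve)."""
    return math.exp(mu + sigma * math.log(0.9 / 0.1))

mu, sigma = math.log(1.2), 0.4          # hypothetical MLE fit, lengths in mm
print(pod_loglogistic(1.2, mu, sigma))  # 0.5 exactly at a = exp(mu)
print(a90(mu, sigma))
```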

  8. Profiling of RNA degradation for estimation of post mortem [corrected] interval.

    Directory of Open Access Journals (Sweden)

    Fernanda Sampaio-Silva

    Full Text Available An estimation of the post mortem interval (PMI) is frequently touted as the Holy Grail of forensic pathology. During the first hours after death, PMI estimation is dependent on the rate of observable physical modifications including algor, rigor and livor mortis. However, these assessment methods are still largely unreliable and inaccurate. Alternatively, RNA has been put forward as a valuable tool in forensic pathology, namely to identify body fluids, estimate the age of biological stains and study the mechanism of death. Nevertheless, attempts to find a correlation between RNA degradation and PMI have been unsuccessful. The aim of this study was to characterize RNA degradation in different post mortem tissues in order to develop a mathematical model that can be used as a coadjuvant method for more accurate PMI determination. For this purpose, we performed an eleven-hour kinetic analysis of total RNA extracted from murine visceral and muscle tissues. The degradation profile of total RNA and the expression levels of several reference genes were analyzed by quantitative real-time PCR. A quantitative analysis of normalized transcript levels in these tissues allowed the identification of four quadriceps muscle genes (Actb, Gapdh, Ppia and Srp72) that were found to significantly correlate with PMI. These results allowed us to develop a mathematical model with predictive value for estimation of the PMI (confidence interval of ±51 minutes at 95%) that can become an important complementary tool for traditional methods.

  9. Determination of molecular markers associated with anthesis-silking interval in maize

    International Nuclear Information System (INIS)

    Simpson, J.

    1998-01-01

    Maize lines contrasting in anthesis-silking interval (ASI), a trait strongly linked to drought tolerance, have been analyzed under different water stress conditions in the field and with molecular markers. Correlation of marker and field data has revealed molecular markers strongly associated with flowering and yield traits. (author)

  10. An analysis of confidence limit calculations used in AAPM Task Group No. 119

    International Nuclear Information System (INIS)

    Knill, Cory; Snyder, Michael

    2011-01-01

    Purpose: The report issued by AAPM Task Group No. 119 outlined a procedure for evaluating the effectiveness of IMRT commissioning. The procedure involves measuring gamma pass-rate indices for IMRT plans of standard phantoms and determining if the results fall within a confidence limit set by assuming normally distributed data. As stated in the TG report, the assumption of normally distributed gamma pass rates is a convenient approximation for commissioning purposes, but may not accurately describe the data. Here the authors attempt to better describe gamma pass-rate data by fitting it to different distributions. The authors then calculate updated confidence limits using those distributions and compare them to those derived using TG No. 119 method. Methods: Gamma pass-rate data from 111 head and neck patients are fitted using the TG No. 119 normal distribution, a truncated normal distribution, and a Weibull distribution. Confidence limits to 95% are calculated for each and compared. A more general analysis of the expected differences between the TG No. 119 method of determining confidence limits and a more time-consuming curve fitting method is performed. Results: The TG No. 119 standard normal distribution does not fit the measured data. However, due to the small range of measured data points, the inaccuracy of the fit has only a small effect on the final value of the confidence limits. The confidence limits for the 111 patient plans are within 0.1% of each other for all distributions. The maximum expected difference in confidence limits, calculated using TG No. 119's approximation and a truncated distribution, is 1.2%. Conclusions: A three-parameter Weibull probability distribution more accurately fits the clinical gamma index pass-rate data than the normal distribution adopted by TG No. 119. However, the sensitivity of the confidence limit on distribution fit is low outside of exceptional circumstances.
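As a rough illustration of how the choice of distribution moves a confidence limit, the sketch below contrasts a simplified normal-theory limit (mean − 1.96 SD, in the spirit of the TG No. 119 approximation) with a distribution-free empirical percentile. The pass rates are invented, and this one-sided form is a simplification of the task group's actual confidence-limit formula:

```python
import statistics

def normal_theory_limit(pass_rates):
    """Simplified TG-119-style lower limit assuming normality."""
    return statistics.mean(pass_rates) - 1.96 * statistics.pstdev(pass_rates)

def empirical_limit(pass_rates, q=0.05):
    """Distribution-free alternative: empirical 5th percentile,
    insensitive to the skew typical of clinical pass-rate data."""
    return sorted(pass_rates)[int(q * len(pass_rates))]

# Illustrative gamma pass rates (%), skewed toward 100 as in clinical data
rates = [99.8, 99.5, 99.1, 98.7, 98.2, 97.9, 97.5, 96.8, 95.9, 94.0]
print(normal_theory_limit(rates))  # normal-theory limit
print(empirical_limit(rates))      # empirical limit
```

With only a narrow spread of observed pass rates, the two limits land close together, which is consistent with the paper's finding that the fitted distribution matters little outside exceptional circumstances.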

  11. Determinants of child anthropometric indicators in Ethiopia.

    Science.gov (United States)

    Ahmadi, Davod; Amarnani, Ekta; Sen, Akankasha; Ebadi, Narges; Cortbaoui, Patrick; Melgar-Quiñonez, Hugo

    2018-05-15

    Malnutrition is one of the major contributors to child mortality in Ethiopia. As currently established, child nutrition status is assessed by four anthropometric indicators; however, other factors also affect children's anthropometric status. Thus, the main objective of this paper is to explore some of the determinants of child anthropometric indicators in Ethiopia. Data came from GROW (Growing Nutrition for Mothers and Children), a survey of 1261 mothers and 1261 children carried out in Ethiopia in 2016. The goal of GROW is to improve the nutritional status of women of reproductive age (15-49), as well as boys and girls under 5 years of age, in Ethiopia. To investigate the association between different factors and child anthropometric indicators, this study employs statistical methods such as ANOVA, t-tests, and linear regression. The following were significantly associated with the anthropometric indicators (confidence intervals in parentheses): child's sex (wasting: -0.782 to -0.151; stunting: -0.936 to -0.243; underweight: -0.530 to -0.008), child's age (wasting: -0.020 to 0.007; stunting: -0.042 to -0.011; underweight: -0.025 to -0.002), maternal MUAC (wasting: 0.189 to 0.985; BMI-for-age: 0.077 to 0.895), maternal education (stunting: 0.095 to 0.897; underweight: 0.120 to 0.729), and open defecation (stunting: 0.055 to 0.332; underweight: 0.042 to 0.257). Contrary to some findings, maternal dietary diversity showed no significant association with the aforementioned child anthropometric indicators. Depending on the choice of anthropometric indicator, different conclusions were drawn about the association between each factor and child nutritional status. Results showed that child's sex, age, region, open defecation, and maternal MUAC significantly affect child anthropometric indicators.

  12. Can follow-up controls improve the confidence of MR of the breast? A retrospective analysis of follow-up MR images of the breast

    International Nuclear Information System (INIS)

    Betsch, A.; Arndt, E.; Stern, W.; Claussen, C.D.; Mueller-Schimpfle, M.; Wallwiener, D.

    2001-01-01

    Purpose: To assess the change in diagnostic confidence between the first and follow-up dynamic MR examination of the breast (MRM). Methods: The reports of a total of 175 MRM in 77 patients (mean age 50 years; range 36-76) with 98 follow-up MRM were analyzed. All examinations were performed as a dynamic study (Gd-DTPA, 0.16 mmol/kg; 6-7 repetitive studies). The change in diagnostic confidence was retrospectively classified as follows: controlled lesion vanished during follow-up (category I); diagnostic confidence increases during follow-up (II), more likely benign (IIa), more suspicious (IIb); no difference in diagnostic confidence (III). Long-term follow-up over an average of four years was obtained for 57 patients with category IIa/III findings. Results: In 98 follow-up examinations, only two lesions vanished (2%). In 77/98 cases a category IIa lesion was diagnosed, in 11 cases a category IIb lesion. In 8 cases (8%) there was no change in diagnostic confidence during follow-up. Lesions in category IIb underwent biopsy in 10/11 cases; in one case long-term follow-up proved a completely regressed inflammatory change. In 8/11 suspicious findings (IIb) a malignant tumor was detected. The mean time interval between first and follow-up MRM was 8 months for I-IIb lesions, and 4 months for category III lesions. In the long-term follow-up, two patients with a category IIa lesion developed a carcinoma in a different breast area after four and five years. Conclusion: MRM follow-up increases diagnostic confidence if the time interval is adequate (>4 months). A persistently or increasingly suspicious finding warrants biopsy. (orig.)

  13. Confidence in critical care nursing.

    Science.gov (United States)

    Evans, Jeanne; Bell, Jennifer L; Sweeney, Annemarie E; Morgan, Jennifer I; Kelly, Helen M

    2010-10-01

    The purpose of the study was to gain an understanding of the nursing phenomenon, confidence, from the experience of nurses in the nursing subculture of critical care. Leininger's theory of cultural care diversity and universality guided this qualitative descriptive study. Questions derived from the sunrise model were used to elicit nurses' perspectives about cultural and social structures that exist within the critical care nursing subculture and the influence that these factors have on confidence. Twenty-eight critical care nurses from a large Canadian healthcare organization participated in semistructured interviews about confidence. Five themes arose from the descriptions provided by the participants. The three themes, tenuously navigating initiation rituals, deliberately developing holistic supportive relationships, and assimilating clinical decision-making rules were identified as social and cultural factors related to confidence. The remaining two themes, preserving a sense of security despite barriers and accommodating to diverse challenges, were identified as environmental factors related to confidence. Practice and research implications within the culture of critical care nursing are discussed in relation to each of the themes.

  14. Professional confidence: a concept analysis.

    Science.gov (United States)

    Holland, Kathlyn; Middleton, Lyn; Uys, Leana

    2012-03-01

    Professional confidence is a concept that is frequently used and or implied in occupational therapy literature, but often without specifying its meaning. Rodgers's Model of Concept Analysis was used to analyse the term "professional confidence". Published research obtained from a federated search in four health sciences databases was used to inform the concept analysis. The definitions, attributes, antecedents, and consequences of professional confidence as evidenced in the literature are discussed. Surrogate terms and related concepts are identified, and a model case of the concept provided. Based on the analysis, professional confidence can be described as a dynamic, maturing personal belief held by a professional or student. This includes an understanding of and a belief in the role, scope of practice, and significance of the profession, and is based on their capacity to competently fulfil these expectations, fostered through a process of affirming experiences. Developing and fostering professional confidence should be nurtured and valued to the same extent as professional competence, as the former underpins the latter, and both are linked to professional identity.

  15. Nearest unlike neighbor (NUN): an aid to decision confidence estimation

    Science.gov (United States)

    Dasarathy, Belur V.

    1995-09-01

    The concept of nearest unlike neighbor (NUN), proposed and explored previously in the design of nearest neighbor (NN) based decision systems, is further exploited in this study to develop a measure of confidence in the decisions made by NN-based decision systems. This measure of confidence, on the basis of comparison with a user-defined threshold, may be used to determine the acceptability of the decision provided by the NN-based decision system. The concepts, associated methodology, and some illustrative numerical examples using the now classical Iris data to bring out the ease of implementation and effectiveness of the proposed innovations are presented.
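A toy sketch of a NUN-based confidence measure. The exact ratio used here (distance to the nearest unlike neighbour over distance to the nearest neighbour) is an assumption chosen for illustration, not necessarily Dasarathy's formulation, and the two-class points merely stand in for the Iris data:

```python
import math

def nun_confidence(query, data, labels):
    """Confidence of a 1-NN decision: ratio of the distance to the
    nearest unlike neighbour (NUN) over the distance to the nearest
    neighbour (NN). A large ratio means the query sits far from the
    opposing class, so the decision is more trustworthy; comparing the
    ratio to a user-defined threshold gives an accept/reject rule."""
    def dist(a, b):
        return math.dist(a, b)
    nn_idx = min(range(len(data)), key=lambda i: dist(query, data[i]))
    decided = labels[nn_idx]
    d_nn = dist(query, data[nn_idx])
    d_nun = min(dist(query, data[i])
                for i in range(len(data)) if labels[i] != decided)
    return decided, d_nun / max(d_nn, 1e-12)

data = [(0.0, 0.0), (0.2, 0.1), (3.0, 3.0), (3.1, 2.9)]
labels = ["setosa", "setosa", "virginica", "virginica"]
label, conf = nun_confidence((0.1, 0.0), data, labels)
print(label, conf)  # "setosa" with a large ratio: NUN is far beyond the NN
```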

  16. Confidence Intervals for Omega Coefficient: Proposal for Calculus.

    Science.gov (United States)

    Ventura-León, José Luis

    2018-01-01

    Reliability is understood as a metric property of the scores of a measurement instrument. The omega coefficient (ω) has recently come into use for estimating reliability. However, measurement is never exact, owing to the influence of random error; it is therefore necessary to compute and report a confidence interval (CI), which locates the true value within a range of measurement. In this context, the article proposes a way to estimate the CI using the bootstrap method and, to ease this procedure, provides code for R (a freely available software environment) so that the calculations can be carried out in a user-friendly way. It is hoped that the article will be of help to researchers in the health sciences.

  17. Intervals of confidence: Uncertain accounts of global hunger

    NARCIS (Netherlands)

    Yates-Doerr, E.

    2015-01-01

    Global health policy experts tend to organize hunger through scales of ‘the individual’, ‘the community’ and ‘the global’. This organization configures hunger as a discrete, measurable object to be scaled up or down with mathematical certainty. This article offers a counter to this approach, using

  18. A quick method to calculate QTL confidence interval

    Indian Academy of Sciences (India)

    2011-08-19


  19. An approximate confidence interval for recombination fraction in ...

    African Journals Online (AJOL)


    2011-02-14


  20. Confidence level in the calculations of HCDA consequences using large codes

    International Nuclear Information System (INIS)

    Nguyen, D.H.; Wilburn, N.P.

    1979-01-01

    The probabilistic approach to nuclear reactor safety is playing an increasingly significant role. For the liquid-metal fast breeder reactor (LMFBR) in particular, the ultimate application of this approach could be to determine the probability of achieving the goal of a specific line-of-assurance (LOA). Meanwhile, a more pressing problem is that of quantifying the uncertainty in a consequence calculated for a hypothetical core disruptive accident (HCDA) using large codes. Such uncertainty arises from imperfect modeling of phenomenology and/or from inaccuracy in input data. A method is presented to determine the confidence level in consequences calculated by a large computer code due to the known uncertainties in input variables. A particular application was made to the initial time of pin failure in a transient overpower HCDA calculated by the code MELT-IIIA in order to demonstrate the method. A probability distribution function (pdf) for the time of failure was first constructed; the confidence level for predicting this failure parameter within a desired range was then determined.

  1. PHYSIOTHERAPISTS' ATTIRE: DOES IT AFFECT PATIENTS' COMFORT, CONFIDENCE AND OVERALL PATIENT-THERAPIST RELATIONSHIP

    Directory of Open Access Journals (Sweden)

    Adamu Ahmad Rufa'i

    2015-10-01

    Full Text Available Background: Attire is one of the major determinants of appearance and a key element of non-verbal communication that plays a critical role in establishing and sustaining therapeutic relationships. This study aimed to determine patients' preferred physiotherapist attire and the effect of physiotherapists' attire on patients' confidence, comfort, and the patient-therapist relationship. Methods: A questionnaire was used to collect data in this cross-sectional study. Patients (N=281) attending outpatient physiotherapy clinics in six selected tertiary health institutions in North-eastern Nigeria completed a questionnaire consisting of two sections. Section one solicited sociodemographic information, while in section two patients rated their level of confidence and comfort with physiotherapists based on photographs of a male and a female physiotherapist model in four different attires. Descriptive statistics were used to characterize participants, and differences in patients' confidence and comfort level across attire types were assessed using chi-square tests. The correlation between physiotherapists' attire and the patient-physiotherapist relationship was determined using Spearman rank correlation. Results: An overwhelming majority of the participants were more comfortable (91.1%) and more confident (89.0%) with physiotherapists dressed in a white coat, and less comfortable and less confident when their therapists were dressed in a suit, native, or casual wear. A positive patient-therapist relationship was observed with white-coat-dressed physiotherapists, while the relationship with business, native, and casual wear was inverse. Conclusion: The study supports the continuing recommendation of the lab coat as professional dress for physiotherapists in Nigeria and affirms the importance of professional dress in the patient-therapist relationship.

  2. Probability and Confidence Trade-space (PACT) Evaluation: Accounting for Uncertainty in Sparing Assessments

    Science.gov (United States)

    Anderson, Leif; Box, Neil; Carter, Katrina; DiFilippo, Denise; Harrington, Sean; Jackson, David; Lutomski, Michael

    2012-01-01

    There are two general shortcomings to the current annual sparing assessment: 1. The vehicle functions are currently assessed according to confidence targets, which can be misleading: overly conservative or optimistic. 2. The current confidence levels are arbitrarily determined and do not account for epistemic uncertainty (lack of knowledge) in the ORU failure rate. There are two major categories of uncertainty that impact sparing assessment: (a) Aleatory uncertainty: natural variability in the distribution of actual failures around a Mean Time Between Failures (MTBF). (b) Epistemic uncertainty: lack of knowledge about the true value of an Orbital Replacement Unit's (ORU) MTBF. We propose an approach to revise confidence targets and account for both categories of uncertainty, an approach we call Probability and Confidence Trade-space (PACT) evaluation.
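One way to account for both uncertainty categories can be sketched as follows, under assumed numbers and not as the abstract's actual procedure: sample the ORU's MTBF from an epistemic distribution, and for each sample score the aleatory Poisson probability that the on-hand spares cover demand. The lognormal spread, MTBF, and spare count below are all hypothetical:

```python
import math
import random

def prob_sufficient_spares(spares, mission_hours, mtbf_samples):
    """Probability that `spares` cover Poisson failure demand, averaged
    over epistemic samples of the ORU's MTBF. Aleatory uncertainty is the
    Poisson scatter around each MTBF; epistemic uncertainty is the spread
    of the MTBF samples themselves."""
    def poisson_cdf(k, lam):
        return sum(math.exp(-lam) * lam ** i / math.factorial(i)
                   for i in range(k + 1))
    return sum(poisson_cdf(spares, mission_hours / m)
               for m in mtbf_samples) / len(mtbf_samples)

random.seed(3)
# Hypothetical ORU: nominal MTBF 10,000 h with epistemic scatter
mtbfs = [random.lognormvariate(math.log(10_000), 0.3) for _ in range(2_000)]
p = prob_sufficient_spares(2, 8_760, mtbfs)
print(p)  # probability two spares suffice for one year of operation
```

Marginalizing over the MTBF samples gives a lower (more honest) probability than plugging in the nominal MTBF alone whenever the epistemic scatter is wide.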

  3. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    Science.gov (United States)

    Gannon, James M.

    1994-12-01

    System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLE's) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLE's.
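Integrating a Weibull (power-law) ROCOF over annual usage intervals, as described above, has a closed form; a sketch with hypothetical parameter estimates standing in for the MLE's:

```python
def expected_failures(t1, t2, beta, eta):
    """Expected failures of a power-law NHPP over [t1, t2].

    With Weibull intensity u(t) = (beta/eta) * (t/eta)**(beta - 1),
    the integral of u(t) from t1 to t2 is (t2/eta)**beta - (t1/eta)**beta.
    """
    return (t2 / eta) ** beta - (t1 / eta) ** beta

# Deteriorating system (beta > 1): expected failures grow year over year,
# which is what drives the overhaul-or-replace decision
eta, beta = 2.0, 1.5  # hypothetical MLE estimates, time in years
for year in range(3):
    n = expected_failures(year, year + 1, beta, eta)
    print(f"year {year}-{year + 1}: {n:.2f} expected failures")
```

Multiplying each year's expected count by a cost of failure yields the expected maintenance cash flow that the paper feeds into its net-present-value simulation.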

  4. Decoded fMRI neurofeedback can induce bidirectional confidence changes within single participants.

    Science.gov (United States)

    Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Lau, Hakwan; Kawato, Mitsuo

    2017-04-01

    Neurofeedback studies using real-time functional magnetic resonance imaging (rt-fMRI) have recently incorporated the multi-voxel pattern decoding approach, allowing for fMRI to serve as a tool to manipulate fine-grained neural activity embedded in voxel patterns. Because of its tremendous potential for clinical applications, certain questions regarding decoded neurofeedback (DecNef) must be addressed. Specifically, can the same participants learn to induce neural patterns in opposite directions in different sessions? If so, how does previous learning affect subsequent induction effectiveness? These questions are critical because neurofeedback effects can last for months, but the short- to mid-term dynamics of such effects are unknown. Here we employed a within-subjects design, where participants underwent two DecNef training sessions to induce behavioural changes of opposing directionality (up or down regulation of perceptual confidence in a visual discrimination task), with the order of training counterbalanced across participants. Behavioral results indicated that the manipulation was strongly influenced by the order and the directionality of neurofeedback training. We applied nonlinear mathematical modeling to parametrize four main consequences of DecNef: main effect of change in confidence, strength of down-regulation of confidence relative to up-regulation, maintenance of learning effects, and anterograde learning interference. Modeling results revealed that DecNef successfully induced bidirectional confidence changes in different sessions within single participants. Furthermore, the effect of up- compared to down-regulation was more prominent, and confidence changes (regardless of the direction) were largely preserved even after a week-long interval. Lastly, the effect of the second session was markedly diminished as compared to the effect of the first session, indicating strong anterograde learning interference. These results are interpreted in the framework

  5. Long-Term Maintenance of Immediate or Delayed Extinction Is Determined by the Extinction-Test Interval

    Science.gov (United States)

    Johnson, Justin S.; Escobar, Martha; Kimble, Whitney L.

    2010-01-01

    Short acquisition-extinction intervals (immediate extinction) can lead to either more or less spontaneous recovery than long acquisition-extinction intervals (delayed extinction). Using rat subjects, we observed less spontaneous recovery following immediate than delayed extinction (Experiment 1). However, this was the case only if a relatively…

  6. Confidence and trust: empirical investigations for the Netherlands and the financial sector

    OpenAIRE

    Mosch, Robert; Prast, Henriëtte

    2010-01-01

    This paper reviews the state of confidence and trust in the Netherlands, with special attention to the financial sector. An attempt has been made to identify the factors that determine individual trust and confidence and to uncover connections between the various variables. Based on surveys over the period 2003-2006, the data show that interpersonal trust in the Netherlands - the extent to which the Dutch trust each other - is high from both an international and an historical perspective. Peo...

  7. [Sources of leader's confidence in organizations].

    Science.gov (United States)

    Ikeda, Hiroshi; Furukawa, Hisataka

    2006-04-01

    The purpose of this study was to examine the sources of confidence that organization leaders had. As potential sources of the confidence, we focused on fulfillment of expectations made by self and others, reflection on good as well as bad job experiences, and awareness of job experiences in terms of commonality, differentiation, and multiple viewpoints. A questionnaire was administered to 170 managers of Japanese companies. Results were as follows: First, confidence in leaders was more strongly related to fulfillment of expectations made by self and others than reflection on and awareness of job experiences. Second, the confidence was weakly related to internal processing of job experiences, in the form of commonality awareness and reflection on good job experiences. And finally, years of managerial experiences had almost no relation to the confidence. These findings suggested that confidence in leaders was directly acquired from fulfillment of expectations made by self and others, rather than indirectly through internal processing of job experiences. Implications of the findings for leadership training were also discussed.

  8. Reference Value Advisor: a new freeware set of macroinstructions to calculate reference intervals with Microsoft Excel.

    Science.gov (United States)

    Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine

    2011-03-01

    International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
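The nonparametric route that these macroinstructions automate (empirical 2.5th/97.5th percentiles for the limits, with 90% bootstrap CIs around each limit) can be sketched as follows. This is not Reference Value Advisor's code, the percentile indexing is a simplification of the recommended rank formulas, and the simulated analyte values are invented:

```python
import random

def reference_interval(values, lower_q=0.025, upper_q=0.975):
    """Nonparametric reference limits: empirical 2.5th/97.5th percentiles."""
    ordered = sorted(values)
    n = len(ordered)
    return ordered[int(lower_q * n)], ordered[min(int(upper_q * n), n - 1)]

def limit_confidence_intervals(values, reps=1000, level=0.90, seed=7):
    """90% bootstrap CIs around each reference limit, used to judge how
    precise the limits are when the reference sample group is small."""
    random.seed(seed)
    lows, highs = [], []
    for _ in range(reps):
        lo, hi = reference_interval(random.choices(values, k=len(values)))
        lows.append(lo)
        highs.append(hi)
    a = (1 - level) / 2
    def ci(samples):
        s = sorted(samples)
        return s[int(a * reps)], s[int((1 - a) * reps) - 1]
    return ci(lows), ci(highs)

random.seed(0)
values = [random.gauss(100, 10) for _ in range(120)]  # simulated healthy analyte
ri = reference_interval(values)
cis = limit_confidence_intervals(values)
print(ri)   # roughly (80, 120) for Normal(100, 10) data
print(cis)  # 90% CI of the lower limit, then of the upper limit
```

Wide CIs around the limits are the program's signal that the reference sample group is too small for the limits to be trusted.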

  9. CLUSTERING OF THE COUNTRIES ACCORDING TO CONSUMER CONFIDENCE INDEX AND EVALUATING WITH HUMAN DEVELOPMENT INDEX

    Directory of Open Access Journals (Sweden)

    Seda BAĞDATLI KALKAN

    2018-01-01

    Full Text Available The consumer confidence index is a national indicator of current and future expectations about economic conditions. It aims to capture consumers' trends and expectations regarding the general economic situation, employment opportunities, their financial situation, and developments in the markets. Another such parameter is the Human Development Index (HDI), an indicator that examines the development of countries both economically and socially. Countries are ranked by these two indices, which serve as basic parameters in international fora. The purpose of this study is to group selected countries according to the consumer confidence index, reveal the features of the groups, and then locate the grouped countries on the Human Development Index. According to the results of the cluster analysis, India, China, Sweden and the USA have the highest total consumer confidence, employment, expectation and investment indices.

  10. Experimental congruence of interval scale production from paired comparisons and ranking for image evaluation

    Science.gov (United States)

    Handley, John C.; Babcock, Jason S.; Pelz, Jeff B.

    2003-12-01

Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment, in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and Apple Cinema Display were used for stimulus presentation. Observers performed rank order and paired comparisons tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects (5 females, 14 males), ranging from 19 to 51 years of age, participated in this experiment. Using a paired comparison model and a ranking model, scales were estimated for each display and image combination, yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired comparisons data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack of fit.
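For readers unfamiliar with it, Bradley-Terry scale estimation from paired-comparison counts can be carried out with a simple minorization-maximization (MM) iteration. The preference counts below are made up for illustration and are not the study's data:

```python
import numpy as np

def bradley_terry(wins, n_iter=500):
    """MM iteration for Bradley-Terry worth parameters.

    wins[i, j] = number of times object i was preferred over object j.
    Returns worths normalized to sum to 1.
    """
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(n_iter):
        new_p = np.empty(n)
        for i in range(n):
            total_wins = wins[i].sum()
            # Standard MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j)
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p[i] = total_wins / denom
        p = new_p / new_p.sum()
    return p

# Three images, ten comparisons per pair (hypothetical counts)
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)
worths = bradley_terry(wins)
```

The logarithms of the estimated worths give the interval-scale values that the abstract compares across the paired-comparison and ranking protocols.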

  11. Confidence-building and Canadian leadership

    International Nuclear Information System (INIS)

    Cleminson, F.R.

    1998-01-01

    Confidence-building has come into its own as a 'tool of choice' in facilitating the non-proliferation, arms control and disarmament (NACD) agenda, whether regional or global. From the Middle East Peace Process (MEPP) to the ASEAN Intersessional Group on Confidence-Building (ARF ISG on CBMS), confidence-building has assumed a central profile in regional terms. In the Four Power Talks begun in Geneva on December 9, 1997, the United States identified confidence-building as one of two subject areas for initial discussion as part of a structured peace process between North and South Korea. Thus, with CBMs assuming such a high profile internationally, it seems prudent for Canadians to pause and take stock of the significant role which Canada has already played in the conceptual development of the process over the last two decades. Since the Helsinki accords of 1975, Canada has developed a significant expertise in this area through an unbroken series of original, basic research projects. These have contributed to defining the process internationally from concept to implementation. Today, these studies represent a solid and unique Departmental investment in basic research from which to draw in meeting Canada's current commitments to multilateral initiatives in the area of confidence-building and to provide a 'step up' in terms of future-oriented leadership. (author)

  12. On-line confidence monitoring during decision making.

    Science.gov (United States)

    Dotan, Dror; Meyniel, Florent; Dehaene, Stanislas

    2018-02-01

Humans can readily assess their degree of confidence in their decisions. Two models of confidence computation have been proposed: post hoc computation using post-decision variables and heuristics, versus online computation using continuous assessment of evidence throughout the decision-making process. Here, we arbitrate between these theories by continuously monitoring finger movements during a manual sequential decision-making task. Analysis of finger kinematics indicated that subjects kept separate online records of evidence and confidence: finger deviation continuously reflected the ongoing accumulation of evidence, whereas finger speed continuously reflected the momentary degree of confidence. Furthermore, end-of-trial finger speed predicted the post-decisional subjective confidence rating. These data indicate that confidence is computed online, throughout the decision process. Speed-confidence correlations were previously interpreted as a post-decision heuristic, whereby slow decisions decrease subjective confidence, but our results suggest an adaptive mechanism with the opposite causality: by slowing down when unconfident, participants gain time to improve their decisions. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. How do regulators measure public confidence?

    International Nuclear Information System (INIS)

    Schmitt, A.; Besenyei, E.

    2006-01-01

The conclusions and recommendations of this session can be summarized this way. - There are some important elements of confidence: visibility, satisfaction, credibility and reputation. The latter can consist of trust, positive image and knowledge of the role the organisation plays. A good reputation is hard to achieve but easy to lose. - There is a need to define what public confidence is and what to measure. The difficulty is that confidence is a matter of perception of the public, so what we try to measure is the perception. - It is controversial how to take into account the results of confidence measurement because of the influence of the context. It is not an exact science; results should be examined cautiously and surveys should be conducted frequently, at least every two years. - Different experiences were explained: - Quantitative surveys - among the general public or more specific groups like the media; - Qualitative research - with test groups and small panels; - Semi-quantitative studies - among stakeholders who have regular contacts with the regulatory body. It is not clear if the results should be shared with the public or just with other authorities and governmental organisations. - Efforts are needed to increase visibility, which is a prerequisite for confidence. - A practical example of organizing an emergency exercise and an information campaign without taking into account the real concerns of the people was given to show how public confidence can be decreased. - We learned about a new method - the so-called socio-drama - which addresses another issue also connected to confidence - the notion of understanding between stakeholders around a nuclear site. It is another way of looking at confidence in a more restricted group. (authors)

  14. The Great Recession and confidence in homeownership

    OpenAIRE

    Anat Bracha; Julian Jamison

    2013-01-01

    Confidence in homeownership shifts for those who personally experienced real estate loss during the Great Recession. Older Americans are confident in the value of homeownership. Younger Americans are less confident.

  15. Reference intervals for serum total cholesterol, HDL cholesterol and ...

    African Journals Online (AJOL)

    Reference intervals of total cholesterol, HDL cholesterol and non-HDL cholesterol concentrations were determined on 309 blood donors from an urban and peri-urban population of Botswana. Using non-parametric methods to establish 2.5th and 97.5th percentiles of the distribution, the intervals were: total cholesterol 2.16 ...

  16. Convex Interval Games

    NARCIS (Netherlands)

    Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

    2008-01-01

    In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games and their relations with the interval core for

  17. Confidence-building and Canadian leadership

    Energy Technology Data Exchange (ETDEWEB)

    Cleminson, F.R. [Dept. of Foreign Affairs and International Trade, Verification, Non-Proliferation, Arms Control and Disarmament Div (IDA), Ottawa, Ontario (Canada)

    1998-07-01

    Confidence-building has come into its own as a 'tool of choice' in facilitating the non-proliferation, arms control and disarmament (NACD) agenda, whether regional or global. From the Middle East Peace Process (MEPP) to the ASEAN Intersessional Group on Confidence-Building (ARF ISG on CBMS), confidence-building has assumed a central profile in regional terms. In the Four Power Talks begun in Geneva on December 9, 1997, the United States identified confidence-building as one of two subject areas for initial discussion as part of a structured peace process between North and South Korea. Thus, with CBMs assuming such a high profile internationally, it seems prudent for Canadians to pause and take stock of the significant role which Canada has already played in the conceptual development of the process over the last two decades. Since the Helsinki accords of 1975, Canada has developed a significant expertise in this area through an unbroken series of original, basic research projects. These have contributed to defining the process internationally from concept to implementation. Today, these studies represent a solid and unique Departmental investment in basic research from which to draw in meeting Canada's current commitments to multilateral initiatives in the area of confidence-building and to provide a 'step up' in terms of future-oriented leadership. (author)

  18. Confidence assessment. Site descriptive modelling SDM-Site Forsmark

    International Nuclear Information System (INIS)

    2008-09-01

The objective of this report is to assess the confidence that can be placed in the Forsmark site descriptive model, based on the information available at the conclusion of the surface-based investigations (SDM-Site Forsmark). In this exploration, an overriding question is whether remaining uncertainties are significant for repository engineering design or long-term safety assessment and could successfully be further reduced by more surface-based investigations, or more usefully by explorations underground made during construction of the repository. The confidence in the Forsmark site descriptive model, based on the data available at the conclusion of the surface-based site investigations, has been assessed by exploring: confidence in the site characterisation data base; key remaining issues and their handling; handling of alternative models; consistency between disciplines; and the main reasons for confidence and lack of confidence in the model. It is generally found that the key aspects of importance for safety assessment and repository engineering of the Forsmark site descriptive model are associated with a high degree of confidence. Because of the robust geological model that describes the site, the overall confidence in the Forsmark site descriptive model is judged to be high. While some aspects have lower confidence, this lack of confidence is handled by providing wider uncertainty ranges, bounding estimates and/or alternative models. Most, but not all, of the low-confidence aspects have little impact on repository engineering design or on long-term safety. Poor precision in the measured data is judged to have limited impact on uncertainties in the site descriptive model, with the exception of inaccuracy in determining the position of some boreholes at depth in 3-D space, the poor precision of the orientation of BIPS images in some boreholes, and the poor precision of stress data determined by overcoring at the locations where the pre

  19. Psychosocial determinants of nurses' intention to practise euthanasia in palliative care.

    Science.gov (United States)

    Lavoie, Mireille; Godin, Gaston; Vézina-Im, Lydi-Anne; Blondeau, Danielle; Martineau, Isabelle; Roy, Louis

    2016-02-01

    Most studies on euthanasia fail to explain the intentions of health professionals when faced with performing euthanasia and are atheoretical. The purpose of this study was to identify the psychosocial determinants of nurses' intention to practise euthanasia in palliative care if it were legalised. A cross-sectional study using a validated anonymous questionnaire based on an extended version of the Theory of Planned Behaviour. A random sample of 445 nurses from the province of Quebec, Canada, was selected for participation in the study. The study was reviewed and approved by the Ethics Committee of the Centre hospitalier universitaire de Québec. The response rate was 44.2% and the mean score for intention was 4.61 ± 1.90 (range: 1-7). The determinants of intention were the subjective (odds ratio = 3.08; 95% confidence interval: 1.50-6.35) and moral (odds ratio = 2.95; 95% confidence interval: 1.58-5.49) norms. Specific beliefs which could discriminate nurses according to their level of intention were identified. Overall, nurses have a slightly positive intention to practise euthanasia. Their family approval seems particularly important and also the approval of their medical colleagues. Nurses' moral norm was related to beneficence, an ethical principle. To our knowledge, this is the first study to identify nurses' motivations to practise euthanasia in palliative care using a validated psychosocial theory. It also has the distinction of identifying the ethical principles underlying nurses' moral norm and intention. © The Author(s) 2014.

  20. Confidence limits for data mining models of options prices

    Science.gov (United States)

    Healy, J. V.; Dixon, M.; Read, B. J.; Cai, F. F.

    2004-12-01

    Non-parametric methods such as artificial neural nets can successfully model prices of financial options, out-performing the Black-Scholes analytic model (Eur. Phys. J. B 27 (2002) 219). However, the accuracy of such approaches is usually expressed only by a global fitting/error measure. This paper describes a robust method for determining prediction intervals for models derived by non-linear regression. We have demonstrated it by application to a standard synthetic example (29th Annual Conference of the IEEE Industrial Electronics Society, Special Session on Intelligent Systems, pp. 1926-1931). The method is used here to obtain prediction intervals for option prices using market data for LIFFE “ESX” FTSE 100 index options ( http://www.liffe.com/liffedata/contracts/month_onmonth.xls). We avoid special neural net architectures and use standard regression procedures to determine local error bars. The method is appropriate for target data with non constant variance (or volatility).
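A common recipe for local error bars when the target data have non-constant variance is to fit a second model to the squared residuals of the first. The sketch below illustrates that general idea on synthetic data, using polynomial least squares as a stand-in for the paper's neural networks; the data, degrees, and thresholds are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic heteroscedastic data: noise standard deviation grows with x
x = np.linspace(0.0, 1.0, 500)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.05 + 0.30 * x)

# Stage 1: fit the conditional mean (stand-in for the regression model)
y_hat = np.polyval(np.polyfit(x, y, 5), x)

# Stage 2: fit the squared residuals to estimate the local variance
resid2 = (y - y_hat) ** 2
sigma2_hat = np.clip(np.polyval(np.polyfit(x, resid2, 2), x), 1e-8, None)

# 95% prediction interval assuming locally Gaussian errors
half_width = 1.96 * np.sqrt(sigma2_hat)
lower, upper = y_hat - half_width, y_hat + half_width
coverage = np.mean((y >= lower) & (y <= upper))
```

The fitted intervals widen as the noise grows with x, which is the qualitative behaviour required of error bars for option prices whose volatility is not constant.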

  1. Restricted Interval Valued Neutrosophic Sets and Restricted Interval Valued Neutrosophic Topological Spaces

    Directory of Open Access Journals (Sweden)

    Anjan Mukherjee

    2016-08-01

Full Text Available In this paper we introduce the concept of restricted interval valued neutrosophic sets (RIVNS for short). Some basic operations and properties of RIVNS are discussed. The concept of restricted interval valued neutrosophic topology is also introduced, together with restricted interval valued neutrosophic finer and restricted interval valued neutrosophic coarser topology. We also define the restricted interval valued neutrosophic interior and closure of a restricted interval valued neutrosophic set. Some theorems and examples are cited. Restricted interval valued neutrosophic subspace topology is also studied.

  2. Determination of the biodiesel acidity index by potentiometric titration by using different methods; Determinacao do indice de acidez de biodiesel por titulacao potenciometrica utilizando-se diferentes metodos

    Energy Technology Data Exchange (ETDEWEB)

    Goncalves, Mary Ane; Sobral, Sidney Pereira; Borges, Paulo Paschoal [Instituto Nacional de Metrologia, Normalizacao e Qualidade Industrial (DIMCI/INMETRO), Duque de Caxias, RJ (Brazil). Diretoria de Metrologia Cientifica e Industrial], E-mail: magoncalves@inmetro.gov.br

    2009-07-01

This work determined the acidity index of soybean/fat biodiesel by potentiometric titration. Four different methods were used, with variation of solvent and electrodes. The results were compared using F and Student's t tests, and it was verified that they agreed at the 95% confidence level.
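The F-then-t comparison described in this record can be reproduced generically with SciPy. The acid-number readings below are invented for illustration and are not the authors' measurements:

```python
import numpy as np
from scipy import stats

def methods_agree(a, b, alpha=0.05):
    """F test on variances, then t test on means, at the 1 - alpha level."""
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfa, dfb = len(a) - 1, len(b) - 1
    # Two-sided p-value for the variance-ratio (F) statistic
    p_f = 2.0 * min(stats.f.cdf(f, dfa, dfb), stats.f.sf(f, dfa, dfb))
    # If the variances look equal, use the pooled Student's t test;
    # otherwise fall back to Welch's t test.
    _, p_t = stats.ttest_ind(a, b, equal_var=p_f > alpha)
    return p_f > alpha and p_t > alpha

acid_a = [0.48, 0.50, 0.47, 0.49, 0.51]  # mg KOH/g, method 1 (made up)
acid_b = [0.49, 0.50, 0.48, 0.50, 0.52]  # mg KOH/g, method 2 (made up)
agree = methods_agree(acid_a, acid_b)
```

When both p-values exceed 0.05, the two titration methods are considered statistically indistinguishable at the 95% confidence level, which is the criterion the abstract applies.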

  3. Prehospital factors determining regional variation in thrombolytic therapy in acute ischemic stroke.

    Science.gov (United States)

    Lahr, Maarten M H; Vroomen, Patrick C A J; Luijckx, Gert-Jan; van der Zee, Durk-Jouke; de Vos, Ronald; Buskens, Erik

    2014-10-01

Treatment rates with intravenous tissue plasminogen activator vary by region, which can be partially explained by organizational models of stroke care. A recent study demonstrated that prehospital factors determine a higher thrombolysis rate in a centralized vs. decentralized model in the north of the Netherlands. To investigate prehospital factors that may explain variation in thrombolytic therapy between a centralized and a decentralized model. A consecutive case observational study was conducted in the north of the Netherlands comparing patients arriving within 4·5 h in a centralized vs. decentralized stroke care model. Factors investigated were transportation mode, prehospital diagnostic accuracy, and preferential referral of thrombolysis candidates. Potential confounders were adjusted using logistic regression analysis. A total of 172 and 299 patients arriving within 4·5 h were enrolled in the centralized and decentralized settings, respectively. The rate of transportation by emergency medical services was greater in the centralized model (adjusted odds ratio 3·11; 95% confidence interval, 1·59-6·06). Also, more misdiagnoses of stroke occurred in the central model (P = 0·05). In postal code areas with and without potential preferential referral of thrombolysis candidates due to overlapping catchment areas, the odds of hospital arrival within 4·5 h in the central vs. decentral model were 2·15 (95% confidence interval, 1·39-3·32) and 1·44 (95% confidence interval, 1·04-2·00), respectively. These results suggest that the larger proportion of patients arriving within 4·5 h in the centralized model might be related to a lower threshold to use emergency services to transport stroke patients and partly to preferential referral of thrombolysis candidates. © 2013 The Authors. International Journal of Stroke © 2013 World Stroke Organization.
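The odds ratios above come from an adjusted logistic regression; for intuition, an unadjusted odds ratio and its 95% confidence interval can be computed from a 2x2 table with the standard log-odds formula. The counts below are invented for illustration and are not the study's data:

```python
import math

# Hypothetical 2x2 table: EMS transport (yes/no) by stroke-care model
a, b = 120, 52    # centralized model: EMS yes / EMS no (made-up counts)
c, d = 150, 149   # decentralized model: EMS yes / EMS no (made-up counts)

odds_ratio = (a * d) / (b * c)
# Standard error of ln(OR) from the four cell counts
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
# An interval that excludes 1.0 indicates a significant association.
```

Adjusted odds ratios, as reported in the abstract, follow the same interpretation but are obtained from the logistic regression coefficients after conditioning on confounders.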

  4. T(peak)T(end) interval in long QT syndrome

    DEFF Research Database (Denmark)

    Kanters, Jørgen Kim; Haarmark, Christian; Vedel-Larsen, Esben

    2008-01-01

    BACKGROUND: The T(peak)T(end) (T(p)T(e)) interval is believed to reflect the transmural dispersion of repolarization. Accordingly, it should be a risk factor in long QT syndrome (LQTS). The aim of the study was to determine the effect of genotype on T(p)T(e) interval and test whether it was relat...

  5. Simultaneous determination of radionuclides separable into natural decay series by use of time-interval analysis

    International Nuclear Information System (INIS)

    Hashimoto, Tetsuo; Sanada, Yukihisa; Uezu, Yasuhiro

    2004-01-01

A delayed coincidence method, time-interval analysis (TIA), has been applied to successive α-α decay events on the millisecond time-scale. Such decay events are part of the ²²⁰Rn→²¹⁶Po (T₁/₂ = 145 ms) (Th-series) and ²¹⁹Rn→²¹⁵Po (T₁/₂ = 1.78 ms) (Ac-series) decays. By using TIA in addition to measurement of ²²⁶Ra (U-series) from α-spectrometry by liquid scintillation counting (LSC), two natural decay series could be identified and separated. The TIA detection efficiency was improved by using the pulse-shape discrimination technique (PSD) to reject β-pulses, by solvent extraction of Ra combined with simple chemical separation, and by purging the scintillation solution with dry N₂ gas. The U- and Th-series, together with the Ac-series, were determined from alpha spectra and TIA carried out immediately after Ra extraction. Using the ²²¹Fr→²¹⁷At (T₁/₂ = 32.3 ms) decay process as a tracer, overall yields were estimated from application of TIA to ²²⁵Ra (Np-decay series) at the time of maximum growth. The present method has proven useful for simultaneous determination of three radioactive decay series in environmental samples. (orig.)
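The principle behind TIA is that parent-daughter α decays produce an excess of short inter-event times over the random (Poisson) expectation. A toy simulation, not the authors' analysis chain, illustrates this; the event counts, measurement span, and window choice are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

HALF_LIFE = 0.145              # s, 220Rn -> 216Po daughter half-life
TAU = HALF_LIFE / np.log(2.0)  # mean life of the daughter

# Simulate 1000 s of counting: uncorrelated background events plus
# parent events, each followed by a daughter after an exponential delay.
parents = np.sort(rng.uniform(0.0, 1000.0, 300))
daughters = parents + rng.exponential(TAU, parents.size)
background = rng.uniform(0.0, 1000.0, 300)
events = np.sort(np.concatenate([parents, daughters, background]))

# Time-interval analysis: count successive-event intervals inside a
# window of ~4 half-lives and compare with the expectation for a
# purely random (Poisson) process at the same overall count rate.
dt = np.diff(events)
window = 4.0 * HALF_LIFE
observed = int(np.sum(dt < window))
rate = events.size / (events[-1] - events[0])
expected_random = dt.size * (1.0 - np.exp(-rate * window))
excess = observed - expected_random  # correlated pairs show up here
```

The excess of short intervals over the Poisson expectation is the signature that the delayed-coincidence method exploits to separate the Th- and Ac-series activities from background.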

  6. Intuitive Feelings of Warmth and Confidence in Insight and Noninsight Problem Solving of Magic Tricks

    Science.gov (United States)

    Hedne, Mikael R.; Norman, Elisabeth; Metcalfe, Janet

    2016-01-01

    The focus of the current study is on intuitive feelings of insight during problem solving and the extent to which such feelings are predictive of successful problem solving. We report the results from an experiment (N = 51) that applied a procedure where the to-be-solved problems were 32 short (15 s) video recordings of magic tricks. The procedure included metacognitive ratings similar to the “warmth ratings” previously used by Metcalfe and colleagues, as well as confidence ratings. At regular intervals during problem solving, participants indicated the perceived closeness to the correct solution. Participants also indicated directly whether each problem was solved by insight or not. Problems that people claimed were solved by insight were characterized by higher accuracy and higher confidence than noninsight solutions. There was no difference between the two types of solution in warmth ratings, however. Confidence ratings were more strongly associated with solution accuracy for noninsight than insight trials. Moreover, for insight trials the participants were more likely to repeat their incorrect solutions on a subsequent recognition test. The results have implications for understanding people's metacognitive awareness of the cognitive processes involved in problem solving. They also have general implications for our understanding of how intuition and insight are related. PMID:27630598

  7. Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.

    Science.gov (United States)

    Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo

    2017-12-01

    The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgements of DMs in the typical analytic hierarchy process could be consistent. However, since the uncertainty in articulating the opinions of DMs is unavoidable, the interval number judgements are associated with the limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of interval multiplicative reciprocal comparison matrices, it is observed that the interval number judgements are inconsistent. By considering the permutations of alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed. The exchange method is designed to generate all the permutations. A novel method of determining the interval weight vector is proposed under the consideration of randomness in comparing alternatives, and a vector of interval weights is determined. A new algorithm of solving decision making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples are carried out to illustrate the proposed approach and offer a comparison with the methods available in the literature.

  8. Confidence Leak in Perceptual Decision Making.

    Science.gov (United States)

    Rahnev, Dobromir; Koizumi, Ai; McCurdy, Li Yan; D'Esposito, Mark; Lau, Hakwan

    2015-11-01

People live in a continuous environment in which the visual scene changes on a slow timescale. It has been shown that to exploit such environmental stability, the brain creates a continuity field in which objects seen seconds ago influence the perception of current objects. What is unknown is whether a similar mechanism exists at the level of metacognitive representations. In three experiments, we demonstrated a robust intertask confidence leak: confidence in one's response on a given task or trial influences confidence on the following task or trial. This confidence leak could not be explained by response priming or attentional fluctuations. Better ability to modulate confidence leak predicted higher capacity for metacognition as well as greater gray matter volume in the prefrontal cortex. A model based on normative principles from Bayesian inference explained the results by postulating that observers subjectively estimate the perceptual signal strength in a stable environment. These results point to the existence of a novel metacognitive mechanism mediated by regions in the prefrontal cortex. © The Author(s) 2015.

  9. Effect of immersive workplace experience on undergraduate nurses' mental health clinical confidence.

    Science.gov (United States)

    Patterson, Christopher; Moxham, Lorna; Taylor, Ellie K; Perlman, Dana; Brighton, Renee; Sumskis, Susan; Heffernan, Tim; Lee-Bates, Benjamin

    2017-12-01

    Preregistration education needs to ensure that student nurses are properly trained with the required skills and knowledge, and have the confidence to work with people who have a mental illness. With increased attention on non-traditional mental health clinical placements, further research is required to determine the effects of non-traditional mental health clinical placements on mental health clinical confidence. The aim of the present study was to investigate the impact of a non-traditional mental health clinical placement on mental health nursing clinical confidence compared to nursing students undergoing traditional clinical placements. Using the Mental Health Nursing Clinical Confidence Scale, the study investigated the relative effects of two placement programmes on the mental health clinical confidence of 79 nursing students. The two placement programmes included a non-traditional clinical placement of Recovery Camp and a comparison group that attended traditional clinical placements. Overall, the results indicated that, for both groups, mental health placement had a significant effect on improving mean mental health clinical confidence, both immediately upon conclusion of placement and at the 3-month follow up. Students who attended Recovery Camp reported a significant positive difference, compared to the comparison group, for ratings related to communicating effectively with clients with a mental illness, having a basic knowledge of antipsychotic medications and their side-effects, and providing client education regarding the effects and side-effects of medications. The findings suggest that a unique clinical placement, such as Recovery Camp, can improve and maintain facets of mental health clinical confidence for students of nursing. © 2017 Australian College of Mental Health Nurses Inc.

  10. Variation in Cancer Incidence among Patients with ESRD during Kidney Function and Nonfunction Intervals.

    Science.gov (United States)

    Yanik, Elizabeth L; Clarke, Christina A; Snyder, Jon J; Pfeiffer, Ruth M; Engels, Eric A

    2016-05-01

    Among patients with ESRD, cancer risk is affected by kidney dysfunction and by immunosuppression after transplant. Assessing patterns across periods of dialysis and kidney transplantation may inform cancer etiology. We evaluated 202,195 kidney transplant candidates and recipients from a linkage between the Scientific Registry of Transplant Recipients and cancer registries, and compared incidence in kidney function intervals (time with a transplant) with incidence in nonfunction intervals (waitlist or time after transplant failure), adjusting for demographic factors. Incidence of infection-related and immune-related cancer was higher during kidney function intervals than during nonfunction intervals. Incidence was most elevated for Kaposi sarcoma (hazard ratio [HR], 9.1; 95% confidence interval (95% CI), 4.7 to 18), non-Hodgkin's lymphoma (HR, 3.2; 95% CI, 2.8 to 3.7), Hodgkin's lymphoma (HR, 3.0; 95% CI, 1.7 to 5.3), lip cancer (HR, 3.4; 95% CI, 2.0 to 6.0), and nonepithelial skin cancers (HR, 3.8; 95% CI, 2.5 to 5.8). Conversely, ESRD-related cancer incidence was lower during kidney function intervals (kidney cancer: HR, 0.8; 95% CI, 0.7 to 0.8 and thyroid cancer: HR, 0.7; 95% CI, 0.6 to 0.8). With each successive interval, incidence changed in alternating directions for non-Hodgkin's lymphoma, melanoma, and lung, pancreatic, and nonepithelial skin cancers (higher during function intervals), and kidney and thyroid cancers (higher during nonfunction intervals). For many cancers, incidence remained higher than in the general population across all intervals. These data indicate strong short-term effects of kidney dysfunction and immunosuppression on cancer incidence in patients with ESRD, suggesting a need for persistent cancer screening and prevention. Copyright © 2016 by the American Society of Nephrology.

  11. Targeting Low Career Confidence Using the Career Planning Confidence Scale

    Science.gov (United States)

    McAuliffe, Garrett; Jurgens, Jill C.; Pickering, Worth; Calliotte, James; Macera, Anthony; Zerwas, Steven

    2006-01-01

    The authors describe the development and validation of a test of career planning confidence that makes possible the targeting of specific problem issues in employment counseling. The scale, developed using a rational process and the authors' experience with clients, was tested for criterion-related validity against 2 other measures. The scale…

  12. Confidence Building Strategies in the Public Schools.

    Science.gov (United States)

    Achilles, C. M.; And Others

    1985-01-01

    Data from the Phi Delta Kappa Commission on Public Confidence in Education indicate that "high-confidence" schools make greater use of marketing and public relations strategies. Teacher attitudes were ranked first and administrator attitudes second by 409 respondents for both gain and loss of confidence in schools. (MLF)

  13. Confidence rating of marine eutrophication assessments

    DEFF Research Database (Denmark)

    Murray, Ciarán; Andersen, Jesper Harbo; Kaartokallio, Hermanni

    2011-01-01

This report presents the development of a methodology for assessing confidence in eutrophication status classifications. The method can be considered as a secondary assessment, supporting the primary assessment of eutrophication status. The confidence assessment is based on a transparent scoring of the 'value' of the indicators on which the primary assessment is made. Such a secondary assessment of confidence represents a first step towards linking status classification with information regarding its accuracy and precision, and ultimately a tool for improving or targeting actions to improve the health...

  14. Confidence and Use of Communication Skills in Medical Students

    OpenAIRE

    Mahnaz Jalalvandi; Akhtar Jamali; Ali Taghipoor-Zahir; Mohammad-Reza Sohrabi

    2014-01-01

Background: Well-designed interventions can improve the communication skills of physicians. Since an understanding of the current situation is essential for designing effective interventions, this study was performed to determine medical interns' confidence and use of communication skills. Materials and Methods: This descriptive-analytical study was performed in spring 2013 within 3 branches of Islamic Azad University (Tehran, Mashhad, and Yazd), on 327 randomly selected interns. Data gatheri...

  15. Short-Term Wind Power Interval Forecasting Based on an EEMD-RT-RVM Model

    Directory of Open Access Journals (Sweden)

    Haixiang Zang

    2016-01-01

    Full Text Available Accurate short-term wind power forecasting is important for improving the security and economic efficiency of power grids. Existing wind power forecasting methods are mostly deterministic point forecasts, which are vulnerable to forecasting errors and cannot effectively deal with the random nature of wind power. In order to solve these problems, we propose a short-term wind power interval forecasting model based on ensemble empirical mode decomposition (EEMD), the runs test (RT), and the relevance vector machine (RVM). First, in order to reduce the complexity of the data, the original wind power sequence is decomposed into a number of intrinsic mode function (IMF) components and a residual (RES) component using EEMD. Next, we use the RT method to reconstruct the components and obtain three new components characterized by a fine-to-coarse order. Finally, we obtain the overall forecasting results (with pre-established confidence levels) by superimposing the forecasting results of each new component. Our results show that, compared with existing methods, the proposed short-term interval forecasting method has smaller forecasting errors, narrower interval widths, and larger interval coverage percentages, making it more suitable for engineering applications than other forecasting methods for new energy.
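
    The two headline metrics used above to compare interval forecasts, the interval coverage percentage and the interval width, can be computed as follows; the observations and forecast bounds below are illustrative, not the paper's data:

```python
import numpy as np

def interval_metrics(y_true, lower, upper):
    """Coverage percentage (PICP) and mean width of forecast intervals."""
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    covered = (y_true >= lower) & (y_true <= upper)
    picp = 100.0 * covered.mean()                 # interval coverage percentage
    mean_width = float((upper - lower).mean())    # average interval width
    return float(picp), mean_width

# Illustrative wind power observations (MW) and forecast interval bounds.
y  = [10.0, 12.5,  9.0, 11.0]
lo = [ 9.0, 11.0,  9.5, 10.0]
hi = [11.0, 13.0, 10.5, 12.0]
picp, width = interval_metrics(y, lo, hi)
print(picp, width)  # the 9.0 observation falls below its lower bound, so 3 of 4 are covered
```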

  16. Optimal Wind Power Uncertainty Intervals for Electricity Market Operation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Ying; Zhou, Zhi; Botterud, Audun; Zhang, Kaifeng

    2018-01-01

    It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the model into a mixed integer linear program (MILP) without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method can not only help to balance the economics and reliability of power system scheduling, but also help to stabilize the energy prices in electricity market operation.

  17. Transmission line sag calculations using interval mathematics

    Energy Technology Data Exchange (ETDEWEB)

    Shaalan, H. [Institute of Electrical and Electronics Engineers, Washington, DC (United States)]|[US Merchant Marine Academy, Kings Point, NY (United States)

    2007-07-01

    Electric utilities are facing the need for additional generating capacity, new transmission systems and more efficient use of existing resources. As such, there are several uncertainties associated with utility decisions. These uncertainties include future load growth, construction times and costs, and performance of new resources. Regulatory and economic environments also present uncertainties. Uncertainty can be modeled based on a probabilistic approach where probability distributions for all of the uncertainties are assumed. Another approach to modeling uncertainty is referred to as unknown but bounded. In this approach, the upper and lower bounds on the uncertainties are assumed without probability distributions. Interval mathematics is a tool for the practical use and extension of the unknown but bounded concept. In this study, the calculation of transmission line sag was used as an example to demonstrate the use of interval mathematics. The objective was to determine the change in cable length, based on a fixed span and an interval of cable sag values for a range of temperatures. The resulting change in cable length was an interval corresponding to the interval of cable sag values. It was shown that there is a small change in conductor length due to variation in sag based on the temperature ranges used in this study. 8 refs.
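
    The sag-to-length calculation described above can be sketched with ordinary interval arithmetic: because conductor length grows monotonically with sag, the endpoints of the sag interval map directly to bounds on the length. The parabolic approximation L ≈ S + 8D²/(3S) is a standard line-sag formula; the span and sag interval below are illustrative, not the study's data:

```python
def cable_length(span, sag):
    """Parabolic approximation of conductor length for span S and sag D."""
    return span + 8.0 * sag**2 / (3.0 * span)

span = 300.0                  # fixed span, m
sag_lo, sag_hi = 6.0, 7.5     # assumed sag bounds over the temperature range, m

# cable_length is monotonically increasing in sag (for sag > 0), so the
# interval endpoints map directly to bounds on the conductor length.
len_lo = cable_length(span, sag_lo)
len_hi = cable_length(span, sag_hi)
print(f"length interval: [{len_lo:.2f}, {len_hi:.2f}] m")  # a small change, as in the study
```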

  18. Haematological and biochemical reference intervals for free-ranging brown bears (Ursus arctos) in Sweden

    Science.gov (United States)

    2014-01-01

    Background Establishment of haematological and biochemical reference intervals is important to assess health of animals on individual and population level. Reference intervals for 13 haematological and 34 biochemical variables were established based on 88 apparently healthy free-ranging brown bears (39 males and 49 females) in Sweden. The animals were chemically immobilised by darting from a helicopter with a combination of medetomidine, tiletamine and zolazepam in April and May 2006–2012 in the county of Dalarna, Sweden. Venous blood samples were collected during anaesthesia for radio collaring and marking for ecological studies. For each of the variables, the reference interval was described based on the 95% confidence interval, and differences due to host characteristics sex and age were included if detected. To our knowledge, this is the first report of reference intervals for free-ranging brown bears in Sweden. Results The following variables were not affected by host characteristics: red blood cell, white blood cell, monocyte and platelet count, alanine transaminase, amylase, bilirubin, free fatty acids, glucose, calcium, chloride, potassium, and cortisol. Age differences were seen for the majority of the haematological variables, whereas sex influenced only mean corpuscular haemoglobin concentration, aspartate aminotransferase, lipase, lactate dehydrogenase, β-globulin, bile acids, triglycerides and sodium. Conclusions The biochemical and haematological reference intervals provided and the differences due to host factors age and gender can be useful for evaluation of health status in free-ranging European brown bears. PMID:25139149
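
    The core computation behind a reference interval can be sketched with the common nonparametric percentile method; the data below are synthetic stand-ins (the abstract does not detail the exact statistical procedure used for the bear samples):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative haematology-like values; the study used measurements
# from 88 apparently healthy free-ranging bears.
values = rng.normal(loc=6.5, scale=0.8, size=88)

# Central 95% reference interval: the 2.5th and 97.5th percentiles,
# i.e. the range expected to contain 95% of healthy individuals.
lower, upper = np.percentile(values, [2.5, 97.5])
print(f"95% reference interval: {lower:.2f} to {upper:.2f}")
```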

  19. How Much Confidence Can We Have in EU-SILC? Complex Sample Designs and the Standard Error of the Europe 2020 Poverty Indicators

    Science.gov (United States)

    Goedeme, Tim

    2013-01-01

    If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…

  20. Consideration of statistical uncertainties for the determination of representative values of the specific activity of wastes

    International Nuclear Information System (INIS)

    Barthel, R.

    2008-01-01

    The German Radiation Protection Commission has recommended 'Principles and Methods for the Consideration of Statistical Uncertainties for the Determination of Representative Values of the Specific Activity of NORM wastes' concerning the proof of compliance with supervision limits or dose standards according to paragraph 97 and paragraph 98 of the Radiation Protection Ordinance, respectively. The recommendation comprises a method ensuring the representativeness of estimates for the specific activity of NORM wastes, which also assures the required evidence for conformity with respect to supervision limits or dose standards, respectively. On the basis of a sampling survey, confidence limits for expectation values of specific activities are determined, which will be used to show that the supervision limit or the dose standard is met or exceeded with certainty, or that the performed sampling is not sufficient for the intended assessment. The sampling effort depends on the type and the width of the distribution of specific activities and is determined by the position of the confidence interval with respect to the supervision limit or of the resulting doses with respect to the dose standard. The statistical uncertainties that are described by confidence limits may be reduced by an optimised extension of the sample number, as far as necessary. (orig.)
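
    A minimal sketch of the compliance logic described above, comparing a one-sided confidence limit for the mean specific activity with a supervision limit, is given below; the sample values, confidence level, and supervision limit are illustrative assumptions:

```python
import math
from statistics import mean, stdev

def upper_confidence_limit(samples, t_crit):
    """One-sided upper confidence limit for the mean specific activity.

    t_crit is the one-sided Student-t quantile for the chosen confidence
    level and n-1 degrees of freedom (taken from a table).
    """
    n = len(samples)
    return mean(samples) + t_crit * stdev(samples) / math.sqrt(n)

# Illustrative specific activities of waste samples, Bq/g.
activity = [0.42, 0.55, 0.48, 0.61, 0.50, 0.47, 0.53, 0.58, 0.44, 0.52]
t95_df9 = 1.833                        # one-sided 95% t-quantile, 9 d.o.f.
ucl = upper_confidence_limit(activity, t95_df9)
supervision_limit = 0.65               # illustrative limit, Bq/g

if ucl <= supervision_limit:
    print(f"UCL {ucl:.3f} Bq/g <= limit: compliance demonstrated")
else:
    print("limit exceeded or sampling insufficient; extend the sample number")
```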

  1. Preventive maintenance and the interval availability distribution of an unreliable production system

    International Nuclear Information System (INIS)

    Dijkhuizen, G. van; Heijden, M. van der

    1999-01-01

    Traditionally, the optimal preventive maintenance interval for an unreliable production system has been determined by maximizing its limiting availability. Nowadays, it is widely recognized that this performance measure does not always provide relevant information for practical purposes. This is particularly true for order-driven manufacturing systems, in which due date performance has become a more important, and even a competitive factor. Under these circumstances, the so-called interval availability distribution is often seen as a more appropriate performance measure. Surprisingly enough, the relation between preventive maintenance and interval availability has received little attention in the existing literature. In this article, a series of mathematical models and optimization techniques is presented, with which the optimal preventive maintenance interval can be determined from an interval availability point of view, rather than from a limiting availability perspective. Computational results for a class of representative test problems indicate that significant improvements of up to 30% in the guaranteed interval availability can be obtained, by increasing preventive maintenance frequencies somewhere between 10 and 70%
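
    The distinction drawn above, interval availability over a finite window versus limiting (long-run) availability, can be illustrated with a small Monte-Carlo sketch; the failure, repair, and maintenance parameters are illustrative assumptions, not from the article:

```python
import random

def interval_availability(pm_interval, window=168.0, runs=2000,
                          failure_rate=0.01, repair=8.0, pm_time=2.0, seed=7):
    """Sample the availability achieved over a finite window of `window` hours
    for a machine with exponential failures, fixed repair times, and
    preventive maintenance (PM) every pm_interval hours of operation."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        t, down = 0.0, 0.0
        while t < window:
            ttf = rng.expovariate(failure_rate)
            if ttf < pm_interval:          # failure occurs before the next PM
                t += ttf + repair
                down += repair
            else:                          # PM reached first; failure preempted
                t += pm_interval + pm_time
                down += pm_time
        results.append(1.0 - down / t)
    return sorted(results)

avail = interval_availability(pm_interval=72.0)
mean_avail = sum(avail) / len(avail)
# The 5th percentile is the kind of "guaranteed" interval availability a
# due-date-driven manufacturer cares about, not just the long-run mean.
print(f"mean {mean_avail:.3f}, 5th percentile {avail[int(0.05 * len(avail))]:.3f}")
```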

  2. Electron density diagnostics in the 10-100 A interval for a solar flare

    Science.gov (United States)

    Brown, W. A.; Bruner, M. E.; Acton, L. W.; Mason, H. E.

    1986-01-01

    Electron density measurements from spectral-line diagnostics are reported for a solar flare on July 13, 1982, 1627 UT. The spectrogram, covering the 10-95 A interval, contained usable lines of the helium-like ions C V, N VI, O VII, and Ne IX, which are formed over the temperature interval (0.7-3.5) x 10^6 K. In addition, spectral-line ratios of Si IX, Fe XIV, and Ca XV were compared with new theoretical estimates of their electron density sensitivity to obtain additional electron density diagnostics. An electron density of 3 x 10^10 cm^-3 was obtained. The comparison of these results from helium-like and other ions gives confidence in the utility of these tools for solar coronal analysis and will lead to a fuller understanding of the phenomena observed in this flare.

  3. Ratio-based lengths of intervals to improve fuzzy time series forecasting.

    Science.gov (United States)

    Huarng, Kunhuang; Yu, Tiffany Hui-Kuang

    2006-04-01

    The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
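
    The idea of ratio-based interval lengths can be sketched as follows: each interval is wider than its predecessor by a fixed ratio, so the partition is fine where values are small and coarse where they are large. The ratio, range, and first interval length here are illustrative; the paper derives them from the data:

```python
def ratio_based_intervals(lo, hi, ratio, first_len):
    """Partition [lo, hi) into intervals whose lengths grow geometrically."""
    bounds = [lo]
    length = first_len
    while bounds[-1] + length < hi:
        bounds.append(bounds[-1] + length)
        length *= ratio               # each interval is `ratio` times wider
    bounds.append(hi)                 # close the partition at the upper end
    return list(zip(bounds[:-1], bounds[1:]))

# E.g. an enrollment-like universe of discourse from 1000 to 2000.
intervals = ratio_based_intervals(lo=1000.0, hi=2000.0, ratio=1.5, first_len=100.0)
for a, b in intervals:
    print(f"[{a:7.1f}, {b:7.1f})  width = {b - a:.1f}")
```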

  4. Interval Continuous Plant Identification from Value Sets

    Directory of Open Access Journals (Sweden)

    R. Hernández

    2012-01-01

    Full Text Available This paper shows how to obtain the values of the numerator and denominator Kharitonov polynomials of an interval plant from its value set at a given frequency. Moreover, it is proven that given a value set, all the assigned polynomials of the vertices can be determined if and only if there is a complete edge or a complete arc lying on a quadrant. This algorithm is nonconservative in the sense that if the value-set boundary of an interval plant is exactly known, and particularly its vertices, then the Kharitonov rectangles are exactly those used to obtain these value sets.

  5. Nuclear power: restoring public confidence

    International Nuclear Information System (INIS)

    Arnold, L.

    1986-01-01

    The paper concerns a one day conference on nuclear power organised by the Centre for Science Studies and Science Policy, Lancaster, April 1986. Following the Chernobyl reactor accident, the conference concentrated on public confidence in nuclear power. Causes of lack of public confidence, public perceptions of risk, and the effect of Chernobyl in the United Kingdom, were all discussed. A Select Committee on the Environment examined the problems of radioactive waste disposal. (U.K.)

  6. Balance confidence is related to features of balance and gait in individuals with chronic stroke

    Science.gov (United States)

    Schinkel-Ivy, Alison; Wong, Jennifer S.; Mansfield, Avril

    2016-01-01

    Reduced balance confidence is associated with impairments in features of balance and gait in individuals with sub-acute stroke. However, an understanding of these relationships in individuals at the chronic stage of stroke recovery is lacking. This study aimed to quantify relationships between balance confidence and specific features of balance and gait in individuals with chronic stroke. Participants completed a balance confidence questionnaire and clinical balance assessment (quiet standing, walking, and reactive stepping) at 6 months post-discharge from inpatient stroke rehabilitation. Regression analyses were performed using balance confidence as a predictor variable and quiet standing, walking, and reactive stepping outcome measures as the dependent variables. Walking velocity was positively correlated with balance confidence, while medio-lateral centre of pressure excursion (quiet standing) and double support time, step width variability, and step time variability (walking) were negatively correlated with balance confidence. This study provides insight into the relationships between balance confidence and balance and gait measures in individuals with chronic stroke, suggesting that individuals with low balance confidence exhibited impaired control of quiet standing as well as walking characteristics associated with cautious gait strategies. Future work should identify the direction of these relationships to inform community-based stroke rehabilitation programs for individuals with chronic stroke, and determine the potential utility of incorporating interventions to improve balance confidence into these programs. PMID:27955809

  7. Identification of atrial fibrillation using electrocardiographic RR-interval difference

    Science.gov (United States)

    Eliana, M.; Nuryani, N.

    2017-11-01

    Automated detection of atrial fibrillation (AF) is an interesting topic. It is important because AF is very dangerous: it is not only a trigger of embolic stroke, but is also related to other chronic diseases. In this study, we detect the presence of AF by determining irregularities of the RR-interval. We utilize interval comparison to measure the degree of irregularity of the RR-intervals in a defined segment. The series of RR-intervals is segmented with a length of 10 intervals. Within each segment, all intervals are compared with each other, and a threshold defines low and high differences (δ). A segment is classified as AF or normal sinus according to the number of high δ, using a tolerance (β) on the number of high δ. We tested this method on data from 23 patients from MIT-BIH. Using this approach and the clinical data, we find accuracy, sensitivity, and specificity of 84.98%, 91.99%, and 77.85%, respectively.
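
    The segment-classification rule described above can be sketched as follows; the threshold (δ) and tolerance (β) values are illustrative, not the tuned values from the study:

```python
from itertools import combinations

def classify_segment(rr, delta=0.12, beta=15):
    """Classify a 10-beat segment of RR intervals (seconds) as AF or normal.

    Every pair of intervals in the segment is compared; a pair whose
    absolute difference exceeds `delta` counts as a high difference, and
    the segment is flagged as AF when the count exceeds the tolerance `beta`.
    """
    high = sum(1 for a, b in combinations(rr, 2) if abs(a - b) > delta)
    return "AF" if high > beta else "normal sinus"

regular   = [0.80, 0.81, 0.79, 0.80, 0.82, 0.80, 0.79, 0.81, 0.80, 0.80]
irregular = [0.62, 0.95, 0.71, 1.10, 0.58, 0.88, 0.66, 1.02, 0.74, 0.91]
print(classify_segment(regular))    # regular rhythm: few high differences
print(classify_segment(irregular))  # highly irregular rhythm: many high differences
```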

  8. Programming with Intervals

    Science.gov (United States)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

  9. Organic labelling systems and consumer confidence

    OpenAIRE

    Sønderskov, Kim Mannemar; Daugbjerg, Carsten

    2009-01-01

    A research analysis suggests that a state certification and labelling system creates confidence in organic labelling systems and consequently green consumerism. Danish consumers have higher levels of confidence in the labelling system than consumers in countries where the state plays a minor role in labelling and certification.

  10. Magnetic Resonance Fingerprinting with short relaxation intervals.

    Science.gov (United States)

    Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter

    2017-09-01

    The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially

  11. Adaptive SLICE method: an enhanced method to determine nonlinear dynamic respiratory system mechanics

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Guttmann, Josef

    2012-01-01

    The objective of this paper is to introduce and evaluate the adaptive SLICE method (ASM) for continuous determination of intratidal nonlinear dynamic compliance and resistance. The tidal volume is subdivided into a series of volume intervals called slices. For each slice, one compliance and one resistance are calculated by applying a least-squares-fit method. The volume window (width) covered by each slice is determined based on the confidence interval of the parameter estimation. The method was compared to the original SLICE method and evaluated using simulation and animal data. The ASM was also challenged with separate analysis of dynamic compliance during inspiration. If the signal-to-noise ratio (SNR) in the respiratory data decreased from +∞ to 10 dB, the relative errors of compliance increased from 0.1% to 22% for the ASM and from 0.2% to 227% for the SLICE method. Fewer differences were found in resistance. When the SNR was larger than 40 dB, the ASM delivered over 40 parameter estimates (42.2 ± 1.3). When analyzing the compliance during inspiration separately, the estimates calculated with the ASM were more stable. The adaptive determination of slice bounds results in consistent and reliable parameter values. Online analysis of nonlinear respiratory mechanics will profit from such an adaptive selection of interval size. (paper)
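
    The per-slice estimation step shared by both SLICE variants, a least-squares fit of the single-compartment equation of motion P = V/C + R*V' + P0 over one volume slice, can be sketched on synthetic data; the adaptive part (choosing the slice width from the confidence interval of the estimates) is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
volume = np.linspace(0.10, 0.20, n)                      # L, one volume slice
flow = 0.5 + 0.2 * np.sin(np.linspace(0.0, np.pi, n))    # L/s, varying flow
C_true, R_true, P0_true = 0.05, 8.0, 5.0                 # L/cmH2O, cmH2O*s/L, cmH2O
pressure = volume / C_true + R_true * flow + P0_true + rng.normal(0, 0.05, n)

# Solve [V, V', 1] @ [1/C, R, P0] ~= P in the least-squares sense.
A = np.column_stack([volume, flow, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, pressure, rcond=None)
C_est, R_est, P0_est = 1.0 / coef[0], coef[1], coef[2]
print(f"C = {C_est:.4f} L/cmH2O, R = {R_est:.2f} cmH2O*s/L, P0 = {P0_est:.2f} cmH2O")
```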

  12. Modified Dempster-Shafer approach using an expected utility interval decision rule

    Science.gov (United States)

    Cheaito, Ali; Lecours, Michael; Bosse, Eloi

    1999-03-01

    The combination operation of the conventional Dempster-Shafer algorithm has a tendency to increase exponentially the number of propositions involved in bodies of evidence by creating new ones. The aim of this paper is to explore a 'modified Dempster-Shafer' approach to fusing identity declarations emanating from different sources, including a number of radar, IFF and ESM systems, in order to limit the explosion of the number of propositions. We use a non-ad-hoc decision rule based on the expected utility interval to select the most probable object in a comprehensive Platform Data Base containing all the possible identity values that a potential target may take. We study the effect of the redistribution of the confidence levels of the eliminated propositions, which would otherwise overload the real-time data fusion system; these eliminated confidence levels can in particular be assigned to ignorance, or uniformly added to the remaining propositions and to ignorance. A scenario has been selected to demonstrate the performance of our modified Dempster-Shafer method of evidential reasoning.
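
    The interval such a decision rule operates on is bounded by belief and plausibility; a minimal sketch, with illustrative identity declarations and masses, is:

```python
def bel_pl(masses, hypothesis):
    """Belief/plausibility interval for a hypothesis.

    Belief sums the mass committed to subsets of the hypothesis;
    plausibility sums the mass not contradicting it. [Bel, Pl] bounds
    the probability of the hypothesis.
    """
    bel = sum(m for s, m in masses.items() if s <= hypothesis)   # subset
    pl = sum(m for s, m in masses.items() if s & hypothesis)     # intersects
    return bel, pl

# Illustrative frame of discernment and basic mass assignment.
frame = frozenset({"fighter", "bomber", "airliner"})
masses = {
    frozenset({"fighter"}): 0.5,
    frozenset({"fighter", "bomber"}): 0.3,
    frame: 0.2,                      # mass assigned to ignorance
}
for h in sorted(frame):
    b, p = bel_pl(masses, frozenset({h}))
    print(f"{h}: [{b:.2f}, {p:.2f}]")
```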

  13. Consumer confidence or the business cycle

    DEFF Research Database (Denmark)

    Møller, Stig Vinther; Nørholm, Henrik; Rangvid, Jesper

    2014-01-01

    Answer: The business cycle. We show that consumer confidence and the output gap both predict excess returns on stocks in many European countries: When the output gap is positive (the economy is doing well), expected returns are low, and when consumer confidence is high, expected returns are also low...

  14. Non-probabilistic defect assessment for structures with cracks based on interval model

    International Nuclear Information System (INIS)

    Dai, Qiao; Zhou, Changyu; Peng, Jian; Chen, Xiangwei; He, Xiaohua

    2013-01-01

    Highlights: • Non-probabilistic approach is introduced to defect assessment. • Definition and establishment of IFAC are put forward. • Determination of assessment rectangle is proposed. • Solution of non-probabilistic reliability index is presented. -- Abstract: Traditional defect assessment methods conservatively treat uncertainty of parameters as safety factors, while the probabilistic method is based on the clear understanding of detailed statistical information of parameters. In this paper, the non-probabilistic approach is introduced to the failure assessment diagram (FAD) to propose a non-probabilistic defect assessment method for structures with cracks. This novel defect assessment method contains three critical processes: establishment of the interval failure assessment curve (IFAC), determination of the assessment rectangle, and solution of the non-probabilistic reliability degree. Based on the interval theory, uncertain parameters such as crack sizes, material properties and loads are considered as interval variables. As a result, the failure assessment curve (FAC) will vary in a certain range, which is defined as IFAC. And the assessment point will vary within a rectangle zone which is defined as an assessment rectangle. Based on the interval model, the establishment of IFAC and the determination of the assessment rectangle are presented. Then according to the interval possibility degree method, the non-probabilistic reliability degree of IFAC can be determined. Meanwhile, in order to clearly introduce the non-probabilistic defect assessment method, a numerical example for the assessment of a pipe with crack is given. In addition, the assessment result of the proposed method is compared with that of the traditional probabilistic method, which confirms that this non-probabilistic defect assessment can reasonably resolve the practical problem with interval variables

  15. Non-probabilistic defect assessment for structures with cracks based on interval model

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Qiao; Zhou, Changyu, E-mail: changyu_zhou@163.com; Peng, Jian; Chen, Xiangwei; He, Xiaohua

    2013-09-15

    Highlights: • Non-probabilistic approach is introduced to defect assessment. • Definition and establishment of IFAC are put forward. • Determination of assessment rectangle is proposed. • Solution of non-probabilistic reliability index is presented. -- Abstract: Traditional defect assessment methods conservatively treat uncertainty of parameters as safety factors, while the probabilistic method is based on the clear understanding of detailed statistical information of parameters. In this paper, the non-probabilistic approach is introduced to the failure assessment diagram (FAD) to propose a non-probabilistic defect assessment method for structures with cracks. This novel defect assessment method contains three critical processes: establishment of the interval failure assessment curve (IFAC), determination of the assessment rectangle, and solution of the non-probabilistic reliability degree. Based on the interval theory, uncertain parameters such as crack sizes, material properties and loads are considered as interval variables. As a result, the failure assessment curve (FAC) will vary in a certain range, which is defined as IFAC. And the assessment point will vary within a rectangle zone which is defined as an assessment rectangle. Based on the interval model, the establishment of IFAC and the determination of the assessment rectangle are presented. Then according to the interval possibility degree method, the non-probabilistic reliability degree of IFAC can be determined. Meanwhile, in order to clearly introduce the non-probabilistic defect assessment method, a numerical example for the assessment of a pipe with crack is given. In addition, the assessment result of the proposed method is compared with that of the traditional probabilistic method, which confirms that this non-probabilistic defect assessment can reasonably resolve the practical problem with interval variables.

  16. Secure and Usable Bio-Passwords based on Confidence Interval

    OpenAIRE

    Aeyoung Kim; Geunshik Han; Seung-Hyun Seo

    2017-01-01

    The most popular user-authentication method is the password. Many authentication systems try to enhance their security by enforcing a strong password policy, and by using the password as the first factor, something you know, with the second factor being something you have. However, a strong password policy and a multi-factor authentication system can make it harder for a user to remember the password and login in. In this paper a bio-password-based scheme is proposed as a unique authenticatio...

  17. The Perceived Importance of Youth Educator's Confidence in Delivering Leadership Development Programming

    Science.gov (United States)

    Brumbaugh, Laura; Cater, Melissa

    2016-01-01

    A successful component of programs designed to deliver youth leadership development programming is youth educators who understand the importance of utilizing research-based information and of seeking professional development opportunities. The purpose of this study was to determine youth educators' perceived confidence in leading youth leadership…

  18. Determination of lithium, rubidium and strontium in foodstuffs

    International Nuclear Information System (INIS)

    Evans, W.H.; Read, J.I.

    1985-01-01

    For the determination of total lithium, rubidium and strontium in foodstuffs the organic matter is destroyed by a wet-oxidation procedure. Both lithium and rubidium are measured by flame atomic-emission spectrophotometry, rubidium with the addition of a radiation buffer and strontium is measured by flame atomic-absorption spectrophotometry using the same radiation buffer. The optimum conditions for measurement are described and interferences noted. The accuracy of the method was assessed by measuring the recovery of these metals from foodstuff homogenates and values for standard reference materials are listed, for comparison with certified levels where these exist. From the results obtained standard deviations were calculated and derived limits of detection and confidence intervals are given. (author)

  19. Confidence-based learning CME: overcoming barriers in irritable bowel syndrome with constipation.

    Science.gov (United States)

    Cash, Brooks; Mitchner, Natasha A; Ravyn, Dana

    2011-01-01

    Performance of health care professionals depends on both medical knowledge and the certainty with which they possess it. Conventional continuing medical education interventions assess the correctness of learners' responses but do not determine the degree of confidence with which they hold incorrect information. This study describes the use of confidence-based learning (CBL) in an activity designed to enhance learners' knowledge, confidence in their knowledge, and clinical competence with regard to constipation-predominant IBS (IBS-C), a frequently underdiagnosed and misdiagnosed condition. The online CBL activity included multiple-choice questions in 2 modules: Burden of Care (BOC; 28 questions) and Patient Scenarios (PS; 9 case-based questions). After formative assessment, targeted feedback was provided, and the learner focused on material with demonstrated knowledge and/or confidence gaps. The process was repeated until 85% of questions were answered correctly and confidently (ie, mastery was attained). Of 275 participants (24% internal medicine, 13% gastroenterology, 32% family medicine, and 31% other), 249 and 167 completed the BOC and PS modules, respectively. Among all participants, 61.8% and 98.2% achieved mastery in the BOC and PS modules, respectively. Baseline mastery levels between specialties were significantly different in the BOC module (p = 0.002); no significant differences were evident between specialties in final mastery levels. Approximately one-third of learners were confident and wrong in topics of epidemiology, defining IBS and constipation, and treatments in the first iteration. No significant difference was observed between specialties for the PS module in either the first or last iterations. Learners achieved mastery in topics pertaining to IBS-C regardless of baseline knowledge or specialty. These data indicate that CME activities employing CBL can be used to address knowledge and confidence gaps. Copyright © 2010 The Alliance for
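
    The mastery criterion described above, at least 85% of questions answered both correctly and confidently, can be sketched as a simple scoring function; the quadrant representation and the response data are illustrative:

```python
def mastery(responses, threshold=0.85):
    """responses: list of (correct?, confident?) pairs, one per question.

    Mastery requires that the fraction answered correctly AND confidently
    meets the threshold. Answers that are confident but wrong are the
    misinformation that targeted feedback addresses.
    """
    correct_confident = sum(1 for ok, conf in responses if ok and conf)
    return correct_confident / len(responses) >= threshold

# 24 correct-and-confident, 2 correct-but-doubtful, 2 confident-but-wrong.
answers = [(True, True)] * 24 + [(True, False)] * 2 + [(False, True)] * 2
print(mastery(answers))  # 24/28 ≈ 0.857, so the 85% criterion is met
```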

  20. Analysing of 228Th, 232Th, 228Ra in human bone tissues for the purpose of determining the post mortal interval

    International Nuclear Information System (INIS)

    Kandlbinder, R.; Geissler, V.; Schupfner, R.; Wolfbeis, O.; Zinka, B.

    2009-01-01

    Bone tissues of thirteen deceased persons were analyzed to determine the activity concentration of the radionuclides 228Ra, 228Th, 232Th and 230Th. The activity ratios enable assessment of the post-mortem interval (PMI). The samples were prepared for analysis by incinerating and pulverizing. 228Ra was directly detected by γ-spectrometry. 228Th, 230Th and 232Th were detected by α-spectrometry after radiochemical purification and electrodeposition. It is shown that the method is principally suited to determine the PMI. A minimum of 300 g (wet weight) of human bone tissue is required for the analysis. Counting times are in the range of one to two weeks. (author)
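
    The dating idea in this record can be sketched numerically. The following is a hedged illustration, not the authors' procedure: it applies the Bateman solution to the 228Ra → 228Th pair, assuming approximate half-lives of 5.75 y (228Ra) and 1.91 y (228Th) and an assumed activity ratio of 1.0 at death, and inverts the monotone ratio curve by bisection to recover a post-mortem interval.

```python
import math

L_RA = math.log(2) / 5.75  # 228Ra decay constant (1/yr), T1/2 ~ 5.75 yr (assumed)
L_TH = math.log(2) / 1.91  # 228Th decay constant (1/yr), T1/2 ~ 1.91 yr (assumed)

def activity_ratio(t, r0=1.0):
    """228Th/228Ra activity ratio t years after death (Bateman solution),
    assuming the ratio was r0 at death and no further radium intake."""
    g = L_TH / (L_TH - L_RA)            # equilibrium ratio, ~1.5
    decay = math.exp(-(L_TH - L_RA) * t)
    return g * (1 - decay) + r0 * decay

def pmi_from_ratio(r, r0=1.0, lo=0.0, hi=50.0, tol=1e-9):
    """Invert activity_ratio by bisection; valid because the ratio is
    monotonically increasing whenever r0 is below the equilibrium value."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if activity_ratio(mid, r0) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    With these assumptions the ratio grows from 1.0 toward its equilibrium value of about 1.5, which is what makes it usable as a clock over roughly the first few decades after death.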

  1. Event- and interval-based measurement of stuttering: a review.

    Science.gov (United States)

    Valente, Ana Rita S; Jesus, Luis M T; Hall, Andreia; Leahy, Margaret

    2015-01-01

    Event- and interval-based measurements are two different ways of computing the frequency of stuttering. Interval-based methodology emerged as an alternative measure to overcome problems of reproducibility associated with the event-based methodology. No review has been made to study the effect of methodological factors on interval-based absolute reliability data or to compute the agreement between the two methodologies in terms of inter-judge, intra-judge and accuracy (i.e., correspondence between raters' scores and an established criterion). The aims were to provide a review of the reproducibility of event-based and time-interval measurement, to verify the effect of methodological factors (training, experience, interval duration, sample presentation order and judgment conditions) on agreement of time-interval measurement, and to determine whether the agreement between the two methodologies can be quantified. The first two authors searched for articles on ERIC, MEDLINE, PubMed, B-on, CENTRAL and Dissertation Abstracts during January-February 2013 and retrieved 495 articles. Forty-eight articles were selected for review. Content tables were constructed with the main findings. Articles related to event-based measurements revealed inter- and intra-judge agreement values greater than 0.70 and agreement percentages beyond 80%. The articles related to time-interval measures revealed that, in general, judges with more experience with stuttering presented significantly higher levels of intra- and inter-judge agreement. Inter- and intra-judge values exceeded the reference values for high reproducibility for both methodologies. Accuracy (regarding the closeness of raters' judgements to an established criterion) and intra- and inter-judge agreement were higher for trained groups when compared with non-trained groups. Sample presentation order and audio/video conditions did not result in differences in inter- or intra-judge results. 
A duration of 5 s for an interval appears to be

  2. Sex differences in confidence influence patterns of conformity.

    Science.gov (United States)

    Cross, Catharine P; Brown, Gillian R; Morgan, Thomas J H; Laland, Kevin N

    2017-11-01

    Lack of confidence in one's own ability can increase the likelihood of relying on social information. Sex differences in confidence have been extensively investigated in cognitive tasks, but implications for conformity have not been directly tested. Here, we tested the hypothesis that, in a task that shows sex differences in confidence, an indirect effect of sex on social information use will also be evident. Participants (N = 168) were administered a mental rotation (MR) task or a letter transformation (LT) task. After providing an answer, participants reported their confidence before seeing the responses of demonstrators and being allowed to change their initial answer. In the MR, but not the LT, task, women showed lower levels of confidence than men, and confidence mediated an indirect effect of sex on the likelihood of switching answers. These results provide novel, experimental evidence that confidence is a general explanatory mechanism underpinning susceptibility to social influences. Our results have implications for the interpretation of the wider literature on sex differences in conformity. © 2016 The British Psychological Society.

  3. Comprehensive Plan for Public Confidence in Nuclear Regulator

    International Nuclear Information System (INIS)

    Choi, Kwang Sik; Choi, Young Sung; Kim, Ho ki

    2008-01-01

    Public confidence in nuclear regulators has been discussed internationally. Public trust or confidence is needed to achieve the regulatory goal of assuring nuclear safety to a level acceptable to the public, or of providing public ease about nuclear safety. In Korea, public ease or public confidence has been suggested as a major policy goal in the annually announced 'Nuclear regulatory policy direction'. This paper reviews the theory of trust and its definitions, and defines, for nuclear safety regulation, the elements of public trust or public confidence developed in the study conducted so far. The public ease model developed and 10 measures for ensuring public confidence are also presented, and future study directions are suggested.

  4. Simultaneous confidence bands for the integrated hazard function

    OpenAIRE

    Dudek, Anna; Gocwin, Maciej; Leskow, Jacek

    2006-01-01

    The construction of the simultaneous confidence bands for the integrated hazard function is considered. The Nelson--Aalen estimator is used. The simultaneous confidence bands based on bootstrap methods are presented. Two methods of construction of such confidence bands are proposed. The weird bootstrap method is used for resampling. Simulations are made to compare the actual coverage probability of the bootstrap and the asymptotic simultaneous confidence bands. It is shown that the equal--tai...

  5. Errors and Predictors of Confidence in Condom Use amongst Young Australians Attending a Music Festival.

    Science.gov (United States)

    Hall, Karina M; Brieger, Daniel G; De Silva, Sukhita H; Pfister, Benjamin F; Youlden, Daniel J; John-Leader, Franklin; Pit, Sabrina W

    2016-01-01

    Objectives. To determine the confidence and ability to use condoms correctly and consistently, and the predictors of confidence, in young Australians attending a festival. Methods. 288 young people aged 18 to 29 attending a mixed-genre music festival completed a survey measuring demographics, self-reported confidence using condoms, ability to use condoms, and issues experienced when using condoms in the past 12 months. Results. Self-reported confidence using condoms was high (77%). Multivariate analyses showed confidence was associated with being male (P < 0.001) and having had five or more lifetime sexual partners (P = 0.038). Reading packet instructions was associated with increased condom use confidence (P = 0.011). Amongst participants who had used a condom in the last year, 37% had experienced the condom breaking, 48% had experienced the condom slipping off during intercourse and 51% when withdrawing the penis after sex. Conclusion. This population of young people is experiencing high rates of condom failures and is using condoms inconsistently or incorrectly, demonstrating the need to improve attitudes, behaviour, and knowledge about correct and consistent condom usage. There is a need to empower young Australians, particularly females, with knowledge and confidence in order to improve condom use self-efficacy.

  6. Entyvio lengthen dose-interval study: lengthening vedolizumab dose interval and the risk of clinical relapse in inflammatory bowel disease.

    Science.gov (United States)

    Chan, Webber; Lynch, Nicole; Bampton, Peter; Chang, Jeff; Chung, Alvin; Florin, Timothy; Hetzel, David J; Jakobovits, Simon; Moore, Gregory; Pavli, Paul; Radford-Smith, Graham; Thin, Lena; Baraty, Brandon; Haifer, Craig; Yau, Yunki; Leong, Rupert W L

    2018-07-01

    Vedolizumab (VDZ), an α4β7 anti-integrin antibody, is efficacious in the induction and maintenance of remission in ulcerative colitis (UC) and Crohn's disease (CD). In the GEMINI long-term safety study, enrolled patients received 4-weekly VDZ. Upon completion, patients were switched to 8-weekly VDZ in Australia. The clinical success rate of treatment de-escalation for patients in remission on VDZ has not been described previously. To determine the proportion of patients who relapsed after switching from 4- to 8-weekly VDZ, the mean time to relapse, and the recapture rate when switching back to 4-weekly dosing. This was a retrospective, observational, multicenter study of patients previously recruited into GEMINI long-term safety in Australia. Data on the demographics and biochemical findings were collected. There were 34 patients [23 men, mean age 49.1 (±13.1) years] and their mean disease duration was 17.6 (±8.5) years. The mean 4-weekly VDZ infusion duration was 286.5 (±48.8) weeks. A total of five (15%) patients relapsed on dose-interval increase (4/17 UC, 1/17 CD) at a median duration from dose-interval lengthening to flare of 14 weeks (interquartile range = 6-25). Eighty percent (4/5) of patients re-entered remission following dose-interval decrease back to 4-weekly. No clinical predictors of relapse could be determined because of the small cohort size. The risk of patients relapsing when switching from 4- to 8-weekly VDZ is ∼15% and is similar between CD and UC. Dose-interval decrease recaptures 80% of patients who relapsed. Therapeutic drug monitoring of VDZ may be of clinical relevance.

  7. Beyond hypercorrection: remembering corrective feedback for low-confidence errors.

    Science.gov (United States)

    Griffiths, Lauren; Higham, Philip A

    2018-02-01

    Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.

  8. A new approach for the determination of sulphur in food samples by high-resolution continuum source flame atomic absorption spectrometer.

    Science.gov (United States)

    Ozbek, N; Baysal, A

    2015-02-01

    A new approach for the determination of sulphur in foods was developed, and the sulphur concentrations of various fresh and dried food samples were determined using a high-resolution continuum source flame atomic absorption spectrometer with an air/acetylene flame. The proposed method was optimised and validated using standard reference materials, and certified values were found to be within the 95% confidence interval. The sulphur content of foods ranged from less than the LOD to 1.5 mg g(-1). The method is accurate, fast, simple and sensitive. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Determination of Ca, Cu, Fe and Pb in sugarcane raw spirits by atomic absorption spectrophotometry

    International Nuclear Information System (INIS)

    Lorenzo, Magdalena; Reyes, Arlyn; Blanco, Idania; Vasallo, Maria C

    2010-01-01

    The determination of Ca, Cu, Fe and Pb in sugarcane raw spirits by atomic absorption spectrophotometry was carried out. For a 20 μL injected sample, calibration intervals of 0.5-25.0 mg L-1 for Ca and 0.25-5.0 mg L-1 for Cu, Fe and Pb were established using the ratios of Ca, Cu, Fe and Pb absorbance versus analyte concentration, respectively. Typical linear correlations of r = 0.999 were obtained. The proposed method was applied to the direct determination of Ca, Cu, Fe and Pb in sugarcane raw spirit samples. The results obtained were in agreement at the 95% confidence level.
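
    The calibration step described in these two records (a linear absorbance-versus-concentration fit whose correlation coefficient is checked before use) can be sketched as follows. The standards and absorbances below are invented for illustration; only the least-squares procedure itself is standard.

```python
import math

def linear_calibration(x, y):
    """Least-squares calibration line y = a + b*x with correlation r.
    Returns (intercept a, slope b, correlation r)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    r = sxy / math.sqrt(sxx * syy)
    return a, b, r

# Illustrative Cu standards (mg/L) and absorbances -- not the paper's data
conc = [0.25, 0.5, 1.0, 2.0, 5.0]
absb = [0.012, 0.024, 0.049, 0.097, 0.246]
a, b, r = linear_calibration(conc, absb)
```

    An unknown sample's concentration would then be read off as (absorbance - a) / b, with r serving as the linearity check reported in the record.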

  10. Extended score interval in the assessment of basic surgical skills.

    Science.gov (United States)

    Acosta, Stefan; Sevonius, Dan; Beckman, Anders

    2015-01-01

    The Basic Surgical Skills course uses an assessment score interval of 0-3. An extended score interval, 1-6, was proposed by the Swedish steering committee of the course. The aim of this study was to analyze the trainee scores in the current 0-3 scored version compared to a proposed 1-6 scored version. Sixteen participants, seven females and nine males, were evaluated with the current and proposed assessment forms by instructors, observers, and the learners themselves during the first and second day. In each assessment form, 17 tasks were assessed. The inter-rater reliability between the current and the proposed score sheets was evaluated with intraclass correlation (ICC) with 95% confidence intervals (CI). The distribution of scores for 'knot tying' at the last time point and 'bowel anastomosis side to side' given by the instructors in the current assessment form showed that the highest score was given in 31 and 62%, respectively. No ceiling effects were found in the proposed assessment form. The overall ICC between the current and proposed score sheets after assessment by the instructors increased from 0.38 (95% CI 0.77-0.78) on Day 1 to 0.83 (95% CI 0.51-0.94) on Day 2. A clear ceiling effect of scores was demonstrated in the current assessment form, questioning its validity. The proposed score sheet provides more accurate scores and seems to be a better feedback instrument for learning technical surgical skills in the Basic Surgical Skills course.
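
    The ICC computation used to compare the two score sheets can be illustrated with a one-way random-effects ICC(1,1). This is a hedged sketch: the abstract does not state which ICC form the authors used, and the toy score tables below are not the study's data.

```python
def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for an n-subjects x k-raters table:
    (MSB - MSW) / (MSB + (k-1) * MSW)."""
    n = len(scores)
    k = len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    means = [sum(row) / k for row in scores]
    ssb = k * sum((m - grand) ** 2 for m in means)          # between-subjects
    ssw = sum((x - m) ** 2
              for row, m in zip(scores, means) for x in row)  # within-subjects
    msb = ssb / (n - 1)
    msw = ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Toy example: two raters scoring four trainees with a constant offset
icc = icc_oneway([[1, 2], [2, 3], [3, 4], [4, 5]])
```

    Perfect agreement yields an ICC of 1; systematic disagreement between raters, as in the toy table, pulls it down, which is the behaviour the study's Day 1 versus Day 2 values (0.38 versus 0.83) reflect.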

  11. Maternal Confidence for Physiologic Childbirth: A Concept Analysis.

    Science.gov (United States)

    Neerland, Carrie E

    2018-06-06

    Confidence is a term often used in research literature and consumer media in relation to birth, but maternal confidence has not been clearly defined, especially as it relates to physiologic labor and birth. The aim of this concept analysis was to define maternal confidence in the context of physiologic labor and childbirth. Rodgers' evolutionary method was used to identify attributes, antecedents, and consequences of maternal confidence for physiologic birth. Databases searched included Ovid MEDLINE, CINAHL, PsycINFO, and Sociological Abstracts from the years 1995 to 2015. A total of 505 articles were retrieved, using the search terms pregnancy, obstetric care, prenatal care, and self-efficacy and the keyword confidence. Articles were identified for in-depth review and inclusion based on whether the term confidence was used or assessed in relationship to labor and/or birth. In addition, a hand search of the reference lists of the selected articles was performed. Twenty-four articles were reviewed in this concept analysis. We define maternal confidence for physiologic birth as a woman's belief that physiologic birth can be achieved, based on her view of birth as a normal process and her belief in her body's innate ability to birth, which is supported by social support, knowledge, and information founded on a trusted relationship with a maternity care provider in an environment where the woman feels safe. This concept analysis advances the concept of maternal confidence for physiologic birth and provides new insight into how women's confidence for physiologic birth might be enhanced during the prenatal period. Further investigation of confidence for physiologic birth across different cultures is needed to identify cultural differences in constructions of the concept. © 2018 by the American College of Nurse-Midwives.

  12. Conductometric titration to determine total volatile basic nitrogen (TVB-N) for post-mortem interval (PMI).

    Science.gov (United States)

    Xia, Zhiyuan; Zhai, Xiandun; Liu, Beibei; Mo, Yaonan

    2016-11-01

    Precise measurement of the cadaver decomposition rate is the basis of accurate post-mortem interval (PMI) estimation. Many approaches have been explored in recent years; however, the problem is still not completely solved. Total volatile basic nitrogen (TVB-N), an important index for predicting meat freshness and shelf life in food science, could serve as an indicator for measuring the PMI-associated decomposition rate of cadavers. The aim of this work was to establish a practical method to determine TVB-N in cadaver soft tissues (mainly skeletal muscle) for measuring decomposition rate. Determination of TVB-N in the simulation and animal experiments was conducted by steam distillation and conductometric titration using a Kjeldahl distillation unit and a conductivity meter. In the simulation, standard concentrations of ammonium were used as TVB analogies, TVB-N contents were determined and the recovery rates of nitrogen were calculated. In the animal experiment, TVB-N in the skeletal muscle of forty-two rats was determined at different PMIs for 312 h at 24 °C ± 1 °C. The relationship between PMI and TVB-N was also investigated. The method showed high precision, with 99%-100% recovery rates. TVB-N in skeletal muscle changed significantly with PMI, especially after 24 h, and the data fit well to y = 3.35E-5x³ - 2.17E-2x² + 6.13x - 85.82 (adj. R² = 0.985). ECi (the initial electrical conductivity in the samples just before titration) had a positive linear relationship to the final measured TVB-N values, y = 1.98x + 16.16 (adj. R² = 0.985). The overall results demonstrated that the method is accurate, rapid and flexible, and could be expected to serve as a basic technique for measuring decomposition rate in later PMI-estimation research. Further studies are needed to validate our findings. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
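
    Taking the fitted cubic at face value, a PMI estimate can be read off a measured TVB-N value by inverting the curve numerically. The sketch below uses the coefficients reported above; note that they apply only to the studied conditions (rat skeletal muscle, 24 °C ± 1 °C, 24-312 h), and that bisection is valid because the fitted cubic is monotonically increasing on that range.

```python
def tvbn(pmi_h):
    """Fitted PMI -> TVB-N cubic reported in the abstract (PMI in hours)."""
    x = pmi_h
    return 3.35e-5 * x**3 - 2.17e-2 * x**2 + 6.13 * x - 85.82

def estimate_pmi(tvbn_measured, lo=24.0, hi=312.0, tol=1e-6):
    """Invert the cubic by bisection over the studied 24-312 h range."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tvbn(mid) < tvbn_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    The derivative 1.005E-4x² - 4.34E-2x + 6.13 has a negative discriminant, so the curve is strictly increasing and each TVB-N value within the fitted range maps to a unique PMI.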

  13. Assessing Confidence in Pliocene Sea Surface Temperatures to Evaluate Predictive Models

    Science.gov (United States)

    Dowsett, Harry J.; Robinson, Marci M.; Haywood, Alan M.; Hill, Daniel J.; Dolan, Aisling. M.; Chan, Wing-Le; Abe-Ouchi, Ayako; Chandler, Mark A.; Rosenbloom, Nan A.; Otto-Bliesner, Bette L.; hide

    2012-01-01

    In light of mounting empirical evidence that planetary warming is well underway, the climate research community looks to palaeoclimate research for a ground-truthing measure with which to test the accuracy of future climate simulations. Model experiments that attempt to simulate climates of the past serve to identify both similarities and differences between two climate states and, when compared with simulations run by other models and with geological data, to identify model-specific biases. Uncertainties associated with both the data and the models must be considered in such an exercise. The most recent period of sustained global warmth similar to what is projected for the near future occurred about 3.3-3.0 million years ago, during the Pliocene epoch. Here, we present Pliocene sea surface temperature data, newly characterized in terms of level of confidence, along with initial experimental results from four climate models. We conclude that, in terms of sea surface temperature, models are in good agreement with estimates of Pliocene sea surface temperature in most regions except the North Atlantic. Our analysis indicates that the discrepancy between the Pliocene proxy data and model simulations in the mid-latitudes of the North Atlantic, where models underestimate the warming shown by our highest-confidence data, may provide a new perspective and insight into the predictive abilities of these models in simulating a past warm interval in Earth history. This is important because the Pliocene has a number of parallels to present predictions of late twenty-first century climate.

  14. ADAM SMITH: THE INVISIBLE HAND OR CONFIDENCE

    Directory of Open Access Journals (Sweden)

    Fernando Luis, Gache

    2010-01-01

    Full Text Available In 1776 Adam Smith proposed that an invisible hand moved the markets towards efficiency. In the present paper, however, we raise the hypothesis that this invisible hand is in fact the confidence each person feels when going to do business; that this confidence is unique, because it differs from the confidence of the others; and that it is a nonlinear variable essentially tied to the respective personal histories. As a basis we take the paper by Leopoldo Abadía (2009) on the financial crisis of 2007-2008, to show the way in which confidence operates. The contribution we hope to make with this paper is therefore to emphasize that the level of confidence of the different actors is what really moves the markets (and therefore the economy), and that the subprime mortgage crisis is a confidence crisis at the world-wide level.

  15. Etiological classifications of transient ischemic attacks: subtype classification by TOAST, CCS and ASCO--a pilot study.

    Science.gov (United States)

    Amort, Margareth; Fluri, Felix; Weisskopf, Florian; Gensicke, Henrik; Bonati, Leo H; Lyrer, Philippe A; Engelter, Stefan T

    2012-01-01

    In patients with transient ischemic attacks (TIA), etiological classification systems are not well studied. The Trial of ORG 10172 in Acute Stroke Treatment (TOAST), the Causative Classification System (CCS), and the Atherosclerosis Small Vessel Disease Cardiac Source Other Cause (ASCO) classification may be useful to determine the underlying etiology. We aimed at testing the feasibility of each of the 3 systems. Furthermore, we studied and compared their prognostic usefulness. In a single-center TIA registry prospectively ascertained over 2 years, we applied the 3 etiological classification systems. We compared the distribution of underlying etiologies and the rates of patients with determined versus undetermined etiology, and studied whether etiological subtyping distinguished TIA patients with versus without subsequent stroke or TIA within 3 months. The 3 systems were applicable in all 248 patients. A determined etiology with the highest level of causality was assigned similarly often with TOAST (35.9%), CCS (34.3%), and ASCO (38.7%). However, the frequency of undetermined causes differed significantly between the classification systems and was lowest for ASCO (TOAST: 46.4%; CCS: 37.5%; ASCO: 18.5%). In TOAST, CCS, and ASCO, cardioembolism (19.4/14.5/18.5%) was the most common etiology, followed by atherosclerosis (11.7/12.9/14.5%). At 3 months, 33 patients (13.3%, 95% confidence interval 9.3-18.2%) had recurrent cerebral ischemic events. These were strokes in 13 patients (5.2%; 95% confidence interval 2.8-8.8%) and TIAs in 20 patients (8.1%, 95% confidence interval 5.0-12.2%). Patients with a determined etiology (high level of causality) had higher rates of subsequent strokes than those without a determined etiology [TOAST: 6.7% (95% confidence interval 2.5-14.1%) vs. 4.4% (95% confidence interval 1.8-8.9%); CCS: 9.3% (95% confidence interval 4.1-17.5%) vs. 3.1% (95% confidence interval 1.0-7.1%); ASCO: 9.4% (95% confidence interval 4.4-17.1%) vs. 2.6% (95% confidence interval
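
    Proportions with 95% confidence intervals, as reported throughout this record (e.g. 33/248 = 13.3%, 9.3-18.2%), can be reproduced closely with a Wilson score interval. The abstract does not say which interval method was actually used, so this choice is an assumption for illustration.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion (z = 1.96 -> ~95%)."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# 33 recurrent cerebral ischemic events among the 248 TIA patients
lo, hi = wilson_ci(33, 248)
```

    For these counts the interval comes out near 9.6-18.1%, close to the 9.3-18.2% quoted in the record (the small difference suggests an exact Clopper-Pearson interval may have been used there).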

  16. The role of confidence in the evolution of the Spanish economy: empirical evidence from an ARDL model

    Directory of Open Access Journals (Sweden)

    Pablo Castellanos García

    2014-12-01

    Full Text Available The aim of this paper is to verify the existence, and to determine the nature, of long-term relationships between economic agents’ confidence, measured by the Economic Sentiment Index (ESI), and some of the "fundamentals" of the Spanish economy. In particular, by modeling this type of relations, we try to determine whether confidence is a dependent (explained) or independent (explanatory) variable. Along with confidence, our model incorporates variables such as the risk premium on sovereign debt, financial market volatility, unemployment, inflation, public and private debt and the net lending/net borrowing of the economy. For the purpose of obtaining some empirical evidence on the exogenous or endogenous character of the above-mentioned variables, an ARDL (Autoregressive Distributed Lag) model is formulated. The model is estimated with quarterly data of the Spanish economy for the period 1990-2012. Our findings suggest that: (a) unemployment is the dependent variable; (b) there is an inverse relationship between the ESI in Spain and unemployment; and (c) the Granger causality goes from confidence to unemployment.

  17. Prolonged Q-T(c) interval in mild portal hypertensive cirrhosis

    DEFF Research Database (Denmark)

    Ytting, Henriette; Henriksen, Jens Henrik; Fuglsang, Stefan

    2005-01-01

    BACKGROUND/AIMS: The Q-T(c) interval is prolonged in a substantial fraction of patients with cirrhosis, thus indicating delayed repolarisation. However, no information is available in mild portal hypertensive patients. We therefore determined the Q-T(c) interval in cirrhotic patients with hepatic...... venous pressure gradient (HVPG) portal hypertension (HVPG ≥ 12 mmHg) and controls without liver disease. RESULTS......), values which are significantly above that of the controls (0.410 s(1/2), P portal hypertensive group, the Q-T(c) interval was inversely related to indicators of liver function, such as indocyanine green clearance (r = -0.34, P

  18. Determinants of Attrition to Follow-Up in a Multicentre Cohort Study in Children-Results from the IDEFICS Study

    DEFF Research Database (Denmark)

    Hense, Sabrina; Pohlabeln, Hermann; Michels, Nathalie

    2013-01-01

    (OR = 1.46; 99% CI: 1.19, 1.79) was lacking. Drop-out proportion rose with the number of missing items. Overweight, low education, single parenthood and low well-being scores were independent determinants of attrition. Baseline participation, and the individual determinant effects seemed unrelated...... at α = 0.01 to account for the large sample size. The strongest associations were seen for baseline item non-response, especially when information on migration background (odds ratio (OR) = 1.55; 99% confidence interval (CI): 1.04, 2.31), single parenthood (OR = 1.37; 99% CI: 1.12, 1.67), or well-being...
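
    The odds ratios with 99% confidence intervals quoted in this record follow the standard Wald construction on the log-odds scale. The sketch below shows the computation; the 2 x 2 counts in the usage example are invented, not the IDEFICS data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=2.576):
    """Odds ratio from a 2x2 table [[a, b], [c, d]] with a Wald confidence
    interval on the log scale (z = 2.576 -> ~99%, matching the record's CIs)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical table: 20 drop-outs / 10 stayers exposed, 10 / 20 unexposed
or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
```

    A symmetric table gives an odds ratio of exactly 1 with a CI straddling 1; in a multivariable setting like the study's, the ORs would come from logistic regression coefficients rather than raw 2 x 2 counts.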

  19. Trust, confidence, and the 2008 global financial crisis.

    Science.gov (United States)

    Earle, Timothy C

    2009-06-01

    The 2008 global financial crisis has been compared to a "once-in-a-century credit tsunami," a disaster in which the loss of trust and confidence played key precipitating roles and the recovery from which will require the restoration of these crucial factors. Drawing on the analogy between the financial crisis and environmental and technological hazards, recent research on the role of trust and confidence in the latter is used to provide a perspective on the former. Whereas "trust" and "confidence" are used interchangeably and without explicit definition in most discussions of the financial crisis, this perspective uses the TCC model of cooperation to clearly distinguish between the two and to demonstrate how this distinction can lead to an improved understanding of the crisis. The roles of trust and confidence-both in precipitation and in possible recovery-are discussed for each of the three major sets of actors in the crisis, the regulators, the banks, and the public. The roles of trust and confidence in the larger context of risk management are also examined; trust being associated with political approaches, confidence with technical. Finally, the various stances that government can take with regard to trust-such as supportive or skeptical-are considered. Overall, it is argued that a clear understanding of trust and confidence and a close examination of the specific, concrete circumstances of a crisis-revealing when either trust or confidence is appropriate-can lead to useful insights for both recovery and prevention of future occurrences.

  20. The Effect of Learning Method and Confidence Level on the Ability of Interpreting Religious Poem

    Directory of Open Access Journals (Sweden)

    Kinayati Djojosuroto

    2017-11-01

    Full Text Available This research aims to determine the effect of the learning method (expository and authentic) and the level of confidence on the ability to interpret religious poetry of third-semester students majoring in Indonesian Language and Literature Education at Universitas Negeri Manado. The method used is the quasi-experimental method with a 2 x 2 factorial design. The measurement of the Y variable (ability to interpret religious poetry) uses a writing test, and the level of confidence uses a questionnaire. The data analysis technique in this study is two-way analysis of variance (ANOVA) followed by the Tukey test to examine the interaction between groups. Before hypothesis testing, the analysis requirements were checked: data normality using the Lilliefors test and homogeneity using the Bartlett test. The results show differences in the ability to interpret religious poetry between students who study with the expository method and students who study with the authentic method. That is, overall, the expository method is better than the authentic method at improving the ability of the students. However, for the group with a lower level of confidence, it is better to use the authentic method to improve the ability to interpret religious poetry. There is an interaction effect between the learning method (expository and authentic) and the level of confidence on the ability to interpret religious poetry. Based on these results, it can be concluded that: First, lecturers can determine what materials and method can be used to enhance the ability to interpret religious poetry when the level of confidence of the students is known. Second, the expository and authentic teaching methods will give different results on the ability to interpret religious poetry for groups of students with different levels of confidence. 
Third, the increase of the ability to interpret

  1. 49 CFR 1103.23 - Confidences of a client.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Confidences of a client. 1103.23 Section 1103.23... Responsibilities Toward A Client § 1103.23 Confidences of a client. (a) The practitioner's duty to preserve his client's confidence outlasts the practitioner's employment by the client, and this duty extends to the...

  2. POSTMORTAL CHANGES AND ASSESSMENT OF POSTMORTEM INTERVAL

    Directory of Open Access Journals (Sweden)

    Edin Šatrović

    2013-01-01

    Full Text Available This paper describes in a simple way the changes that occur in the body after death. They develop in a specific order, and the speed of their development and their expression are strongly influenced by various endogenous and exogenous factors. The aim of the authors is to indicate the characteristics of the postmortem changes and their significance in establishing the time since death, which can be established precisely within the first 72 hours. Accurate evaluation of the age of the corpse based on the common changes is not possible with longer postmortem intervals, so the entomological findings become the most significant change on the corpse for determination of the postmortem interval (PMI).

  3. Confidence assessment. Site-descriptive modelling SDM-Site Laxemar

    International Nuclear Information System (INIS)

    2009-06-01

    The objective of this report is to assess the confidence that can be placed in the Laxemar site descriptive model, based on the information available at the conclusion of the surface-based investigations (SDM-Site Laxemar). In this exploration, an overriding question is whether remaining uncertainties are significant for repository engineering design or long-term safety assessment and could successfully be further reduced by more surface-based investigations or more usefully by explorations underground made during construction of the repository. Procedures for this assessment have been progressively refined during the course of the site descriptive modelling, and applied to all previous versions of the Forsmark and Laxemar site descriptive models. They include assessment of whether all relevant data have been considered and understood, identification of the main uncertainties and their causes, possible alternative models and their handling, and consistency between disciplines. The assessment then forms the basis for an overall confidence statement. The confidence in the Laxemar site descriptive model, based on the data available at the conclusion of the surface based site investigations, has been assessed by exploring: - Confidence in the site characterization data base, - remaining issues and their handling, - handling of alternatives, - consistency between disciplines and - main reasons for confidence and lack of confidence in the model. Generally, the site investigation database is of high quality, as assured by the quality procedures applied. It is judged that the Laxemar site descriptive model has an overall high level of confidence. Because of the relatively robust geological model that describes the site, the overall confidence in the Laxemar Site Descriptive model is judged to be high, even though details of the spatial variability remain unknown. 
The overall reason for this confidence is the wide spatial distribution of the data and the consistency between

  4. Confidence assessment. Site-descriptive modelling SDM-Site Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    2008-12-15

    The objective of this report is to assess the confidence that can be placed in the Laxemar site descriptive model, based on the information available at the conclusion of the surface-based investigations (SDM-Site Laxemar). In this exploration, an overriding question is whether remaining uncertainties are significant for repository engineering design or long-term safety assessment and could successfully be further reduced by more surface-based investigations or more usefully by explorations underground made during construction of the repository. Procedures for this assessment have been progressively refined during the course of the site descriptive modelling, and applied to all previous versions of the Forsmark and Laxemar site descriptive models. They include assessment of whether all relevant data have been considered and understood, identification of the main uncertainties and their causes, possible alternative models and their handling, and consistency between disciplines. The assessment then forms the basis for an overall confidence statement. The confidence in the Laxemar site descriptive model, based on the data available at the conclusion of the surface based site investigations, has been assessed by exploring: - Confidence in the site characterization data base, - remaining issues and their handling, - handling of alternatives, - consistency between disciplines and - main reasons for confidence and lack of confidence in the model. Generally, the site investigation database is of high quality, as assured by the quality procedures applied. It is judged that the Laxemar site descriptive model has an overall high level of confidence. Because of the relatively robust geological model that describes the site, the overall confidence in the Laxemar Site Descriptive model is judged to be high, even though details of the spatial variability remain unknown. 
The overall reason for this confidence is the wide spatial distribution of the data and the consistency between

  5. Application of derivative spectrophotometry under orthogonal polynomial at unequal intervals: determination of metronidazole and nystatin in their pharmaceutical mixture.

    Science.gov (United States)

    Korany, Mohamed A; Abdine, Heba H; Ragab, Marwa A A; Aboras, Sara I

    2015-05-15

    This paper discusses a general method for the use of orthogonal polynomials for unequal intervals (OPUI) to eliminate interferences in two-component spectrophotometric analysis. In this paper, a new approach was developed by convoluting the first-derivative D1 curve, instead of the absorbance curve, using the OPUI method for the determination of metronidazole (MTR) and nystatin (NYS) in their mixture. After derivative treatment of the absorption data, many maxima and minima appeared, giving a characteristic shape for each drug and allowing the selection of a different number of points for the OPUI method for each drug. This allows the specific and selective determination of each drug in the presence of the other and of any matrix interference. The method is particularly useful when the two absorption spectra overlap considerably. The results obtained are encouraging and suggest that the method can be widely applied to similar problems. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. High Confidence Software and Systems Research Needs

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This White Paper presents a survey of high confidence software and systems research needs. It has been prepared by the High Confidence Software and Systems...

  7. Confidence in Alternative Dispute Resolution: Experience from Switzerland

    Directory of Open Access Journals (Sweden)

    Christof Schwenkel

    2014-06-01

    Full Text Available Alternative Dispute Resolution plays a crucial role in the justice system of Switzerland. Under the unified Swiss Code of Civil Procedure, each litigation session must be preceded by an attempt at conciliation before a conciliation authority. However, there has been little research on conciliation authorities and the public's perception of them. This paper looks at public confidence in conciliation authorities and provides results of a survey conducted with more than 3,400 participants. This study found that public confidence in Swiss conciliation authorities is generally high, exceeds the ratings for confidence in cantonal governments and parliaments, but is lower than confidence in courts. Since the institutional models of the conciliation authorities (meaning the organization of the authorities and the selection of the conciliators) differ widely between the 26 Swiss cantons, the influence of the institutional models on public confidence is analyzed. Contrary to assumptions based on New Institutionalism approaches, this study reports that the institutional models do not impact public confidence. Also, the relationship between participation in an election of justices of the peace or conciliators and public confidence in these authorities is found to be at most very limited (and negative). Similar to common findings on courts, the results show that general contacts with conciliation authorities decrease public confidence in these institutions, whereas a positive experience with a conciliation authority leads to more confidence. The study was completed as part of the research project 'Basic Research into Court Management in Switzerland', supported by the Swiss National Science Foundation (SNSF). Christof Schwenkel is a PhD student at the University of Lucerne and a research associate and project manager at Interface Policy Studies. A first version of this article was presented at the 2013 European Group for Public

  8. An Extended Step-Wise Weight Assessment Ratio Analysis with Symmetric Interval Type-2 Fuzzy Sets for Determining the Subjective Weights of Criteria in Multi-Criteria Decision-Making Problems

    Directory of Open Access Journals (Sweden)

    Mehdi Keshavarz-Ghorabaee

    2018-03-01

    Full Text Available Determination of subjective weights, which are based on the opinions and preferences of decision-makers, is one of the most important matters in the process of multi-criteria decision-making (MCDM). Step-wise Weight Assessment Ratio Analysis (SWARA) is an efficient method for obtaining the subjective weights of criteria in MCDM problems. On the other hand, decision-makers may express their opinions with a degree of uncertainty. Using symmetric interval type-2 fuzzy sets enables us not only to capture the uncertainty of information flexibly but also to perform computations simply. In this paper, we propose an extended SWARA method with symmetric interval type-2 fuzzy sets to determine the weights of criteria based on the opinions of a group of decision-makers. The weights determined by the proposed approach involve the uncertainty of decision-makers’ preferences, and the symmetric form of the weights makes them more interpretable. To show the procedure of the proposed approach, it is used to determine the importance of intellectual capital dimensions and components in a company. The results show that the proposed approach is efficient in determining the subjective weights of criteria and capturing the uncertainty of information.
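    The underlying crisp SWARA procedure that the paper extends can be sketched in a few lines: criteria are pre-ranked by importance, each criterion after the first gets a comparative-importance value s_j, and weights follow from the recurrence q_j = q_{j-1}/(s_j + 1). This is the standard crisp variant, shown here for illustration; the paper replaces the crisp s values with symmetric interval type-2 fuzzy numbers.

    ```python
    def swara_weights(s):
        """Crisp SWARA weights. `s[j]` is the comparative importance of
        criterion j relative to criterion j-1 (criteria sorted by importance,
        s[0] = 0 by convention). Returns normalized subjective weights."""
        q = [1.0]                       # recalculated weight of the top criterion
        for sj in s[1:]:
            q.append(q[-1] / (sj + 1.0))  # k_j = s_j + 1, q_j = q_{j-1} / k_j
        total = sum(q)
        return [qj / total for qj in q]
    ```

    For s = [0, 0.5, 0.25] this yields weights of roughly 0.45, 0.30 and 0.24, in decreasing order of importance.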

  9. Listening to music during sprint interval exercise: The impact on exercise attitudes and intentions.

    Science.gov (United States)

    Stork, Matthew J; Martin Ginis, Kathleen A

    2017-10-01

    This study investigated the impact of listening to music during exercise on perceived enjoyment, attitudes and intentions towards sprint interval training (SIT). Twenty men (24.8 ± 4.5 years) and women (20.1 ± 2.6 years) unfamiliar with SIT exercise completed two acute sessions of SIT, one with and one without music. Perceived enjoyment, attitudes and intentions towards SIT were measured post-exercise for each condition. Attitudes and intentions to engage in SIT were also measured at baseline and follow-up. Post-exercise attitudes mediated the effects of enjoyment on intentions in the music condition (95% confidence interval [CI]: [0.01, 0.07], κ² = 0.36) and in the no music condition (95% CI: [0.01, 0.08], κ² = 0.37). Attitudes towards SIT were significantly more positive following the music than no music condition (P = 0.004), while intentions towards SIT were not (P = 0.29). Further, attitudes and intentions towards SIT did not change from baseline to follow-up (Ps > 0.05). These findings revealed that participants had relatively positive attitudes and intentions towards SIT, which did not become more negative despite experiencing intense SIT protocols. This study highlights the importance of acute affective responses to SIT exercise for influencing one's attitudes and intentions towards participating in SIT exercise. Such factors could ultimately play a key role in determining whether an individual engages in SIT exercise in the long term.

  10. Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.

    Science.gov (United States)

    Hu, Yi-Chung

    2014-01-01

    On the basis of fuzzy regression, computational intelligence models such as neural networks can be applied to nonlinear interval regression analysis to deal with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well, including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms that resist outliers in interval regression analysis have been an interesting area of research. Several computational intelligence approaches are effective at resisting outliers, but their required parameters depend on whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses a multilayer perceptron, trained with a genetic algorithm, to construct a robust nonlinear interval regression model. Outliers beyond or beneath the data interval impose only a slight effect on the determination of the data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.

  11. Normal probability plots with confidence.

    Science.gov (United States)

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal-probability-plot-based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
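    Simultaneous intervals of this kind can be approximated by Monte Carlo simulation: draw many sorted standard-normal samples, then widen pointwise quantile bands until all n order statistics fall inside them jointly with probability 1-α. The sketch below illustrates the idea only; it is not the authors' exact construction.

    ```python
    import numpy as np

    def simultaneous_plot_bands(n, alpha=0.05, sims=5000, seed=0):
        """Monte Carlo bands for a normal probability plot: shrink the
        pointwise tail probability gamma until all n sorted points fall
        inside the bands jointly with probability at least 1 - alpha."""
        rng = np.random.default_rng(seed)
        samples = np.sort(rng.standard_normal((sims, n)), axis=1)
        for gamma in np.linspace(alpha, alpha / 100, 100):
            lo = np.quantile(samples, gamma / 2, axis=0)
            hi = np.quantile(samples, 1 - gamma / 2, axis=0)
            coverage = np.mean(np.all((samples >= lo) & (samples <= hi), axis=1))
            if coverage >= 1 - alpha:
                return lo, hi, coverage
        return lo, hi, coverage
    ```

    A standardized sample whose sorted values all lie within (lo, hi) is then consistent with normality at the chosen level, which turns the visual judgment into an objective rule.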

  12. The Total Deviation Index estimated by Tolerance Intervals to evaluate the concordance of measurement devices

    Directory of Open Access Journals (Sweden)

    Ascaso Carlos

    2010-04-01

    Full Text Available Abstract Background In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices of normally-distributed measurements and describe its utility to evaluate inter- and intra-rater agreement if more than one reading per subject is available for each device. Methods We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices, and thereafter, we derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds of the coverage probability. Results The approach is illustrated in a real case example where the agreement between two instruments, a hand-held mercury sphygmomanometer and an OMRON 711 automatic device, is assessed in a sample of 384 subjects where measures of systolic blood pressure were taken twice by each device. A simulation study procedure is implemented to evaluate and compare the accuracy of the approach to two already established methods, showing that the TI approximation produces accurate empirical confidence levels which are reasonably close to the nominal confidence level. Conclusions The method proposed is straightforward since the TDI estimate is derived directly from a probability interval of a normally-distributed variable in its original scale, without further transformations. Thereafter, a natural way of making inferences about this estimate is to derive the appropriate TI. Constructions of TI based on normal populations are implemented in most standard statistical packages, thus making it simpler for any practitioner to implement our proposal to assess agreement.
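    As a point of reference for the TDI concept, Lin's normal-theory approximation estimates TDI_p directly from the mean and standard deviation of the paired differences. The sketch below shows that approximation only; it is not the authors' tolerance-interval inference procedure.

    ```python
    import math
    import statistics
    from statistics import NormalDist

    def tdi_lin(diffs, p=0.90):
        """Lin's approximation to the total deviation index: the bound that
        covers a proportion p of paired differences, assuming normality:
        TDI_p ~= z_{(1+p)/2} * sqrt(mean^2 + sd^2)."""
        mu = statistics.fmean(diffs)
        sd = statistics.stdev(diffs)
        z = NormalDist().inv_cdf((1 + p) / 2)
        return z * math.hypot(mu, sd)   # sqrt(mu**2 + sd**2), numerically stable
    ```

    For unbiased standard-normal differences, TDI at p = 0.90 is about 1.645: 90% of the paired readings disagree by less than that amount.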

  13. Maternal and paternal age, birth order and interpregnancy interval evaluation for cleft lip-palate.

    Science.gov (United States)

    Martelli, Daniella Reis Barbosa; Cruz, Kaliany Wanessa da; Barros, Letízia Monteiro de; Silveira, Marise Fernandes; Swerts, Mário Sérgio Oliveira; Martelli Júnior, Hercílio

    2010-01-01

    Cleft lip and palate (CL/P) are the most common congenital craniofacial anomalies. The aim was to evaluate environmental risk factors for non-syndromic CL/P in a reference care center in Minas Gerais. We carried out a case-controlled study assessing 100 children with clefts and 100 children without clinical alterations. The analysis dimensions (age, skin color, gender, fissure classification, maternal and paternal age, birth order and interpregnancy interval) were obtained from a questionnaire; we later built a database, and the analyses were carried out with the SPSS 17.0 software. The results were analyzed with the relative risk for each variable, in order to estimate the odds ratio with a 95% confidence interval, followed by bivariate and multivariate analyses. Among the 200 children, 54% were males and 46% were females. As far as skin color is concerned, most were brown, white and black, respectively. Combined cleft lip and palate was the most common fissure found (54%), followed by cleft lip (30%) and cleft palate (16%). Although the sample was limited, we noticed an association between maternal age and an increased risk for cleft lip and palate; however, paternal age, pregnancy order and interpregnancy interval were not significant.
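    The odds-ratio-with-95%-CI computation used in case-control analyses of this kind can be sketched with a standard Wald interval on the log-odds scale. The counts below are invented for illustration and are not this study's data.

    ```python
    import math
    from statistics import NormalDist

    def odds_ratio_ci(a, b, c, d, conf=0.95):
        """Wald confidence interval for the odds ratio of a 2x2 table:
        a = exposed cases, b = exposed controls,
        c = unexposed cases, d = unexposed controls."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
        z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, (lo, hi)
    ```

    An interval that includes 1 (as in the hypothetical example tested below) corresponds to a non-significant association, which is how factors such as paternal age would be ruled out.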

  14. High-intensity cycle interval training improves cycling and running performance in triathletes.

    Science.gov (United States)

    Etxebarria, Naroa; Anson, Judith M; Pyne, David B; Ferguson, Richard A

    2014-01-01

    Effective cycle training for triathlon is a challenge for coaches. We compared the effects of two variants of cycle high-intensity interval training (HIT) on triathlon-specific cycling and running. Fourteen moderately-trained male triathletes (VO2peak 58.7 ± 8.1 mL kg(-1) min(-1); mean ± SD) completed on separate occasions a maximal incremental test (VO2peak and maximal aerobic power), 16 × 20 s cycle sprints and a 1-h triathlon-specific cycle followed immediately by a 5 km run time trial. Participants were then pair-matched and assigned randomly to either a long high-intensity interval training (LONG) (6-8 × 5 min efforts) or short high-intensity interval training (SHORT) (9-11 × 10, 20 and 40 s efforts) HIT cycle training intervention. Six training sessions were completed over 3 weeks before participants repeated the baseline testing. Both groups had an ∼7% increase in VO2peak (SHORT 7.3%, ±4.6%; mean, ±90% confidence limits; LONG 7.5%, ±1.7%). There was a moderate improvement in mean power for both the SHORT (10.3%, ±4.4%) and LONG (10.7%, ±6.8%) groups during the last eight 20-s sprints. There was a small to moderate decrease in heart rate, blood lactate and perceived exertion in both groups during the 1-h triathlon-specific cycling but only the LONG group had a substantial decrease in the subsequent 5-km run time (64, ±59 s). Moderately-trained triathletes should use both short and long high-intensity intervals to improve cycling physiology and performance. Longer 5-min intervals on the bike are more likely to benefit 5 km running performance.

  15. Integration of multiple biological features yields high confidence human protein interactome.

    Science.gov (United States)

    Karagoz, Kubra; Sevimoglu, Tuba; Arga, Kazim Yalcin

    2016-08-21

    The biological function of a protein is usually determined by its physical interaction with other proteins. Protein-protein interactions (PPIs) are identified through various experimental methods and are stored in curated databases. The noisiness of the existing PPI data is evident, and it is essential that more reliable data be generated. Furthermore, the selection of a set of PPIs at different confidence levels might be necessary for many studies. Although different methodologies have been introduced to evaluate confidence scores for binary interactions, a highly reliable, almost complete PPI network of Homo sapiens has not yet been proposed. The quality and coverage of the human protein interactome need to be improved for use in various disciplines, especially in biomedicine. In the present work, we propose an unsupervised statistical approach to assign confidence scores to PPIs of H. sapiens. To achieve this goal, PPI data from six different databases were collected and a total of 295,288 non-redundant interactions between 15,950 proteins were acquired. The present scoring system included context information assigned to PPIs, derived from eight biological attributes. A high confidence network, which included 147,923 binary interactions between 13,213 proteins, had scores greater than the cutoff value of 0.80, for which sensitivity, specificity, and coverage were 94.5%, 80.9%, and 82.8%, respectively. We compared the present scoring method with others for evaluation. Reducing the noise inherent in experimental PPIs via our scoring scheme increased the accuracy significantly. As demonstrated through the assessment of process and cancer subnetworks, this study allows researchers to construct and analyze context-specific networks via valid PPI sets, and one can easily obtain subnetworks around proteins of interest at a specified confidence level. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Music educators : their artistry and self-confidence

    NARCIS (Netherlands)

    Lion-Slovak, Brigitte; Stöger, Christine; Smilde, Rineke; Malmberg, Isolde; de Vugt, Adri

    2013-01-01

    How does artistic identity influence the self-confidence of music educators? What is the interconnection between the artistic and the teacher identity? What is actually meant by artistic identity in music education? What is a fruitful environment for the development of artistic self-confidence of

  17. Financial Literacy, Confidence and Financial Advice Seeking

    NARCIS (Netherlands)

    Kramer, Marc M.

    2016-01-01

    We find that people with higher confidence in their own financial literacy are less likely to seek financial advice, but no relation between objective measures of literacy and advice seeking. The negative association between confidence and advice seeking is more pronounced among wealthy households.

  18. Determining the optimal screening interval for type 2 diabetes mellitus using a risk prediction model.

    Directory of Open Access Journals (Sweden)

    Andrei Brateanu

    Full Text Available Progression to diabetes mellitus (DM) is variable and the screening time interval is not well defined. The American Diabetes Association and US Preventive Services Task Force suggest screening every 3 years, but evidence is limited. The objective of the study was to develop a model to predict the probability of developing DM and suggest a risk-based screening interval. We included non-diabetic adult patients screened for DM in the Cleveland Clinic Health System if they had at least two measurements of glycated hemoglobin (HbA1c): an initial one less than 6.5% (48 mmol/mol) in 2008, and another between January 2009 and December 2013. Cox proportional hazards models were created. The primary outcome was DM, defined as HbA1c greater than 6.4% (46 mmol/mol). The optimal rescreening interval was chosen based on the predicted probability of developing DM. Of 5084 participants, 100 (4.4%) of the 2281 patients with normal HbA1c and 772 (27.5%) of the 2803 patients with prediabetes developed DM within 5 years. Factors associated with developing DM included HbA1c (HR per 0.1 unit increase 1.20; 95%CI, 1.13-1.27), family history (HR 1.31; 95%CI, 1.13-1.51), smoking (HR 1.18; 95%CI, 1.03-1.35), triglycerides (HR 1.01; 95%CI, 1.00-1.03), alanine aminotransferase (HR 1.07; 95%CI, 1.03-1.11), body mass index (HR 1.06; 95%CI, 1.01-1.11), age (HR 0.95; 95%CI, 0.91-0.99) and high-density lipoproteins (HR 0.93; 95%CI, 0.90-0.95). Five percent of patients in the highest risk tertile developed DM within 8 months, while it took 35 months for 5% of the middle tertile to develop DM. Only 2.4% of the patients in the lowest tertile developed DM within 5 years. A risk prediction model employing commonly available data can be used to guide screening intervals. Based on equal intervals for equal risk, patients in the highest risk category could be rescreened after 8 months, while those in the intermediate and lowest risk categories could be rescreened after 3 and 5 years
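    The "equal intervals for equal risk" idea can be illustrated with a constant-hazard simplification: calibrate a hazard rate to a patient's predicted 5-year risk, then solve for the time at which cumulative risk reaches the 5% threshold. This is an illustrative approximation, not the authors' Cox model.

    ```python
    import math

    def rescreen_years(p_5yr, threshold=0.05):
        """Years until cumulative risk reaches `threshold`, assuming the
        constant hazard implied by a predicted 5-year risk `p_5yr`:
        S(t) = exp(-h*t), with h calibrated so 1 - S(5) = p_5yr."""
        hazard = -math.log(1.0 - p_5yr) / 5.0
        return -math.log(1.0 - threshold) / hazard
    ```

    A patient whose predicted 5-year risk is exactly 5% is rescreened at 5 years, while higher predicted risk shortens the interval proportionally, mirroring the 8-month / 3-year / 5-year tiers in the abstract.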

  19. Confidence building in implementation of geological disposal

    International Nuclear Information System (INIS)

    Umeki, Hiroyuki

    2004-01-01

    Long-term safety of the disposal system should be demonstrated to the satisfaction of the stakeholders. Convincing arguments are therefore required that instil in the stakeholders confidence in the safety of a particular concept for the siting and design of a geological disposal, given the uncertainties that inevitably exist in its a priori description and in its evolution. The step-wise approach associated with making safety case at each stage is a key to building confidence in the repository development programme. This paper discusses aspects and issues on confidence building in the implementation of HLW disposal in Japan. (author)

  20. On interval and cyclic interval edge colorings of (3,5)-biregular graphs

    DEFF Research Database (Denmark)

    Casselgren, Carl Johan; Petrosyan, Petros; Toft, Bjarne

    2017-01-01

    A proper edge coloring f of a graph G with colors 1,2,3,…,t is called an interval coloring if the colors on the edges incident to every vertex of G form an interval of integers. The coloring f is cyclic interval if for every vertex v of G, the colors on the edges incident to v either form an inte...
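    The defining property of an interval coloring, colors at each vertex forming a consecutive block of integers, is easy to verify programmatically. The toy checker below is our illustration, not code from the paper.

    ```python
    from collections import defaultdict

    def is_interval_coloring(coloring):
        """Check whether an edge coloring {(u, v): color} is an interval
        coloring: at every vertex the incident colors are distinct and
        form a consecutive interval of integers."""
        at_vertex = defaultdict(list)
        for (u, v), c in coloring.items():
            at_vertex[u].append(c)
            at_vertex[v].append(c)
        return all(
            len(set(cs)) == len(cs) and max(cs) - min(cs) + 1 == len(cs)
            for cs in at_vertex.values()
        )
    ```

    For example, a path 1-2-3-4 colored 1, 2, 3 along its edges is an interval coloring, while coloring the edges of a path 1-2-3 with 1 and 3 is not, since vertex 2 sees the gap {1, 3}.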

  1. Interval forecasting of cyber-attacks on industrial control systems

    Science.gov (United States)

    Ivanyo, Y. M.; Krakovsky, Y. M.; Luzgin, A. N.

    2018-03-01

    At present, cyber-security issues of industrial control systems occupy one of the key niches in a state system of planning and management. Functional disruption of these systems via cyber-attacks may lead to emergencies related to loss of life, environmental disasters, major financial and economic damage, or disrupted activities of cities and settlements. There is therefore an urgent need to develop protection methods against cyber-attacks. This paper studied the results of cyber-attack interval forecasting with a pre-set intensity level of cyber-attacks. Interval forecasting is the forecasting of which of two predetermined intervals a future value of the indicator will fall into. For this, probability estimates of these events were used. For interval forecasting, a probabilistic neural network with a dynamically updated smoothing parameter was used. The dividing bound of these intervals was determined by a calculation method based on statistical characteristics of the indicator. The number of cyber-attacks per hour received through a honeypot from March to September 2013 for the group ‘zeppo-norcal’ was selected as the indicator.
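    The core decision, picking one of two predetermined intervals using a probability estimate relative to a dividing bound, can be sketched with plain empirical frequencies. The paper uses a probabilistic neural network and a bound computed from the indicator's statistical characteristics; both are replaced here by deliberate simplifications (sample frequencies and the sample mean).

    ```python
    import statistics

    def interval_forecast(history, bound=None):
        """Forecast which side of the dividing bound the next value of the
        indicator will fall on, using the empirical probability of each
        interval. The bound defaults to the sample mean (an assumption)."""
        if bound is None:
            bound = statistics.fmean(history)
        p_low = sum(x <= bound for x in history) / len(history)
        label = "low" if p_low >= 0.5 else "high"
        return label, bound, p_low
    ```

    On an hourly attack-count series, the forecast is simply the interval whose estimated probability exceeds one half; a kernel-based density estimate (as in a probabilistic neural network) would refine these probabilities.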

  2. The second birth interval in Egypt: the role of contraception

    OpenAIRE

    Baschieri, Angela

    2004-01-01

    The paper discusses problems of model specification in birth interval analysis. Using Bongaarts’s conceptual framework on the proximate determinants on fertility, the paper tests the hypothesis that all important variation in fertility is captured by differences in marriage, breastfeeding, contraception and induced abortion. The paper applies a discrete time hazard model to study the second birth interval using data from the Egyptian Demographic and Health Survey 2000 (EDHS), and the month by...

  3. An optimal dynamic interval preventive maintenance scheduling for series systems

    International Nuclear Information System (INIS)

    Gao, Yicong; Feng, Yixiong; Zhang, Zixian; Tan, Jianrong

    2015-01-01

    This paper studies preventive maintenance (PM) with a dynamic interval for a multi-component system. Instead of being a fixed constant, the PM period in the proposed dynamic interval model varies between interval-down and interval-up. Compared to a periodic PM scheme, controlling the maintenance frequency in this way helps reduce outage losses on frequently repaired parts and avoid under-maintenance of the equipment. Following the definition of the dynamic interval, the reliability of the system is analyzed from the failure mechanisms of its components and the different effects of non-periodic PM actions on component reliability. Based on the proposed reliability model, a novel framework for solving the non-periodic PM schedule with dynamic interval using a multi-objective genetic algorithm is proposed. The framework defines strategies, including an updating strategy, a deleting strategy, an inserting strategy and a moving strategy, which correct invalid population individuals in the algorithm. The values of the dynamic interval and the selection of PM actions for the components at every PM stage are determined by achieving a certain level of system availability at the minimum total PM-related cost. Finally, a typical rotary table system of an NC machine tool is used as an example to describe the proposed method. - Highlights: • A non-periodic preventive maintenance scheduling model is proposed. • A framework for solving the non-periodic PM schedule problem is developed. • The interval of non-periodic PM is flexible and the schedule can be better adjusted. • A dynamic interval leads to more efficient solutions than a fixed interval does

  4. Interpersonal confidence as a factor in the prevention of disorganized interaction

    Directory of Open Access Journals (Sweden)

    Dontsov, Aleksander I.

    2014-03-01

    Full Text Available Human communities are based on a certain set of everyday attitudes, on the coordination of the actions of “the self” in a group, and on the regulation of social practices. The results of this study show that a number of factors act as determinants of trust/distrust ambivalence: the multidimensionality and the dynamics of interactions among people; the high level of subjectivity in evaluating risks resulting from openness and from confidence in partners involved in an interaction; and a subject’s contradictory attitude toward the personal traits of an interacting partner (power, activity, honesty, trustworthiness). Japanese scholars have shown the necessity of taking into account quality of life (QOL) as one of the determinants of the development of interpersonal confidence. The study demonstrates that people try to bring trust into their daily routines as a way of organizing conscientious, emotionally open interactions that take into account the interests of all parties. Mistrust blocks access to the emotional, intellectual, and activity-related resources supporting life and undermines faith in the possibility of virtue and morality. Yet a supplementary study (using instant diagnostics) indicates that in practice respondents did not demonstrate a high level of confidence (in two cities it was 0%; in one city, it was 4.6%). In spite of emotionally positive views regarding trust, as well as constructive estimates of its moral/behavioral potential, a considerable number of respondents were not open and oriented to the interests of others. A tendency toward caution, inwardness, and constrained sincerity leads to nonconformity in one’s actions in a group and to changes in the vector of social practices from socio-partner regulation to disorganized interaction.

  5. Weighting Mean and Variability during Confidence Judgments

    Science.gov (United States)

    de Gardelle, Vincent; Mamassian, Pascal

    2015-01-01

    Humans can not only perform some visual tasks with great precision, they can also judge how good they are in these tasks. However, it remains unclear how observers produce such metacognitive evaluations, and how these evaluations might be dissociated from the performance in the visual task. Here, we hypothesized that some stimulus variables could affect confidence judgments above and beyond their impact on performance. In a motion categorization task on moving dots, we manipulated the mean and the variance of the motion directions, to obtain a low-mean low-variance condition and a high-mean high-variance condition with matched performances. Critically, in terms of confidence, observers were not indifferent between these two conditions. Observers exhibited marked preferences, which were heterogeneous across individuals, but stable within each observer when assessed one week later. Thus, confidence and performance are dissociable and observers’ confidence judgments put different weights on the stimulus variables that limit performance. PMID:25793275

  6. STUDY ON THE LEVEL OF CONFIDENCE THAT ROMANIAN CONSUMERS HAVE REGARDING THE ORGANIC PRODUCTS

    Directory of Open Access Journals (Sweden)

    Narcis Alexandru BOZGA

    2015-04-01

    Full Text Available Organic agriculture is a domain that is growing rapidly in Europe, worldwide, and in Romania. However, only a limited number of studies are able, by virtue of the methodology used, to offer a definite and appropriate image of the Romanian market of organic products. In this respect, we considered it relevant to conduct market research that can offer a broad image of the Romanian market of organic products. The present study aims to briefly present some findings from this research concerning the level of confidence that Romanian consumers have in organic products and the way in which this level of confidence may influence purchasing behavior. Among the most important conclusions are the low level of confidence that a large number of Romanian consumers have in organic products; that the decision to buy organic products is strongly influenced by the confidence expressed by the consumer; and that lack of confidence in organic products is one of the main reasons for not buying them, in some cases more important than the high price. After a deeper analysis, the final conclusion is that, at least in part, the low level of confidence in organic products is driven by confusion and a low level of information, on the one hand, and by some producers' practices that do not seem to comply with community certification norms, on the other.

  7. Perceived control of anxiety and its relationship to self-confidence and performance.

    Science.gov (United States)

    Hanton, Sheldon; Connaughton, Declan

    2002-03-01

    This study examined performers' retrospective explanations of the relationship between anxiety symptoms, self-confidence, and performance. Interviews were used to determine how the presence of symptoms and the accompanying directional interpretation affected performance in six elite and six subelite swimmers. Causal networks revealed that perceived control was the moderating factor in the directional interpretation of anxiety and not the experience of anxiety symptoms alone. Symptoms perceived to be under control were interpreted to have facilitative consequences for performance; however, symptoms not under control were viewed as debilitative. Increases or decreases in self-confidence were perceived to improve or lower performance. Findings reveal how cognitive and somatic information was processed, what strategies were adopted, and how this series of events related to performance.

  8. Confidence and self-attribution bias in an artificial stock market

    Science.gov (United States)

    Bertella, Mario A.; Pires, Felipe R.; Rego, Henio H. A.; Vodenska, Irena; Stanley, H. Eugene

    2017-01-01

    Using an agent-based model we examine the dynamics of stock price fluctuations and their rates of return in an artificial financial market composed of fundamentalist and chartist agents with and without confidence. We find that chartist agents who are confident generate higher price and rate of return volatilities than those who are not. We also find that kurtosis and skewness are lower in our simulation study of agents who are not confident. We show that the stock price and confidence index—both generated by our model—are cointegrated and that stock price affects confidence index but confidence index does not affect stock price. We next compare the results of our model with the S&P 500 index and its respective stock market confidence index using cointegration and Granger tests. As in our model, we find that stock prices drive their respective confidence indices, but that the opposite relationship, i.e., the assumption that confidence indices drive stock prices, is not significant. PMID:28231255

  9. Interventions for addressing low balance confidence in older adults: a systematic review and meta-analysis.

    Science.gov (United States)

    Rand, Debbie; Miller, William C; Yiu, Jeanne; Eng, Janice J

    2011-05-01

    Low balance confidence is a major health problem among older adults, restricting their participation in daily life. To determine what interventions are most effective in increasing balance confidence in older adults, we conducted a systematic review with meta-analysis of randomised controlled trials including at least one continuous end point of balance confidence. Studies including adults 60 years or older without a neurological condition were included. The standardised mean difference (SMD) of continuous end points of balance confidence was calculated to estimate the pooled effect size with random-effect models. Methodological quality of trials was assessed using the Physical Therapy Evidence Database (PEDro) Scale. Thirty studies were included in this review and a meta-analysis was conducted for 24 studies. Interventions were pooled into exercise (n = 9 trials, 453 subjects), Tai Chi (n = 5 trials, 468 subjects) and multifactorial intervention (n = 10 trials, 1,233 subjects). Small significant effects were found for exercise and multifactorial interventions (SMD 0.22-0.31) and medium (SMD 0.48) significant effects were found for Tai Chi. Tai Chi interventions are the most beneficial in increasing the balance confidence of older adults.
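    The pooled effects above come from a random-effects meta-analysis of standardised mean differences. As an illustrative sketch of how such pooling works (a DerSimonian-Laird computation in Python; the SMDs and within-study variances below are hypothetical, not the review's data):

```python
import math

def pool_random_effects(smds, variances):
    """DerSimonian-Laird random-effects pooling of standardized mean differences."""
    w = [1.0 / v for v in variances]                          # fixed-effect weights
    fixed = sum(wi * d for wi, d in zip(w, smds)) / sum(w)
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, smds))  # Cochran's Q
    df = len(smds) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                             # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]            # random-effects weights
    pooled = sum(wi * d for wi, d in zip(w_star, smds)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

smd, ci = pool_random_effects([0.22, 0.48, 0.31], [0.02, 0.03, 0.01])
print(f"pooled SMD = {smd:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

    When the heterogeneity statistic Q is below its degrees of freedom, the between-study variance estimate is truncated at zero and the result coincides with fixed-effect pooling.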

  10. Circadian profile of QT interval and QT interval variability in 172 healthy volunteers

    DEFF Research Database (Denmark)

    Bonnemeier, Hendrik; Wiegand, Uwe K H; Braasch, Wiebke

    2003-01-01

    of sleep. QT and R-R intervals revealed a characteristic day-night-pattern. Diurnal profiles of QT interval variability exhibited a significant increase in the morning hours (6-9 AM; P ... lower at day- and nighttime. Aging was associated with an increase of QT interval mainly at daytime and a significant shift of the T wave apex towards the end of the T wave. The circadian profile of ventricular repolarization is strongly related to the mean R-R interval, however, there are significant...

  11. Improved realism of confidence for an episodic memory event

    Directory of Open Access Journals (Sweden)

    Sandra Buratti

    2012-09-01

    Full Text Available We asked whether people can make their confidence judgments more realistic (accurate) by adjusting them, with the aim of improving the relationship between the level of confidence and the correctness of the answer. This adjustment can be considered to include a so-called second-order metacognitive judgment. The participants first gave confidence judgments about their answers to questions about a video clip they had just watched. Next, they attempted to increase their accuracy by identifying confidence judgments in need of adjustment and then modifying them. The participants managed to increase their metacognitive realism, thus decreasing their absolute bias and improving their calibration, although the effects were small. We also examined the relationship between confidence judgments that were adjusted and the retrieval fluency and the phenomenological memory quality participants experienced when first answering the questions; this quality was one of either Remember (associated with concrete, vivid details) or Know (associated with a feeling of familiarity). Confidence judgments associated with low retrieval fluency and the memory quality of knowing were modified more often. In brief, our results provide evidence that people can improve the realism of their confidence judgments, mainly by decreasing their confidence for incorrect answers. Thus, this study supports the conclusion that people can perform successful second-order metacognitive judgments.
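    The absolute bias and calibration measures mentioned above can be computed from confidence judgments paired with correctness scores. A minimal sketch (the judgments are hypothetical, and the bin edges are one common choice, not necessarily those used in the study):

```python
def calibration_stats(conf, correct, bins=(0.0, 0.2, 0.4, 0.6, 0.8, 1.01)):
    """Over/underconfidence bias and a calibration index for
    probability-scale confidence judgments paired with 0/1 correctness."""
    n = len(conf)
    bias = sum(conf) / n - sum(correct) / n  # > 0 means overconfidence
    cal = 0.0  # weighted squared gap between mean confidence and hit rate, per bin
    for lo, hi in zip(bins, bins[1:]):
        idx = [i for i, c in enumerate(conf) if lo <= c < hi]
        if idx:
            mean_conf = sum(conf[i] for i in idx) / len(idx)
            hit_rate = sum(correct[i] for i in idx) / len(idx)
            cal += len(idx) / n * (mean_conf - hit_rate) ** 2
    return bias, cal

# Hypothetical judgments: overconfident on the first two answers
bias, cal = calibration_stats([0.9, 0.9, 0.5, 0.5], [1, 0, 1, 0])
print(f"bias = {bias:.2f}, calibration = {cal:.2f}")  # → bias = 0.20, calibration = 0.08
```

    Lower values of both measures indicate better realism: zero bias means mean confidence matches the overall hit rate, and zero calibration means each confidence level matches its hit rate.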

  12. Australasian emergency physicians: a learning and educational needs analysis. Part two: confidence of FACEM for tasks and skills.

    Science.gov (United States)

    Paltridge, Debbie; Dent, Andrew W; Weiland, Tracey J

    2008-02-01

    To determine the degree of confidence perceived by Fellows of the Australasian College for Emergency Medicine for a variety of procedural, patient management, educational and research skills, and tasks that may be required of them. Mailed survey with Likert scales and grouped qualitative responses. More than 90% of emergency physicians (EP) feel usually or always confident of their skills for peripheral vascular access, procedural sedation, fluid resuscitation, tube thoracostomy, managing patients with altered conscious state, cardiac emergencies, behavioural disturbance, and interpreting acid base and other blood tests. Less than 50% felt confident performing surgical airways, ED ultrasound, managing neonatal emergencies or interpreting MRI. Of non-clinical skills, while most EP were confident of their ability to write references, debrief staff, lead group tutorials and prepare slides, a minority felt usually or always confident about budgeting and finance, preparing submissions, dealing with the media, appearing in court or marking examination papers. Whilst nearly 75% were confident about the information technology skills required of them for clinical practice, less than 25% of EP felt confident about conducting research and less than 15% were confident applying or interpreting statistics. This information may assist in the planning of future educational interventions for EP.

  13. Building Public Confidence in Nuclear Activities

    International Nuclear Information System (INIS)

    Isaacs, T

    2002-01-01

    Achieving public acceptance has become a central issue in discussions regarding the future of nuclear power and associated nuclear activities. Effective public communication and public participation are often put forward as the key building blocks in garnering public acceptance. A recent international workshop in Finland provided insights into other features that might also be important to building and sustaining public confidence in nuclear activities. The workshop was held in Finland in close cooperation with Finnish stakeholders. This was most appropriate because of the recent successes in achieving positive decisions at the municipal, governmental, and Parliamentary levels, allowing the Finnish high-level radioactive waste repository program to proceed, including the identification and approval of a proposed candidate repository site. Much of the workshop discussion appropriately focused on the roles of public participation and public communications in building public confidence. It was clear that well constructed and implemented programs of public involvement and communication and a sense of fairness were essential in building the extent of public confidence needed to allow the repository program in Finland to proceed. It was also clear that there were a number of other elements beyond public involvement that contributed substantially to the success in Finland to date. And, in fact, it appeared that these other factors were also necessary to achieving the Finnish public acceptance. In other words, successful public participation and communication were necessary but not sufficient. What else was important? Culture, politics, and history vary from country to country, providing differing contexts for establishing and maintaining public confidence. What works in one country will not necessarily be effective in another. 
Nonetheless, there appear to be certain elements that might be common to programs that are successful in sustaining public confidence and some of

  15. An approach to solve group-decision-making problems with ordinal interval numbers.

    Science.gov (United States)

    Fan, Zhi-Ping; Liu, Yang

    2010-10-01

    The ordinal interval number is a form of uncertain preference information in group decision making (GDM), yet it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers, under the assumption that every ranking ordinal in an ordinal interval number is uniformly and independently distributed in its interval. First, we define the possibility degree for comparing two ordinal interval numbers and present the related theoretical analysis. Then, to rank alternatives by comparing multiple ordinal interval numbers, a collective expectation possibility-degree matrix of pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is presented to rank alternatives by solving the model. Finally, two examples are used to illustrate the use of the proposed approach.
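    Under the paper's assumption that each ranking ordinal is uniformly and independently distributed in its interval, the possibility degree for comparing two ordinal interval numbers can be illustrated by direct enumeration. A sketch (the tie-splitting convention below is a common choice and an assumption here, not necessarily the authors' exact definition):

```python
from fractions import Fraction

def possibility_degree(a, b):
    """P(rank drawn from interval a beats, i.e. is smaller than, rank drawn
    from interval b), with ties counted half, assuming independent
    discrete-uniform ordinals over the closed intervals a and b."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    total = (a_hi - a_lo + 1) * (b_hi - b_lo + 1)
    wins = Fraction(0)
    for x in range(a_lo, a_hi + 1):
        for y in range(b_lo, b_hi + 1):
            if x < y:
                wins += 1          # a strictly outranks b
            elif x == y:
                wins += Fraction(1, 2)  # tie: split the credit
    return wins / total

print(possibility_degree((1, 3), (2, 4)))  # → 7/9
```

    Enumeration is exact but quadratic in the interval widths; for large problems a closed-form expression over the overlap region would replace the double loop.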

  16. Confidence Estimation of Reliability Indices of the System with Elements Duplication and Recovery

    Directory of Open Access Journals (Sweden)

    I. V. Pavlov

    2017-01-01

    Full Text Available The article considers the problem of estimating confidence intervals for the main reliability indices, such as availability rate, mean time between failures, and operative availability (in the stationary state), for the model of a system with duplication and independent recovery of elements. It presents a solution for a situation that often arises in practice, when the exact values of the reliability parameters of the elements are unknown, and only reliability test data for the system or its individual parts (elements, subsystems) are available. It should be noted that confidence estimation of the reliability indices of complex systems based on the test results of their individual elements is a fairly common task in engineering practice when designing and operating various engineering systems. The available papers consider this problem mainly for non-recoverable systems. The article describes a solution for the important particular case when the system elements are duplicated by reserve elements, and elements that fail in the course of system operation are recovered (regardless of the state of the other elements). An approximate solution is obtained for the case of high reliability or "fast recovery" of elements, on the assumption that the average recovery time of an element is small compared to the average time between failures.
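    The paper derives analytic confidence estimates for the duplicated-and-recovered system; as a loose illustration of the kind of interval being estimated, here is a bootstrap sketch of a confidence interval for the stationary availability MTBF/(MTBF + MTTR) of a single element, using made-up test durations:

```python
import random
import statistics

def availability_ci(uptimes, downtimes, n_boot=5000, alpha=0.05, seed=1):
    """Percentile-bootstrap CI for stationary availability MTBF/(MTBF + MTTR)
    from observed up/down durations of a single recoverable element."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        up = [rng.choice(uptimes) for _ in uptimes]      # resample with replacement
        down = [rng.choice(downtimes) for _ in downtimes]
        mtbf, mttr = statistics.fmean(up), statistics.fmean(down)
        estimates.append(mtbf / (mtbf + mttr))
    estimates.sort()
    lo = estimates[int(alpha / 2 * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical test data: hours up between failures, hours down for repair
uptimes = [120.0, 95.0, 210.0, 160.0, 80.0, 140.0]
downtimes = [2.0, 4.0, 3.0, 1.5, 2.5, 3.5]
lo, hi = availability_ci(uptimes, downtimes)
print(f"95% CI for availability: ({lo:.3f}, {hi:.3f})")
```

    This resampling approach is a generic stand-in, not the article's method: the article's point is precisely that, for high reliability or fast recovery, approximate analytic bounds can replace such data-hungry procedures.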

  17. Determination of Kps and β1,H in a wide interval of initial concentrations of lutetium

    International Nuclear Information System (INIS)

    Lopez-G, H.; Jimenez R, M.; Solache R, M.; Rojas H, A.

    2006-01-01

    The solubility product constant and the first hydrolysis constant of lutetium were determined in the interval of initial lutetium concentration from 3.72 x 10^-5 to 2.09 x 10^-3 M, in 2 M NaClO4 media, at 303 K and under CO2-free conditions. The solubility diagrams (pLu(ac) vs. pCH) were obtained by means of a radiochemical method, and from them the pCH values that delimit the saturation and non-saturation zones of the solutions were established. These diagrams also allowed the solubility product constant of Lu(OH)3 to be calculated. The experimental data were fitted to the polynomial solubility equation, which allowed the values of the solubility product constant of Lu(OH)3 and the first hydrolysis constant to be determined. The precipitation pCH value decreases as the initial lutetium concentration increases, while the values of Kps and β1,H remain constant. (Author)

  18. Correlates of emergency response interval and mortality from ...

    African Journals Online (AJOL)

    A retrospective study to determine the influence of blood transfusion emergency response interval on Mortality from childhood severe anemia was carried out. An admission record of all children with severe anemia over a 5-year period was reviewed. Those who either died before transfusion or got discharged against ...

  19. Time Interval to Initiation of Contraceptive Methods Following ...

    African Journals Online (AJOL)

    Objectives: The objectives of the study were to determine factors affecting the interval between a woman's last childbirth and the initiation of contraception. Materials and Methods: This was a retrospective study. Family planning clinic records of the Barau Dikko Teaching Hospital Kaduna from January 2000 to March 2014 ...

  20. Is consumer confidence an indicator of JSE performance?

    OpenAIRE

    Kamini Solanki; Yudhvir Seetharam

    2014-01-01

    While most studies examine the impact of business confidence on market performance, we instead focus on the consumer because consumer spending habits are a natural extension of trading activity on the equity market. This particular study examines investor sentiment as measured by the Consumer Confidence Index in South Africa and its effect on the Johannesburg Stock Exchange (JSE). We employ Granger causality tests to investigate the relationship across time between the Consumer Confidence Ind...

  1. Women's empowerment in India: assessment of women's confidence before and after training as a lay provider

    OpenAIRE

    Megan Storm; Alan Xi; Ayesha Khan

    2018-01-01

    Background: Gender is the main social determinant of health in India and affects women's health outcomes even before birth. As women mature into adulthood, lack of education and empowerment increases health inequities, acting as a barrier to seeking medical care and to making medical choices. Although the process of women's empowerment is complex to measure, one indicator is confidence in ability. We sought to increase the confidence of rural Indian women in their abilities by training them a...

  2. Self-confidence of anglers in identification of freshwater sport fish

    Science.gov (United States)

    Chizinski, C.J.; Martin, D. R.; Pope, Kevin L.

    2014-01-01

    Although several studies have focused on how well anglers identify species using replicas and pictures, there has been no study assessing the confidence that can be placed in anglers' ability to identify recreationally important fish. Understanding factors associated with low self-confidence will be useful in tailoring education programmes to improve self-confidence in identifying common species. The purposes of this assessment were to quantify the confidence of recreational anglers to identify 13 commonly encountered warm water fish species and to relate self-confidence to species availability and angler experience. Significant variation was observed in anglers' self-confidence among species and levels of self-declared skill, with greater confidence associated with greater skill and with greater exposure. This study of angler self-confidence strongly highlights the need for educational programmes that target lower skilled anglers and the importance of teaching all anglers about less common species, regardless of skill level.

  3. Effects of postidentification feedback on eyewitness identification and nonidentification confidence.

    Science.gov (United States)

    Semmler, Carolyn; Brewer, Neil; Wells, Gary L

    2004-04-01

    Two experiments investigated new dimensions of the effect of confirming feedback on eyewitness identification confidence using target-absent and target-present lineups and (previously unused) unbiased witness instructions (i.e., "offender not present" option highlighted). In Experiment 1, participants viewed a crime video and were later asked to try to identify the thief from an 8-person target-absent photo array. Feedback inflated witness confidence for both mistaken identifications and correct lineup rejections. With target-present lineups in Experiment 2, feedback inflated confidence for correct and mistaken identifications and lineup rejections. Although feedback had no influence on the confidence-accuracy correlation, it produced clear overconfidence. Confidence inflation varied with the confidence measure reference point (i.e., retrospective vs. current confidence) and identification response latency.

  4. Hemodilution on cardiopulmonary bypass as a determinant of early postoperative hyperlactatemia.

    Directory of Open Access Journals (Sweden)

    Marco Ranucci

    Full Text Available The nadir hematocrit (HCT) on cardiopulmonary bypass (CPB) is a recognized independent risk factor for major morbidity and mortality in cardiac surgery. The main interpretation is that low levels of HCT on CPB result in a poor oxygen delivery and dysoxia of end organs. Hyperlactatemia (HL) is a marker of dysoxic metabolism, and is associated with bad outcomes in cardiac surgery. This study explores the relationship between nadir HCT on CPB and early postoperative HL. Retrospective study on 3,851 consecutive patients. Nadir HCT on CPB and other potential confounders were explored for association with blood lactate levels at the arrival in the Intensive Care Unit (ICU), and with the presence of moderate (2.1 - 6.0 mMol/L) or severe (> 6.0 mMol/L) HL. Nadir HCT on CPB demonstrated a significant negative association with blood lactate levels at the arrival in the ICU. After adjustment for the other confounders, the nadir HCT on CPB remained independently associated with moderate (odds ratio 0.96, 95% confidence interval 0.94-0.99) and severe HL (odds ratio 0.91, 95% confidence interval 0.86-0.97). Moderate and severe HL were significantly associated with increased morbidity and mortality. Hemodilution on CPB is an independent determinant of HL. This association, more evident for severe HL, strengthens the hypothesis that a poor oxygen delivery on CPB with consequent organ ischemia is the mechanism leading to hemodilution-associated bad outcomes.
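    The reported odds ratios come with Wald-type 95% confidence intervals. As a sketch of how such an interval is obtained from a logistic-regression coefficient (the coefficient and standard error below are hypothetical values chosen to land near the reported OR of 0.96, not the study's estimates):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression coefficient
    and its standard error: OR = exp(beta), CI = exp(beta ± z * se)."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Hypothetical coefficient per 1% increase in nadir HCT
or_, (lo, hi) = odds_ratio_ci(-0.041, 0.013)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # → OR = 0.96, 95% CI (0.94, 0.98)
```

    An interval that excludes 1 (as here) corresponds to an association that is statistically significant at the matching two-sided level.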

  5. Rates and determinants of informed consent: a case study of an international thromboprophylaxis trial.

    Science.gov (United States)

    Smith, Orla M; McDonald, Ellen; Zytaruk, Nicole; Foster, Denise; Matte, Andrea; Clarke, France; Meade, Laurie; O'Callaghan, Nicole; Vallance, Shirley; Galt, Pauline; Rajbhandari, Dorrilyn; Rocha, Marcelo; Mehta, Sangeeta; Ferguson, Niall D; Hall, Richard; Fowler, Robert; Burns, Karen; Qushmaq, Ismael; Ostermann, Marlies; Heels-Ansdell, Diane; Cook, Deborah

    2013-02-01

    Successful completion of randomized trials depends upon efficiently and ethically screening patients and obtaining informed consent. Awareness of modifiable barriers to obtaining consent may inform ongoing and future trials. The objective of this study is to describe and examine determinants of consent rates in an international heparin thromboprophylaxis trial (Prophylaxis for ThromboEmbolism in Critical Care Trial, clinicaltrials.gov NCT00182143). Throughout the 4-year trial, research personnel approached eligible critically ill patients or their substitute decision makers for informed consent. Whether consent was obtained or declined was documented daily. The trial was conducted in 67 centers in 6 countries. A total of 3764 patients were randomized. The overall consent rate was 82.2% (range, 50%-100%) across participating centers. Consent was obtained from substitute decision makers and patients in 90.1% and 9.9% of cases, respectively. Five factors were independently associated with consent rates. Research coordinators with more experience achieved higher consent rates (odds ratio [OR], 3.43; 95% confidence interval, 2.42-4.86; P 10 years of experience). Consent rates were higher in smaller intensive care units with less than 15 beds compared with intensive care units with 15 to 20 beds, 21 to 25 beds, and greater than 25 beds (all ORs, <0.5; P < .001) and were higher in centers with more than 1 full-time research staff (OR, 1.95; 95% confidence interval, 1.28-2.99; P < .001). Consent rates were lower in centers affiliated with the Canadian Critical Care Trials Group or the Australian and New Zealand Intensive Care Society Clinical Trials Group compared with other centers (OR, 0.57; 95% confidence interval, 0.42-0.77; P < .001). Finally, consent rates were highest during the pilot trial, lowest during the initiation of the full trial, and increased over years of recruitment (P < .001). Characteristics of study centers, research infrastructure, and experience

  6. Registered nurse leadership style and confidence in delegation.

    Science.gov (United States)

    Saccomano, Scott J; Pinto-Zipp, Genevieve

    2011-05-01

      Leadership and confidence in delegation are two important explanatory constructs of nursing practice. The relationship between these constructs, however, is not clearly understood. To be successful in their roles as leaders, regardless of their experience, registered nurses (RNs) need to understand how to best delegate. The present study explored and described the relationship between RN leadership styles, demographic variables and confidence in delegation in a community teaching hospital. Utilizing a cross-sectional survey design, RNs employed in one acute care hospital completed questionnaires that measured leadership style [Path-Goal Leadership Questionnaire (PGLQ)] and confidence in delegating patient care tasks [Confidence and Intent to Delegate Scale (CIDS)]. Contrary to expectations, the data did not confirm a relationship between confidence in delegating tasks to unlicensed assistive personnel (UAPs) and leadership style. Nurses who were diploma or associate degree prepared were initially less confident in delegating tasks to UAPs as compared with RNs holding a bachelor's degree or higher. Further, after 5 years of clinical nursing experience, nurses with less educational experience reported more confidence in delegating tasks as compared with RNs with more educational experience. The lack of a relationship between leadership style and confidence in delegating patient care tasks were discussed in terms of the PGLQ classification criteria and hospital unit differences. As suggested by the significant two-way interaction between educational preparation and clinical nursing experience, changes in the nurse's confidence in delegating patient care tasks to UAPs was a dynamic changing variable that resulted from the interplay between amount of educational preparation and years of clinical nursing experience in this population of nurses. Clearly, generalizability of these findings to nurses outside the US is questionable, thus nurse managers must be familiar

  7. Detecting Disease in Radiographs with Intuitive Confidence

    Directory of Open Access Journals (Sweden)

    Stefan Jaeger

    2015-01-01

    Full Text Available This paper argues in favor of a specific type of confidence for use in computer-aided diagnosis and disease classification, namely, sine/cosine values of angles represented by points on the unit circle. The paper shows how this confidence is motivated by Chinese medicine and how sine/cosine values are directly related with the two forces Yin and Yang. The angle for which sine and cosine are equal (45°) represents the state of equilibrium between Yin and Yang, which is a state of nonduality that indicates neither normality nor abnormality in terms of disease classification. The paper claims that the proposed confidence is intuitive and can be readily understood by physicians. The paper underpins this thesis with theoretical results in neural signal processing, stating that a sine/cosine relationship between the actual input signal and the perceived (learned) input is key to neural learning processes. As a practical example, the paper shows how to use the proposed confidence values to highlight manifestations of tuberculosis in frontal chest X-rays.

  8. Confidence intervals and hypothesis testing for the Permutation Entropy with an application to epilepsy

    Science.gov (United States)

    Traversaro, Francisco; O. Redelico, Francisco

    2018-04-01

    In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy. But there is still no known method to determine the accuracy of this measure. There has been little research on the statistical properties of this quantity, which characterizes time series. The literature describes some resampling methods for quantities used in nonlinear dynamics - such as the largest Lyapunov exponent - but these seem to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes: the 1/f^α noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been extensively used in epilepsy research for detecting dynamical changes in the electroencephalogram (EEG) signal with no consideration of the variability of this complexity measure. An application of the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.
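    A minimal sketch of the approach described here: estimate the ordinal-pattern distribution from the symbolized series, then bootstrap the Permutation Entropy estimator by resampling pattern sequences from that distribution (the embedding order and other parameter choices are illustrative assumptions, not the paper's settings):

```python
import math
import random
from itertools import permutations

def permutation_entropy(x, order=3):
    """Normalized permutation entropy of a 1-D series (Bandt-Pompe symbolization)."""
    counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - order + 1
    for i in range(n):
        window = x[i:i + order]
        pattern = tuple(sorted(range(order), key=window.__getitem__))  # ordinal pattern
        counts[pattern] += 1
    h = -sum((c / n) * math.log(c / n) for c in counts.values() if c)
    return h / math.log(math.factorial(order))  # scale to [0, 1]

def bootstrap_ci(x, order=3, n_boot=1000, alpha=0.05, seed=0):
    """Parametric bootstrap: resample symbol sequences from the estimated
    pattern distribution, then take percentile bounds of the PE estimator."""
    rng = random.Random(seed)
    n = len(x) - order + 1
    patterns = [tuple(sorted(range(order), key=x[i:i + order].__getitem__))
                for i in range(n)]
    probs = {p: patterns.count(p) / n for p in set(patterns)}
    keys = list(probs)
    hmax = math.log(math.factorial(order))
    hs = []
    for _ in range(n_boot):
        sample = rng.choices(keys, weights=[probs[k] for k in keys], k=n)
        h = -sum((sample.count(k) / n) * math.log(sample.count(k) / n)
                 for k in set(sample))
        hs.append(h / hmax)
    hs.sort()
    return hs[int(alpha / 2 * n_boot)], hs[int((1 - alpha / 2) * n_boot) - 1]

rng = random.Random(42)
x = [rng.random() for _ in range(500)]  # white noise: PE should be near 1
lo, hi = bootstrap_ci(x)
print(f"PE = {permutation_entropy(x):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

    Resampling from the estimated symbol distribution ignores serial dependence between successive patterns; the paper's point is that, for the processes studied, this parametric bootstrap still matches the spread obtained by repeating the whole experiment.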

  9. High confidence in falsely recognizing prototypical faces.

    Science.gov (United States)

    Sampaio, Cristina; Reinke, Victoria; Mathews, Jeffrey; Swart, Alexandra; Wallinger, Stephen

    2018-06-01

    We applied a metacognitive approach to investigate confidence in the recognition of prototypical faces. Participants were presented with sets of faces constructed digitally as deviations from prototype/base faces. Participants were then tested with a simple recognition task (Experiment 1) or a multiple-choice task (Experiment 2) for old and new items plus new prototypes, and they showed a high rate of confident false alarms to the prototypes. The confidence-accuracy relationship in this face recognition paradigm was positive for standard items but negative for the prototypes; it was thus contingent on the nature of the items used. The data have implications for lineups that employ match-to-suspect strategies.

  10. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    The ICI technique is based on a consistency measure across confidence intervals corresponding to different window lengths. An approximate asymptotic analysis to determine the optimal confidence interval width shows that the asymptotic expressions are the same irrespective of whether one starts with a uniform sampling ...

  11. Confidence mediates the sex difference in mental rotation performance.

    Science.gov (United States)

    Estes, Zachary; Felker, Sydney

    2012-06-01

    On tasks that require the mental rotation of 3-dimensional figures, males typically exhibit higher accuracy than females. Using the most common measure of mental rotation (i.e., the Mental Rotations Test), we investigated whether individual variability in confidence mediates this sex difference in mental rotation performance. In each of four experiments, the sex difference was reliably elicited and eliminated by controlling or manipulating participants' confidence. Specifically, confidence predicted performance within and between sexes (Experiment 1), rendering confidence irrelevant to the task reliably eliminated the sex difference in performance (Experiments 2 and 3), and manipulating confidence significantly affected performance (Experiment 4). Thus, confidence mediates the sex difference in mental rotation performance and hence the sex difference appears to be a difference of performance rather than ability. Results are discussed in relation to other potential mediators and mechanisms, such as gender roles, sex stereotypes, spatial experience, rotation strategies, working memory, and spatial attention.

  12. Determination of aortic compliance from magnetic resonance images using an automatic active contour model

    International Nuclear Information System (INIS)

    Krug, Roland; Boese, Jan M; Schad, Lothar R

    2003-01-01

    The possibility of monitoring changes in aortic elasticity in humans has important applications for clinical trials because it estimates the efficacy of plaque-reducing therapies. Elasticity is usually quantified by compliance measurements, for which the relative temporal change in the vessel cross-sectional area throughout the cardiac cycle has to be determined. In this work we determined and compared the compliance between three magnetic resonance (MR) methods (FLASH, TrueFISP and pulse-wave). Since manual outlining of the aortic wall area is a very time-consuming process and depends on an operator's variability, an algorithm for the automatic segmentation of the artery wall from MR images through the entire heart cycle is presented. The reliable detection of the artery cross-sectional area over the whole heart cycle was possible with a relative error of about 1%. Optimizing the temporal resolution to 60 ms, we obtained a relative error in compliance of about 7% from TrueFISP (1.0 × 1.0 × 10 mm³, signal-to-noise ratio (SNR) > 12) and FLASH (0.7 × 0.7 × 10 mm³, SNR > 12) measurements in volunteers. Pulse-wave measurements yielded an error of more than 9%. In a study of ten volunteers, a compliance between C = 3 × 10⁻⁵ Pa⁻¹ and C = 8 × 10⁻⁵ Pa⁻¹ was determined, depending on age. The results of the TrueFISP and the pulse-wave measurements agreed very well with one another (confidence interval of 1 × 10⁻⁵ Pa⁻¹), while the results of the FLASH method deviated more clearly from the TrueFISP and pulse-wave results (confidence interval of more than 2 × 10⁻⁵ Pa⁻¹).
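    The abstract does not spell out the compliance formula; a common distensibility-style definition, assumed here purely for illustration, is C = (A_max − A_min) / (A_min · Δp), where the A values come from the segmented cross-sectional area curve and Δp is the pulse pressure:

```python
import math

def aortic_compliance(area_curve, pulse_pressure):
    """Compliance C = (A_max - A_min) / (A_min * dp) from a segmented
    cross-sectional area curve (e.g. mm^2 over one cardiac cycle) and
    the pulse pressure dp in Pa. This is a standard definition assumed
    for illustration; the paper's exact formula is not quoted in the
    abstract."""
    a_min, a_max = min(area_curve), max(area_curve)
    return (a_max - a_min) / (a_min * pulse_pressure)

# Synthetic area waveform over one cardiac cycle: 400 -> 500 mm^2
areas = [400 + 50 * (1 - math.cos(2 * math.pi * t / 20)) for t in range(20)]
compliance = aortic_compliance(areas, 5332.0)  # 40 mmHg pulse pressure, in Pa
```

With these made-up numbers the result lands in the 3-8 × 10⁻⁵ Pa⁻¹ range the study reports for volunteers, which is only meant to show the orders of magnitude involved.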

  13. Aging and Confidence Judgments in Item Recognition

    Science.gov (United States)

    Voskuilen, Chelsea; Ratcliff, Roger; McKoon, Gail

    2018-01-01

    We examined the effects of aging on performance in an item-recognition experiment with confidence judgments. A model for confidence judgments and response time (RTs; Ratcliff & Starns, 2013) was used to fit a large amount of data from a new sample of older adults and a previously reported sample of younger adults. This model of confidence…

  14. Time interval measurement between two emissions: a systematics

    International Nuclear Information System (INIS)

    Bizard, G.; Bougault, R.; Brou, R.; Colin, J.; Durand, D.; Genoux-Lubain, A.; Horn, D.; Kerambrun, A.; Laville, J.L.; Le Brun, C.; Lecolley, J.F.; Lopez, O.; Louvel, M.; Mahi, M.; Meslin, C.; Steckmeyer, J.C.; Tamain, B.; Wieloch, A.

    1998-01-01

    A systematic study of the evolution of fragment emission time intervals as a function of the energy deposited in the compound system was performed. Several measurements are presented: Ne at 60 MeV/u, Ar at 30 and 60 MeV/u, and two measurements for Kr at 60 MeV/u (central and semi-peripheral collisions). In all the experiments the target was Au and the mass of the compound system was around A = 200. The excitation energies per nucleon reached in these heavy systems cover the range of 3 to 5.5 MeV/u. The method used to determine the emission time intervals is based on the correlation functions associated with the relative angle distributions; the gaps between the data and simulations allow the emission times to be evaluated. A rapid decrease of these time intervals was observed as the excitation energy increased. This variation starts at 500 fm/c, which corresponds to a sequential emission. This relatively long time, which indicates a weak interaction between fragments, corresponds practically to the measurement threshold. The shortest intervals (about 50 fm/c) are associated with a spontaneous multifragmentation and were observed in central Ar+Au and Kr+Au collisions at 60 MeV/u. Two interpretations are possible: the multifragmentation process might be viewed as a sequential process with very short time separations, or else one can separate two regimes, bearing in mind that multifragmentation predominates from 4.5 MeV/u excitation energy upwards. This question is still open and its study is under way at LPC. An answer could come from the study of the rupture process of an excited nucleus, notably by the determination of its lifetime.

  15. HORMONAL RESPONSE TO DIFFERENT REST INTERVALS DURING RESISTANCE TRAINING WITH LIGHT LOADS

    Directory of Open Access Journals (Sweden)

    Payam Mohamad-Panahi

    2014-02-01

    Full Text Available Purpose: The purpose of the present study was to determine the appropriate rest time between sets during weight training with light loads. Material: Seventeen cadet wrestlers (age = 16.7±0.6 yrs; height = 169.2±8.2 cm; weight = 51.4±7.9 kg) were recruited from wrestling clubs in the Iranian province of Kurdistan and served as subjects in this study. This study was conducted over seven sessions with 48 hours of recovery between sessions. In the first session, the characteristic features of the subjects were recorded and the one repetition maximum (1RM) in the bench press was determined for each subject. On 6 separate occasions, subjects performed 4 sets of bench press at 60% 1RM with 90- or 240-second rest intervals until volitional fatigue. The number of repetitions performed by the subjects, as well as cortisol and testosterone levels and 1RM, was recorded. The results showed that there was a significant difference in the sustainability of repetitions during 4 sets of bench press with a 60% load between the 90- and 240-second rest intervals (rest interval effect) (p < 0.05), as well as with a 90% load. Results: Additionally, there was a significant difference in the sustainability of repetitions during 4 sets of bench press at both the 90- and 240-second rest intervals between light and heavy loads (load effect). Plasma cortisol concentrations significantly increased after all bench press trials. The rest interval effect was statistically significant in both the 60% and 90% load trials, but the load effect was statistically significant only in the 90-second rest interval trial (p < 0.05). In contrast, plasma testosterone concentrations significantly increased after 4 sets of bench press only in the 90-second rest interval with the heavy load and the 240-second rest interval with the light load (p < 0.05). Accordingly, the testosterone to cortisol (T:C) ratio significantly decreased after 4 sets of bench press in the 90-second rest interval with the light load and the 240-second rest interval with the heavy load.

  17. Myocardial perfusion magnetic resonance imaging using sliding-window conjugate-gradient highly constrained back-projection reconstruction for detection of coronary artery disease.

    Science.gov (United States)

    Ma, Heng; Yang, Jun; Liu, Jing; Ge, Lan; An, Jing; Tang, Qing; Li, Han; Zhang, Yu; Chen, David; Wang, Yong; Liu, Jiabin; Liang, Zhigang; Lin, Kai; Jin, Lixin; Bi, Xiaoming; Li, Kuncheng; Li, Debiao

    2012-04-15

    Myocardial perfusion magnetic resonance imaging (MRI) with sliding-window conjugate-gradient highly constrained back-projection reconstruction (SW-CG-HYPR) allows whole left ventricular coverage, improved temporal and spatial resolution and signal/noise ratio, and reduced cardiac motion-related image artifacts. The accuracy of this technique for detecting coronary artery disease (CAD) has not been determined in a large number of patients. We prospectively evaluated the diagnostic performance of myocardial perfusion MRI with SW-CG-HYPR in patients with suspected CAD. A total of 50 consecutive patients who were scheduled for coronary angiography with suspected CAD underwent myocardial perfusion MRI with SW-CG-HYPR at 3.0 T. The perfusion defects were interpreted qualitatively by 2 blinded observers and were correlated with x-ray angiographic stenoses ≥50%. The prevalence of CAD was 56%. In the per-patient analysis, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of SW-CG-HYPR were 96% (95% confidence interval 82% to 100%), 82% (95% confidence interval 60% to 95%), 87% (95% confidence interval 70% to 96%), 95% (95% confidence interval 74% to 100%), and 90% (95% confidence interval 82% to 98%), respectively. In the per-vessel analysis, the corresponding values were 98% (95% confidence interval 91% to 100%), 89% (95% confidence interval 80% to 94%), 86% (95% confidence interval 76% to 93%), 99% (95% confidence interval 93% to 100%), and 93% (95% confidence interval 89% to 97%), respectively. In conclusion, myocardial perfusion MRI using SW-CG-HYPR allows whole left ventricular coverage and high resolution and has high diagnostic accuracy in patients with suspected CAD. Copyright © 2012 Elsevier Inc. All rights reserved.
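    The per-patient counts are not stated explicitly; assuming 28 of 50 patients had CAD (the 56% prevalence), a sensitivity of 96% and specificity of 82% correspond to roughly 27/28 and 18/22. A closed-form Wilson score interval - a stdlib-only approximation, whereas the paper's intervals look like exact binomial bounds - reproduces the reported ranges fairly closely:

```python
import math
from statistics import NormalDist

def wilson_ci(successes, n, conf=0.95):
    """Wilson score confidence interval for a binomial proportion."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Assumed reconstruction of the per-patient 2x2 table (not from the paper):
sens_lo, sens_hi = wilson_ci(27, 28)  # sensitivity ~96%, CI roughly 82-99%
spec_lo, spec_hi = wilson_ci(18, 22)  # specificity ~82%, CI roughly 61-93%
```

The small differences at the upper bounds (99% vs. the reported 100%) are exactly what one expects from Wilson versus exact Clopper-Pearson limits near a proportion of 1.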

  18. Medical Students' Confidence Judgments Using a Factual Database and Personal Memory: A Comparison.

    Science.gov (United States)

    O'Keefe, Karen M.; Wildemuth, Barbara M.; Friedman, Charles P.

    1999-01-01

    This study examined the quality of medical students' confidence estimates in answering questions in bacteriology based on personal knowledge alone and what they retrieved from a factual database in microbiology, in order to determine whether medical students can recognize when an information need has been fulfilled and when it has not. (Author/LRW)

  19. Understanding public confidence in government to prevent terrorist attacks.

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, T. E.; Ramaprasad, A.; Samsa, M. E.; Decision and Information Sciences; Univ. of Illinois at Chicago

    2008-04-02

    A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode its confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the principal metrics used to assess the consequences of terrorist attacks. Hence, a detailed understanding of the variations in public confidence among individuals, terrorist event types, and as a function of time is critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data was collected from three groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery explosion attack, and cyber intrusions on financial institutions, resulting in identity theft. Our findings are: (a) although the aggregate confidence level is low, there are optimists and pessimists; (b) the subjects are discriminating in interpreting the nature of a terrorist attack, the time horizon, and its impact; (c) confidence recovery after a terrorist event has an incubation period; and (d) the patterns of recovery of confidence of the optimists and the pessimists are different. These findings can affect the strategy and policies to manage public confidence after a terrorist event.

  20. Transparent predictive modelling of the twin screw granulation process using a compensated interval type-2 fuzzy system.

    Science.gov (United States)

    AlAlaween, Wafa' H; Khorsheed, Bilal; Mahfouf, Mahdi; Gabbott, Ian; Reynolds, Gavin K; Salman, Agba D

    2018-03-01

    In this research, a new systematic modelling framework which uses machine learning to describe the granulation process is presented. First, an interval type-2 fuzzy model is elicited in order to predict the properties of the granules produced by twin screw granulation (TSG) in the pharmaceutical industry. Second, a Gaussian mixture model (GMM) is integrated into the framework in order to characterize the error residuals emanating from the fuzzy model. This is done to refine the model by taking into account uncertainties and/or any other unmodelled behaviour, stochastic or otherwise. All proposed modelling algorithms were validated via a series of laboratory-scale experiments. The size of the granules produced by TSG was successfully predicted, with most of the predictions falling within a 95% confidence interval. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Increasing Confidence and Ability in Implementing Kangaroo Mother Care Method Among Young Mothers.

    Science.gov (United States)

    Kenanga Purbasary, Eleni; Rustina, Yeni; Budiarti, Tri

    Mothers giving birth to low birth weight babies (LBWBs) have low confidence in caring for their babies because they are often still young and may lack the knowledge, experience, and ability to care for the baby. This research aims to determine the effect of education about kangaroo mother care (KMC) on the confidence and ability of young mothers to implement KMC. The research methodology was a randomized controlled experimental approach with pre- and post-tests in equivalent groups: 13 mothers and their LBWBs in the intervention group and 13 mothers and their LBWBs in the control group. Data were collected via an instrument measuring young mothers' confidence, the validity and reliability of which have been tested (r = .941), and an observation sheet on KMC implementation. After the education, the confidence scores of the young mothers and their ability to perform KMC increased meaningfully. The confidence score of young mothers before education was 37 (p = .1555), and the ability score for KMC implementation before education was 9 (p = .1555). The median confidence score after education in the intervention group was 87 and in the control group was 50 (p = .001, 95% CI 60.36-75.56), and the median ability score for KMC implementation after education in the intervention group was 16 and in the control group was 12 (p = .001, 95% CI 1.50-1.88). KMC education should be conducted gradually, and it is necessary to involve the family in order for KMC implementation to continue at home. A family visit can be done for LBWBs to evaluate the ability of the young mothers to implement KMC.

  2. Functional Dissociation of Confident and Not-Confident Errors in the Spatial Delayed Response Task Demonstrates Impairments in Working Memory Encoding and Maintenance in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Jutta S. Mayer

    2018-05-01

    Full Text Available Even though extensively investigated, the nature of working memory (WM) deficits in patients with schizophrenia (PSZ) is not yet fully understood. In particular, the contribution of different WM sub-processes to the severe WM deficit observed in PSZ is a matter of debate. So far, most research has focused on impaired WM maintenance. By analyzing different types of errors in a spatial delayed response task (DRT), we have recently demonstrated that incorrect yet confident responses (which we labeled false memory errors) rather than incorrect/not-confident responses reflect failures of WM encoding, which was also impaired in PSZ. In the present study, we provide further evidence for a functional dissociation between confident and not-confident errors by manipulating the demands on WM maintenance, i.e., the length over which information has to be maintained in WM. Furthermore, we investigate whether these functionally distinguishable WM processes are impaired in PSZ. Twenty-four PSZ and 24 demographically matched healthy controls (HC) performed a spatial DRT in which the length of the delay period was varied between 1, 2, 4, and 6 s. In each trial, participants also rated their level of response confidence. Across both groups, longer delays led to increased rates of incorrect/not-confident responses, while incorrect/confident responses were not affected by delay length. This functional dissociation provides additional support for our proposal that false memory errors (i.e., confident errors) reflect problems at the level of WM encoding, while not-confident errors reflect failures of WM maintenance. Schizophrenic patients showed increased numbers of both confident and not-confident errors, suggesting that both sub-processes of WM—encoding and maintenance—are impaired in schizophrenia. Combined with the delay length-dependent functional dissociation, we propose that these impairments in schizophrenic patients are functionally distinguishable.

  3. Effects of confidence and anxiety on flow state in competition.

    Science.gov (United States)

    Koehn, Stefan

    2013-01-01

    Confidence and anxiety are important variables that underlie the experience of flow in sport. Specifically, research has indicated that confidence displays a positive relationship and anxiety a negative relationship with flow. The aim of this study was to assess potential direct and indirect effects of confidence and anxiety dimensions on flow state in tennis competition. A sample of 59 junior tennis players completed measures of Competitive State Anxiety Inventory-2d and Flow State Scale-2. Following predictive analysis, results showed significant positive correlations between confidence (intensity and direction) and anxiety symptoms (only directional perceptions) with flow state. Standard multiple regression analysis indicated confidence as the only significant predictor of flow. The results confirmed a protective function of confidence against debilitating anxiety interpretations, but there were no significant interaction effects between confidence and anxiety on flow state.

  4. Measurement of subcritical multiplication by the interval distribution method

    International Nuclear Information System (INIS)

    Nelson, G.W.

    1985-01-01

    The prompt decay constant or the subcritical neutron multiplication may be determined by measuring the distribution of the time intervals between successive neutron counts. The distribution data are analyzed by least-squares fitting to a theoretical distribution function derived from a point-reactor probability model. Published results of measurements with one- and two-detector systems are discussed. Data collection times are shorter, and statistical errors smaller, the nearer the system is to delayed critical. Several of the measurements indicate that a shorter data collection time and higher accuracy are possible with the interval distribution method than with the Feynman variance method.
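    A much-simplified sketch of the fitting step: histogram the intervals between successive counts and least-squares fit the logarithm of the histogram. For an uncorrelated Poisson source the interval density is rate·exp(−rate·t), so the slope recovers the count rate; the real method fits a point-reactor distribution function rather than a bare exponential, and this code is only an illustration:

```python
import math
import random

def fit_interval_rate(intervals, n_bins=40, min_count=10):
    """Histogram inter-count intervals and fit log(count) vs. time by
    ordinary least squares. For a simple Poisson source the slope is
    -rate; bins with fewer than min_count events are dropped to keep
    the log-linear fit from being dominated by tail noise."""
    t_max = max(intervals)
    width = t_max / n_bins
    counts = [0] * n_bins
    for t in intervals:
        counts[min(int(t / width), n_bins - 1)] += 1
    xs = [(i + 0.5) * width for i, c in enumerate(counts) if c >= min_count]
    ys = [math.log(c) for c in counts if c >= min_count]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Simulated detector: 20000 intervals from a Poisson process at 5 counts/s
rng = random.Random(2)
intervals = [rng.expovariate(5.0) for _ in range(20000)]
estimated_rate = fit_interval_rate(intervals)  # close to 5.0
```

In the subcritical case the fitted decay constant, not a bare count rate, carries the multiplication information, but the histogram-and-fit mechanics are the same.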

  5. Persistent fluctuations in stride intervals under fractal auditory stimulation.

    Science.gov (United States)

    Marmelat, Vivien; Torre, Kjerstin; Beek, Peter J; Daffertshofer, Andreas

    2014-01-01

    Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence be beneficial for stabilizing gait. Complex systems tend to match their correlation structure when synchronizing. In gait training, can one capitalize on this tendency by using a fractal metronome rather than an isochronous one? We examined whether auditory cues with fractal variations in inter-beat intervals yield similar fractal inter-stride interval variability as isochronous auditory cueing in two complementary experiments. In Experiment 1, participants walked on a treadmill while being paced by either an isochronous or a fractal metronome with different variation strengths between beats in order to test whether participants managed to synchronize with a fractal metronome and to determine the necessary amount of variability for participants to switch from anti-persistent to persistent inter-stride intervals. Participants did synchronize with the metronome despite its fractal randomness. The corresponding coefficient of variation of inter-beat intervals was fixed in Experiment 2, in which participants walked on a treadmill while being paced by non-isochronous metronomes with different scaling exponents. As expected, inter-stride intervals showed persistent correlations similar to self-paced walking only when cueing contained persistent correlations. Our results open up a new window to optimize rhythmic auditory cueing for gait stabilization by integrating fractal fluctuations in the inter-beat intervals.

  6. Family Partner Intervention Influences Self-Care Confidence and Treatment Self-Regulation in Patients with Heart Failure

    Science.gov (United States)

    Stamp, Kelly D.; Dunbar, Sandra B.; Clark, Patricia C.; Reilly, Carolyn M.; Gary, Rebecca A.; Higgins, Melinda; Ryan, Richard M

    2015-01-01

    Background Heart failure self-care requires confidence in one’s ability and motivation to perform a recommended behavior. Most self-care occurs within a family context, yet little is known about the influence of family on heart failure self-care or motivating factors. Aims To examine the association of family functioning and the self-care antecedents of confidence and motivation among heart failure participants and determine if a family partnership intervention would promote higher levels of perceived confidence and treatment self-regulation (motivation) at four and eight months compared to patient-family education or usual care groups. Methods Heart failure patients (N = 117) and a family member were randomized to a family partnership intervention, patient-family education or usual care groups. Measures of patient’s perceived family functioning, confidence, motivation for medications and following a low-sodium diet were analyzed. Data were collected at baseline, four and eight months. Results Family functioning was related to self-care confidence for diet (p=.02) and autonomous motivation for adhering to their medications (p=.05 and diet p=0.2). The family partnership intervention group significantly improved confidence (p=.05) and motivation (medications (p=.004; diet p=.012) at four months whereas patient-family education group and usual care did not change. Conclusion Perceived confidence and motivation for self-care was enhanced by family partnership intervention, regardless of family functioning. Poor family functioning at baseline contributed to lower confidence. Family functioning should be assessed to guide tailored family-patient interventions for better outcomes. PMID:25673525

  7. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

    Science.gov (United States)

    Dobolyi, David G; Dodson, Chad S

    2013-12-01

    Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  8. Preservice teachers' perceived confidence in teaching school violence prevention.

    Science.gov (United States)

    Kandakai, Tina L; King, Keith A

    2002-01-01

    To examine preservice teachers' perceived confidence in teaching violence prevention and the potential effect of violence-prevention training on preservice teachers' confidence in teaching violence prevention. Six Ohio universities participated in the study. More than 800 undergraduate and graduate students completed surveys. Violence-prevention training, area of certification, and location of student- teaching placement significantly influenced preservice teachers' perceived confidence in teaching violence prevention. Violence-prevention training positively influences preservice teachers' confidence in teaching violence prevention. The results suggest that such training should be considered as a requirement for teacher preparation programs.

  9. Prognostic Significance Of QT Interval Prolongation In Adult ...

    African Journals Online (AJOL)

    Prognostic survival studies for heart-rate corrected QT interval in patients with chronic heart failure are few; although these patients are known to have a high risk of sudden cardiac death. This study was aimed at determining the mortality risk associated with prolonged QTc in Nigerians with heart failure. Ninety-six ...

  10. Self Confidence Spillovers and Motivated Beliefs

    DEFF Research Database (Denmark)

    Banerjee, Ritwik; Gupta, Nabanita Datta; Villeval, Marie Claire

    Is success in a task used strategically by individuals to motivate their beliefs prior to taking action in a subsequent, unrelated, task? Also, is the distortion of beliefs reinforced for individuals who have lower status in society? Conducting an artefactual field experiment in India, we show that success when competing in a task increases the performers’ self-confidence and competitiveness in the subsequent task. We also find that such spillovers affect the self-confidence of low-status individuals more than that of high-status individuals. Receiving good news under Affirmative Action, however...

  11. Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.

    Science.gov (United States)

    Lo, Y C; Armbruster, David A

    2012-04-01

    Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers and testing samples from a sufficiently large number of healthy reference individuals. Reference intervals cited from the historical literature are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP Evaluator software program, which is based on the CLSI/IFCC C28-A guideline and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for the genders combined, and gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to adapt the intervals for their facilities using the reference interval transference technique, based on a method comparison study.
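    In the nonparametric case, the 95% central range defined by the C28-A guideline is simply the span from the 2.5th to the 97.5th percentile of the reference sample. A minimal stdlib-only sketch, including the per-gender/combined partitioning described above (function names are illustrative):

```python
import statistics

def reference_interval(values):
    """Nonparametric 95% central range: the 2.5th and 97.5th
    percentiles, matching the C28-A definition of a reference
    interval. With n=40 quantile cuts, cut 1/40 is the 2.5th
    percentile and cut 39/40 the 97.5th."""
    cuts = statistics.quantiles(values, n=40, method="inclusive")
    return cuts[0], cuts[-1]

def partitioned_intervals(samples):
    """samples: iterable of (gender, value) pairs. Returns per-gender
    intervals plus a combined-gender interval, so either can be
    adopted per analyte as in the study."""
    by_gender = {}
    for gender, value in samples:
        by_gender.setdefault(gender, []).append(value)
    result = {g: reference_interval(v) for g, v in by_gender.items()}
    result["combined"] = reference_interval([v for _, v in samples])
    return result
```

C28-A also recommends a minimum of about 120 reference individuals per partition for nonparametric limits, so a function like this should only be applied to adequately sized groups.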

  12. Alternative confidence measure for local matching stereo algorithms

    CSIR Research Space (South Africa)

    Ndhlovu, T

    2009-11-01

    Full Text Available The authors present a confidence measure applied to individual disparity estimates in local matching stereo correspondence algorithms. It aims at identifying textureless areas, where most local matching algorithms fail. The confidence measure works...

  13. Coping skills: role of trait sport confidence and trait anxiety.

    Science.gov (United States)

    Cresswell, Scott; Hodge, Ken

    2004-04-01

    The current research assesses relationships among coping skills, trait sport confidence, and trait anxiety. Two samples (n=47 and n=77) of international competitors from surf life saving (M=23.7 yr.) and touch rugby (M=26.2 yr.) completed the Athletic Coping Skills Inventory, Trait Sport Confidence Inventory, and Sport Anxiety Scale. Analysis yielded significant correlations among trait anxiety, sport confidence, and coping. Specifically, confidence scores were positively associated with coping-with-adversity scores, while anxiety scores were negatively associated. These findings support the inclusion of the personality characteristics of confidence and anxiety within the coping model presented by Hardy, Jones, and Gould. Researchers should be aware that confidence and anxiety may influence the coping processes of athletes.

  14. Reclaim your creative confidence.

    Science.gov (United States)

    Kelley, Tom; Kelley, David

    2012-12-01

    Most people are born creative. But over time, a lot of us learn to stifle those impulses. We become warier of judgment, more cautious, more analytical. The world seems to divide into "creatives" and "noncreatives," and too many people resign themselves to the latter category. And yet we know that creativity is essential to success in any discipline or industry. The good news, according to authors Tom Kelley and David Kelley of IDEO, is that we all can rediscover our creative confidence. The trick is to overcome the four big fears that hold most of us back: fear of the messy unknown, fear of judgment, fear of the first step, and fear of losing control. The authors use an approach based on the work of psychologist Albert Bandura in helping patients get over their snake phobias: You break challenges down into small steps and then build confidence by succeeding on one after another. Creativity is something you practice, say the authors, not just a talent you are born with.

  15. Ambulatory Function and Perception of Confidence in Persons with Stroke with a Custom-Made Hinged versus a Standard Ankle Foot Orthosis

    Directory of Open Access Journals (Sweden)

    Angélique Slijper

    2012-01-01

    Full Text Available Objective. The aim was to compare walking with an individually designed dynamic hinged ankle foot orthosis (DAFO) and a standard carbon composite ankle foot orthosis (C-AFO). Methods. Twelve participants, mean age 56 years (range 26–72), with hemiparesis due to stroke were included in the study. During the six-minute walk test (6MW), walking velocity, the Physiological Cost Index (PCI), and the degree of experienced exertion were measured with a DAFO and C-AFO, respectively, followed by a Stairs Test in which velocity and perceived confidence were rated. Results. The mean differences in favor of the DAFO were in the 6MW 24.3 m (95% confidence interval [CI] 4.90, 43.76), PCI −0.09 beats/m (95% CI −0.27, 0.95), velocity 0.04 m/s (95% CI −0.01, 0.097), and in the Stairs Test −11.8 s (95% CI −19.05, −4.48). All participants except one rated the degree of experienced exertion as lower and felt more confident when walking with the DAFO. Conclusions. Wearing a DAFO resulted in a longer walking distance and faster stair climbing compared to walking with a C-AFO. Eleven of twelve participants felt more confident with the DAFO, which may be more important than speed and distance, and the most important reason for prescribing an AFO.

  16. Population density, call-response interval, and survival of out-of-hospital cardiac arrest

    Directory of Open Access Journals (Sweden)

    Ogawa Toshio

    2011-04-01

    Full Text Available Abstract Background Little is known about the effects of geographic variation on outcomes of out-of-hospital cardiac arrest (OHCA). The present study investigated the relationship between population density, time between emergency call and ambulance arrival, and survival of OHCA, using the All-Japan Utstein-style registry database, coupled with geographic information system (GIS) data. Methods We examined data from 101,287 bystander-witnessed OHCA patients who received emergency medical services (EMS) through 4,729 ambulance centers in Japan between 2005 and 2007. Latitudes and longitudes of each center were determined with address-match geocoding, and linked with the Population Census data using GIS. The endpoints were 1-month survival and neurologically favorable 1-month survival, defined as Glasgow-Pittsburgh cerebral performance category 1 or 2. Results Overall 1-month survival was 7.8%. Neurologically favorable 1-month survival was 3.6%. In very low-density (…/km2) and very high-density (≥10,000/km2) areas, the mean call-response intervals were 9.3 and 6.2 minutes, 1-month survival rates were 5.4% and 9.1%, and neurologically favorable 1-month survival rates were 2.7% and 4.3%, respectively. After adjustment for age, sex, cause of arrest, first aid by bystander, and the proportion of neighborhood elderly people ≥65 yrs, patients in very high-density areas had a significantly higher survival rate (odds ratio (OR), 1.64; 95% confidence interval (CI), 1.44 - 1.87; p …). Conclusion Living in a low-density area was associated with an independent risk of delay in ambulance response, and a low survival rate in cases of OHCA. Distribution of EMS centers according to population size may lead to inequality in health outcomes between urban and rural areas.

  17. Factors affecting midwives' confidence in intrapartum care: a phenomenological study.

    Science.gov (United States)

    Bedwell, Carol; McGowan, Linda; Lavender, Tina

    2015-01-01

    Midwives are frequently the lead providers of care for women throughout labour and birth. In order to perform their role effectively and provide women with the choices they require, midwives need to be confident in their practice. This study explores factors which may affect midwives' confidence in their practice. Hermeneutic phenomenology formed the theoretical basis for the study. Prospective longitudinal data collection was completed using diaries and semi-structured interviews. Twelve midwives providing intrapartum care in a variety of settings were recruited to ensure a variety of experiences in different contexts were captured. The principal factor affecting workplace confidence, both positively and negatively, was the influence of colleagues. Perceived autonomy and a sense of familiarity could also enhance confidence. However, conflict in the workplace was a critical factor in reducing midwives' confidence. Confidence was an important, but fragile, phenomenon to midwives and they used a variety of coping strategies, emotional intelligence and presentation management to maintain it. This is the first study to highlight both the factors influencing midwives' workplace confidence and the strategies midwives employed to maintain their confidence. Confidence is important in maintaining well-being, and workplace culture may play a role in explaining the current low morale within the midwifery workforce. This may have implications for women's choices and care. Support, effective leadership and education may help midwives develop and sustain a positive sense of confidence. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Conquering Credibility for Monetary Policy Under Sticky Confidence

    Directory of Open Access Journals (Sweden)

    Jaylson Jair da Silveira

    2015-06-01

    Full Text Available We derive a best-reply monetary policy when the confidence of price setters in the monetary authority’s commitment to price level targeting may be both incomplete and sticky. We find that complete confidence (or full credibility) is not a necessary condition for the achievement of a price level target, even when heterogeneity in firms’ price level expectations is endogenously time-varying and may emerge as a long-run equilibrium outcome. In fact, in the absence of exogenous perturbations to the dynamics of confidence building, it is rather the achievement of a price level target for long enough that, owing to stickiness in the state of confidence, ensures the conquering of full credibility. This result has relevant implications for the conduct of monetary policy in pursuit of price stability. One implication is that setting a price level target matters more as a means to provide monetary policy with a sharper focus on price stability than as a device to conquer credibility. As regards the conquering of credibility for monetary policy, it turns out that actions speak louder than words, as the continuing achievement of price stability is what ultimately performs better as a confidence-building device.

  19. Prevalence and Determinants of Corneal Blindness in a Semi-Urban ...

    African Journals Online (AJOL)

    2017-07-26

    Jul 26, 2017 ... blindness with a prevalence of 1.1% (95% confidence interval: 0.5–1.7). Corneal blindness .... Power Holding Company of Nigeria. The local ... trauma, the redness of the eye with or without pain, history suggestive of measles,.

  20. Pediatric Reference Intervals for Free Thyroxine and Free Triiodothyronine

    Science.gov (United States)

    Jang, Megan; Guo, Tiedong; Soldin, Steven J.

    2009-01-01

    Background The clinical value of free thyroxine (FT4) and free triiodothyronine (FT3) analysis depends on the reference intervals with which they are compared. We determined age- and sex-specific reference intervals for neonates, infants, and children 0–18 years of age for FT4 and FT3 using tandem mass spectrometry. Methods Reference intervals were calculated for serum FT4 (n = 1426) and FT3 (n = 1107) obtained from healthy children between January 1, 2008, and June 30, 2008, from Children's National Medical Center and Georgetown University Medical Center Bioanalytical Core Laboratory, Washington, DC. Serum samples were analyzed using isotope dilution liquid chromatography tandem mass spectrometry (LC/MS/MS) with deuterium-labeled internal standards. Results FT4 reference intervals were very similar for males and females of all ages and ranged between 1.3 and 2.4 ng/dL for children 1 to 18 years old. FT4 reference intervals for 1- to 12-month-old infants were 1.3–2.8 ng/dL. These 2.5 to 97.5 percentile intervals were much tighter than reference intervals obtained using immunoassay platforms (0.48–2.78 ng/dL for males and 0.85–2.09 ng/dL for females). Similarly, FT3 intervals were consistent and similar for males and females and for all ages, ranging between 1.5 pg/mL and approximately 6.0 pg/mL for children 1 month of age to 18 years old. Conclusions This is the first study to provide pediatric reference intervals of FT4 and FT3 for children from birth to 18 years of age using LC/MS/MS. Analysis using LC/MS/MS provides more specific quantification of thyroid hormones. A comparison of the ultrafiltration tandem mass spectrometric method with equilibrium dialysis showed very good correlation. PMID:19583487

  1. Uncertainty Determination Methodology, Sampling Maps Generation and Trend Studies with Biomass Thermogravimetric Analysis

    Science.gov (United States)

    Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín

    2010-01-01

    This paper investigates a method for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG analysis) for several lignocellulosic materials (ground olive stone, almond shell, pine pellets and oak pellets), completing previous work of the same authors. A comparison has been made between results of TG analysis and prompt analysis. Levels of uncertainty and errors were obtained, demonstrating that properties evaluated by TG analysis were representative of the overall fuel composition, and no correlation between prompt and TG analysis exists. Additionally, a study of trends and time correlations is indicated. These results are particularly interesting for biomass energy applications. PMID:21152292
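
    The paper's own uncertainty methodology is not reproduced here; as a generic illustration, a confidence interval for the mean of replicate measurements can be computed as follows (the ash-content values are hypothetical, and the normal quantile slightly understates the width for very few replicates):

```python
from statistics import NormalDist, mean, stdev

def mean_confidence_interval(xs, confidence=0.95):
    """Normal-approximation confidence interval for the mean of
    replicate measurements (a Student-t quantile would be slightly
    wider for very small n)."""
    half = (NormalDist().inv_cdf(0.5 + confidence / 2.0)
            * stdev(xs) / len(xs) ** 0.5)
    return mean(xs) - half, mean(xs) + half

# Hypothetical replicate ash-content measurements (wt %):
ash = [1.02, 0.98, 1.05, 1.01, 0.99, 1.03, 1.00, 1.04]
lo, hi = mean_confidence_interval(ash)
```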

  2. Technical Report on Modeling for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    Energy Technology Data Exchange (ETDEWEB)

    McLoughlin, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-11

    The overall aim of this project is to develop a software package, called MetaQuant, that can determine the constituents of a complex microbial sample and estimate their relative abundances by analysis of metagenomic sequencing data. The goal for Task 1 is to create a generative model describing the stochastic process underlying the creation of sequence read pairs in the data set. The stages in this generative process include the selection of a source genome sequence for each read pair, with probability dependent on its abundance in the sample. The other stages describe the evolution of the source genome from its nearest common ancestor with a reference genome, breakage of the source DNA into short fragments, and the errors in sequencing the ends of the fragments to produce read pairs.
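
    The first stage of this generative process, drawing each read pair's source genome with probability proportional to its abundance, can be sketched as follows (genome names and abundances are illustrative, not MetaQuant's):

```python
import random

def sample_read_sources(abundances, n_pairs, seed=42):
    """First stage of the generative process: each read pair's source
    genome is drawn with probability proportional to its abundance."""
    rng = random.Random(seed)
    genomes = list(abundances)
    weights = [abundances[g] for g in genomes]
    return rng.choices(genomes, weights=weights, k=n_pairs)

# Hypothetical community; names and abundances are illustrative only:
community = {"genomeA": 0.7, "genomeB": 0.2, "genomeC": 0.1}
sources = sample_read_sources(community, 10_000)
```

    Inference then runs this model in reverse: given observed read pairs, estimate the abundance weights that most plausibly generated them.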

  3. Active ankle motion may result in changes to the talofibular interval in individuals with chronic ankle instability and ankle sprain copers: a preliminary study.

    Science.gov (United States)

    Croy, Theodore; Cosby, Nicole L; Hertel, Jay

    2013-08-01

    Alterations in talocrural joint arthrokinematics related to repositioning of the talus or fibula following ankle sprain have been reported in radiological and clinical studies. It is unclear if these changes can result from normal active ankle motion. The study objective was to determine if active movement created changes in the sagittal plane talofibular interval in ankles with a history of lateral ankle sprain and instability. Three subject groups [control (n = 17), ankle sprain copers (n = 20), and chronic ankle instability (n = 20)] underwent ultrasound imaging of the anterolateral ankle gutter to identify the lateral malleolus and talus over three trials. Between trials, subjects actively plantar and dorsiflexed the ankle three times. The sagittal plane talofibular interval was assessed by measuring the anteroposterior distance (mm) between the lateral malleolus and talus from an ultrasound image. Between group and trial differences were analyzed with repeated measures analysis of variance and post-hoc t-tests. Fifty-seven subjects participated. A significant group-by-trial interaction was observed (F(4,108) = 3.5; P = 0.009). The talofibular interval was increased in both copers [2.4±3.6 mm; 95% confidence interval (CI): 0.73-4.1; P = 0.007] and chronic ankle instability (4.1±4.6 mm; 95% CI: 1.9-6.2; P = 0.001) at trial 3, while no changes were observed in control ankle talar position (0.06±2.8 mm; 95% CI: -1.5 to 1.4; P = 0.93). The talofibular interval increased only in subjects with a history of lateral ankle sprain, with large clinical effect sizes observed. These findings suggest that an alteration in the position of the talus or fibula occurred with non-weight-bearing sagittal plane motion. These findings may have diagnostic and therapeutic implications for manual therapists.

  4. The antecedents and belief-polarized effects of thought confidence.

    Science.gov (United States)

    Chou, Hsuan-Yi; Lien, Nai-Hwa; Liang, Kuan-Yu

    2011-01-01

    This article investigates 2 possible antecedents of thought confidence and explores the effects of confidence induced before or during ad exposure. The results of the experiments indicate that both consumers' dispositional optimism and spokesperson attractiveness have significant effects on consumers' confidence in thoughts that are generated after viewing the advertisement. Higher levels of thought confidence will influence the quality of the thoughts that people generate, lead to either positively or negatively polarized message processing, and therefore induce better or worse advertising effectiveness, depending on the valence of thoughts. The authors posit the belief-polarization hypothesis to explain these findings.

  5. Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients

    Science.gov (United States)

    Andersson, Björn; Xin, Tao

    2018-01-01

    In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…

  6. Complete Blood Count Reference Intervals for Healthy Han Chinese Adults

    Science.gov (United States)

    Mu, Runqing; Guo, Wei; Qiao, Rui; Chen, Wenxiang; Jiang, Hong; Ma, Yueyun; Shang, Hong

    2015-01-01

    Background Complete blood count (CBC) reference intervals are important to diagnose diseases, screen blood donors, and assess overall health. However, current reference intervals established by older instruments and technologies and those from American and European populations are not suitable for Chinese samples due to ethnic, dietary, and lifestyle differences. The aim of this multicenter collaborative study was to establish CBC reference intervals for healthy Han Chinese adults. Methods A total of 4,642 healthy individuals (2,136 males and 2,506 females) were recruited from six clinical centers in China (Shenyang, Beijing, Shanghai, Guangzhou, Chengdu, and Xi’an). Blood samples collected in K2EDTA anticoagulant tubes were analyzed. Analysis of variance was performed to determine differences in consensus intervals according to the use of data from the combined sample and selected samples. Results Median and mean platelet counts from the Chengdu center were significantly lower than those from other centers. Red blood cell count (RBC), hemoglobin (HGB), and hematocrit (HCT) values were higher in males than in females at all ages. Other CBC parameters showed no significant instrument-, region-, age-, or sex-dependent difference. Thalassemia carriers were found to affect the lower or upper limit of different RBC profiles. Conclusion We were able to establish consensus intervals for CBC parameters in healthy Han Chinese adults. RBC, HGB, and HCT intervals were established for each sex. The reference interval for platelets for the Chengdu center should be established independently. PMID:25769040

  7. INTERVALS OF ACTIVE PLAY AND BREAK IN BASKETBALL GAMES

    Directory of Open Access Journals (Sweden)

    Pavle Rubin

    2010-09-01

    Full Text Available The research problem stems from the need to decompose a basketball game. The aim was to determine the intervals of active play (“live ball”, a term defined by the rules) and breaks (“dead ball”, a term defined by the rules) by analyzing basketball games. In order to obtain the relevant information, basketball games from five different top-level competitions were analyzed. The sample consists of seven games played in the 2006/2007 season: the NCAA Play-Off final game, the Adriatic League final, the ULEB Cup final game, the Euroleague (2 games) and the NBA league (2 games). The most important information gained by this research is that the average interval of active play lasts approximately 47 seconds, while the average break interval lasts approximately 57 seconds. This information is significant for coaches and should be used in planning the training process.

  8. Confidence in Phase Definition for Periodicity in Genes Expression Time Series.

    Science.gov (United States)

    El Anbari, Mohammed; Fadda, Abeer; Ptitsyn, Andrey

    2015-01-01

    Circadian oscillation in baseline gene expression plays an important role in the regulation of multiple cellular processes. Most of the knowledge of circadian gene expression is based on studies measuring gene expression over time. Our ability to dissect molecular events in time is determined by the sampling frequency of such experiments. However, the real peaks of gene activity can be at any time on or between the time points at which samples are collected. Thus, some genes with a peak activity near the observation point have their phase of oscillation detected with better precision than those which peak between observation time points. Separating genes for which we can confidently identify peak activity from ambiguous genes can improve the analysis of time series gene expression. In this study we propose a new statistical method to quantify the phase confidence of circadian genes. The numerical performance of the proposed method has been tested using three real gene expression data sets.
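
    The abstract does not specify the authors' statistical method; a standard cosinor (least-squares cosine) fit is one common way to estimate the peak phase from expression sampled at discrete time points:

```python
import numpy as np

def cosinor_phase(t, y, period=24.0):
    """Least-squares cosinor fit y ~ M + a*cos(wt) + b*sin(wt);
    returns the acrophase (peak time) in the units of t."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    _, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    return (np.arctan2(b, a) / w) % period

# Synthetic expression series peaking at hour 8, sampled every 4 h:
t = np.arange(0.0, 48.0, 4.0)
y = 10.0 + 3.0 * np.cos(2.0 * np.pi * (t - 8.0) / 24.0) + 0.1 * np.sin(1.7 * t)
peak = cosinor_phase(t, y)
```

    The 4 h sampling grid illustrates the paper's point: the fitted acrophase can land anywhere between observation times, and its uncertainty grows when the true peak falls between samples.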

  9. An experimental determination of the drag coefficient of a Men's 8+ racing shell.

    Science.gov (United States)

    Buckmann, James G; Harris, Samuel D

    2014-01-01

    This study centered on an experimental analysis of a Men's Lightweight Eight racing shell and, specifically, on determining an approximation for its drag coefficient. A testing procedure was employed that used a Global Positioning System (GPS) unit in order to determine the acceleration of and drag force on the shell, and through calculations yield a drag coefficient. The testing was run over several days in numerous conditions, and a 95% confidence interval was established to capture the results. The results obtained over these varying trials maintained a successful level of consistency. The significance of this study transcends the determination of an approximation for the drag coefficient of the racing shell; it defined a successful means of quantifying the performance of the shell itself. The testing procedures outlined in the study represent a uniform means of evaluating the factors that influence drag on the shell, and thus influence speed.
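
    A minimal sketch of this kind of calculation, recovering a drag coefficient from speed decay during a coast-down under an assumed quadratic drag law; the mass, wetted area, and water density are illustrative assumptions, not the study's values:

```python
import numpy as np

RHO_WATER = 1000.0   # kg/m^3
MASS = 750.0         # kg; illustrative shell-plus-crew mass (assumption)
AREA = 8.0           # m^2; illustrative wetted surface area (assumption)

def drag_coefficient(times, speeds):
    """Coast-down estimate: F = m*|dv/dt|, C_d = 2F/(rho*A*v^2),
    averaged over GPS-style speed samples."""
    v = np.asarray(speeds, dtype=float)
    dvdt = np.gradient(v, np.asarray(times, dtype=float))
    return float(np.mean(2.0 * MASS * np.abs(dvdt) / (RHO_WATER * AREA * v ** 2)))

# Synthetic coast-down generated with a known C_d, to show recovery.
# dv/dt = -(k/m) v^2 with k = 0.5*rho*A*C_d gives v(t) = v0/(1 + v0*k*t/m).
cd_true = 0.003
k = 0.5 * RHO_WATER * AREA * cd_true
t = np.linspace(0.0, 10.0, 201)
v = 5.0 / (1.0 + 5.0 * k * t / MASS)
cd_est = drag_coefficient(t, v)
```

    Repeating the estimate over several runs and conditions, as the study did, then yields a distribution of C_d values from which a 95% confidence interval can be formed.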

  10. Interval stability for complex systems

    Science.gov (United States)

    Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.

    2018-04-01

    Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable of disrupting the stable regime for a given interval of time. The suggested measures provide important information about the system's susceptibility to external perturbations, which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various nature, such as power grids and neural networks.
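
    A Monte Carlo estimate of interval basin stability for a simple system (a damped pendulum, not one of the authors' examples): sample random perturbed states and count the fraction that return to rest within the given time interval.

```python
import math
import random

def returns_within(theta0, omega0, T=20.0, dt=0.01, damping=0.5):
    """Integrate a damped pendulum th'' = -damping*th' - sin(th) from a
    perturbed state; report whether it is back near rest by time T."""
    th, om = theta0, omega0
    for _ in range(int(T / dt)):
        th, om = th + om * dt, om + (-damping * om - math.sin(th)) * dt
    # rest = hanging down (any multiple of 2*pi) with negligible velocity
    return abs(math.remainder(th, 2.0 * math.pi)) < 0.1 and abs(om) < 0.1

def interval_basin_stability(n=500, max_kick=3.0, seed=1):
    """IBS estimate: fraction of random perturbations (in angle and
    velocity) from which the system returns within the time interval."""
    rng = random.Random(seed)
    return sum(returns_within(rng.uniform(-max_kick, max_kick),
                              rng.uniform(-max_kick, max_kick))
               for _ in range(n)) / n

ibs = interval_basin_stability()
```

    Shrinking T turns the same machinery into a probe for the interval stability threshold: the smallest kick for which the return test starts to fail.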

  11. BRIDGING GAPS BETWEEN ZOO AND WILDLIFE MEDICINE: ESTABLISHING REFERENCE INTERVALS FOR FREE-RANGING AFRICAN LIONS (PANTHERA LEO).

    Science.gov (United States)

    Broughton, Heather M; Govender, Danny; Shikwambana, Purvance; Chappell, Patrick; Jolles, Anna

    2017-06-01

    The International Species Information System has set forth an extensive database of reference intervals for zoologic species, allowing veterinarians and game park officials to distinguish normal health parameters from underlying disease processes in captive wildlife. However, several recent studies comparing reference values from captive and free-ranging animals have found significant variation between populations, necessitating the development of separate reference intervals in free-ranging wildlife to aid in the interpretation of health data. Thus, this study characterizes reference intervals for six biochemical analytes, eleven hematologic or immune parameters, and three hormones using samples from 219 free-ranging African lions ( Panthera leo ) captured in Kruger National Park, South Africa. Using the original sample population, exclusion criteria based on physical examination were applied to yield a final reference population of 52 clinically normal lions. Reference intervals were then generated via 90% confidence intervals on log-transformed data using parametric bootstrapping techniques. In addition to the generation of reference intervals, linear mixed-effect models and generalized linear mixed-effect models were used to model associations of each focal parameter with the following independent variables: age, sex, and body condition score. Age and sex were statistically significant drivers for changes in hepatic enzymes, renal values, hematologic parameters, and leptin, a hormone related to body fat stores. Body condition was positively correlated with changes in monocyte counts. Given the large variation in reference values taken from captive versus free-ranging lions, it is our hope that this study will serve as a baseline for future clinical evaluations and biomedical research targeting free-ranging African lions.
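
    The reference-limit procedure described, a parametric bootstrap on log-transformed data with confidence intervals on each limit, can be sketched as follows (synthetic values, not the lion data):

```python
import numpy as np

def bootstrap_reference_limits(values, n_boot=2000, coverage=0.95,
                               ci=0.90, seed=0):
    """Parametric bootstrap on log-transformed data: fit a normal to
    log(x), resample, and return `ci` confidence intervals for the
    lower and upper reference limits (back-transformed)."""
    rng = np.random.default_rng(seed)
    logs = np.log(np.asarray(values, dtype=float))
    mu, sd, n = logs.mean(), logs.std(ddof=1), logs.size
    tail = (1.0 - coverage) / 2.0
    los, his = [], []
    for _ in range(n_boot):
        resample = rng.normal(mu, sd, size=n)
        los.append(np.exp(np.quantile(resample, tail)))
        his.append(np.exp(np.quantile(resample, 1.0 - tail)))
    a = (1.0 - ci) / 2.0
    return np.quantile(los, [a, 1 - a]), np.quantile(his, [a, 1 - a])

# Synthetic log-normal "analyte" values for 52 reference animals:
rng = np.random.default_rng(7)
sample = np.exp(rng.normal(1.0, 0.3, size=52))
lo_ci, hi_ci = bootstrap_reference_limits(sample)
```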

  12. Consumer’s and merchant’s confidence in internet payments

    Directory of Open Access Journals (Sweden)

    Franc Bračun

    2003-01-01

    Full Text Available Performing payment transactions over the Internet is becoming increasingly important. Whenever one interacts with others, one faces the problem of uncertainty: interacting with others makes one vulnerable, i.e. one can be betrayed. Thus, perceived risk and confidence are of fundamental importance in electronic payment transactions. A higher risk leads to greater hesitance about entering into a business relationship with a high degree of uncertainty, and therefore to an increased need for confidence. This paper has two objectives. First, it aims to introduce and test a theoretical model that predicts consumer and merchant acceptance of Internet payment solutions by explaining the complex set of relationships among the key factors influencing confidence in electronic payment transactions. Second, the paper attempts to shed light on the complex interrelationship among confidence, control and perceived risk. An empirical study was conducted to test the proposed model using data from consumers and merchants in Slovenia. The results show how perceived risk dimensions and post-transaction control influence consumers’ and merchants’ confidence in electronic payment transactions, and the impact of confidence on the adoption of mass-market on-line payment solutions.

  13. Non-Asymptotic Confidence Sets for Circular Means

    Directory of Open Access Journals (Sweden)

    Thomas Hotz

    2016-10-01

    Full Text Available The mean of data on the unit circle is defined as the minimizer of the average squared Euclidean distance to the data. Based on Hoeffding’s mass concentration inequalities, non-asymptotic confidence sets for circular means are constructed which are universal in the sense that they require no distributional assumptions. These are then compared with asymptotic confidence sets in simulations and for a real data set.
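
    The Hoeffding-based confidence sets themselves are not reproduced here, but the estimand is easy to state: the circular mean minimizing average squared Euclidean (chordal) distance is the direction of the resultant vector.

```python
import math

def circular_mean(angles):
    """Circular mean: minimizer of the average squared Euclidean
    (chordal) distance to the data; the direction of the resultant
    vector of the unit-circle points."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    if s == 0.0 and c == 0.0:
        raise ValueError("circular mean undefined: zero resultant")
    return math.atan2(s, c)

# Angles (radians) clustered around pi/2:
data = [1.3, 1.4, 1.5, 1.6, 1.7]
mu = circular_mean(data)
```

    The zero-resultant guard reflects a genuine feature of circular data: for perfectly balanced angles (e.g. 0 and pi) no unique mean direction exists.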

  14. Persistent fluctuations in stride intervals under fractal auditory stimulation.

    Directory of Open Access Journals (Sweden)

    Vivien Marmelat

    Full Text Available Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence be beneficial for stabilizing gait. Complex systems tend to match their correlation structure when synchronizing. In gait training, can one capitalize on this tendency by using a fractal metronome rather than an isochronous one? We examined whether auditory cues with fractal variations in inter-beat intervals yield similar fractal inter-stride interval variability as isochronous auditory cueing in two complementary experiments. In Experiment 1, participants walked on a treadmill while being paced by either an isochronous or a fractal metronome with different variation strengths between beats in order to test whether participants managed to synchronize with a fractal metronome and to determine the necessary amount of variability for participants to switch from anti-persistent to persistent inter-stride intervals. Participants did synchronize with the metronome despite its fractal randomness. The corresponding coefficient of variation of inter-beat intervals was fixed in Experiment 2, in which participants walked on a treadmill while being paced by non-isochronous metronomes with different scaling exponents. As expected, inter-stride intervals showed persistent correlations similar to self-paced walking only when cueing contained persistent correlations. Our results open up a new window to optimize rhythmic auditory cueing for gait stabilization by integrating fractal fluctuations in the inter-beat intervals.
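
    Persistence and anti-persistence in stride-interval series are conventionally quantified with detrended fluctuation analysis (DFA); a minimal first-order DFA sketch, applied to white noise as a sanity check rather than to gait data:

```python
import numpy as np

def dfa_alpha(series, scales=(4, 8, 16, 32, 64)):
    """First-order detrended fluctuation analysis. alpha ~ 0.5 means
    uncorrelated; > 0.5 persistent; < 0.5 anti-persistent."""
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated profile
    flucts = []
    for s in scales:
        n_win = profile.size // s
        segs = profile[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        msq = []
        for seg in segs:                       # detrend each window
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            msq.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(msq)))
    # scaling exponent = slope of log F(s) against log s
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

# Sanity check: white noise should give alpha close to 0.5
rng = np.random.default_rng(3)
alpha_wn = dfa_alpha(rng.standard_normal(4096))
```

    Self-paced stride series typically show alpha above 0.5 (persistent), while isochronously cued walking pushes alpha below 0.5, the switch the experiments above probe with fractal cueing.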

  15. Qt interval prolongation and ventricular arrhythmias in patients with chronic heart failure

    International Nuclear Information System (INIS)

    Sarwar, M.; Majeed, S.M.I.; Khan, M.A.; Majeed, S.M.

    2014-01-01

    To determine the association of QTc interval prolongation with ventricular arrhythmias in patients with chronic heart failure. Study Design: Descriptive study. Place and Duration of Study: This study was conducted at the Armed Forces Institute of Cardiology/National Institute of Heart Diseases, Rawalpindi, Pakistan, from April 2013 to August 2013. Patients and Methods: Fifty-three heart failure patients were monitored for 48 hours using ambulatory Holter electrocardiography recorders. Digital ECG data were analyzed for the QTc interval along with the frequency and severity of arrhythmias. The association of prolonged QTc interval with ventricular arrhythmias and arrhythmia severity was analyzed. Results: Cardiac arrhythmias were observed in 79.2% of patients. QT analysis revealed that 69.8% of patients had a prolonged QTc interval; 86.4% of patients with prolonged QTc had ventricular arrhythmias. Of these, 66% were found to have severe ventricular arrhythmias. Comparison of the mean QTc interval of our study population with a reference value showed that the QTc interval of our study group was significantly higher than the test value. Conclusion: Arrhythmia frequency and severity increase significantly with an increase in QTc interval in heart failure, demonstrating an association of prolonged QTc interval with a high risk of severe ventricular arrhythmias and sudden cardiac death in chronic heart failure. (author)
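
    The abstract does not state which rate-correction formula was used; Bazett's correction, the most common choice, illustrates the computation (the cutoff shown is illustrative, not the study's criterion):

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett's correction: QTc = QT / sqrt(RR), with QT in ms and the
    RR interval (time between beats) in seconds."""
    return qt_ms / rr_s ** 0.5

# Heart rate 75 bpm -> RR = 60/75 = 0.8 s
qtc = qtc_bazett(400.0, 0.8)
prolonged = qtc > 450.0   # illustrative cutoff, not the study's criterion
```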

  16. Remembering September 11th: the role of retention interval and rehearsal on flashbulb and event memory.

    Science.gov (United States)

    Shapiro, Lauren R

    2006-02-01

    Retention interval and rehearsal effects on flashbulb and event memory for 11th September 2001 (9/11) were examined. In Experiment 1, college students were assessed three times (Groups 1 and 2) or once (Group 3) over 11 weeks. In Experiment 2, three new groups assessed initially at 23 weeks (Group 4), 1 year (Group 5), or 2 years (Group 6) were compared at 1 year and at 2 years with subsamples of those assessed previously. No effects of retention interval length or rehearsal were found for flashbulb memory, which contained details at each assessment. Event memory, but not consistency, was detrimentally affected by long retention intervals, but improved with rehearsal. Recall was higher for the reception event than for the main events. Also, consistency from 1 day to 11 weeks, but not 1 year to 2 years, was higher for flashbulb memory than for event memory. Event recall was enhanced when respondents conceived of their memory as vivid, frozen, and encompassing a longer period of time. Positive correlations were found for event memory with confidence in accuracy and with rehearsal through discussion at 2 years.

  17. The relationship between confidence in charitable organizations and volunteering revisited

    NARCIS (Netherlands)

    Bekkers, René H.F.P.; Bowman, Woods

    2009-01-01

    Confidence in charitable organizations (charitable confidence) would seem to be an important prerequisite for philanthropic behavior. Previous research relying on cross-sectional data has suggested that volunteering promotes charitable confidence and vice versa. This research note, using new

  18. Disconnections Between Teacher Expectations and Student Confidence in Bioethics

    Science.gov (United States)

    Hanegan, Nikki L.; Price, Laura; Peterson, Jeremy

    2008-09-01

This study examines how student practice of scientific argumentation using socioscientific bioethics issues affects both teacher expectations of students’ general performance and student confidence in their own work. When teachers use bioethical issues in the classroom, students can gain not only biology content knowledge but also important decision-making skills. Learning bioethics through scientific argumentation gives students opportunities to express their ideas, formulate educated opinions and value others’ viewpoints. Research has shown that science teachers’ expectations of student success and knowledge directly influence student achievement and confidence levels. Our study analyzes pre-course and post-course surveys completed by students enrolled in a university-level bioethics course (n = 111) and by faculty in the College of Biology and Agriculture (n = 34), based on their perceptions of student confidence. Additionally, student data were collected from classroom observations and interviews. Data analysis showed a disconnect between faculty and student perceptions of confidence in both knowledge and the use of scientific argumentation. Students reported higher confidence levels regarding various bioethical issues than faculty attributed to them. A further disconnect appeared between students’ preferred learning styles and the faculty’s common teaching methods; students learned more by practicing scientific argumentation than by listening to traditional lectures. Students who completed a bioethics course that included practice in scientific argumentation significantly increased their confidence levels. This study suggests that professors’ expectations and teaching styles influence student confidence levels in both knowledge and scientific argumentation.

  19. Self-Confidence and Quality of Life in Women Undergoing Treatment for Breast Cancer

    Science.gov (United States)

    Shafaee, Fahimeh Sehati; Mirghafourvand, Mojgan; Harischi, Sepideh; Esfahani, Ali; Amirzehni, Jalileh

    2018-01-01

Introduction: Quality of life is an important topic in the study of chronic diseases, especially cancer, which can have a major effect on patient self-confidence. This study was conducted to determine quality of life and its relationship with self-confidence in women undergoing treatment for breast cancer. Methods: This cross-sectional, descriptive, analytical study was conducted in 2016 on 166 women with breast cancer undergoing treatment at Ghazi, Al-Zahra, International and/or Shams hospitals in Tabriz. The subjects were selected through convenience sampling. A personal-demographic questionnaire, the Cancer Quality of Life Questionnaire (QLQ-C30), and the Rosenberg Self-Esteem Scale (RSES) were completed for each patient. The data obtained were analyzed using independent t-tests, one-way ANOVA, multivariate linear regression and Pearson’s correlation coefficients. Findings: The mean total score of quality of life was 59.1±17.4, ranging from 0 to 100. The highest mean score was obtained in the cognitive subscale (74.9±23.8) and the lowest in the emotional subscale (51.4±21.1). The mean score for self-confidence was 0.3 with a standard deviation of 0.1, ranging from -1 to +1. There was a significant positive relationship between self-confidence and quality of life, except in three symptom subscales for diarrhea, constipation and loss of appetite (P < 0.05). Discussion: Given the significant relationship between quality of life and self-confidence, health care providers may need to pay special attention to women undergoing treatment for breast cancer and perform timely measures to maintain their belief in themselves. PMID:29582628

  20. Statistics of return intervals between long heartbeat intervals and their usability for online prediction of disorders

    International Nuclear Information System (INIS)

    Bogachev, Mikhail I; Bunde, Armin; Kireenkov, Igor S; Nifontov, Eugene M

    2009-01-01

We study the statistics of return intervals between large heartbeat intervals (above a certain threshold Q) in 24 h records obtained from healthy subjects. We find that both the linear and the nonlinear long-term memory inherent in the heartbeat intervals lead to power laws in the probability density function P_Q(r) of the return intervals. As a consequence, the probability W_Q(t; Δt) that at least one large heartbeat interval will occur within the next Δt heartbeat intervals, given an elapsed number of intervals t since the last large heartbeat interval, follows a power law. Based on these results, we suggest a method of obtaining a priori information about the occurrence of the next large heartbeat interval, and thus of predicting it. We show explicitly that the proposed method, which exploits long-term memory, is superior to the conventional precursory pattern recognition technique, which focuses solely on short-term memory. We believe that our results can be straightforwardly extended to obtain more reliable predictions in other physiological signals like blood pressure, as well as in other complex records exhibiting multifractal behaviour, e.g. turbulent flow, precipitation, river flows and network traffic.
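The quantity studied here, return intervals between exceedances of a threshold Q, can be extracted from a series with a few lines of code (an illustrative sketch, not the authors' analysis pipeline):

```python
import numpy as np

def return_intervals(series, q):
    """Gaps (in number of samples) between successive exceedances of q.

    `series` is a 1-D sequence (e.g. heartbeat intervals); an "event" is
    any value above the threshold q. The returned gaps are the quantity
    whose distribution P_Q(r) the abstract studies.
    """
    idx = np.flatnonzero(np.asarray(series) > q)
    return np.diff(idx)

# Toy example: events at positions 2, 5 and 9 -> return intervals 3 and 4.
r = return_intervals([0, 0, 9, 0, 0, 9, 0, 0, 0, 9], q=5)
print(r.tolist())  # [3, 4]
```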

  1. Family Health Histories and Their Impact on Retirement Confidence.

    Science.gov (United States)

    Zick, Cathleen D; Mayer, Robert N; Smith, Ken R

    2015-08-01

    Retirement confidence is a key social barometer. In this article, we examine how personal and parental health histories relate to working-age adults' feelings of optimism or pessimism about their overall retirement prospects. This study links survey data on retirement planning with information on respondents' own health histories and those of their parents. The multivariate models control for the respondents' socio-demographic and economic characteristics along with past retirement planning activities when estimating the relationships between family health histories and retirement confidence. Retirement confidence is inversely related to parental history of cancer and cardiovascular disease but not to personal health history. In contrast, retirement confidence is positively associated with both parents being deceased. As members of the public become increasingly aware of how genetics and other family factors affect intergenerational transmission of chronic diseases, it is likely that the link between family health histories and retirement confidence will intensify. © The Author(s) 2015.

  2. Multivoxel neurofeedback selectively modulates confidence without changing perceptual performance

    Science.gov (United States)

    Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Kawato, Mitsuo; Lau, Hakwan

    2016-01-01

    A central controversy in metacognition studies concerns whether subjective confidence directly reflects the reliability of perceptual or cognitive processes, as suggested by normative models based on the assumption that neural computations are generally optimal. This view enjoys popularity in the computational and animal literatures, but it has also been suggested that confidence may depend on a late-stage estimation dissociable from perceptual processes. Yet, at least in humans, experimental tools have lacked the power to resolve these issues convincingly. Here, we overcome this difficulty by using the recently developed method of decoded neurofeedback (DecNef) to systematically manipulate multivoxel correlates of confidence in a frontoparietal network. Here we report that bi-directional changes in confidence do not affect perceptual accuracy. Further psychophysical analyses rule out accounts based on simple shifts in reporting strategy. Our results provide clear neuroscientific evidence for the systematic dissociation between confidence and perceptual performance, and thereby challenge current theoretical thinking. PMID:27976739

  3. Two-sorted Point-Interval Temporal Logics

    DEFF Research Database (Denmark)

    Balbiani, Philippe; Goranko, Valentin; Sciavicco, Guido

    2011-01-01

    There are two natural and well-studied approaches to temporal ontology and reasoning: point-based and interval-based. Usually, interval-based temporal reasoning deals with points as particular, duration-less intervals. Here we develop explicitly two-sorted point-interval temporal logical framework...... whereby time instants (points) and time periods (intervals) are considered on a par, and the perspective can shift between them within the formal discourse. We focus on fragments involving only modal operators that correspond to the inter-sort relations between points and intervals. We analyze...

  4. Magnitude of cyantraniliprole residues in tomato following open field application: pre-harvest interval determination and risk assessment.

    Science.gov (United States)

    Malhat, Farag; Kasiotis, Konstantinos M; Shalaby, Shehata

    2018-02-05

Cyantraniliprole is an anthranilic diamide insecticide, belonging to the ryanoid class, with a broad range of applications against several pests. In the presented work, a reliable analytical technique employing high-performance liquid chromatography coupled with a photodiode array detector (HPLC-DAD) for analyzing cyantraniliprole residues in tomato was developed. The method was then applied to field-incurred tomato samples collected after applications under open field conditions. The latter aimed to ensure the safe application of cyantraniliprole to tomato and to contribute the derived residue data to risk assessment under field conditions. Sample preparation involved a single-step extraction with acetonitrile and sodium chloride for partitioning. The extract was purified using florisil as a cleanup reagent. The developed method was further evaluated by comparing the analytical results with those obtained using the QuEChERS technique. The novel method outbalanced QuEChERS regarding matrix interferences in the analysis, while it met all guideline criteria. It showed excellent linearity over the assayed concentrations and yielded satisfactory recovery rates in the range of 88.9 to 96.5%. The half-life of degradation of cyantraniliprole was determined to be 2.6 days. Based on the Codex MRL, the pre-harvest interval (PHI) for cyantraniliprole on tomato was 3 days after treatment at the recommended dose. To our knowledge, the present work provides the first record of PHI determination for cyantraniliprole in tomato under open field conditions in Egypt and the broader Mediterranean region.
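The reported 2.6-day half-life implies first-order (exponential) dissipation. As a hedged sketch (the initial deposit and residue limit below are hypothetical; only the half-life comes from the abstract), the waiting time for residues to fall below a limit can be computed as:

```python
import math

def residue_after(c0, half_life_days, t_days):
    # First-order decay: C(t) = C0 * exp(-k t), with k = ln 2 / half-life.
    k = math.log(2) / half_life_days
    return c0 * math.exp(-k * t_days)

def days_to_reach(c0, half_life_days, limit):
    # Smallest t with C(t) <= limit, from t = ln(C0 / limit) / k.
    k = math.log(2) / half_life_days
    return math.log(c0 / limit) / k

# Hypothetical numbers: 2 mg/kg initial residue falling to a 1 mg/kg limit
# with the 2.6-day half-life reported in the abstract (one half-life).
print(round(days_to_reach(2.0, 2.6, 1.0), 1))  # 2.6
```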

  5. High-intensity interval training: Modulating interval duration in overweight/obese men.

    Science.gov (United States)

    Smith-Ryan, Abbie E; Melvin, Malia N; Wingfield, Hailee L

    2015-05-01

High-intensity interval training (HIIT) is a time-efficient strategy shown to induce various cardiovascular and metabolic adaptations. Little is known about the optimal tolerable combination of intensity and volume necessary for adaptations, especially in clinical populations. In a randomized controlled pilot design, we evaluated the effects of two types of interval training protocols, varying in intensity and interval duration, on clinical outcomes in overweight/obese men. Twenty-five men [body mass index (BMI) > 25 kg · m(-2)] completed baseline body composition measures: fat mass (FM), lean mass (LM) and percent body fat (%BF), and fasting blood glucose, lipids and insulin (IN). A graded exercise cycling test was completed for peak oxygen consumption (VO2peak) and power output (PO). Participants were randomly assigned to high-intensity short interval (1MIN-HIIT), high-intensity interval (2MIN-HIIT) or control (CON) groups. 1MIN-HIIT and 2MIN-HIIT completed 3 weeks of cycling interval training, 3 days/week, consisting of either 10 × 1 min bouts at 90% PO with 1 min rests (1MIN-HIIT) or 5 × 2 min bouts with 1 min rests at undulating intensities (80%-100%) (2MIN-HIIT). There were no significant training effects on FM (Δ1.06 ± 1.25 kg) or %BF (Δ1.13% ± 1.88%) compared to CON. Increases in LM were not significant but amounted to 1.7 kg and 2.1 kg for the 1MIN and 2MIN-HIIT groups, respectively. Increases in VO2peak were also not significant for the 1MIN (3.4 ml · kg(-1) · min(-1)) or 2MIN groups (2.7 ml · kg(-1) · min(-1)). IN sensitivity (HOMA-IR) improved for both training groups (Δ-2.78 ± 3.48 units; p < 0.05) compared to CON. HIIT may be an effective short-term strategy to improve cardiorespiratory fitness and IN sensitivity in overweight males.

  6. Confidence Measurement in the Light of Signal Detection Theory

    Directory of Open Access Journals (Sweden)

Sébastien Massoni

    2014-12-01

We compare three alternative methods for eliciting retrospective confidence in the context of a simple perceptual task: the Simple Confidence Rating (a direct report on a numerical scale), the Quadratic Scoring Rule (a post-wagering procedure) and the Matching Probability (a generalization of the no-loss gambling method). We systematically compare the results obtained with these three rules to the theoretical confidence levels that can be inferred from performance in the perceptual task using Signal Detection Theory. We find that the Matching Probability provides better results in that respect. We conclude that Matching Probability is particularly well suited for studies of confidence that use Signal Detection Theory as a theoretical framework.
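The theoretical confidence levels mentioned here are inferred from Signal Detection Theory. As a minimal illustration of the SDT machinery (not the authors' elicitation rules), sensitivity d' can be computed from hit and false-alarm rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    # Signal Detection Theory sensitivity:
    # d' = z(hit rate) - z(false-alarm rate),
    # where z is the inverse of the standard normal CDF.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: 80% hits and 20% false alarms.
print(round(d_prime(0.8, 0.2), 2))  # 1.68
```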

  7. Characterization of Cardiac Time Intervals in Healthy Bonnet Macaques (Macaca radiata) by Using an Electronic Stethoscope

    Science.gov (United States)

    Kamran, Haroon; Salciccioli, Louis; Pushilin, Sergei; Kumar, Paraag; Carter, John; Kuo, John; Novotney, Carol; Lazar, Jason M

    2011-01-01

    Nonhuman primates are used frequently in cardiovascular research. Cardiac time intervals derived by phonocardiography have long been used to assess left ventricular function. Electronic stethoscopes are simple low-cost systems that display heart sound signals. We assessed the use of an electronic stethoscope to measure cardiac time intervals in 48 healthy bonnet macaques (age, 8 ± 5 y) based on recorded heart sounds. Technically adequate recordings were obtained from all animals and required 1.5 ± 1.3 min. The following cardiac time intervals were determined by simultaneously recording acoustic and single-lead electrocardiographic data: electromechanical activation time (QS1), electromechanical systole (QS2), the time interval between the first and second heart sounds (S1S2), and the time interval between the second and first sounds (S2S1). QS2 was correlated with heart rate, mean arterial pressure, diastolic blood pressure, and left ventricular ejection time determined by using echocardiography. S1S2 correlated with heart rate, mean arterial pressure, diastolic blood pressure, left ventricular ejection time, and age. S2S1 correlated with heart rate, mean arterial pressure, diastolic blood pressure, systolic blood pressure, and left ventricular ejection time. QS1 did not correlate with any anthropometric or echocardiographic parameter. The relation S1S2/S2S1 correlated with systolic blood pressure. On multivariate analyses, heart rate was the only independent predictor of QS2, S1S2, and S2S1. In conclusion, determination of cardiac time intervals is feasible and reproducible by using an electrical stethoscope in nonhuman primates. Heart rate is a major determinant of QS2, S1S2, and S2S1 but not QS1; regression equations for reference values for cardiac time intervals in bonnet macaques are provided. PMID:21439218

  8. The Gas Sampling Interval Effect on V˙O2peak Is Independent of Exercise Protocol.

    Science.gov (United States)

    Scheadler, Cory M; Garver, Matthew J; Hanson, Nicholas J

    2017-09-01

There is a plethora of gas sampling intervals available during cardiopulmonary exercise testing to measure peak oxygen consumption (V̇O2peak). Different intervals can lead to altered V̇O2peak values, and whether such differences are affected by the exercise protocol or subject sample is not clear. The purpose of this investigation was to determine whether V̇O2peak differed with the manipulation of sampling intervals and whether differences were independent of the protocol and subject sample. The first subject sample (24 ± 3 yr; V̇O2peak via 15-breath moving averages: 56.2 ± 6.8 mL·kg(-1)·min(-1)) completed the Bruce and the self-paced V̇O2max protocols. The second subject sample (21.9 ± 2.7 yr; V̇O2peak via 15-breath moving averages: 54.2 ± 8.0 mL·kg(-1)·min(-1)) completed the Bruce and the modified Astrand protocols. V̇O2peak was identified using five sampling intervals: 15-s block averages, 30-s block averages, 15-breath block averages, 15-breath moving averages, and 30-s block averages aligned to the end of exercise. Differences in V̇O2peak between intervals were determined using repeated-measures ANOVAs. The influence of subject sample on the sampling effect was determined using independent t-tests. There was a significant main effect of sampling interval on V̇O2peak (first sample: Bruce and self-paced V̇O2max, P < 0.05; second sample: Bruce and modified Astrand, P < 0.05). Differences between sampling intervals followed a similar pattern for each protocol and subject sample, with the 15-breath moving average presenting the highest V̇O2peak. The effect of manipulating gas sampling intervals on V̇O2peak appears to be protocol and sample independent. These findings highlight our recommendation that the clinical and scientific community request and report the sampling interval whenever metabolic data are presented. The standardization of reporting would assist in the comparison of V̇O2peak.
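The sampling-interval effect can be illustrated numerically: a moving average may start at any sample, so its peak can never fall below the best aligned block average of the same window length. A sketch with synthetic data (the function names and signal are illustrative, not from the study):

```python
import numpy as np

def peak_block_average(vo2, n):
    # Split the series into consecutive n-sample blocks and take the
    # highest block mean (analogous to block averaging).
    nblocks = len(vo2) // n
    blocks = np.asarray(vo2[:nblocks * n]).reshape(nblocks, n)
    return blocks.mean(axis=1).max()

def peak_moving_average(vo2, n):
    # Highest n-sample moving average (analogous to a 15-breath
    # moving average).
    kernel = np.ones(n) / n
    return np.convolve(vo2, kernel, mode="valid").max()

# Synthetic breath-by-breath VO2 trace: a rising trend plus noise.
rng = np.random.default_rng(0)
vo2 = 40 + 10 * np.sin(np.linspace(0, 3, 300)) + rng.normal(0, 2, 300)

# The moving-average peak is never lower than the block-average peak,
# because every aligned block is also one of the moving windows.
assert peak_moving_average(vo2, 15) >= peak_block_average(vo2, 15)
```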

  9. Interval Mathematics Applied to Critical Point Transitions

    Directory of Open Access Journals (Sweden)

    Benito A. Stradi

    2012-03-01

The determination of critical points of mixtures is important for both practical and theoretical reasons in the modeling of phase behavior, especially at high pressure. The equations that describe the behavior of complex mixtures near critical points are highly nonlinear and admit multiple solutions. Interval arithmetic can be used to reliably locate all the critical points of a given mixture. The method also verifies the nonexistence of a critical point if a mixture of a given composition does not have one. This study uses an interval Newton/Generalized Bisection algorithm that provides a mathematical and computational guarantee that all mixture critical points are located. The technique is illustrated using several example problems. These problems involve cubic equation of state models; however, the technique is general purpose and can be applied in connection with other nonlinear problems.
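As a one-dimensional illustration of the interval Newton idea (without the outward rounding a rigorous implementation needs; the function name and tolerances are our own), the sketch below encloses the root of f(x) = x^2 - 2 on [1, 2]:

```python
import math

def interval_newton_sqrt2(lo, hi, tol=1e-12, max_iter=100):
    """1-D interval Newton iteration for f(x) = x^2 - 2 on [lo, hi] > 0.

    N(X) = m - f(m) / F'(X), intersected with X, where m is the midpoint
    of X = [lo, hi] and F'(X) = [2*lo, 2*hi] is the interval extension of
    f'(x) = 2x (which excludes zero because the interval stays positive).
    """
    for _ in range(max_iter):
        if hi - lo <= tol:
            break
        m = 0.5 * (lo + hi)
        fm = m * m - 2.0
        dlo, dhi = 2.0 * lo, 2.0 * hi
        q1, q2 = fm / dlo, fm / dhi          # endpoints of f(m) / F'(X)
        nlo, nhi = m - max(q1, q2), m - min(q1, q2)
        lo, hi = max(lo, nlo), min(hi, nhi)  # intersect N(X) with X
        if lo > hi:                          # empty intersection: no root
            raise ValueError("no root in the interval")
    return lo, hi

lo, hi = interval_newton_sqrt2(1.0, 2.0)
print(lo, hi)  # a tight enclosure of sqrt(2) ~ 1.41421356
```

In higher dimensions the same contract-and-bisect step is applied componentwise, which is what gives the algorithm its guarantee of finding all roots.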

  10. Self-confidence, overconfidence and prenatal testosterone exposure : Evidence from the lab

    NARCIS (Netherlands)

    Dalton, Patricio S.; Ghosal, Sayantan

    2018-01-01

    This paper examines whether foetal testosterone exposure predicts the extent of confidence and over-confidence in own absolute ability in adulthood. To study this question, we elicited incentive-compatible measures of confidence and over-confidence in the lab and correlate them with measures of

  11. A new view to uncertainty in Electre III method by introducing interval numbers

    Directory of Open Access Journals (Sweden)

    Mohammad Kazem Sayyadi

    2012-07-01

The Electre III is a widely accepted multi-attribute decision making model which takes into account uncertainty and vagueness. Uncertainty in Electre III is introduced through indifference, preference and veto thresholds, but determining their accurate values can sometimes be very hard. In this paper we represent the values of the performance matrix as interval numbers and define the links between interval numbers and the concordance matrix. Without changing the concept of concordance, our proposed approach makes Electre III usable in decision making problems with interval numbers.

  12. The Interval-Valued Triangular Fuzzy Soft Set and Its Method of Dynamic Decision Making

    OpenAIRE

    Xiaoguo Chen; Hong Du; Yue Yang

    2014-01-01

    A concept of interval-valued triangular fuzzy soft set is presented, and some operations of “AND,” “OR,” intersection, union and complement, and so forth are defined. Then some relative properties are discussed and several conclusions are drawn. A dynamic decision making model is built based on the definition of interval-valued triangular fuzzy soft set, in which period weight is determined by the exponential decay method. The arithmetic weighted average operator of interval-valued triangular...

  13. Effect of Irrigation Intervals on Some Morphophysiological Traits of Basil (Ocimum basilicum L. Ecotypes

    Directory of Open Access Journals (Sweden)

    M Goldani

    2012-10-01

In order to determine the effect of different irrigation intervals on some morphophysiological traits of basil (Ocimum basilicum L.), an experiment was conducted as a factorial based on a randomized complete block design with three replications under greenhouse conditions during 2010. Treatments included five irrigation intervals (4, 8, 12, 16 and 20 days) and two ecotypes of basil (green and purple). The results showed that increasing the irrigation interval decreased plant height, spike number, spike weight and shoot dry weight. The purple basil was more tolerant to drought stress than the green ecotype. The interaction between irrigation intervals and ecotypes showed that the best treatment was the four-day irrigation interval with the purple basil ecotype. The effect of irrigation intervals on root area, mean root diameter, total root length, root volume and root dry weight was significant. In all irrigation intervals, purple basil performed better than the green ecotype. Increasing the irrigation interval decreased root surface area but increased total root length. It was concluded that increasing the irrigation interval up to 12 days decreased shoot and root surface areas. Increasing the irrigation interval decreased chlorophyll a and b and increased the proline amino acid content of basil leaves.

  14. Stability in the metamemory realism of eyewitness confidence judgments.

    Science.gov (United States)

    Buratti, Sandra; Allwood, Carl Martin; Johansson, Marcus

    2014-02-01

The stability of eyewitness confidence judgments over time in regard to their reported memory and the accuracy of these judgments is of interest in forensic contexts because witnesses are often interviewed many times. The present study investigated the stability of the confidence judgments of memory reports of a witnessed event, and of the accuracy of these judgments, over three occasions, each separated by 1 week. Three age groups were studied: younger children (8-9 years), older children (10-11 years), and adults (19-31 years). A total of 93 participants viewed a short film clip and were asked to answer directed two-alternative forced-choice questions about the film clip and to confidence judge each answer. Different questions about details in the film clip were used on each of the three test occasions. Confidence as such did not exhibit stability over time on an individual basis. However, the difference between confidence and proportion correct did exhibit stability across time, in terms of both over/underconfidence and calibration. With respect to age, the adults and older children exhibited more stability than the younger children for calibration. Furthermore, some support for instability was found with respect to the difference between the average confidence level for correct and incorrect answers (slope). Unexpectedly, however, the younger children's slope was found to be more stable than the adults'. Compared to previous research, the present study's use of more advanced statistical methods provides a more nuanced understanding of the stability of confidence judgments in the eyewitness reports of children and adults.
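The calibration measures discussed here (over/underconfidence and slope) have simple empirical estimators; a sketch under the usual definitions (variable names and the toy data are our own):

```python
import numpy as np

def overconfidence(confidence, correct):
    # Over/underconfidence: mean confidence minus proportion correct.
    # Positive values indicate overconfidence, negative underconfidence.
    return float(np.mean(confidence) - np.mean(correct))

def slope(confidence, correct):
    # "Slope": mean confidence for correct answers minus mean confidence
    # for incorrect answers; larger values mean confidence discriminates
    # accuracy better.
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return float(confidence[correct].mean() - confidence[~correct].mean())

conf = [0.9, 0.8, 0.6, 0.7]   # hypothetical confidence ratings (0-1 scale)
corr = [1, 1, 0, 0]           # 1 = answer correct, 0 = incorrect
print(round(overconfidence(conf, corr), 2))  # 0.25 -> overconfident
print(round(slope(conf, corr), 2))           # 0.2
```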

  15. Some Characterizations of Convex Interval Games

    NARCIS (Netherlands)

    Brânzei, R.; Tijs, S.H.; Alparslan-Gok, S.Z.

    2008-01-01

    This paper focuses on new characterizations of convex interval games using the notions of exactness and superadditivity. We also relate big boss interval games with concave interval games and obtain characterizations of big boss interval games in terms of exactness and subadditivity.

  16. Nurses' training and confidence on deep venous catheterization.

    Science.gov (United States)

    Liachopoulou, A P; Synodinou-Kamilou, E E; Deligiannidi, P G; Giannakopoulou, M; Birbas, K N

    2008-01-01

To provide a rough estimate of the education and self-confidence of nurses, both students and professionals, regarding deep venous catheterization in adult patients; to evaluate the change in self-confidence of one team of students who were trained on deep venous catheterization with a simulator; and to correlate their self-confidence with the performance recorded by the simulator. Seventy-six nurses and one hundred twenty-four undergraduate students participated in the study. Forty-four university students took part in a two-day educational seminar and were trained on subclavian and femoral vein paracentesis with a simulator and an anatomical model. Three questionnaires were filled in by the participants: one by the nurses and one by the students of technological institutions, while the university students filled in the latter before attending the seminar and a third questionnaire after having attended it. Impressive results in improving the participants' self-confidence were recorded. However, the weak correlation of their self-confidence with the score automatically provided by the simulator after each user's training obligates us to be particularly cautious about users' ability to repeat the procedure successfully in a clinical environment. Educational courses and simulators are useful educational tools that are likely to shorten, but can in no case efface, the early phase of the learning curve in the clinical setting by substituting for the clinical training of inexperienced users.

  17. An Indirect Simulation-Optimization Model for Determining Optimal TMDL Allocation under Uncertainty

    Directory of Open Access Journals (Sweden)

    Feng Zhou

    2015-11-01

An indirect simulation-optimization model framework with enhanced computational efficiency and risk-based decision-making capability was developed to determine optimal total maximum daily load (TMDL) allocation under uncertainty. To convert the traditional direct simulation-optimization model into our indirect equivalent model framework, we proposed a two-step strategy: (1) application of interval regression equations derived by a Bayesian recursive regression tree (BRRT v2) algorithm, which approximates the original hydrodynamic and water-quality simulation models and accurately quantifies the inherent nonlinear relationship between nutrient load reductions and the credible interval of algal biomass with a given confidence interval; and (2) incorporation of the calibrated interval regression equations into an uncertain optimization framework, which is further converted to our indirect equivalent framework by the enhanced-interval linear programming (EILP) method and provides approximate-optimal solutions at various risk levels. The proposed strategy was applied to the Swift Creek Reservoir's nutrient TMDL allocation (Chesterfield County, VA) to identify the minimum nutrient load allocations required from eight sub-watersheds to ensure compliance with user-specified chlorophyll criteria. Our results indicated that the BRRT-EILP model could identify critical sub-watersheds faster than the traditional model and required lower reductions of nutrient loadings compared to traditional stochastic simulation and trial-and-error (TAE) approaches. This suggests that our proposed framework performs better in optimal TMDL development compared to traditional simulation-optimization models and provides extreme and non-extreme tradeoff analysis under uncertainty for risk-based decision making.

  18. Can confidence indicators forecast the probability of expansion in Croatia?

    Directory of Open Access Journals (Sweden)

    Mirjana Čižmešija

    2016-04-01

The aim of this paper is to investigate how reliable confidence indicators are in forecasting the probability of expansion. We consider three Croatian Business Survey indicators: the Industrial Confidence Indicator (ICI), the Construction Confidence Indicator (BCI) and the Retail Trade Confidence Indicator (RTCI). The quarterly data used in the research cover the period from 1999/Q1 to 2014/Q1. The empirical analysis consists of two parts. The non-parametric Bry-Boschan algorithm is used to distinguish periods of expansion from periods of recession in the Croatian economy. Then, various nonlinear probit models were estimated. The models differ with respect to the regressors (confidence indicators) and the time lags. The positive signs of the estimated parameters suggest that the probability of expansion increases with an increase in the confidence indicators. Based on the obtained results, the conclusion is that the ICI is the most powerful predictor of the probability of expansion in Croatia.

  19. Evaluation of locally established reference intervals for hematology and biochemistry parameters in Western Kenya.

    Science.gov (United States)

    Odhiambo, Collins; Oyaro, Boaz; Odipo, Richard; Otieno, Fredrick; Alemnji, George; Williamson, John; Zeh, Clement

    2015-01-01

Important differences have been demonstrated in laboratory parameters from healthy persons in different geographical regions and populations, mostly driven by a combination of genetic, demographic, nutritional, and environmental factors. Despite this, European and North American derived laboratory reference intervals are used in African countries for patient management, clinical trial eligibility, and toxicity determination, which can result in the misclassification of healthy persons as having laboratory abnormalities. An observational prospective cohort study known as the Kisumu Incidence Cohort Study (KICoS) was conducted to estimate the incidence of HIV seroconversion and identify determinants of successful recruitment and retention in preparation for an HIV vaccine/prevention trial among young adults and adolescents in western Kenya. Laboratory values generated from the KICoS were compared to published region-specific reference intervals and the 2004 NIH DAIDS toxicity tables used for the trial. A total of 1106 participants were screened for the KICoS between January 2007 and June 2010. Nine hundred and fifty-three participants aged 16 to 34 years, HIV-seronegative, clinically healthy, and non-pregnant were selected for this analysis. Medians and 95% reference intervals were calculated for hematological and biochemistry parameters. When compared with both published region-specific reference values and the 2004 NIH DAIDS toxicity table, the use of locally established reference intervals would have resulted in fewer participants classified as having abnormal hematological or biochemistry values than US-derived DAIDS reference intervals (10% classified as abnormal by local parameters vs. >40% by US DAIDS). Blood urea nitrogen was most often out of range when US-based reference intervals were used (83%). Differences in reference intervals for hematological and biochemical parameters between western and African populations
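Locally established reference intervals such as these are typically computed nonparametrically as the central 95% of a healthy-population sample; a sketch with synthetic data (the hemoglobin numbers are hypothetical):

```python
import numpy as np

def reference_interval(values, coverage=0.95):
    # Nonparametric reference interval: the central `coverage` fraction of
    # a healthy-population sample, i.e. the 2.5th and 97.5th percentiles
    # for the conventional 95% interval.
    alpha = (1.0 - coverage) / 2.0
    lo, hi = np.quantile(values, [alpha, 1.0 - alpha])
    return lo, hi

# Synthetic "healthy" sample: hypothetical hemoglobin values in g/dL.
rng = np.random.default_rng(1)
hgb = rng.normal(14.0, 1.2, 2000)
lo, hi = reference_interval(hgb)
print(round(lo, 1), round(hi, 1))  # close to 14 +/- 1.96 * 1.2
```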

  20. Modeling Confidence and Response Time in Recognition Memory

    Science.gov (United States)

    Ratcliff, Roger; Starns, Jeffrey J.

    2009-01-01

    A new model for confidence judgments in recognition memory is presented. In the model, the match between a single test item and memory produces a distribution of evidence, with better matches corresponding to distributions with higher means. On this match dimension, confidence criteria are placed, and the areas between the criteria under the…