#### Sample records for confidence interval determination

1. Using the confidence interval confidently.

Science.gov (United States)

Hazra, Avijit

2017-10-01

Biomedical research is seldom done with entire populations but rather with samples drawn from a population. Although we work with samples, our goal is to describe and draw inferences regarding the underlying population. It is possible to use a sample statistic and estimates of error in the sample to get a fair idea of the population parameter, not as a single value, but as a range of values. This range is the confidence interval (CI), which is estimated on the basis of a desired confidence level. Calculation of the CI of a sample statistic takes the general form: CI = Point estimate ± Margin of error, where the margin of error is given by the product of a critical value (z) derived from the standard normal curve and the standard error of the point estimate. Calculation of the standard error varies depending on whether the sample statistic of interest is a mean, proportion, odds ratio (OR), and so on. The factors affecting the width of the CI include the desired confidence level, the sample size and the variability in the sample. Although the 95% CI is most often used in biomedical research, a CI can be calculated for any level of confidence. A 99% CI will be wider than a 95% CI for the same sample. Conflict between clinical importance and statistical significance is an important issue in biomedical research. Clinical importance is best inferred by looking at the effect size, that is, how large the actual change or difference is. However, statistical significance in terms of P only suggests whether there is any difference in probability terms. Use of the CI supplements the P value by providing an estimate of the actual clinical effect. Of late, clinical trials are being designed specifically as superiority, non-inferiority or equivalence studies. The conclusions from these alternative trial designs are based on CI values rather than the P value from intergroup comparison.
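The general form described in this abstract can be sketched in a few lines of code. This is an illustrative sketch only (made-up numbers, normal critical value z = 1.96; function names are not from the article):

```python
import math
import statistics

def ci_mean(sample, z=1.96):
    """95% CI for a mean: point estimate +/- z * standard error."""
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return (mean - z * se, mean + z * se)

def ci_proportion(successes, n, z=1.96):
    """95% CI for a proportion (simple Wald interval)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return (p - z * se, p + z * se)

sample = [4.1, 3.8, 4.5, 4.0, 3.9, 4.3, 4.2, 3.7, 4.4, 4.1]
lo, hi = ci_mean(sample)
print(f"mean CI: ({lo:.2f}, {hi:.2f})")

lo, hi = ci_proportion(30, 100)
print(f"proportion CI: ({lo:.2f}, {hi:.2f})")
```

For small samples, a t critical value with n − 1 degrees of freedom would be used in place of 1.96, in line with the abstract's point that the standard error calculation depends on the statistic of interest.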

2. Robust misinterpretation of confidence intervals

NARCIS (Netherlands)

Hoekstra, Rink; Morey, Richard; Rouder, Jeffrey N.; Wagenmakers, Eric-Jan

2014-01-01

Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more

3. Uncertainty in population growth rates: determining confidence intervals from point estimates of parameters.

Directory of Open Access Journals (Sweden)

Eleanor S Devenish Nelson

BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.

4. Robust misinterpretation of confidence intervals.

Science.gov (United States)

Hoekstra, Rink; Morey, Richard D; Rouder, Jeffrey N; Wagenmakers, Eric-Jan

2014-10-01

Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students, all in the field of psychology, were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.

5. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions

Directory of Open Access Journals (Sweden)

Rik Crutzen

2017-07-01

When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.

6. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions.

Science.gov (United States)

Crutzen, Rik; Peters, Gjalt-Jorn Ygram; Noijen, Judith

2017-01-01

When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.

7. Determination and Interpretation of Characteristic Limits for Radioactivity Measurements: Decision Threshold, Detection Limit and Limits of the Confidence Interval

International Nuclear Information System (INIS)

2017-01-01

Since 2004, the environment programme of the IAEA has included activities aimed at developing a set of procedures for analytical measurements of radionuclides in food and the environment. Reliable, comparable and fit for purpose results are essential for any analytical measurement. Guidelines and national and international standards for laboratory practices to fulfil quality assurance requirements are extremely important when performing such measurements. The guidelines and standards should be comprehensive, clearly formulated and readily available to both the analyst and the customer. ISO 11929:2010 is the international standard on the determination of the characteristic limits (decision threshold, detection limit and limits of the confidence interval) for measuring ionizing radiation. For nuclear analytical laboratories involved in the measurement of radioactivity in food and the environment, robust determination of the characteristic limits of radioanalytical techniques is essential with regard to national and international regulations on permitted levels of radioactivity. However, the characteristic limits defined in ISO 11929:2010 are complex, and the correct application of the standard in laboratories requires a full understanding of various concepts. This publication provides additional information to help Member States understand the terminology, definitions and concepts in ISO 11929:2010, thus facilitating its implementation in Member State laboratories.

8. Interpretation of Confidence Interval Facing the Conflict

Science.gov (United States)

2016-01-01

As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…

9. Confidence Interval Approximation For Treatment Variance In ...

African Journals Online (AJOL)

In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

10. A nonparametric statistical method for determination of a confidence interval for the mean of a set of results obtained in a laboratory intercomparison

International Nuclear Information System (INIS)

Veglia, A.

1981-08-01

In cases where sets of data are obviously not normally distributed, the application of a nonparametric method for the estimation of a confidence interval for the mean seems more suitable than some other methods because such a method requires few assumptions about the population of data. A two-step statistical method is proposed which can be applied to any set of analytical results: elimination of outliers by a nonparametric method based on Tchebycheff's inequality, and determination of a confidence interval for the mean by a nonparametric method based on the binomial distribution. The method is appropriate only for samples of size n ≥ 10.
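The binomial-based step can be illustrated with the standard distribution-free order-statistics interval, which is usually stated for the median; this sketch is in that spirit rather than a reproduction of the report's exact procedure, and the function names are illustrative:

```python
import math

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p); the empty sum gives 0 for k < 0."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def median_ci(results, conf=0.95):
    """Distribution-free confidence interval for the median: pick symmetric
    order statistics x[k], x[n-1-k] whose exact coverage, computed from the
    Binomial(n, 1/2) distribution, is at least `conf`."""
    x = sorted(results)
    n = len(x)
    for k in range((n - 1) // 2, -1, -1):  # try the narrowest interval first
        coverage = binom_cdf(n - k - 1, n) - binom_cdf(k, n)
        if coverage >= conf:
            return x[k], x[n - 1 - k], coverage
    raise ValueError("sample too small for the requested confidence")
```

For ten results 1, 2, ..., 10 this selects the 2nd and 9th ordered values with roughly 97.9% exact coverage, which is consistent with the report's restriction to samples of size n ≥ 10: smaller samples cannot reach 95% coverage without using the extreme values.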

11. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

Science.gov (United States)

2013-01-01

The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items…

12. Understanding Confidence Intervals With Visual Representations

OpenAIRE

Navruz, Bilgin; Delen, Erhan

2014-01-01

In the present paper, we showed how confidence intervals (CIs) are valuable and useful in research studies when they are used in the correct form with correct interpretations. The sixth edition of the APA (2010) Publication Manual strongly recommended reporting CIs in research studies, and it was described as “the best reporting strategy” (p. 34). Misconceptions and correct interpretations of CIs were presented from several textbooks. In addition, limitations of the null hypothesis statistica...

13. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

Directory of Open Access Journals (Sweden)

Ionut Bebu

2016-06-01

For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates.

14. Confidence intervals for the lognormal probability distribution

International Nuclear Information System (INIS)

Smith, D.L.; Naberejnev, D.G.

2004-01-01

The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a conjured numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on using the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication.
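The anomaly described can be reproduced numerically with the standard formulas for the lognormal mean and standard deviation (the parameters below are illustrative, not from the paper):

```python
import math

def lognormal_summary(mu, sigma, z=1.96):
    """Mean, standard deviation, and a central 95% interval for exp(N(mu, sigma^2))."""
    mean = math.exp(mu + sigma**2 / 2)
    sd = mean * math.sqrt(math.exp(sigma**2) - 1)
    # Central interval: transform the underlying normal interval through exp().
    ci = (math.exp(mu - z * sigma), math.exp(mu + z * sigma))
    return mean, sd, ci

for sigma in (0.5, 1.0, 1.5):
    mean, sd, ci = lognormal_summary(0.0, sigma)
    print(f"sigma={sigma}: mean-sd={mean - sd:.3f}, CI=({ci[0]:.3f}, {ci[1]:.3f})")
```

Already at σ = 1 the "mean minus standard deviation" endpoint is negative, which is impossible for an inherently positive variable, while the central interval obtained by transforming the underlying normal interval remains strictly positive.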

15. Learning about confidence intervals with software R

Directory of Open Access Journals (Sweden)

Gariela Gonçalves

2013-08-01

Full Text Available 0 0 1 202 1111 USAL 9 2 1311 14.0 Normal 0 21 false false false ES JA X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Tabla normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin-top:0cm; mso-para-margin-right:0cm; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0cm; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:Calibri; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-ansi-language:ES; mso-fareast-language:EN-US;} This work was to study the feasibility of implementing a teaching method that employs software, in a Computational Mathematics course, involving students and teachers through the use of the statistical software R in carrying out practical work, such as strengthening the traditional teaching. The statistical inference, namely the determination of confidence intervals, was the content selected for this experience. It was intended show, first of all, that it is possible to promote, through the proposal methodology, the acquisition of basic skills in statistical inference and to promote the positive relationships between teachers and students. It presents also a comparative study between the methodologies used and their quantitative and qualitative results on two consecutive school years, in several indicators. The data used in the study were obtained from the students to the exam questions in the years 2010/2011 and 2011/2012, from the achievement of a working group in 2011/2012 and via the responses to a questionnaire (optional and anonymous also applied in 2011 / 2012. In terms of results, we emphasize a better performance of students in the examination questions in 2011/2012, the year that students used the software R, and a very favorable student’s perspective about

16. Confidence Intervals from Normalized Data: A correction to Cousineau (2005)

Directory of Open Access Journals (Sweden)

Richard D. Morey

2008-09-01

Presenting confidence intervals around means is a common method of expressing uncertainty in data. Loftus and Masson (1994) describe confidence intervals for means in within-subjects designs. These confidence intervals are based on the ANOVA mean squared error. Cousineau (2005) presents an alternative to the Loftus and Masson method, but his method produces confidence intervals that are smaller than those of Loftus and Masson. I show why this is the case and offer a simple correction that makes the expected size of Cousineau confidence intervals the same as that of Loftus and Masson confidence intervals.
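The two steps, Cousineau's subject-mean normalization followed by Morey's rescaling by sqrt(J/(J − 1)) for J conditions, can be sketched as follows; this is a minimal illustrative implementation, not code from the article:

```python
import math
import statistics

def morey_ci(data, z=1.96):
    """data[i][j]: score of subject i in condition j.
    Returns a corrected CI half-width for each condition's mean."""
    n, J = len(data), len(data[0])
    grand = statistics.fmean(v for row in data for v in row)
    # Cousineau (2005) normalization: remove each subject's own mean.
    normed = [[v - statistics.fmean(row) + grand for v in row] for row in data]
    correction = math.sqrt(J / (J - 1))  # Morey (2008) width correction
    halfwidths = []
    for j in range(J):
        col = [normed[i][j] for i in range(n)]
        se = statistics.stdev(col) / math.sqrt(n)
        halfwidths.append(z * se * correction)
    return halfwidths

print(morey_ci([[1, 2], [2, 4], [3, 3]]))
```

Without the correction factor the intervals computed from the normalized scores are systematically too narrow, because normalization removes one degree of freedom per subject.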

17. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

Directory of Open Access Journals (Sweden)

Dominic Beaulieu-Prévost

2006-03-01

For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic undetermination). The demonstration includes a complete example.

18. Confidence intervals for experiments with background and small numbers of events

International Nuclear Information System (INIS)

Bruechle, W.

2003-01-01

Methods to find a confidence interval for Poisson distributed variables are illuminated, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count-rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected with background, and their ratios, is given. (orig.)
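For the background-free part of the problem, the "central" confidence interval for a Poisson mean can be computed exactly by solving the two tail conditions numerically; this stdlib-only sketch does not reproduce the report's background treatment, and the function names are illustrative:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * sum(lam**i / math.factorial(i) for i in range(k + 1))

def _bisect(f, lo, hi, tol=1e-9):
    """Find a root of f in [lo, hi], assuming a single sign change."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def poisson_central_ci(n, alpha=0.05):
    """Exact central (1 - alpha) interval for the Poisson mean, given n counts."""
    # Upper limit: P(X <= n | lam_U) = alpha/2.
    upper = _bisect(lambda lam: poisson_cdf(n, lam) - alpha / 2, n, 10 * n + 20)
    if n == 0:
        return 0.0, upper
    # Lower limit: P(X >= n | lam_L) = alpha/2.
    lower = _bisect(lambda lam: 1 - poisson_cdf(n - 1, lam) - alpha / 2, 0, n + 1)
    return lower, upper

print(poisson_central_ci(3))
```

For 3 observed counts this yields the familiar exact central interval of roughly (0.62, 8.77); for low count rates such intervals are strongly asymmetric, which is the regime the report examines.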

19. Confidence intervals for experiments with background and small numbers of events

International Nuclear Information System (INIS)

Bruechle, W.

2002-07-01

Methods to find a confidence interval for Poisson distributed variables are illuminated, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count-rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected with background, and their ratios, is given. (orig.)

20. Differentially Private Confidence Intervals for Empirical Risk Minimization

OpenAIRE

Wang, Yue; Kifer, Daniel; Lee, Jaewoo

2018-01-01

The process of data mining with differential privacy produces results that are affected by two types of noise: sampling noise due to data collection and privacy noise that is designed to prevent the reconstruction of sensitive information. In this paper, we consider the problem of designing confidence intervals for the parameters of a variety of differentially private machine learning models. The algorithms can provide confidence intervals that satisfy differential privacy (as well as the mor...

1. Confidence interval procedures for Monte Carlo transport simulations

International Nuclear Information System (INIS)

Pederson, S.P.

1997-01-01

The problem of obtaining valid confidence intervals based on estimates from sampled distributions using Monte Carlo particle transport simulation codes such as MCNP is examined. Such intervals can cover the true parameter of interest at a lower than nominal rate if the sampled distribution is extremely right-skewed by large tallies. Modifications to the standard theory of confidence intervals are discussed and compared with some existing heuristics, including batched means normality tests. Two new types of diagnostics are introduced to assess whether the conditions of central limit theorem-type results are satisfied: the relative variance of the variance determines whether the sample size is sufficiently large, and estimators of the slope of the right tail of the distribution are used to indicate the number of moments that exist. A simulation study is conducted to quantify the relationship between various diagnostics and coverage rates and to find sample-based quantities useful in indicating when intervals are expected to be valid. Simulated tally distributions are chosen to emulate behavior seen in difficult particle transport problems. Measures of variation in the sample variance s² are found to be much more effective than existing methods in predicting when coverage will be near nominal rates. Batched means tests are found to be overly conservative in this regard. A simple but pathological MCNP problem is presented as an example of false convergence using existing heuristics. The new methods readily detect the false convergence and show that the results of the problem, which are a factor of 4 too small, should not be used. Recommendations are made for applying these techniques in practice, using the statistical output currently produced by MCNP.
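The "relative variance of the variance" diagnostic can be sketched from central-moment sums. The estimator below is the commonly quoted form; the 0.1 acceptance threshold used in MCNP practice is a rule of thumb, and the example tallies are made up:

```python
import statistics

def variance_of_variance(tallies):
    """Relative variance of the sample variance (VoV) from central moments:
    VoV = sum((x - xbar)^4) / (sum((x - xbar)^2))^2 - 1/n."""
    n = len(tallies)
    mean = statistics.fmean(tallies)
    s2 = sum((x - mean) ** 2 for x in tallies)
    s4 = sum((x - mean) ** 4 for x in tallies)
    return s4 / s2**2 - 1.0 / n

well_behaved = [1, 2, 3, 4, 5] * 20          # moderate spread
right_skewed = [1.0] * 99 + [100.0]          # one dominant large tally
print(variance_of_variance(well_behaved))
print(variance_of_variance(right_skewed))
```

A single dominant tally drives the VoV toward 1, flagging that the sample variance itself is unreliable and the nominal confidence interval should not be trusted.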

2. Confidence intervals for correlations when data are not normal.

Science.gov (United States)

Bishara, Anthony J; Hittner, James B

2017-02-01

With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval, for example leading to a 95% confidence interval that had actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
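The baseline Fisher z' interval that the study compares against is simple to state: z = atanh(r) is approximately normal with standard error 1/sqrt(n − 3), and the interval is transformed back with tanh. A minimal sketch:

```python
import math

def fisher_ci(r, n, z=1.96):
    """Fisher z' confidence interval for a correlation r from n pairs."""
    zr = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

print(fisher_ci(0.5, 30))
```

For r = 0.5 and n = 30 this gives roughly (0.17, 0.73). The robust rank-based alternatives discussed above apply the same transform to a rank correlation rather than to Pearson's r.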

3. Estimation and interpretation of keff confidence intervals in MCNP

International Nuclear Information System (INIS)

Urbatsch, T.J.

1995-11-01

MCNP's criticality methodology and some basic statistics are reviewed. Confidence intervals are discussed, as well as how to build them and their importance in the presentation of a Monte Carlo result. The combination of MCNP's three keff estimators is shown, theoretically and empirically, by statistical studies and examples, to be the best keff estimator. The method of combining estimators is based on a solid theoretical foundation, namely, the Gauss-Markov Theorem in regard to the least squares method. The confidence intervals of the combined estimator are also shown to have correct coverage rates for the examples considered.

4. Robust Confidence Interval for a Ratio of Standard Deviations

Science.gov (United States)

Bonett, Douglas G.

2006-01-01

Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

5. Comparing confidence intervals for Goodman and Kruskal's gamma coefficient

NARCIS (Netherlands)

van der Ark, L.A.; van Aert, R.C.M.

2015-01-01

This study was motivated by the question which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman-Kruskal CI, the Cliff-consistent CI, the

6. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

Science.gov (United States)

Wagler, Amy E.

2014-01-01

Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

7. Parametric change point estimation, testing and confidence interval ...

African Journals Online (AJOL)

In many applications like finance, industry and medicine, it is important to consider that the model parameters may undergo changes at unknown moment in time. This paper deals with estimation, testing and confidence interval of a change point for a univariate variable which is assumed to be normally distributed. To detect ...

8. On Bayesian treatment of systematic uncertainties in confidence interval calculation

CERN Document Server

Tegenfeldt, Fredrik

2005-01-01

In high energy physics, a widely used method to treat systematic uncertainties in confidence interval calculations is based on combining a frequentist construction of confidence belts with a Bayesian treatment of systematic uncertainties. In this note we present a study of the coverage of this method for the standard Likelihood Ratio (aka Feldman & Cousins) construction for a Poisson process with known background and Gaussian or log-normal distributed uncertainties in the background or signal efficiency. For uncertainties in the signal efficiency of up to 40% we find over-coverage on the level of 2 to 4%, depending on the size of the uncertainties and the region in signal space. Uncertainties in the background generally have a smaller effect on the coverage. A considerable smoothing of the coverage curves is observed. A software package is presented which allows fast calculation of the confidence intervals for a variety of assumptions on the shape and size of systematic uncertainties for different nuisance paramete...

9. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

Science.gov (United States)

Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

2017-01-01

Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allowing temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95 % confidence interval of (4331, 12 267) in 2010 and a load of 5543 Mg year-1 had an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
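The percentile-bootstrap idea behind such load intervals can be sketched with synthetic data. The paper's actual method additionally uses a linear mixed model for concentrations and accounts for temporal correlation, both of which this toy version omits:

```python
import random

random.seed(42)

# Synthetic paired observations: (discharge in m3/s, concentration in g/m3).
flows = [random.uniform(1, 10) for _ in range(200)]
data = [(q, 0.8 * q + random.gauss(0, 0.5)) for q in flows]

def total_load(pairs, seconds=3600):
    """Load = sum over observations of discharge * concentration * time step."""
    return sum(q * c * seconds for q, c in pairs)

# Percentile bootstrap: resample observations with replacement.
boots = sorted(
    total_load(random.choices(data, k=len(data))) for _ in range(2000)
)
lo, hi = boots[int(0.025 * 2000)], boots[int(0.975 * 2000)]
point = total_load(data)
print(f"load: {point:.3g}, 95% bootstrap CI: ({lo:.3g}, {hi:.3g})")
```

Because a load is a sum of products, its bootstrap distribution is typically right-skewed, which is why the intervals reported in the abstract are asymmetric with a longer upper tail.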

10. Confidence Intervals from Realizations of Simulated Nuclear Data

Energy Technology Data Exchange (ETDEWEB)

Younes, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ratkiewicz, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ressler, J. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

2017-09-28

Various statistical techniques are discussed that can be used to assign a level of confidence in the prediction of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the keff value, based on the 235U(n,f) and 239Pu(n,f) cross sections.
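
The first two techniques combine naturally: draw correlated random realizations of the inputs, propagate each through the model, and form a confidence interval from the output percentiles. This stdlib-Python sketch uses a toy linear response and invented uncertainties, standing in for a keff calculation:

```python
import random
from statistics import mean

def mc_interval(model, means, sds, corr, n=20000, conf=0.95, seed=0):
    """Draw correlated Gaussian realizations of two inputs (2x2 Cholesky done by
    hand), propagate each through the model, and take percentile limits."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x1 = means[0] + sds[0] * z1
        x2 = means[1] + sds[1] * (corr * z1 + (1 - corr ** 2) ** 0.5 * z2)
        out.append(model(x1, x2))
    out.sort()
    a = (1 - conf) / 2
    return out[int(a * n)], out[int((1 - a) * n) - 1]

# Toy linear stand-in for a keff-style response to two cross sections.
lo, hi = mc_interval(lambda a, b: 0.6 * a + 0.4 * b,
                     means=(1.0, 1.0), sds=(0.05, 0.08), corr=0.3)
```

The interval width directly reflects both the input uncertainties and their correlation, which is the point of realizing the nuclear data rather than propagating variances analytically.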

11. Profile-likelihood Confidence Intervals in Item Response Theory Models.

Science.gov (United States)

Chalmers, R Philip; Pek, Jolynn; Liu, Yang

2017-01-01

Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
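
The profile-likelihood construction is easiest to see in a one-parameter model, where the profile likelihood coincides with the likelihood itself. A hedged stdlib-Python sketch for a Poisson rate (not an IRT model; the chi-square critical value 3.841 is hard-coded): the CI is the set of rates whose log-likelihood lies within half the critical value of the maximum.

```python
import math

def poisson_loglik(lam, counts):
    # Poisson log-likelihood up to an additive constant (log k! terms dropped)
    return sum(k * math.log(lam) - lam for k in counts)

def profile_likelihood_ci(counts, chi2_1_95=3.841):
    """Likelihood-ratio CI for a Poisson rate: every rate whose log-likelihood
    lies within chi2_1(0.95)/2 of the maximum."""
    mle = sum(counts) / len(counts)
    cutoff = poisson_loglik(mle, counts) - chi2_1_95 / 2

    def crossing(lo, hi, rising):
        # bisection for poisson_loglik(lam) == cutoff on a monotone flank
        for _ in range(100):
            mid = (lo + hi) / 2
            if (poisson_loglik(mid, counts) > cutoff) == rising:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    return crossing(1e-9, mle, rising=True), crossing(mle, 20 * mle + 10, rising=False)

lo, hi = profile_likelihood_ci([3, 5, 4, 6, 2])  # MLE rate = 4.0
```

Unlike a Wald interval, which is symmetric by construction, this interval follows the shape of the likelihood and comes out asymmetric, one of the distinctions the article demonstrates for IRT parameters.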

12. Effect size, confidence intervals and statistical power in psychological research.

Directory of Open Access Journals (Sweden)

Téllez A.

2015-07-01

Full Text Available Quantitative psychological research is focused on detecting the occurrence of certain population phenomena by analyzing data from a sample, and statistics is a particularly helpful mathematical tool that is used by researchers to evaluate hypotheses and make decisions to accept or reject such hypotheses. In this paper, the various statistical tools used in psychological research are reviewed. The limitations of null hypothesis significance testing (NHST) and the advantages of using effect size and its respective confidence intervals are explained, as the latter two measurements can provide important information about the results of a study. These measurements also can facilitate data interpretation and easily detect trivial effects, enabling researchers to make decisions in a more clinically relevant fashion. Moreover, it is recommended to establish an appropriate sample size by calculating the optimum statistical power at the moment that the research is designed. Psychological journal editors are encouraged to follow APA recommendations strictly and ask authors of original research studies to report the effect size, its confidence intervals, statistical power and, when required, any measure of clinical significance. Additionally, the teaching of statistics at the graduate level must be taken into account. At that level, students do not receive sufficient information concerning the importance of using different types of effect sizes and their confidence intervals according to the different types of research designs; instead, most of the information is focused on the various tools of NHST.

13. Number of core samples: Mean concentrations and confidence intervals

International Nuclear Information System (INIS)

Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

1995-01-01

This document provides estimates of how well the mean concentrations of analytes are known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to ''characterize'' the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unitless measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composited samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes, the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications.
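
The square-root scaling behind these conclusions can be illustrated directly. A hedged sketch in plain Python, using the normal critical value and an invented relative SD of 100% rather than the report's t-based, tank-specific tables:

```python
from statistics import NormalDist

def relative_half_width(sample_mean, sample_sd, n, conf=0.95):
    """RHW: the CI half-width expressed as a fraction of the mean (normal
    critical value used for brevity; the report's tables use the tank data)."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return z * sample_sd / (n ** 0.5 * sample_mean)

# With a relative SD of 100%, halving the RHW requires four times the cores.
for n in (2, 4, 8, 16):
    print(n, round(relative_half_width(100.0, 100.0, n), 2))
```

The 1/sqrt(n) dependence is why a RHW of 50-100% needs only two or three cores while a RHW of 10% demands a very large number of them.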

14. On a linear method in bootstrap confidence intervals

Directory of Open Access Journals (Sweden)

Andrea Pallini

2007-10-01

Full Text Available A linear method for the construction of asymptotic bootstrap confidence intervals is proposed. We approximate asymptotically pivotal and non-pivotal quantities, which are smooth functions of means of n independent and identically distributed random variables, by using a sum of n independent smooth functions of the same analytical form. Errors are of order Op(n^(-3/2)) and Op(n^(-2)), respectively. The linear method allows a straightforward approximation of bootstrap cumulants, by considering the set of n independent smooth functions as an original random sample to be resampled with replacement.

15. Comparison of Bootstrap Confidence Intervals Using Monte Carlo Simulations

Directory of Open Access Journals (Sweden)

Roberto S. Flowers-Cano

2018-02-01

Full Text Available Design of hydraulic works requires the estimation of design hydrological events by statistical inference from a probability distribution. Using Monte Carlo simulations, we compared coverage of confidence intervals constructed with four bootstrap techniques: percentile bootstrap (BP), bias-corrected bootstrap (BC), accelerated bias-corrected bootstrap (BCA) and a modified version of the standard bootstrap (MSB). Different simulation scenarios were analyzed. In some cases, the mother distribution function was fit to the random samples that were generated. In other cases, a distribution function different to the mother distribution was fit to the samples. When the fitted distribution had three parameters, and was the same as the mother distribution, the intervals constructed with the four techniques had acceptable coverage. However, the bootstrap techniques failed in several of the cases in which the fitted distribution had two parameters.
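
Coverage comparisons of this kind reduce to a simple Monte Carlo loop: generate samples from a known "mother" distribution, build the interval, and count how often it contains the truth. A minimal stdlib-Python sketch using a z-based interval for a normal mean (not the hydrological distributions or bootstrap intervals of the study):

```python
import random
from statistics import NormalDist, mean, stdev

def coverage(true_mu=10.0, true_sd=2.0, n=20, reps=2000, seed=1):
    """Monte Carlo estimate of the actual coverage of the z-based 95% CI for a
    normal mean: simulate samples from known truth and count interval hits."""
    rng = random.Random(seed)
    z = NormalDist().inv_cdf(0.975)
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(true_mu, true_sd) for _ in range(n)]
        m, se = mean(x), stdev(x) / n ** 0.5
        hits += m - z * se <= true_mu <= m + z * se
    return hits / reps

cov = coverage()  # below the nominal 0.95, since z ignores the estimated SD
```

The same loop, with the interval construction swapped for BP, BC, BCA or MSB and the sampling swapped for a fitted distribution, is the skeleton of the comparison the paper reports.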

16. Confidence Intervals for True Scores Using the Skew-Normal Distribution

Science.gov (United States)

Garcia-Perez, Miguel A.

2010-01-01

A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

17. Estimation and interpretation of keff confidence intervals in MCNP

International Nuclear Information System (INIS)

Urbatsch, T.J.

1995-01-01

MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the individual estimator with the smallest variance. The importance of MCNP's batch statistics is demonstrated by an investigation of the effects of individual estimator variance bias on the combination of estimators, both heuristically with the analytical study and empirically with MCNP.

18. The 95% confidence intervals of error rates and discriminant coefficients

Directory of Open Access Journals (Sweden)

Shuichi Shinmura

2015-02-01

Full Text Available Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop diagnostic logic for distinguishing normal and abnormal symptoms using Fisher's LDF and a quadratic discriminant function (QDF). Our four-year research effort proved inferior to the decision-tree logic developed by a medical doctor. After this experience, we discriminated many data sets and found four problems with discriminant analysis. A Revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three of these problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rates and discriminant coefficients. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (CIs) of error rates and discriminant coefficients.
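
The cross-validation idea can be sketched generically: split the data into k folds, record the error rate on each held-out fold, and form an interval from the spread of those rates. This toy stdlib-Python version uses a hypothetical threshold "learner" and z in place of t; it illustrates the mechanics only, not the Revised IP-OLDF method itself:

```python
import random
from statistics import NormalDist, mean, stdev

def kfold_error_ci(xs, ys, train_fn, k=5, conf=0.95, seed=0):
    """k-fold cross-validation: per-fold error rates give a mean and a standard
    error, hence an approximate CI for the error rate (z in place of t)."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    errs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        clf = train_fn([xs[i] for i in train], [ys[i] for i in train])
        errs.append(sum(clf(xs[i]) != ys[i] for i in fold) / len(fold))
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    m, se = mean(errs), stdev(errs) / k ** 0.5
    return m, (m - z * se, m + z * se)

# Hypothetical data and a trivial threshold 'learner', purely for illustration.
xs = [-2.0, -1.0, -0.5, 1.0, 2.0, 0.5, 3.0, -3.0, 1.5, -1.5]
ys = [x > 0 for x in xs]
m, ci = kfold_error_ci(xs, ys, lambda X, Y: (lambda v: v > 0))
```

With a perfectly separable toy problem every fold error is zero and the interval collapses; on real ECG-style data the fold-to-fold spread is what supplies the missing standard error.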

19. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

Directory of Open Access Journals (Sweden)

Christopher Ouma Onyango

2010-09-01

Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend this to two-stage sampling, in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.

20. Estimation and interpretation of keff confidence intervals in MCNP

International Nuclear Information System (INIS)

Urbatsch, T.J.

1995-01-01

The Monte Carlo code MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the estimator with the smallest variance. Empirically, MCNP examples for several physical systems demonstrate the three-combined estimator's superiority over each of the three individual estimators and its correct coverage rates. Additionally, the importance of MCNP's statistical checks is demonstrated.

1. Secure and Usable Bio-Passwords based on Confidence Interval

Directory of Open Access Journals (Sweden)

Aeyoung Kim

2017-02-01

Full Text Available The most popular user-authentication method is the password. Many authentication systems try to enhance their security by enforcing a strong password policy, and by using the password as the first factor (something you know), with the second factor being something you have. However, a strong password policy and a multi-factor authentication system can make it harder for a user to remember the password and to log in. In this paper a bio-password-based scheme is proposed as a unique authentication method, which uses biometrics and confidence interval sets to enhance the security of the log-in process and make it easier as well. The method offers a user-friendly solution for creating and registering strong passwords without the user having to memorize them. Here we also show the results of our experiments, which demonstrate the efficiency of this method and how it can be used to protect against a variety of malicious attacks.

2. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

Science.gov (United States)

Bartley, David; Slaven, James; Harper, Martin

2017-03-01

The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.

3. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

Science.gov (United States)

Capraro, Mary Margaret

This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

4. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

Science.gov (United States)

Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

2016-12-01

Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
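
The distinction the authors draw maps onto two textbook formulas for a linear fit: the confidence interval for the mean response and the wider prediction interval for a new individual, which carries an extra "1 +" term for individual variation. A stdlib-Python sketch, with z in place of t and made-up diameter/mass numbers rather than an allometric model from the study:

```python
import math
from statistics import NormalDist, mean

def fit_line(xs, ys):
    # ordinary least squares, returning the residual variance s2 as well
    xbar, ybar = mean(xs), mean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    a = ybar - b * xbar
    s2 = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys)) / (len(xs) - 2)
    return a, b, s2, xbar, sxx

def intervals(xs, ys, x0, conf=0.95):
    """CI for the mean response at x0 versus the wider prediction interval for
    a new individual: the '1 +' term carries the individual-level variation."""
    a, b, s2, xbar, sxx = fit_line(xs, ys)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    yhat = a + b * x0
    se_mean = math.sqrt(s2 * (1 / len(xs) + (x0 - xbar) ** 2 / sxx))
    se_pred = math.sqrt(s2 * (1 + 1 / len(xs) + (x0 - xbar) ** 2 / sxx))
    return ((yhat - z * se_mean, yhat + z * se_mean),
            (yhat - z * se_pred, yhat + z * se_pred))

# Made-up diameter/mass pairs, purely illustrative.
xs = [5.0, 8.0, 11.0, 14.0, 17.0, 20.0, 23.0, 26.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.3, 11.9, 14.2, 15.8]
ci, pi = intervals(xs, ys, 15.0)
```

As the abstract argues, the se_mean term shrinks with sample size while the individual-variation term does not, so summing predictions over many trees leaves only the uncertainty in the mean to propagate.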

5. A note on Nonparametric Confidence Interval for a Shift Parameter ...

African Journals Online (AJOL)

The method is illustrated using the Cauchy distribution as a location model. The kernel-based method is found to have a shorter interval for the shift parameter between two Cauchy distributions than the one based on the Mann-Whitney test statistic. Keywords: Best Asymptotic Normal; Cauchy distribution; Kernel estimates; ...

6. Bootstrap confidence intervals for three-way methods

NARCIS (Netherlands)

Kiers, Henk A.L.

Results from exploratory three-way analysis techniques such as CANDECOMP/PARAFAC and Tucker3 analysis are usually presented without giving insight into uncertainties due to sampling. Here a bootstrap procedure is proposed that produces percentile intervals for all output parameters. Special

7. Using an R Shiny to Enhance the Learning Experience of Confidence Intervals

Science.gov (United States)

Williams, Immanuel James; Williams, Kelley Kim

2018-01-01

Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…

8. Estimating confidence intervals in predicted responses for oscillatory biological models.

Science.gov (United States)

St John, Peter C; Doyle, Francis J

2013-07-29

The dynamics of gene regulation play a crucial role in cellular control: allowing the cell to express the right proteins to meet changing needs. Some needs, such as correctly anticipating the day-night cycle, require complicated oscillatory features. In the analysis of gene regulatory networks, mathematical models are frequently used to understand how a network's structure enables it to respond appropriately to external inputs. These models typically consist of a set of ordinary differential equations, describing a network of biochemical reactions, and unknown kinetic parameters, chosen such that the model best captures experimental data. However, since a model's parameter values are uncertain, and since dynamic responses to inputs are highly parameter-dependent, it is difficult to assess the confidence associated with these in silico predictions. In particular, models with complex dynamics - such as oscillations - must be fit with computationally expensive global optimization routines, and cannot take advantage of existing measures of identifiability. Despite the difficulty of modelling them mathematically, limit cycle oscillations play a key role in many biological processes, including cell cycling, metabolism, neuron firing, and circadian rhythms. In this study, we employ an efficient parameter estimation technique to enable a bootstrap uncertainty analysis for limit cycle models. Since the primary role of systems biology models is the insight they provide on responses to rate perturbations, we extend our uncertainty analysis to include first order sensitivity coefficients. Using a literature model of circadian rhythms, we show how predictive precision is degraded with decreasing sample points and increasing relative error. Additionally, we show how this method can be used for model discrimination by comparing the output identifiability of two candidate model structures to published literature data. Our method permits modellers of oscillatory systems to confidently

9. The P Value Problem in Otolaryngology: Shifting to Effect Sizes and Confidence Intervals.

Science.gov (United States)

Vila, Peter M; Townsend, Melanie Elizabeth; Bhatt, Neel K; Kao, W Katherine; Sinha, Parul; Neely, J Gail

2017-06-01

10. Graphing within-subjects confidence intervals using SPSS and S-Plus.

Science.gov (United States)

Wright, Daniel B

2007-02-01

Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.
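
The normalization at the heart of these procedures is short enough to sketch. This stdlib-Python version uses the simple subject-mean-centring variant in the spirit of Loftus and Masson (1994), with z rather than t and invented data; the SPSS/S-Plus code in the article implements the full published method:

```python
from statistics import NormalDist, mean, stdev

def within_subject_cis(data, conf=0.95):
    """Remove each subject's mean, add back the grand mean, then compute
    per-condition CIs from the normalized scores (z in place of t; a
    simplified variant, not the article's full procedure)."""
    grand = mean(v for row in data for v in row)
    norm = [[v - mean(row) + grand for v in row] for row in data]
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    n = len(data)
    cis = []
    for j in range(len(data[0])):
        col = [row[j] for row in norm]
        m, se = mean(col), stdev(col) / n ** 0.5
        cis.append((m - z * se, m + z * se))
    return cis

# Rows are subjects, columns are conditions; large between-subject differences.
data = [[10.0, 12.0], [20.0, 23.0], [30.0, 31.0], [40.0, 44.0]]
cis = within_subject_cis(data)
```

Because the subject effects are removed before the error term is computed, the intervals reflect only the within-subject variability relevant to condition comparisons, and are far narrower than naive between-subject CIs on the same columns.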

11. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

Science.gov (United States)

Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

2015-06-01

Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to be able to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.

12. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

Science.gov (United States)

Terry, Leann; Kelley, Ken

2012-11-01

Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
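
The accuracy-in-parameter-estimation logic is easy to demonstrate in its simplest form, planning n for the mean rather than for a reliability coefficient (whose interval is more involved, as the paper develops): increase n until the expected interval is no wider than the target.

```python
from statistics import NormalDist

def n_for_ci_width(sd, target_width, conf=0.95):
    """Smallest n for which the expected z-based CI for a mean is no wider
    than target_width (the AIPE idea in its simplest form)."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    n = 2
    while 2 * z * sd / n ** 0.5 > target_width:
        n += 1
    return n

# A full width of 0.2 SD units at 95% confidence needs several hundred cases.
n_needed = n_for_ci_width(1.0, 0.2)
```

The paper's second method adds an assurance step on top of this: inflating n so the realized, not just expected, width falls under the target with (say) 99% probability.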

13. Confidence interval of intrinsic optimum temperature estimated using thermodynamic SSI model

Institute of Scientific and Technical Information of China (English)

Takaya Ikemoto; Issei Kurahashi; Pei-Jian Shi

2013-01-01

The intrinsic optimum temperature for the development of ectotherms is one of the most important factors not only for their physiological processes but also for ecological and evolutionary processes. The Sharpe-Schoolfield-Ikemoto (SSI) model succeeded in defining the temperature that can thermodynamically meet the condition that at a particular temperature the probability of an active enzyme reaching its maximum activity is realized. Previously, an algorithm was developed by Ikemoto (Tropical malaria does not mean hot environments. Journal of Medical Entomology, 45, 963-969) to estimate the model parameters, but that program was computationally very time consuming. Now, investigators can use the SSI model more easily because a fully automatic computer program was designed by Shi et al. (A modified program for estimating the parameters of the SSI model. Environmental Entomology, 40, 462-469). However, the statistical significance of the point estimate of the intrinsic optimum temperature for each ectotherm has not yet been determined. Here, we provide a new method for calculating the confidence interval of the estimated intrinsic optimum temperature by modifying the approximate bootstrap confidence intervals method. For this purpose, it was necessary to develop a new program for faster estimation of the parameters in the SSI model, which we have also done.

14. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

Science.gov (United States)

Kim, Kyung Yong; Lee, Won-Chan

2018-01-01

Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…

15. Binomial confidence intervals for testing non-inferiority or superiority: a practitioner's dilemma.

Science.gov (United States)

Pradhan, Vivek; Evans, John C; Banerjee, Tathagata

2016-08-01

In testing for non-inferiority or superiority in a single arm study, the confidence interval of a single binomial proportion is frequently used. A number of such intervals are proposed in the literature and implemented in standard software packages. Unfortunately, use of different intervals leads to conflicting conclusions. Practitioners thus face a serious dilemma in deciding which one to depend on. Is there a way to resolve this dilemma? We address this question by investigating the performances of ten commonly used intervals of a single binomial proportion, in the light of two criteria, viz., coverage and expected length of the interval. © The Author(s) 2013.
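
The dilemma is easy to reproduce: for the same data, two standard intervals disagree. A plain-Python sketch of the Wald and Wilson score intervals, two of the commonly compared candidates:

```python
from statistics import NormalDist

def wald_ci(x, n, conf=0.95):
    # the textbook normal-approximation interval
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = x / n
    half = z * (p * (1 - p) / n) ** 0.5
    return p - half, p + half

def wilson_ci(x, n, conf=0.95):
    # the Wilson score interval: centre shrunk toward 1/2, never leaves [0, 1]
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p, z2 = x / n, z * z
    centre = (p + z2 / (2 * n)) / (1 + z2 / n)
    half = z * ((p * (1 - p) + z2 / (4 * n)) / n) ** 0.5 / (1 + z2 / n)
    return centre - half, centre + half

wa, wi = wald_ci(2, 20), wilson_ci(2, 20)  # same data, different limits
```

With 2 successes in 20 trials the Wald interval even dips below zero while the Wilson interval stays inside [0, 1]; against a non-inferiority margin near either lower limit, the two would support opposite conclusions, which is precisely the practitioner's dilemma the paper examines.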

16. Closed-form confidence intervals for functions of the normal mean and standard deviation.

Science.gov (United States)

Donner, Allan; Zou, G Y

2012-08-01

Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.

17. Confidence intervals for modeling anthocyanin retention in grape pomace during nonisothermal heating.

Science.gov (United States)

Mishra, D K; Dolan, K D; Yang, L

2008-01-01

Degradation of nutraceuticals in low- and intermediate-moisture foods heated at high temperature (>100 degrees C) is difficult to model because of the nonisothermal condition. Isothermal experiments above 100 degrees C are difficult to design because they require high pressure and small sample size in sealed containers. Therefore, a nonisothermal method was developed to estimate the thermal degradation kinetic parameters of nutraceuticals and determine the confidence intervals for the parameters and the predicted Y (concentration). Grape pomace at 42% moisture content (wb) was heated in sealed 202 x 214 steel cans in a steam retort at 126.7 degrees C for > 30 min. Can center temperature was measured by thermocouple and predicted using Comsol software. Thermal conductivity (k) and specific heat (C(p)) were estimated as quadratic functions of temperature using Comsol and nonlinear regression. The k and C(p) functions were then used to predict temperature inside the grape pomace during retorting. Similar heating experiments were run at different time-temperature treatments from 8 to 25 min for kinetic parameter estimation. Anthocyanin concentration in the grape pomace was measured using HPLC. Degradation rate constant (k(110 degrees C)) and activation energy (E(a)) were estimated using nonlinear regression. The thermophysical property estimates at 100 degrees C were k = 0.501 W/m degrees C and C(p) = 3600 J/kg, and the kinetic parameters were k(110 degrees C) = 0.0607/min and E(a) = 65.32 kJ/mol. The 95% confidence intervals for the parameters and the confidence bands and prediction bands for anthocyanin retention were plotted. These methods are useful for thermal processing design for nutraceutical products.

18. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

Directory of Open Access Journals (Sweden)

Tudor DRUGAN

2003-08-01

Full Text Available The aim of the paper was to present the usefulness of the binomial distribution in the study of contingency tables and the problems of approximating the binomial distribution to normality (its limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in terms of contingency table cells, based on their mathematical expressions, restricts the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information from the confidence interval computed for a specified method (the confidence interval boundaries, the percentage of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level) was solved through implementation of original algorithms in the PHP programming language. Expressions containing two binomial variables were treated separately, and an original method of computing the confidence interval for the two-variable case was proposed and implemented. The graphical representation of expressions of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation in graphical representation, producing quadratic rather than triangular surface maps. Based on an original algorithm, a module was implemented in PHP to plot these triangular surfaces. All the implementations described above were used to compute the confidence intervals and to estimate their performance across binomial distribution sample sizes and variables.
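As a concrete illustration of the kind of interval being benchmarked here, the following is a hedged sketch (not the paper's PHP implementation) of the standard Wald and Wilson confidence intervals for a binomial proportion:

```python
# Wald (normal-approximation) and Wilson 95% CIs for x successes in n trials.
import math

def wald_ci(x, n, z=1.959964):
    """Simple normal-approximation interval; can misbehave near 0 or 1."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(x, n, z=1.959964):
    """Wilson score interval; better coverage for small n or extreme p."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

lo_w, hi_w = wald_ci(8, 10)     # Wald interval for 8/10
lo_s, hi_s = wilson_ci(8, 10)   # Wilson interval for 8/10
```

For 8 successes in 10 trials the Wald interval presses against the upper bound of 1, while the Wilson interval stays inside (0, 1), which is exactly the normal-approximation problem the paper discusses.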

19. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

Science.gov (United States)

Shin, H.; Heo, J.; Kim, T.; Jung, Y.

2007-12-01

The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.

20. A Note on Confidence Interval for the Power of the One Sample Test

Directory of Open Access Journals (Sweden)

A. Wong

2010-01-01

Full Text Available In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for the power of the Student's t-test that detects the difference (μ − μ0). The calculations require only the density and the cumulative distribution functions of the standard normal distribution. In addition, the methodology presented can also be applied to determine the required sample size when the effect size and the power of a size α test of the mean are given.
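The power quantity the note's interval is built around can itself be computed from the noncentral t distribution. A sketch, assuming SciPy and writing the standardized effect as delta = (μ − μ0)/σ:

```python
# Power of a two-sided one-sample t-test via the noncentral t distribution.
import math
from scipy.stats import nct, t

def t_test_power(delta, n, alpha=0.05):
    """Power to detect standardized effect delta with sample size n."""
    df = n - 1
    ncp = delta * math.sqrt(n)          # noncentrality parameter
    tcrit = t.ppf(1 - alpha / 2, df)    # two-sided critical value
    # P(reject) = P(T > tcrit) + P(T < -tcrit) under the alternative
    return 1 - nct.cdf(tcrit, df, ncp) + nct.cdf(-tcrit, df, ncp)

p = t_test_power(0.5, 30)   # medium effect, n = 30
```

Power increases monotonically in n, which is what makes the reverse use (solving for the required sample size at a given power) possible.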

1. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

Science.gov (United States)

Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

2017-10-01

The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and the maximum theoretical value of the correlation coefficient r can prove useful for estimating the reliability of predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum attainable r value degrades as data uncertainty increases. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
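A minimal sketch of the Fisher r → Z construction the article relies on, applied to an illustrative r and n (the values are invented):

```python
# Approximate 95% CI for a correlation coefficient via the Fisher transform.
import math

def fisher_ci(r, n, z=1.959964):
    """CI for r from n paired observations, assuming bivariate normality."""
    zr = math.atanh(r)             # Fisher r -> Z transform
    se = 1.0 / math.sqrt(n - 3)    # approximate standard error on the Z scale
    lo_z, hi_z = zr - z * se, zr + z * se
    return math.tanh(lo_z), math.tanh(hi_z)   # back-transform to the r scale

lo, hi = fisher_ci(0.80, 50)
```

Note the asymmetry of the resulting interval around r = 0.80: the transform respects the bounded range of r, which a naive symmetric interval would not.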

2. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

DEFF Research Database (Denmark)

Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

2005-01-01

The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

3. The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

Directory of Open Access Journals (Sweden)

Yi Wang

2016-12-01

Full Text Available With stated confidence levels and system complexity taken into account, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers' demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion for constructing the optimal confidence interval, derive the theoretical formula of the optimal confidence interval, and propose a practical and efficient algorithm based on entropy theory and complexity theory. To improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained by price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (radial basis function) neural network methods, forecast the intervals of the soybean meal and non-GMO (genetically modified organism) soybean continuous futures closing prices, and implement unconditional coverage, independence, and conditional coverage tests for the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels, and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy, and that the hierarchical error estimation method obtains higher accuracy and better interval estimation than the non-hierarchical method in a stable system.

4. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

Directory of Open Access Journals (Sweden)

2004-08-01

Full Text Available Assessment of a controlled clinical trial involves interpreting key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the effect of the treatment is a dichotomous variable. Defined as the difference in event rate between the treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Method comparison uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
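For reference, the asymptotic (Wald) interval that this paper takes as its baseline can be sketched as follows; the event counts are invented for illustration:

```python
# Absolute risk reduction (ARR) with the asymptotic (Wald) 95% CI.
import math

def arr_wald_ci(x_c, n_c, x_t, n_t, z=1.959964):
    """ARR = control event rate - treatment event rate, with Wald CI."""
    p_c, p_t = x_c / n_c, x_t / n_t
    arr = p_c - p_t
    se = math.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)
    return arr, arr - z * se, arr + z * se

# 30/100 events under control, 15/100 under treatment
arr, lo, hi = arr_wald_ci(30, 100, 15, 100)
```

The reciprocal of the ARR gives the number needed to treat (about 7 for these counts), which is why a CI for the ARR translates directly into a CI for the NNT.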

5. Comparing confidence intervals for Goodman and Kruskal’s gamma coefficient

NARCIS (Netherlands)

van der Ark, L.A.; van Aert, R.C.M.

2015-01-01

This study was motivated by the question of which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte Carlo study, we investigated the coverage and computation time of the Goodman–Kruskal CI, the Cliff-consistent CI, the

6. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

Science.gov (United States)

Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

2014-05-01

7. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

Science.gov (United States)

Doebler, Anna; Doebler, Philipp; Holling, Heinz

2013-01-01

The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

8. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

Science.gov (United States)

Grech, Victor

2018-03-01

The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
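The same standard-error and confidence-interval arithmetic the paper demonstrates in Excel can be checked in a few lines of Python; the data values are illustrative, and `t.ppf(0.975, n - 1)` plays the role of Excel's `T.INV.2T(0.05, n-1)`:

```python
# Standard error and 95% CI of a sample mean, Excel-style, in Python.
import math
from scipy.stats import t

data = [4.2, 5.1, 3.8, 4.9, 5.5, 4.0, 4.7, 5.2]   # illustrative sample
n = len(data)
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # like STDEV.S
se = sd / math.sqrt(n)                  # standard error of the mean
tcrit = t.ppf(0.975, n - 1)             # like T.INV.2T(0.05, n-1)
ci = (mean - tcrit * se, mean + tcrit * se)
```

Using the t critical value rather than 1.96 matters for the small samples typical of spreadsheet-based analyses.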

9. Methods for confidence interval estimation of a ratio parameter with application to location quotients

Directory of Open Access Journals (Sweden)

Beyene Joseph

2005-10-01

Full Text Available Abstract Background The location quotient (LQ) ratio, a measure designed to quantify and benchmark the degree of relative concentration of an activity in the analysis of area localization, has received considerable attention in the geographic and economics literature. This index can also naturally be applied in the context of population health to quantify and compare health outcomes across spatial domains. However, one commonly observed limitation of the LQ is its widespread use as only a point estimate without an accompanying confidence interval. Methods In this paper we present statistical methods that can be used to construct confidence intervals for location quotients. The delta and Fieller's methods are generic approaches for a ratio parameter, and the generalized linear modelling framework is a useful re-parameterization particularly helpful for generating profile-likelihood-based confidence intervals for the location quotient. A simulation experiment is carried out to assess the performance of each of the analytic approaches, and a health utilization data set is used for illustration. Results Both the simulation results and the findings from the empirical data show that the different analytical methods produce very similar confidence limits for location quotients. When the incidence of the outcome is not rare and sample sizes are large, the confidence limits are almost indistinguishable. The confidence limits from the generalized linear model approach might be preferable in small-sample situations. Conclusion The LQ is a useful measure which allows quantification and comparison of health and other outcomes across defined geographical regions. It is a very simple index to compute and has a straightforward interpretation. Reporting this estimate with appropriate confidence limits using the methods presented in this paper will make the measure particularly attractive for policy and decision makers.
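As a rough sketch of the delta-method idea on the log scale (one of the generic ratio approaches mentioned above), treating the LQ as a ratio of two independent proportions; a full treatment would account for the region being part of the total, which this simplified version ignores, and the counts are invented:

```python
# Delta-method CI for a location quotient on the log scale (simplified:
# region and total counts treated as independent binomials).
import math

def lq_log_delta_ci(x_region, n_region, x_total, n_total, z=1.959964):
    """LQ = (region rate) / (overall rate), with an approximate 95% CI."""
    lq = (x_region / n_region) / (x_total / n_total)
    # Approximate variance of log(LQ) from binomial variances
    var_log = (1/x_region - 1/n_region) + (1/x_total - 1/n_total)
    half = z * math.sqrt(var_log)
    return lq, lq * math.exp(-half), lq * math.exp(half)

# 40 events in a region of 1000 vs 300 events in a population of 20000
lq, lo, hi = lq_log_delta_ci(40, 1000, 300, 20000)
```

An interval that excludes 1 (as here) indicates a region whose concentration of the outcome differs from the overall rate.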

10. Bayesian-statistical decision threshold, detection limit, and confidence interval in nuclear radiation measurement

International Nuclear Information System (INIS)

Weise, K.

1998-01-01

When the contribution of a particular type of nuclear radiation is to be detected, for instance a spectral line of interest for some purpose of radiation protection, and quantities and their uncertainties must be taken into account which, such as influence quantities, cannot be determined by repeated measurements or by counting nuclear radiation events, then conventional statistics of event frequencies is not sufficient for defining the decision threshold, the detection limit, and the limits of a confidence interval. These characteristic limits are therefore redefined on the basis of Bayesian statistics for wider applicability and in such a way that the usual practice remains as far as possible unaffected. The principle of maximum entropy is applied to establish probability distributions from the available information. Quantiles of these distributions are used to define the characteristic limits. Such a distribution must not, however, be interpreted as a distribution of event frequencies, such as the Poisson distribution; it rather expresses the actual state of incomplete knowledge of a physical quantity. The different definitions and interpretations and their quantitative consequences are presented and discussed with two examples. The new approach provides a theoretical basis for the DIN 25482-10 standard, presently in preparation, for general applications of the characteristic limits. (orig.)

11. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

KAUST Repository

2012-12-06

In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate the system performance under very realistic Nakagami-m fading and additive white Gaussian noise channel conditions. The accuracy of the obtained results is verified by running the simulation at a confidence interval reliability of 95%. We see that as the number of simulation runs N increases, the simulated error rate becomes closer to the actual one and the confidence interval narrows. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.

12. Energy Performance Certificate of building and confidence interval in assessment: An Italian case study

International Nuclear Information System (INIS)

Tronchin, Lamberto; Fabbri, Kristian

2012-01-01

The Directive 2002/91/CE introduced the Energy Performance Certificate (EPC), an energy policy tool. The aim of the EPC is to inform building buyers about the energy performance and energy costs of buildings. EPCs represent a specific energy policy tool to orient the building sector and real-estate markets toward higher energy efficiency. The effectiveness of the EPC depends on two factors: •The accuracy of the energy performance evaluation made by independent experts. •The capability of the energy classification and of the scale of energy performance to control energy index fluctuations. In this paper, the results of a case study located in Italy are shown, in which 162 independent technicians evaluated the energy performance of the same building. The results reveal which part of the confidence interval depends on software misunderstanding, and show that the energy classification ranges are able to tolerate the fluctuation of the energy indices. The example was chosen in accordance with the legislation of the Emilia-Romagna Region on Energy Efficiency of Buildings. Following these results, some thermo-economic evaluations related to building and energy labelling are illustrated, since the EPC is an energy policy tool for the real-estate market and building sector aimed at building or retrofitting energy-efficient buildings. - Highlights: ► Evaluation of the accuracy of energy performance of buildings in relation to the knowledge of independent experts. ► Round robin test based on 162 case studies on the confidence intervals expressed by independent experts. ► Statistical considerations on the confidence intervals expressed by independent experts and energy simulation software. ► Relation between the "proper class" in energy classification of buildings and the confidence intervals of independent experts.

13. Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion

Directory of Open Access Journals (Sweden)

David Shilane

2013-01-01

Full Text Available The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.

14. The best confidence interval of the failure rate and unavailability per demand when few experimental data are available

International Nuclear Information System (INIS)

Goodman, J.

1985-01-01

Using the few available data, likelihood functions for the failure rate and unavailability per demand are constructed. These likelihood functions are used to obtain likelihood density functions for the failure rate and unavailability per demand, and the best (or shortest) confidence intervals for these quantities are provided. The failure rate and unavailability per demand are important characteristics needed for reliability and availability analysis. The methods for estimating these characteristics when plenty of observed data are available are well known. However, on many occasions, when we deal with rare failure modes or with new equipment or components for which sufficient experience has not accumulated, we have scarce data in which few or zero failures have occurred. In these cases, a technique which reflects exactly our state of knowledge is required. This technique is based on the likelihood density function or on Bayesian methods, depending on the available prior distribution. To extract the maximum amount of information from the data, the best confidence interval is determined

15. Bootstrap resampling: a powerful method of assessing confidence intervals for doses from experimental data

International Nuclear Information System (INIS)

Iwi, G.; Millard, R.K.; Palmer, A.M.; Preece, A.W.; Saunders, M.

1999-01-01

Bootstrap resampling provides a versatile and reliable statistical method for estimating the accuracy of quantities which are calculated from experimental data. It is an empirically based method, in which large numbers of simulated datasets are generated by computer from existing measurements, so that approximate confidence intervals of the derived quantities may be obtained by direct numerical evaluation. A simple introduction to the method is given via a detailed example of estimating 95% confidence intervals for cumulated activity in the thyroid following injection of 99mTc-sodium pertechnetate using activity-time data from 23 subjects. The application of the approach to estimating confidence limits for the self-dose to the kidney following injection of 99mTc-DTPA organ imaging agent based on uptake data from 19 subjects is also illustrated. Results are then given for estimates of doses to the foetus following administration of 99mTc-sodium pertechnetate for clinical reasons during pregnancy, averaged over 25 subjects. The bootstrap method is well suited for applications in radiation dosimetry including uncertainty, reliability and sensitivity analysis of dose coefficients in biokinetic models, but it can also be applied in a wide range of other biomedical situations. (author)
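The resampling recipe described above is easy to sketch: resample the observed values with replacement many times and take empirical quantiles of the recomputed statistic. Here is a minimal percentile-bootstrap interval for a mean, with invented uptake values standing in for the real activity-time data:

```python
# Percentile bootstrap 95% CI for a mean, from one observed sample.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical uptake measurements for 23 subjects (cf. the thyroid example)
uptake = np.array([3.1, 2.7, 4.0, 3.5, 2.9, 3.8, 3.3, 4.2, 2.5, 3.6,
                   3.0, 3.9, 2.8, 3.4, 3.7, 3.2, 4.1, 2.6, 3.5, 3.0,
                   3.3, 3.8, 2.9])

# Resample with replacement and recompute the statistic of interest
boot_means = np.array([
    rng.choice(uptake, size=uptake.size, replace=True).mean()
    for _ in range(10000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])   # percentile interval
```

The same loop works unchanged for any derived quantity (a cumulated activity, a dose coefficient) by replacing `.mean()` with the full calculation, which is what makes the method attractive for dosimetry.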

16. A Note on Confidence Interval for the Power of the One Sample Test

OpenAIRE

A. Wong

2010-01-01

In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for...

17. Rescaled Range Analysis and Detrended Fluctuation Analysis: Finite Sample Properties and Confidence Intervals

Czech Academy of Sciences Publication Activity Database

4/2010, č. 3 (2010), s. 236-250 ISSN 1802-4696 R&D Projects: GA ČR GD402/09/H045; GA ČR GA402/09/0965 Grant - others:GA UK(CZ) 118310 Institutional research plan: CEZ:AV0Z10750506 Keywords : rescaled range analysis * detrended fluctuation analysis * Hurst exponent * long-range dependence Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/kristoufek-rescaled range analysis and detrended fluctuation analysis finite sample properties and confidence intervals.pdf

18. A NEW METHOD FOR CONSTRUCTING CONFIDENCE INTERVAL FOR CPM BASED ON FUZZY DATA

Directory of Open Access Journals (Sweden)

2011-06-01

Full Text Available A measurement control system ensures that measuring equipment and measurement processes are fit for their intended use, and it is important in achieving product quality objectives. In most real-life applications, the observations are fuzzy. In some cases the specification limits (SLs) are not precise numbers and are expressed in fuzzy terms, so that the classical capability indices cannot be applied. In this paper we obtain a 100(1 − α)% fuzzy confidence interval for the Cpm fuzzy process capability index, where instead of precise quality limits we have two membership functions for the specification limits.

19. Confidence intervals for the first crossing point of two hazard functions.

Science.gov (United States)

Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

2009-12-01

The phenomenon of crossing hazard rates is common in clinical trials with time-to-event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing-hazards alternative. However, there have been relatively few approaches available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazards modeling with a Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial datasets.

20. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

Science.gov (United States)

Tarone, Aaron M; Foran, David R

2008-07-01

Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

1. Statistical variability and confidence intervals for planar dose QA pass rates

Energy Technology Data Exchange (ETDEWEB)

Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher; Kumaraswamy, Lalith; Podgorsak, Matthew B. [Department of Physics, State University of New York at Buffalo, Buffalo, New York 14260 (United States) and Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Department of Biostatistics, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Molecular and Cellular Biophysics and Biochemistry, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States) and Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, New York 14214 (United States)

2011-11-15

Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm^2 uniform grid, a 2 detector/cm^2 uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization

2. The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals.

Science.gov (United States)

2016-02-01

Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing CI widths and thus improving the precision of effect size estimation.
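To see why typical sample sizes produce CI widths above 1, one can sketch an approximate normal-theory CI for Cohen's d between two groups (a standard large-sample variance approximation, not the survey's exact method):

```python
# Approximate normal-theory 95% CI for Cohen's d between two groups.
import math

def cohens_d_ci(d, n1, n2, z=1.959964):
    """Large-sample CI for d; se uses the common variance approximation."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
    return d - z * se, d + z * se

lo, hi = cohens_d_ci(0.5, 20, 20)     # a medium effect with 20 per group
width = hi - lo
```

With 20 participants per group the interval for a medium effect is wider than 1 (roughly consistent with the surveyed medians), while quadrupling both groups to 200 each brings the width under 0.5, which is the precision argument the paper makes.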

3. Assessing a disaggregated energy input: using confidence intervals around translog elasticity estimates

International Nuclear Information System (INIS)

Hisnanick, J.J.; Kyer, B.L.

1995-01-01

The role of energy in the production of manufacturing output has been debated extensively in the literature, particularly its relationship with capital and labor. In an attempt to provide some clarification in this debate, a two-step methodology was used. First under the assumption of a five-factor production function specification, we distinguished between electric and non-electric energy and assessed each component's relationship with capital and labor. Second, we calculated both the Allen and price elasticities and constructed 95% confidence intervals around these values. Our approach led to the following conclusions: that the disaggregation of the energy input into electric and non-electric energy is justified; that capital and electric energy and capital and non-electric energy are substitutes, while labor and electric energy and labor and non-electric energy are complements in production; and that capital and energy are substitutes, while labor and energy are complements. (author)

4. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

Science.gov (United States)

Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

2015-01-01

The problem of establishing noninferiority of a new treatment relative to a standard (control) treatment is discussed for ordinal categorical data. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed in which the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.
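The Z-type logic described above can be sketched for the simpler two-proportion case. Note this is an illustrative simplification: the paper's effect measure is for ordinal categories and its variance is estimated with U-statistics, whereas the counts, margin, and Wald variance below are invented for demonstration.

```python
from math import sqrt
from statistics import NormalDist

def noninferiority_z(x_new, n_new, x_std, n_std, margin):
    """One-sided Z test of H0: p_new - p_std <= -margin against
    H1: p_new - p_std > -margin (noninferiority). Simple Wald
    variance; illustrative stand-in for the paper's ordinal measure."""
    p1, p2 = x_new / n_new, x_std / n_std
    se = sqrt(p1 * (1 - p1) / n_new + p2 * (1 - p2) / n_std)
    z = (p1 - p2 + margin) / se
    return z, 1 - NormalDist().cdf(z)   # one-sided p-value

z, p = noninferiority_z(78, 100, 80, 100, margin=0.10)
```

A small p-value leads to the conclusion that the new treatment is not worse than the standard by more than the margin.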

5. An SPSS Macro to Compute Confidence Intervals for Pearson's Correlation

Directory of Open Access Journals (Sweden)

Bruce Weaver

2014-04-01

Full Text Available In many disciplines, including psychology, medical research, epidemiology and public health, authors are required, or at least encouraged, to report confidence intervals (CIs) along with effect size estimates. Many students and researchers in these areas use IBM-SPSS for statistical analysis. Unfortunately, the CORRELATIONS procedure in SPSS does not provide CIs in the output. Various work-around solutions have been suggested for obtaining CIs for rho with SPSS, but most of them have been sub-optimal. Since release 18, it has been possible to compute bootstrap CIs, but only if users have the optional bootstrap module. The !rhoCI macro described in this article is accessible to all SPSS users with release 14 or later. It directs output from the CORRELATIONS procedure to another dataset, restructures that dataset to have one row per correlation, computes a CI for each correlation, and displays the results in a single table. Because the macro uses the CORRELATIONS procedure, it allows users to specify a list of two or more variables to include in the correlation matrix, to choose a confidence level, and to select either listwise or pairwise deletion. Thus, it offers substantial improvements over previous solutions to the problem of how to compute CIs for rho with SPSS.
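The quantity such a macro computes for each correlation is the standard Fisher z-transform interval. A minimal Python sketch of that formula (not the macro's SPSS code; the r and n values are made up):

```python
from math import atanh, sqrt, tanh
from statistics import NormalDist

def pearson_ci(r, n, conf=0.95):
    """CI for a Pearson correlation via the Fisher z transform."""
    z = atanh(r)                          # transform to an ~normal scale
    se = 1 / sqrt(n - 3)                  # approximate standard error of z
    crit = NormalDist().inv_cdf((1 + conf) / 2)
    return tanh(z - crit * se), tanh(z + crit * se)  # back-transform

lo, hi = pearson_ci(0.50, n=30)
```

The back-transformed interval is asymmetric around r, which is expected: the sampling distribution of r is skewed except at r = 0.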

6. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

Science.gov (United States)

Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

2011-02-20

A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were equal to 2.0% and 2.7%, respectively. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4%, compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy-to-use graphical interface was developed to determine easily whether the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.

7. Computing confidence and prediction intervals of industrial equipment degradation by bootstrapped support vector regression

International Nuclear Information System (INIS)

Lins, Isis Didier; Droguett, Enrique López; Moura, Márcio das Chagas; Zio, Enrico; Jacinto, Carlos Magno

2015-01-01

Data-driven learning methods for predicting the evolution of the degradation processes affecting equipment are becoming increasingly attractive in reliability and prognostics applications. Among these, we consider here Support Vector Regression (SVR), which has provided promising results in various applications. Nevertheless, the predictions provided by SVR are point estimates, whereas in order to take better informed decisions, an uncertainty assessment should also be carried out. For this, we apply bootstrap to SVR so as to obtain confidence and prediction intervals, without having to make any assumption about probability distributions and with good performance even when only a small data set is available. The bootstrapped SVR is first verified on Monte Carlo experiments and then applied to a real case study concerning the prediction of degradation of a component from the offshore oil industry. The results obtained indicate that the bootstrapped SVR is a promising tool for providing reliable point and interval estimates, which can inform maintenance-related decisions on degrading components. - Highlights: • Bootstrap (pairs/residuals) and SVR are used as an uncertainty analysis framework. • Numerical experiments are performed to assess accuracy and coverage properties. • More bootstrap replications do not significantly improve performance. • Degradation of equipment of offshore oil wells is estimated by bootstrapped SVR. • Estimates of the scale growth rate can support maintenance-related decisions.
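The pairs-bootstrap idea is independent of the underlying regressor. A dependency-free sketch, using ordinary least squares as a stand-in for SVR and synthetic degradation-like data (all values illustrative, not from the paper's case study):

```python
import random
from statistics import mean

random.seed(42)

def ols_fit(xs, ys):
    """Least-squares line; dependency-free stand-in for SVR."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# synthetic 'degradation' data: linear trend plus noise
xs = [i / 10 for i in range(30)]
ys = [0.5 + 2.0 * x + random.gauss(0, 0.2) for x in xs]

B, x0, preds = 500, 2.0, []
for _ in range(B):                       # pairs bootstrap: resample (x, y) pairs
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    a, b = ols_fit([xs[i] for i in idx], [ys[i] for i in idx])
    preds.append(a + b * x0)
preds.sort()
lo, hi = preds[int(0.025 * B)], preds[int(0.975 * B) - 1]  # 95% CI at x0
```

The same resample-refit-predict loop applies unchanged if `ols_fit` is replaced by an SVR fit, which is the configuration the paper studies.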

8. Determining the confidence levels of sensor outputs using neural networks

Energy Technology Data Exchange (ETDEWEB)

1996-12-31

This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analyses. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with an output of a given sensor. A three layer back-propagation neural network was trained on a subset of the sensor confidence level space, and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in

9. Determination of confidence limits for experiments with low numbers of counts

International Nuclear Information System (INIS)

Kraft, R.P.; Burrows, D.N.; Nousek, J.A.

1991-01-01

Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference. 12 refs
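The Bayesian construction that the tables summarize can be reproduced numerically: with a flat prior on the source strength s ≥ 0 and a known mean background b, the posterior is proportional to exp(-(s+b))·(s+b)^N. A numeric-integration sketch (not the paper's tables; the counts are illustrative):

```python
from math import exp

def bayes_upper_limit(n, b, cl=0.90, s_max=60.0, steps=60000):
    """Bayesian upper limit on a Poisson source strength s with known
    mean background b and a flat prior on s >= 0 (the construction
    discussed by Kraft, Burrows & Nousek; numeric sketch, not their
    tabulated values)."""
    ds = s_max / steps
    # unnormalized posterior p(s | n) ~ exp(-(s + b)) * (s + b)**n
    w = [exp(-((i + 0.5) * ds + b)) * ((i + 0.5) * ds + b) ** n
         for i in range(steps)]
    total, acc = sum(w), 0.0
    for i, wi in enumerate(w):
        acc += wi
        if acc >= cl * total:           # accumulate posterior mass to cl
            return (i + 1) * ds
    return s_max

limit = bayes_upper_limit(n=3, b=1.2)   # illustrative: 3 counts, background 1.2
```

As expected, a larger assumed background lowers the limit on the source: the same observed counts are partly explained by background.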

10. Confidence interval estimation of the difference between two sensitivities to the early disease stage.

Science.gov (United States)

Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili

2014-03-01

Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is, normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities on early disease detection between two markers, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

11. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

Directory of Open Access Journals (Sweden)

Melissa Coulson

2010-07-01

Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant, and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform also requires that researchers interpret CIs without recourse to NHST.

12. PCA-based bootstrap confidence interval tests for gene-disease association involving multiple SNPs

Directory of Open Access Journals (Sweden)

Xue Fuzhong

2010-01-01

Full Text Available Abstract Background Genetic association study is currently the primary vehicle for identification and characterization of disease-predisposing variant(s), and usually involves multiple single-nucleotide polymorphisms (SNPs). However, SNP-wise association tests raise concerns over multiple testing. Haplotype-based methods have the advantage of being able to account for correlations between neighbouring SNPs, yet assuming Hardy-Weinberg equilibrium (HWE) and a potentially large number of degrees of freedom can harm their statistical power and robustness. Approaches based on principal component analysis (PCA) are preferable in this regard, but their performance varies with the method of extracting principal components (PCs). Results The PCA-based bootstrap confidence interval test (PCA-BCIT), which directly uses the PC scores to assess gene-disease association, was developed and evaluated for three ways of extracting PCs, i.e., cases only (CAES), controls only (COES), and cases and controls combined (CES). Extraction of PCs with COES is preferred to that with CAES and CES. Performance of the test was examined via simulations as well as analyses of data on rheumatoid arthritis and heroin addiction; the test maintains the nominal level under the null hypothesis and shows performance comparable to the permutation test. Conclusions PCA-BCIT is a valid and powerful method for assessing gene-disease association involving multiple SNPs.
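The bootstrap-confidence-interval step of PCA-BCIT reduces to a percentile bootstrap on a score difference between cases and controls. A generic sketch, with plain simulated values standing in for the PC scores the method derives from SNP data (effect size and sample sizes are invented):

```python
import random
from statistics import mean

random.seed(7)

def percentile_ci(a, b, stat=mean, B=2000, conf=0.95):
    """Percentile-bootstrap CI for stat(a) - stat(b); here plain
    scores stand in for the PC scores PCA-BCIT derives from SNPs."""
    diffs = []
    for _ in range(B):
        ra = [random.choice(a) for _ in a]   # resample each group
        rb = [random.choice(b) for _ in b]
        diffs.append(stat(ra) - stat(rb))
    diffs.sort()
    k = int((1 - conf) / 2 * B)
    return diffs[k], diffs[B - k - 1]

cases    = [random.gauss(1.0, 1.0) for _ in range(60)]   # simulated scores
controls = [random.gauss(0.0, 1.0) for _ in range(60)]
lo, hi = percentile_ci(cases, controls)
associated = not (lo <= 0.0 <= hi)   # CI excluding 0 -> evidence of association
```

The decision rule mirrors the paper's: association is declared when the bootstrap interval excludes zero, which here plays the role of the permutation test's significance threshold.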

13. Determining the confidence levels of sensor outputs using neural networks

International Nuclear Information System (INIS)

Broten, G.S.; Wood, H.C.

1995-01-01

This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analyses. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with an output of a given sensor. A three layer back-propagation neural network was trained on a subset of the sensor confidence level space, and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in

14. R package to estimate intracluster correlation coefficient with confidence interval for binary data.

Science.gov (United States)

Chakraborty, Hrishikesh; Hossain, Akhtar

2018-03-01

The Intracluster Correlation Coefficient (ICC) is a major parameter of interest in cluster randomized trials that measures the degree to which responses within the same cluster are correlated. There are several types of ICC estimators and confidence intervals (CIs) suggested in the literature for binary data. Studies have compared the relative weaknesses and advantages of ICC estimators and their CIs for binary data, and suggested situations where one is advantageous in practical research. The commonly used statistical computing systems currently facilitate estimation of only a very few variants of the ICC and its CI. To address the limitations of current statistical packages, we developed an R package, ICCbin, to facilitate estimating the ICC and its CI for binary responses using different methods. The ICCbin package is designed to provide estimates of the ICC in 16 different ways, including analysis of variance methods, moments based estimation, direct probabilistic methods, correlation based estimation, and a resampling method. The CI of the ICC is estimated using 5 different methods. The package also generates clustered binary data with an exchangeable correlation structure. ICCbin provides two functions for users: rcbin() generates clustered binary data, and iccbin() estimates the ICC and its CI. Users can choose the appropriate ICC and CI estimates from the wide selection in the output. The R package ICCbin presents flexible and easy-to-use ways to generate clustered binary data and to estimate the ICC and its CI for binary responses using different methods. The package is freely available for use with R from the CRAN repository (https://cran.r-project.org/package=ICCbin). We believe that this package can be a very useful tool for researchers designing cluster randomized trials with binary outcomes. Copyright © 2017 Elsevier B.V. All rights reserved.
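One of the classical estimators such a package implements is the one-way ANOVA ICC. A formula-level Python sketch (not the ICCbin code; the toy clusters are illustrative):

```python
from statistics import mean

def icc_anova(clusters):
    """One-way ANOVA estimator of the ICC for binary responses, one of
    the classical estimators a package like ICCbin offers (formula
    sketch, not the package's code). `clusters` is a list of 0/1 lists."""
    k = len(clusters)
    N = sum(len(c) for c in clusters)
    grand = sum(sum(c) for c in clusters) / N
    ssb = sum(len(c) * (mean(c) - grand) ** 2 for c in clusters)  # between
    ssw = sum(sum((y - mean(c)) ** 2 for y in c) for c in clusters)  # within
    msb, msw = ssb / (k - 1), ssw / (N - k)
    # n0: adjusted average cluster size for unbalanced designs
    n0 = (N - sum(len(c) ** 2 for c in clusters) / N) / (k - 1)
    return (msb - msw) / (msb + (n0 - 1) * msw)

rho = icc_anova([[1, 1, 1, 0], [0, 0, 0, 1], [1, 1, 0, 1], [0, 0, 1, 0]])
```

A CI for this estimate is where the methods the record describes come in; the point estimator alone says nothing about its sampling uncertainty.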

15. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

Science.gov (United States)

Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

2016-04-01

Many investigators rely on previously published point estimates of the intraclass correlation coefficient, rather than on their associated confidence intervals, to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith

16. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

Science.gov (United States)

Bonett, Douglas G.; Price, Robert M.

2012-01-01

Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
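For paired proportions, an adjusted Wald interval typically adds a small constant to each cell of the paired 2×2 table before applying the ordinary Wald formula. A sketch using an Agresti-Min-style adjustment of 0.5 per cell (the exact adjustment proposed by Bonett and Price may differ in detail, and the counts are invented):

```python
from math import sqrt
from statistics import NormalDist

def adj_wald_paired(n11, n12, n21, n22, conf=0.95, add=0.5):
    """Adjusted Wald CI for the difference of paired binomial
    proportions: add `add` to each cell of the paired 2x2 table,
    then apply the Wald formula (Agresti-Min-style adjustment;
    the exact Bonett-Price variant may differ in detail)."""
    a, b, c, d = (x + add for x in (n11, n12, n21, n22))
    n = a + b + c + d
    diff = (b - c) / n                        # estimate of p1 - p2
    var = (b + c - (b - c) ** 2 / n) / n ** 2 # Wald variance, paired data
    z = NormalDist().inv_cdf((1 + conf) / 2)
    return diff - z * sqrt(var), diff + z * sqrt(var)

lo, hi = adj_wald_paired(40, 12, 5, 43)   # invented paired counts
```

Only the discordant cells (n12, n21) drive the estimate, which is why the variance involves b and c but not the concordant counts directly.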

17. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

Science.gov (United States)

Odgaard, Eric C.; Fowler, Robert L.

2010-01-01

Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

18. Coverage probability of bootstrap confidence intervals in heavy-tailed frequency models, with application to precipitation data

Czech Academy of Sciences Publication Activity Database

Kyselý, Jan

2010-01-01

Roč. 101, 3-4 (2010), s. 345-361 ISSN 0177-798X R&D Projects: GA AV ČR KJB300420801 Institutional research plan: CEZ:AV0Z30420517 Keywords : bootstrap * extreme value analysis * confidence intervals * heavy-tailed distributions * precipitation amounts Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.684, year: 2010

19. A computer program (COSTUM) to calculate confidence intervals for in situ stress measurements. V. 1

International Nuclear Information System (INIS)

Dzik, E.J.; Walker, J.R.; Martin, C.D.

1989-03-01

The state of in situ stress is one of the parameters required both for the design and analysis of underground excavations and for the evaluation of numerical models used to simulate underground conditions. To account for the variability and uncertainty of in situ stress measurements, it is desirable to apply confidence limits to measured stresses. Several measurements of the state of stress along a borehole are often made to estimate the average state of stress at a point. Since stress is a tensor, calculating the mean stress and confidence limits using scalar techniques is both inappropriate and incorrect. A computer program has been written to calculate and present the mean principal stresses and the confidence limits for the magnitudes and directions of the mean principal stresses. This report describes the computer program, COSTUM.
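Why scalar averaging of principal stresses is inappropriate can be seen even in two dimensions: averaging the tensor components and then extracting principal values gives a different answer than averaging the principal magnitudes directly. A 2-D sketch (COSTUM works with full 3-D tensors and adds confidence limits; the magnitudes and orientations here are invented):

```python
from math import cos, radians, sin, sqrt

def principal_2d(sxx, syy, sxy):
    """Principal stresses of a 2-D symmetric stress tensor."""
    c = (sxx + syy) / 2
    r = sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    return c + r, c - r

def components(s1, s2, theta_deg):
    """xy components of principal stresses (s1, s2) rotated by theta."""
    t = radians(theta_deg)
    return (s1 * cos(t) ** 2 + s2 * sin(t) ** 2,
            s1 * sin(t) ** 2 + s2 * cos(t) ** 2,
            (s1 - s2) * sin(t) * cos(t))

# two 'measurements': equal magnitudes, orientations 0 and 60 degrees
m1, m2 = components(10.0, 4.0, 0.0), components(10.0, 4.0, 60.0)
mean_tensor = tuple((u + v) / 2 for u, v in zip(m1, m2))
s1_tensor, _ = principal_2d(*mean_tensor)   # tensor-averaged major stress
s1_scalar = (10.0 + 10.0) / 2               # naive scalar average = 10.0
```

Because the two measurements disagree in orientation, the tensor mean has a smaller major principal stress (8.5) than the naive scalar average (10.0); a scalar treatment silently discards the directional information.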

20. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

Science.gov (United States)

Fung, Tak; Keenan, Kevin

2014-01-01

The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥ 95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥ 98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥ 95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥ 95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

1. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

Directory of Open Access Journals (Sweden)

Tak Fung

Full Text Available The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥ 95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥ 98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥ 95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥ 95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

2. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

Directory of Open Access Journals (Sweden)

Priya Ranganathan

2015-01-01

Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

3. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

Science.gov (United States)

Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

2015-01-01

In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper. PMID:25878958

4. The confidence-accuracy relationship for eyewitness identification decisions: Effects of exposure duration, retention interval, and divided attention.

Science.gov (United States)

Palmer, Matthew A; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

2013-03-01

Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the CA relationship for eyewitness identification decisions is affected by three, forensically relevant variables: exposure duration, retention interval, and divided attention at encoding. In Study 1 (N = 986), a field experiment, we examined the effects of exposure duration (5 s vs. 90 s) and retention interval (immediate testing vs. a 1-week delay) on the CA relationship. In Study 2 (N = 502), we examined the effects of attention during encoding on the CA relationship by reanalyzing data from a laboratory experiment in which participants viewed a stimulus video under full or divided attention conditions and then attempted to identify two targets from separate lineups. Across both studies, all three manipulations affected identification accuracy. The central analyses concerned the CA relation for positive identification decisions. For the manipulations of exposure duration and retention interval, overconfidence was greater in the more difficult conditions (shorter exposure; delayed testing) than the easier conditions. Only the exposure duration manipulation influenced resolution (which was better for 5 s than 90 s), and only the retention interval manipulation affected calibration (which was better for immediate testing than delayed testing). In all experimental conditions, accuracy and diagnosticity increased with confidence, particularly at the upper end of the confidence scale. Implications for theory and forensic settings are discussed.
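The calibration and over/underconfidence quantities compared across conditions can be computed directly from (confidence, correct) pairs. A sketch with invented data (the study's binning and scoring conventions may differ):

```python
def calibration_stats(records):
    """Calibration table and over/underconfidence for a list of
    (confidence_percent, correct) pairs. Illustrative data below;
    the study's binning may differ."""
    bins = {}
    for conf, correct in records:
        bins.setdefault(conf // 10 * 10, []).append((conf, correct))
    table = {b: (sum(c for c, _ in v) / len(v),     # mean confidence
                 sum(ok for _, ok in v) / len(v))   # proportion correct
             for b, v in sorted(bins.items())}
    mean_conf = sum(c for c, _ in records) / len(records) / 100
    accuracy = sum(ok for _, ok in records) / len(records)
    return table, mean_conf - accuracy   # > 0 means overconfidence

data = [(90, True), (90, False), (80, True), (70, True),
        (60, False), (50, False), (90, True), (80, False)]
table, overconfidence = calibration_stats(data)
```

Comparing such tables between, say, short and long exposure conditions is exactly the kind of contrast the study reports: harder conditions push the confidence-accuracy gap further above zero.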

5. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

Science.gov (United States)

Carnegie, Nicole Bohme

2011-04-15

The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.

6. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

Science.gov (United States)

Harari, Gil

2014-01-01

Statistical significance, usually expressed as a p-value, and the confidence interval (CI) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about statistical probability and about conclusions regarding the clinical significance of study findings. This article describes the two methodologies, compares them, assesses their suitability for the different needs of study-result analysis, and explains the situations in which each method should be used.

7. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

Science.gov (United States)

Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

2012-08-01

This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
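The virtual-data-set procedure can be sketched outside Excel. The following is a hedged, stdlib-only illustration using a straight-line model with a closed-form fit standing in for SOLVER's nonlinear optimizer; the Monte Carlo loop (synthetic data sets built from the fitted curve plus noise at the residual scale, then refit) is the part that mirrors the paper's procedure. Names and defaults are assumptions for illustration.

```python
import random
import statistics

def ols(xs, ys):
    """Closed-form least-squares fit of y = a + b*x; returns (a, b)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

def monte_carlo_ci(xs, ys, n_sets=200, alpha=0.05, seed=7):
    """Refit 'virtual' data sets (fitted curve plus Gaussian noise at
    the residual scale) and report percentile CIs for (a, b)."""
    rng = random.Random(seed)
    a, b = ols(xs, ys)
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    s = statistics.stdev(resid)
    fits = [ols(xs, [a + b * x + rng.gauss(0.0, s) for x in xs])
            for _ in range(n_sets)]
    cis = []
    for k in (0, 1):
        vals = sorted(f[k] for f in fits)
        cis.append((vals[int(alpha / 2 * n_sets)],
                    vals[int((1 - alpha / 2) * n_sets) - 1]))
    return (a, b), cis
```

Swapping `ols` for any nonlinear fitting routine reproduces the spreadsheet procedure, with the refit loop playing the role of SOLVER's repeated fits over the 'virtual' data sets.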

8. Technical Report: Algorithm and Implementation for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

Energy Technology Data Exchange (ETDEWEB)

McLoughlin, Kevin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

2016-01-11

This report describes the design and implementation of an algorithm for estimating relative microbial abundances, together with confidence limits, using data from metagenomic DNA sequencing. For the background behind this project and a detailed discussion of our modeling approach for metagenomic data, we refer the reader to our earlier technical report, dated March 4, 2014. Briefly, we described a fully Bayesian generative model for paired-end sequence read data, incorporating the effects of the relative abundances, the distribution of sequence fragment lengths, fragment position bias, sequencing errors and variations between the sampled genomes and the nearest reference genomes. A distinctive feature of our modeling approach is the use of a Chinese restaurant process (CRP) to describe the selection of genomes to be sampled, and thus the relative abundances. The CRP component is desirable for fitting abundances to reads that may map ambiguously to multiple targets, because it naturally leads to sparse solutions that select the best representative from each set of nearly equivalent genomes.

9. Technical Report on Modeling for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

Energy Technology Data Exchange (ETDEWEB)

McLoughlin, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

2016-01-11

The overall aim of this project is to develop a software package, called MetaQuant, that can determine the constituents of a complex microbial sample and estimate their relative abundances by analysis of metagenomic sequencing data. The goal for Task 1 is to create a generative model describing the stochastic process underlying the creation of sequence read pairs in the data set. The stages in this generative process include the selection of a source genome sequence for each read pair, with probability dependent on its abundance in the sample. The other stages describe the evolution of the source genome from its nearest common ancestor with a reference genome, breakage of the source DNA into short fragments, and the errors in sequencing the ends of the fragments to produce read pairs.

10. Confidence intervals and hypothesis testing for the Permutation Entropy with an application to epilepsy

Science.gov (United States)

Traversaro, Francisco; O. Redelico, Francisco

2018-04-01

In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy. But there is still no known method to determine the accuracy of this measure. There has been little research on the statistical properties of this quantity for characterizing time series. The literature describes some methods for resampling quantities used in nonlinear dynamics - such as the largest Lyapunov exponent - but these seem to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes - the 1/f^α noise family - and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been used extensively in epilepsy research for detecting dynamical changes in the electroencephalogram (EEG) signal, with no consideration of the variability of this complexity measure. As an application, the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.
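The core of the approach can be sketched in a few lines. The sketch below is an assumed simplification of the authors' method: it estimates the ordinal-pattern distribution of the series, then resamples patterns independently (ignoring the overlap dependence between consecutive windows) to approximate the sampling distribution of the Permutation Entropy estimator.

```python
import math
import random

def _pattern_counts(series, order):
    """Count ordinal patterns (argsort tuples) over sliding windows."""
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    return counts

def permutation_entropy(series, order=3):
    """Normalized Shannon entropy of the ordinal patterns of a series."""
    counts = _pattern_counts(series, order)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log(p) for p in probs) / math.log(math.factorial(order))

def bootstrap_pe(series, order=3, n_boot=500, seed=3):
    """Resample patterns from their fitted distribution to approximate
    the sampling distribution of the PE estimator (sorted ascending)."""
    rng = random.Random(seed)
    counts = _pattern_counts(series, order)
    total = sum(counts.values())
    patterns, weights = zip(*counts.items())
    dist = []
    for _ in range(n_boot):
        tally = {}
        for p in rng.choices(patterns, weights=weights, k=total):
            tally[p] = tally.get(p, 0) + 1
        probs = [c / total for c in tally.values()]
        dist.append(-sum(q * math.log(q) for q in probs)
                    / math.log(math.factorial(order)))
    return sorted(dist)
```

For white noise the normalized PE sits near 1, and the bootstrap distribution concentrates tightly around it; percentile cut-points of `dist` then give the confidence interval and hypothesis-test thresholds the title refers to.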

11. Weighted profile likelihood-based confidence interval for the difference between two proportions with paired binomial data.

Science.gov (United States)

Pradhan, Vivek; Saha, Krishna K; Banerjee, Tathagata; Evans, John C

2014-07-30

Inference on the difference between two binomial proportions in the paired binomial setting is often an important problem in many biomedical investigations. Tang et al. (2010, Statistics in Medicine) discussed six methods to construct confidence intervals (henceforth abbreviated as CIs) for the difference between two proportions in the paired binomial setting using the method of variance estimates recovery. In this article, we propose weighted profile likelihood-based CIs for the difference between proportions of a paired binomial distribution. However, instead of the usual likelihood, we use a weighted likelihood that essentially makes adjustments to the cell frequencies of a 2 × 2 table in the spirit of Agresti and Min (2005, Statistics in Medicine). We then conduct numerical studies to compare the performance of the proposed CIs with those of Tang et al. and Agresti and Min in terms of coverage probabilities and expected lengths. Our numerical study clearly indicates that the weighted profile likelihood-based intervals and the Jeffreys interval (cf. Tang et al.) are superior in terms of achieving the nominal level, and in terms of expected lengths they are competitive. Finally, we illustrate the use of the proposed CIs with real-life examples. Copyright © 2014 John Wiley & Sons, Ltd.
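As a point of reference for the adjustment being borrowed, the simpler Agresti-Min-style adjusted Wald interval for the paired difference can be sketched as follows. This is an assumed baseline illustration, not the weighted profile likelihood interval proposed in the article; `n11..n22` are the usual paired 2 × 2 counts, and the +0.5 per cell is the adjustment constant.

```python
import math
from statistics import NormalDist

def paired_diff_ci(n11, n12, n21, n22, conf=0.95, adjust=0.5):
    """Adjusted Wald CI for the difference of paired proportions:
    add a small constant to each cell (Agresti-Min style), then apply
    the standard paired-difference variance estimate."""
    cells = [c + adjust for c in (n11, n12, n21, n22)]
    n = sum(cells)
    p12, p21 = cells[1] / n, cells[2] / n
    d = p12 - p21  # estimate of p1 - p2 for paired data
    se = math.sqrt((p12 + p21 - d * d) / n)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return d, (d - z * se, d + z * se)
```

For a symmetric table (equal discordant cells) the estimated difference is zero and the interval is symmetric about zero, as expected.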

12. Determining frequentist confidence limits using a directed parameter space search

International Nuclear Information System (INIS)

Daniel, Scott F.; Connolly, Andrew J.; Schneider, Jeff

2014-01-01

We consider the problem of inferring constraints on a high-dimensional parameter space with a computationally expensive likelihood function. We propose a machine learning algorithm that maps out the frequentist confidence limit on parameter space by intelligently targeting likelihood evaluations so as to quickly and accurately characterize the likelihood surface in both low- and high-likelihood regions. We compare our algorithm to Bayesian credible limits derived by the well-tested Markov Chain Monte Carlo (MCMC) algorithm using both multi-modal toy likelihood functions and the 7-year Wilkinson Microwave Anisotropy Probe (WMAP) cosmic microwave background likelihood function. We find that our algorithm correctly identifies the location, general size, and general shape of high-likelihood regions in parameter space while being more robust against multi-modality than MCMC.

13. Indirect methods for reference interval determination - review and recommendations.

Science.gov (United States)

Jones, Graham R D; Haeckel, Rainer; Loh, Tze Ping; Sikaris, Ken; Streichert, Thomas; Katayev, Alex; Barth, Julian H; Ozarda, Yesim

2018-04-19

Reference intervals are a vital part of the information supplied by clinical laboratories to support interpretation of numerical pathology results such as are produced in clinical chemistry and hematology laboratories. The traditional method for establishing reference intervals, known as the direct approach, is based on collecting samples from members of a preselected reference population, making the measurements and then determining the intervals. An alternative approach is to perform analysis of results generated as part of routine pathology testing and using appropriate statistical techniques to determine reference intervals. This is known as the indirect approach. This paper from a working group of the International Federation of Clinical Chemistry (IFCC) Committee on Reference Intervals and Decision Limits (C-RIDL) aims to summarize current thinking on indirect approaches to reference intervals. The indirect approach has some major potential advantages compared with direct methods. The processes are faster, cheaper and do not involve patient inconvenience, discomfort or the risks associated with generating new patient health information. Indirect methods also use the same preanalytical and analytical techniques used for patient management and can provide very large numbers for assessment. Limitations to the indirect methods include possible effects of diseased subpopulations on the derived interval. The IFCC C-RIDL aims to encourage the use of indirect methods to establish and verify reference intervals, to promote publication of such intervals with clear explanation of the process used and also to support the development of improved statistical techniques for these studies.

14. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

Science.gov (United States)

Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

2014-01-01

The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in the qualification of biomarker platforms. In recent years, several new methods have been proposed for the construction of CIs for the CCC, but a comprehensive comparison of them has not been attempted. The methods consist of the delta method and jackknifing, each with and without Fisher's Z-transformation, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal distribution as well as a heavier-tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to the nominal level, in contrast to the others, which yielded overly liberal coverage. Approaches based upon the delta method and the Bayesian method with conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrate the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for the treatment of insomnia.
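The JZ idea (jackknife pseudo-values computed on Fisher's Z scale, then back-transformed) can be sketched for the two-rater case. This is an assumed minimal implementation for illustration, not the authors' code; function names are hypothetical.

```python
import math
import statistics
from statistics import NormalDist

def ccc(xs, ys):
    """Lin's concordance correlation coefficient for two raters."""
    n = len(xs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx2 = sum((x - mx) ** 2 for x in xs) / n
    sy2 = sum((y - my) ** 2 for y in ys) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

def jackknife_z_ci(xs, ys, conf=0.95):
    """Jackknife CI for the CCC computed on Fisher's Z scale (JZ),
    back-transformed to the CCC scale at the end."""
    n = len(xs)
    full = math.atanh(ccc(xs, ys))
    # Leave-one-out pseudo-values on the Z scale.
    pseudo = [
        n * full - (n - 1) * math.atanh(ccc(xs[:i] + xs[i + 1:],
                                            ys[:i] + ys[i + 1:]))
        for i in range(n)
    ]
    est = statistics.fmean(pseudo)
    se = statistics.stdev(pseudo) / math.sqrt(n)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return math.tanh(est - z * se), math.tanh(est + z * se)
```

Working on the Z scale and back-transforming with `tanh` keeps the interval inside (-1, 1), which is one reason the JZ variant behaves well near the boundary.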

15. User guide to the UNC1NLI1 package and three utility programs for computation of nonlinear confidence and prediction intervals using MODFLOW-2000

DEFF Research Database (Denmark)

Christensen, Steen; Cooley, R.L.

a model (for example when using the Parameter-Estimation Process of MODFLOW-2000) it is advantageous to also use regression-based methods to quantify uncertainty. For this reason the UNC Process computes (1) confidence intervals for parameters of the Parameter-Estimation Process and (2) confidence...

16. Confidence intervals for effect sizes: compliance and clinical significance in the Journal of Consulting and Clinical Psychology.

Science.gov (United States)

Odgaard, Eric C; Fowler, Robert L

2010-06-01

In 2005, the Journal of Consulting and Clinical Psychology (JCCP) became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest editorial effort to improve statistical reporting practices in any APA journal in at least a decade, in this article we investigate the efficacy of that change. All intervention studies published in JCCP in 2003, 2004, 2007, and 2008 were reviewed. Each article was coded for method of clinical significance, type of ES, and type of associated CI, broken down by statistical test (F, t, chi-square, r/R², and multivariate modeling). By 2008, clinical significance compliance was 75% (up from 31%), with 94% of studies reporting some measure of ES (reporting improved for individual statistical tests, ranging from η² = .05 to .17, with reasonable CIs). Reporting of CIs for ESs also improved, although only to 40%. Also, the vast majority of reported CIs used approximations, which become progressively less accurate for smaller sample sizes and larger ESs (cf. Algina & Keselman, 2003). Changes are near asymptote for ESs and clinical significance, but CIs lag behind. As CIs for ESs are required for primary outcomes, we show how to compute CIs for the vast majority of ESs reported in JCCP, with an example of how to use CIs for ESs as a method to assess clinical significance.

17. Recurrence determinism and Li-Yorke chaos for interval maps

OpenAIRE

2017-01-01

Recurrence determinism, one of the fundamental characteristics of recurrence quantification analysis, measures predictability of a trajectory of a dynamical system. It is tightly connected with the conditional probability that, given a recurrence, following states of the trajectory will be recurrences. In this paper we study recurrence determinism of interval dynamical systems. We show that recurrence determinism distinguishes three main types of ω-limit sets of zero-entropy maps: fini...

18. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

Science.gov (United States)

Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

2016-01-01

Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
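The resampling core of the SNR-CI procedure can be sketched as below. The RMS-ratio definition of SNR and the explicit baseline/signal windows are assumptions made for illustration; the paper's exact estimator may differ. Each bootstrap replicate averages a resampled pool of trials before computing the SNR, and the lower percentile of the resulting distribution is the SNR_LB exclusion criterion.

```python
import math
import random

def snr(trials, base, sig):
    """SNR of the trial-averaged waveform: RMS over the signal window
    divided by RMS over the pre-stimulus baseline window."""
    n = len(trials)
    avg = [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

    def rms(seg):
        return math.sqrt(sum(v * v for v in seg) / len(seg))

    return rms(avg[sig[0]:sig[1]]) / rms(avg[base[0]:base[1]])

def snr_lower_bound(trials, base, sig, n_boot=1000, alpha=0.05, seed=5):
    """Resample trials with replacement and return the lower bound of
    the percentile CI on the waveform SNR (the SNR_LB criterion)."""
    rng = random.Random(seed)
    boot = sorted(
        snr(rng.choices(trials, k=len(trials)), base, sig)
        for _ in range(n_boot)
    )
    return boot[int(alpha / 2 * n_boot)]
```

A subject would be excluded when `snr_lower_bound` falls below the chosen signal-to-noise criterion, replacing visual inspection with a reproducible threshold.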

19. Five-band microwave radiometer system for noninvasive brain temperature measurement in newborn babies: Phantom experiment and confidence interval

Science.gov (United States)

Sugiura, T.; Hirata, H.; Hand, J. W.; van Leeuwen, J. M. J.; Mizushina, S.

2011-10-01

Clinical trials of hypothermic brain treatment for newborn babies are currently hindered by the difficulty in measuring deep brain temperatures. One possible method for noninvasive and continuous temperature monitoring that is completely passive and inherently safe is passive microwave radiometry (MWR). We have developed a five-band microwave radiometer system with a single dual-polarized, rectangular waveguide antenna operating within the 1-4 GHz range and a method for retrieving the temperature profile from five radiometric brightness temperatures. This paper addresses (1) the temperature calibration for the five microwave receivers, (2) a measurement experiment using a phantom model that mimics the temperature profile in a newborn baby, and (3) the feasibility of noninvasive monitoring of deep brain temperatures. Temperature resolutions were 0.103, 0.129, 0.138, 0.105 and 0.111 K for the 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively. The precision of temperature estimation (2σ confidence interval) was about 0.7°C at a 5-cm depth from the phantom surface. Accuracy, the difference between the temperature estimated by this system and that measured by a thermocouple at a depth of 5 cm, was about 2°C. The current result is not yet satisfactory for clinical application, which requires both precision and accuracy better than 1°C at a depth of 5 cm. Since a couple of possible causes of this inaccuracy have been identified, we believe that the system can move a step closer to the clinical application of MWR for hypothermic rescue treatment.

20. A spreadsheet template compatible with Microsoft Excel and iWork Numbers that returns the simultaneous confidence intervals for all pairwise differences between multiple sample means.

Science.gov (United States)

Brown, Angus M

2010-04-01

The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns the test statistic F. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
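The initial ANOVA step of the template is easy to reproduce outside a spreadsheet. The stdlib sketch below returns the F statistic and its degrees of freedom; the subsequent multiple-comparison CIs require studentized-range critical values, which the Python standard library does not provide, so they are omitted here. The function name is illustrative.

```python
import statistics

def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic and its degrees of freedom."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group sizes times squared mean offsets.
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations about each group mean.
    ss_within = 0.0
    for g in groups:
        m = statistics.fmean(g)
        ss_within += sum((x - m) ** 2 for x in g)
    df1, df2 = k - 1, n - k
    f = (ss_between / df1) / (ss_within / df2)
    return f, df1, df2
```

Comparing the returned F against the critical F at (df1, df2) reproduces the template's decision rule before any pairwise testing.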

1. Determination of post-burial interval using entomology: A review.

Science.gov (United States)

Singh, Rajinder; Sharma, Sahil; Sharma, Arun

2016-08-01

Insects and other arthropods are used in different matters pertinent to the criminal justice system, as they play a very important role in the decomposition of cadavers. They are used as evidence in criminal investigations to determine the post mortem interval (PMI). Various research and review articles are available on the use of forensic entomology to determine PMI in terrestrial environments, but much less work has been reported in the context of buried bodies. Burying the carcass is one of the methods used by criminals to conceal the crime. So, to draw the attention of researchers to this growing field and to assist investigating agencies, the present paper reviews the studies done on determination of the post-burial interval (PBI), its importance and future prospects. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

2. The Confidence-Accuracy Relationship for Eyewitness Identification Decisions: Effects of Exposure Duration, Retention Interval, and Divided Attention

Science.gov (United States)

Palmer, Matthew A.; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

2013-01-01

Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the…

3. Factorial-based response-surface modeling with confidence intervals for optimizing thermal-optical transmission analysis of atmospheric black carbon

International Nuclear Information System (INIS)

Conny, J.M.; Norris, G.A.; Gould, T.R.

2009-01-01

Thermal-optical transmission (TOT) analysis measures black carbon (BC) in atmospheric aerosol on a fibrous filter. The method pyrolyzes organic carbon (OC) and employs laser light absorption to distinguish BC from the pyrolyzed OC; however, the instrument does not necessarily separate the two physically. In addition, a comprehensive temperature protocol for the analysis based on the Beer-Lambert Law remains elusive. Here, empirical response-surface modeling was used to show how the temperature protocol in TOT analysis can be modified to distinguish pyrolyzed OC from BC based on the Beer-Lambert Law. We determined the apparent specific absorption cross sections for pyrolyzed OC (σ_Char) and BC (σ_BC), which accounted for individual absorption enhancement effects within the filter. Response-surface models of these cross sections were derived from a three-factor central-composite factorial experimental design: temperature and duration of the high-temperature step in the helium phase, and the heating increase in the helium-oxygen phase. The response surface for σ_BC, which varied with instrument conditions, revealed a ridge indicating the correct conditions for OC pyrolysis in helium. The intersection of the σ_BC and σ_Char surfaces indicated the conditions where the cross sections were equivalent, satisfying an important assumption upon which the method relies. 95% confidence interval surfaces defined a confidence region for a range of pyrolysis conditions. Analyses of wintertime samples from Seattle, WA revealed a temperature between 830 °C and 850 °C as most suitable for the helium high-temperature step lasting 150 s. However, a temperature as low as 750 °C could not be rejected statistically.

4. Perpetrator admissions and earwitness renditions: the effects of retention interval and rehearsal on accuracy of and confidence in memory for criminal accounts

OpenAIRE

Boydell, Carroll

2008-01-01

While much research has explored how well earwitnesses can identify the voice of a perpetrator, little research has examined how well they can recall details from a perpetrator’s confession. This study examines the accuracy-confidence correlation for memory for details from a perpetrator’s verbal account of a crime, as well as the effects of two variables commonly encountered in a criminal investigation (rehearsal and length of retention interval) on that correlation. Results suggest that con...

5. Method and system for assigning a confidence metric for automated determination of optic disc location

Science.gov (United States)

Karnowski, Thomas P [Knoxville, TN; Tobin, Jr., Kenneth W.; Muthusamy Govindasamy, Vijaya Priya [Knoxville, TN; Chaum, Edward [Memphis, TN

2012-07-10

A method for assigning a confidence metric for automated determination of optic disc location that includes analyzing a retinal image and determining at least two sets of coordinates locating an optic disc in the retinal image. The sets of coordinates can be determined using first and second image analysis techniques that are different from one another. An accuracy parameter can be calculated and compared to a primary risk cut-off value. A high confidence level can be assigned to the retinal image if the accuracy parameter is less than the primary risk cut-off value and a low confidence level can be assigned to the retinal image if the accuracy parameter is greater than the primary risk cut-off value. The primary risk cut-off value being selected to represent an acceptable risk of misdiagnosis of a disease having retinal manifestations by the automated technique.

6. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

Science.gov (United States)

Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

2011-06-01

For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.

7. Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.

Science.gov (United States)

Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas

2016-01-01

While perceptual learning increases objective sensitivity, the effects on the constant interaction of the process of perception and its metacognitive evaluation have been rarely investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects, here we show that perceptual sensitivity and confidence in it increased with training. The metacognitive sensitivity (estimated from certainty ratings by a bias-free signal detection theoretic approach), in contrast, did not. Concomitant 3 Hz transcranial alternating current stimulation (tACS) was applied in compliance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications, nor decision confidence itself determined the subjects' visual perceptual learning. Improvements of objective performance and the metacognitive confidence in it were rather determined by the perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance, nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.

8. Understanding Consumer Confidence in the Safety of Food: Its Two-Dimensional Structure and Determinants

NARCIS (Netherlands)

Jonge, de J.; Trijp, van J.C.M.; Renes, R.J.; Frewer, L.J.

2007-01-01

Understanding of the determinants of consumer confidence in the safety of food is important if effective risk management and communication are to be developed. In the research reported here, we attempt to understand the roles of consumer trust in actors in the food chain and regulators, consumer

9. Prediction of the distillation temperatures of crude oils using ¹H NMR and support vector regression with estimated confidence intervals.

Science.gov (United States)

Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J

2015-09-01

This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using ¹H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.

10. INEMO: Distributed RF-Based Indoor Location Determination with Confidence Indicator

Directory of Open Access Journals (Sweden)

Youxian Sun

2007-12-01

Using radio signal strength (RSS) for localization in sensor networks is an attractive method, since it is a cost-efficient way to provide range indication. In this paper, we present a two-tier distributed approach for RF-based indoor location determination. Our approach, namely INEMO, provides positioning accuracy at room granularity and office-cube granularity. A target can first issue a room-granularity request, and the background anchor nodes cooperate to accomplish the positioning process. Anchors in the same room can provide cube granularity if the target requires further accuracy. Fixed anchor nodes keep monitoring the status of nearby anchors, and local reference matching is used to support room separation. Furthermore, we utilize the RSS difference to infer the positioning confidence. The simulation results demonstrate the efficiency of the proposed RF-based indoor location determination approach.

11. Applying Fuzzy Logic and Data Mining Techniques in Wireless Sensor Network for Determination Residential Fire Confidence

Directory of Open Access Journals (Sweden)

Mirjana Maksimović

2014-09-01

Full Text Available The main goal of soft computing technologies (fuzzy logic, neural networks, fuzzy rule-based systems, data mining techniques, etc.) is to find and describe structural patterns in data in order to explain connections between the data and, on that basis, to create predictive or descriptive models. Integrating these technologies in sensor nodes seems to be a good idea because it can lead to significant improvements in network performance, above all reduced energy consumption and an extended network lifetime. The purpose of this paper is to analyze different algorithms for fire confidence determination in order to see which of the methods and parameter values work best for the given problem. Hence, an analysis between different classification algorithms in the case of nominal and numerical data

12. Post choice information integration as a causal determinant of confidence: Novel data and a computational account.

Science.gov (United States)

Moran, Rani; Teodorescu, Andrei R; Usher, Marius

2015-05-01

Confidence judgments are pivotal in the performance of daily tasks and in many domains of scientific research, including the behavioral sciences, psychology and neuroscience. Positive resolution, i.e., the positive correlation between choice-correctness and choice-confidence, is a critical property of confidence judgments, which justifies their ubiquity. In the current paper, we study the mechanism underlying confidence judgments and their resolution by investigating the source of the inputs for the confidence calculation. We focus on the intriguing debate between two families of confidence theories. According to single-stage theories, confidence is based on the same information that underlies the decision (or on some other aspect of the decision process), whereas according to dual-stage theories, confidence is affected by novel information that is collected after the decision was made. In three experiments, we support the case for dual-stage theories by showing that post-choice perceptual availability manipulations exert a causal effect on confidence-resolution in the decision-followed-by-confidence paradigm. These findings establish the role of RT2, the duration of the post-choice information-integration stage, as a prime dependent variable that theories of confidence should account for. We then present a novel list of robust empirical patterns ('hurdles') involving RT2 to guide further theorizing about confidence judgments. Finally, we present a unified computational dual-stage model for choice, confidence and their latencies, namely the collapsing confidence boundary model (CCB). According to CCB, a diffusion-process choice is followed by a second evidence-integration stage towards a stochastic collapsing confidence boundary. Despite its simplicity, CCB clears the entire list of hurdles. Copyright © 2015 Elsevier Inc. All rights reserved.

13. Determinants and consequences of short birth interval in rural Bangladesh: A cross-sectional study

NARCIS (Netherlands)

H.R. de Jonge (Hugo); K. Azad (Kishwar); N. Seward (Nadine); A. Kuddus (Abdul); S. Shaha (Sanjit); J. Beard (James); A. Costello (Anthony); A.J. Houweling (Tanja); E. Fottrell (Edward)

2014-01-01

Background: Short birth intervals are known to have negative effects on pregnancy outcomes. We analysed data from a large population surveillance system in rural Bangladesh to identify predictors of short birth interval and determine consequences of short intervals on pregnancy outcomes.

14. Normality of Residuals Is a Continuous Variable, and Does Seem to Influence the Trustworthiness of Confidence Intervals: A Response to, and Appreciation of, Williams, Grajales, and Kurkiewicz (2013)

Directory of Open Access Journals (Sweden)

Jason W. Osborne

2013-09-01

Full Text Available Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed response or predictor variables. They go on to discuss estimate bias and provide a helpful summary of the assumptions of multiple regression when using ordinary least squares. While we were not as precise as we could have been when discussing assumptions of normality, the critical issue of the 2002 paper remains: researchers often do not check on or report on the assumptions of their statistical methods. This response expands on the points made by Williams et al., advocates a thorough examination of data prior to reporting results, and provides an example of how incremental improvements in meeting the assumption of normality of residuals incrementally improve the accuracy of confidence intervals.

15. Thought confidence as a determinant of persuasion: the self-validation hypothesis.

Science.gov (United States)

Petty, Richard E; Briñol, Pablo; Tormala, Zakary L

2002-05-01

Previous research in the domain of attitude change has described 2 primary dimensions of thinking that impact persuasion processes and outcomes: the extent (amount) of thinking and the direction (valence) of issue-relevant thought. The authors examined the possibility that another, more meta-cognitive aspect of thinking is also important: the degree of confidence people have in their own thoughts. Four studies test the notion that thought confidence affects the extent of persuasion. When positive thoughts dominate in response to a message, increasing confidence in those thoughts increases persuasion, but when negative thoughts dominate, increasing confidence decreases persuasion. In addition, using self-reported and manipulated thought confidence in separate studies, the authors provide evidence that the magnitude of the attitude-thought relationship depends on the confidence people have in their thoughts. Finally, the authors also show that these self-validation effects are most likely in situations that foster high amounts of information processing activity.

16. Resampling Approach for Determination of the Method for Reference Interval Calculation in Clinical Laboratory Practice

Science.gov (United States)

Pavlov, Igor Y.; Wilson, Andrew R.; Delgado, Julio C.

2010-01-01

Reference intervals (RI) play a key role in clinical interpretation of laboratory test results. Numerous articles are devoted to analyzing and discussing various methods of RI determination. The two most widely used approaches are the parametric method, which assumes data normality, and a nonparametric, rank-based procedure. The decision about which method to use is usually made arbitrarily. The goal of this study was to demonstrate that using a resampling approach for the comparison of RI determination techniques could help researchers select the right procedure. Three methods of RI calculation—parametric, transformed parametric, and quantile-based bootstrapping—were applied to multiple random samples drawn from 81 values of complement factor B observations and from a computer-simulated normally distributed population. It was shown that differences in RI between legitimate methods could reach 20% or even more. The transformed parametric method was found to be the best method for the calculation of RI of non-normally distributed factor B estimations, producing an unbiased RI and the lowest confidence limits and interquartile ranges. For a simulated Gaussian population, parametric calculations, as expected, were the best; quantile-based bootstrapping produced biased results at low sample sizes, and the transformed parametric method generated heavily biased RI. The resampling approach could help compare different RI calculation methods. An algorithm showing a resampling procedure for choosing the appropriate method for RI calculations is included. PMID:20554803
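The contrast between a parametric RI (which assumes normality) and a quantile-based bootstrap RI can be sketched as follows. This is a simplified illustration of the two families of methods compared in the study, not the study's exact algorithm.

```python
import random
import statistics

def parametric_ri(values, z=1.96):
    # parametric 95% reference interval: assumes the data are normally
    # distributed, so mean +/- 1.96*SD covers the central 95%
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return m - z * s, m + z * s

def bootstrap_ri(values, n_boot=1000, seed=1):
    # quantile-based bootstrap: take the 2.5th and 97.5th sample percentiles
    # of each resample and average them over many resamples
    rng = random.Random(seed)
    n = len(values)
    los, his = [], []
    for _ in range(n_boot):
        sample = sorted(rng.choice(values) for _ in range(n))
        los.append(sample[int(0.025 * n)])
        his.append(sample[int(0.975 * n) - 1])
    return statistics.mean(los), statistics.mean(his)
```

Running both on the same dataset and comparing the resulting limits (as the paper's resampling algorithm does more systematically) shows how far the normality assumption moves the interval on skewed data.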

17. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

Science.gov (United States)

Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

2015-05-01

Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
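A minimal sketch of the 1D bootstrap CI discussed above, under the assumption that each subject contributes one whole trajectory sampled at common time nodes: resampling is done at the subject level, so each replicate preserves the within-trajectory correlation structure, and percentiles are then read off pointwise.

```python
import random
import statistics

def trajectory_bootstrap_ci(trajs, n_boot=500, alpha=0.05, seed=5):
    # 1D bootstrap: resample whole subject trajectories (not individual time
    # nodes), compute the mean curve of each replicate, then take pointwise
    # percentiles of the replicate mean curves
    rng = random.Random(seed)
    n, T = len(trajs), len(trajs[0])
    boot_means = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        boot_means.append([statistics.mean(trajs[i][t] for i in idx)
                           for t in range(T)])
    lo, hi = [], []
    for t in range(T):
        col = sorted(bm[t] for bm in boot_means)
        lo.append(col[int(alpha / 2 * n_boot)])
        hi.append(col[int((1 - alpha / 2) * n_boot) - 1])
    return lo, hi
```

Note that these pointwise percentile bands are exactly the kind of 1D CI whose design-dependence the paper criticizes; the RFT correction it advocates adjusts the threshold for the whole trajectory rather than node by node.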

18. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

Science.gov (United States)

Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

2017-12-01

The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on the CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
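The Mann-Whitney estimator of the AUC, which underlies the non-parametric family the authors found generally superior, together with a simple percentile bootstrap CI, can be sketched as follows. This is an illustration of the estimator, not the specific interval constructions compared in the paper.

```python
import random

def auc_mann_whitney(neg, pos):
    # AUC = P(marker of a positive case > marker of a negative case),
    # with ties counted as one half, computed over all pairs
    wins = ties = 0
    for x in neg:
        for y in pos:
            if y > x:
                wins += 1
            elif y == x:
                ties += 1
    return (wins + 0.5 * ties) / (len(neg) * len(pos))

def auc_bootstrap_ci(neg, pos, n_boot=2000, alpha=0.05, seed=3):
    # percentile bootstrap: resample each group separately, recompute the AUC
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        nb = [rng.choice(neg) for _ in neg]
        pb = [rng.choice(pos) for _ in pos]
        stats.append(auc_mann_whitney(nb, pb))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]
```

With very small samples and AUC near 1, the bootstrap distribution piles up at the boundary, which is one concrete reason the paper finds large discrepancies among interval methods in exactly that regime.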

19. Asymptotics for the Fredholm Determinant of the Sine Kernel on a Union of Intervals

OpenAIRE

Widom, Harold

1994-01-01

In the bulk scaling limit of the Gaussian Unitary Ensemble of Hermitian matrices the probability that an interval of length $s$ contains no eigenvalues is the Fredholm determinant of the sine kernel $\sin(x-y)/\pi(x-y)$ over this interval. A formal asymptotic expansion for the determinant as $s$ tends to infinity was obtained by Dyson. In this paper we replace a single interval of length $s$ by $sJ$ where $J$ is a union of $m$ intervals and present a proof of the asymptotics up to second order.

20. The Model Confidence Set

DEFF Research Database (Denmark)

Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS […] beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine the MCS of the best in terms of in-sample likelihood criteria.

1. Demographic and Socio-economic Determinants of Birth Interval Dynamics in Manipur: A Survival Analysis

Directory of Open Access Journals (Sweden)

Sanajaoba Singh N,

2011-01-01

Full Text Available The birth interval is a major determinant of levels of fertility in high-fertility populations. A house-to-house survey of 1225 women in Manipur, a tiny state in North Eastern India, was carried out to investigate birth interval patterns and their determinants. Using survival analysis, among the nine explanatory variables of interest, only three factors (infant mortality, lactation and use of contraceptive devices) have a highly significant effect (P<0.01) on the duration of birth interval, and only three factors (age at marriage of the wife, parity and sex of child) are found to be significant (P<0.05) on the duration variable.

2. Comparison of the methods for determination of calibration and verification intervals of measuring devices

Directory of Open Access Journals (Sweden)

Toteva Pavlina

2017-01-01

Full Text Available The paper presents different determination and optimisation methods for verification intervals of technical devices for monitoring and measurement, based on the requirements of some widely used international standards, e.g. ISO 9001, ISO/IEC 17020, ISO/IEC 17025, etc., maintained by various organizations implementing measuring devices in practice. A comparative analysis of the reviewed methods is conducted in terms of opportunities for assessing the adequacy of interval(s) for calibration of measuring devices and their optimisation as accepted by an organization: an extension or reduction depending on the obtained results. The advantages and disadvantages of the reviewed methods are discussed, and recommendations for their applicability are provided.

3. Estimating the Confidence Interval of Composite Reliability of a Multidimensional Test With the Delta Method

Institute of Scientific and Technical Information of China (English)

叶宝娟; 温忠麟

2012-01-01

Reliability is very important in evaluating the quality of a test. Based on confirmatory factor analysis, composite reliability is a good index to estimate test reliability for general applications. As is well known, a point estimate contains limited information about a population parameter and cannot indicate how far it can be from the population parameter. The confidence interval of the parameter can provide more information. In evaluating the quality of a test, the confidence interval of composite reliability has received attention in recent years. There are three approaches to estimating the confidence interval of composite reliability of a unidimensional test: the Bootstrap method, the Delta method, and the direct use of the standard error from a software output (e.g., LISREL). The Bootstrap method provides empirical results of the standard error, and is the most credible method. But it needs data simulation techniques, and its computation process is rather complex. The Delta method computes the standard error of composite reliability by approximate calculation. It is simpler than the Bootstrap method. The LISREL software can directly prompt the standard error, and it is the easiest among the three methods. By simulation study, it had been found that the interval estimates obtained by the Delta method and the Bootstrap method were almost identical, whereas the results obtained by LISREL and by the Bootstrap method were substantially different (Ye & Wen, 2011). The Delta method is recommended when the confidence interval of composite reliability of a unidimensional test is estimated, because the Delta method is simpler than the Bootstrap method. There was little research about how to compute the confidence interval of composite reliability of a multidimensional test. We deduced a formula by using the Delta method for computing the standard error of composite reliability of a multidimensional test. Based on the standard error, the
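For reference, the composite reliability of a unidimensional congeneric factor model is the quantity whose standard error the Delta method approximates. A minimal sketch of the point estimate is below (the paper's multidimensional extension and the Delta-method standard error itself are not reproduced here); the loadings and uniquenesses are hypothetical standardized values.

```python
def composite_reliability(loadings, uniquenesses):
    # omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)
    # for a unidimensional congeneric model, standardized solution
    s = sum(loadings)
    return s * s / (s * s + sum(uniquenesses))
```

The Delta method then propagates the sampling covariance of the estimated loadings and uniquenesses through this formula to obtain an approximate standard error, from which a normal-theory confidence interval follows.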

4. Asymptotics for the Fredholm determinant of the sine kernel on a union of intervals

Science.gov (United States)

Widom, Harold

1995-07-01

In the bulk scaling limit of the Gaussian Unitary Ensemble of Hermitian matrices the probability that an interval of length s contains no eigenvalues is the Fredholm determinant of the sine kernel sin(x-y)/π(x-y) over this interval. A formal asymptotic expansion for the determinant as s tends to infinity was obtained by Dyson. In this paper we replace a single interval of length s by sJ, where J is a union of m intervals and present a proof of the asymptotics up to second order. The logarithmic derivative with respect to s of the determinant equals a constant (expressible in terms of hyperelliptic integrals) times s, plus a bounded oscillatory function of s (zero if m=1, periodic if m=2, and in general expressible in terms of the solution of a Jacobi inversion problem), plus o(1). Also determined are the asymptotics of the trace of the resolvent operator, which is the ratio in the same model of the probability that the set contains exactly one eigenvalue to the probability that it contains none. The proofs use ideas from orthogonal polynomial theory.

5. The proximate determinants of fertility and birth intervals in Egypt: An application of calendar data

Directory of Open Access Journals (Sweden)

Andrew Hinde

2007-01-01

Full Text Available In this paper we use calendar data from the 2000 Egyptian Demographic and Health Survey (DHS) to assess the determinants of birth interval length among women who are in union. We make use of the well-known model of the proximate determinants of fertility, and take advantage of the fact that the DHS calendar data provide month-by-month data on contraceptive use, breastfeeding and post-partum amenorrhoea, which are the most important proximate determinants among women in union. One aim of the analysis is to see whether the calendar data are sufficiently detailed to account for all variation among individual women in birth interval duration, in that once they are controlled, the effect of background social, economic and cultural variables is not statistically significant. The results suggest that this is indeed the case, especially after a random effect term to account for the unobserved proximate determinants is included in the model. Birth intervals are determined mainly by the use of modern methods of contraception (the IUD being more effective than the pill). Breastfeeding and post-partum amenorrhoea both inhibit conception, and the effect of breastfeeding remains even after the period of amenorrhoea has ended.

6. Level of confidence in venepuncture and knowledge in determining causes of blood sample haemolysis among clinical staff and phlebotomists.

Science.gov (United States)

Makhumula-Nkhoma, Nellie; Whittaker, Vicki; McSherry, Robert

2015-02-01

To investigate the association between confidence level in venepuncture and knowledge in determining causes of blood sample haemolysis among clinical staff and phlebotomists. Various collection methods are used to perform venepuncture, also called phlebotomy, the act of drawing blood from a patient using a needle. The collection method used has an impact on preanalytical blood sample haemolysis. Haemolysis is the breakdown of red blood cells, which makes the sample unsuitable. Despite available evidence on the common causes, an extensive literature search showed a lack of published evidence on the association of haemolysis with staff confidence and knowledge. A quantitative primary research design using the survey method. A purposive sample of 290 clinical staff and phlebotomists conducting venepuncture in one North England hospital participated in this quantitative survey. A three-section web-based questionnaire comprising demographic profile, confidence and competence levels, and knowledge sections was used to collect data in 2012. The chi-squared test for independence was used to compare the distribution of responses for categorical data. ANOVA was used to determine mean difference in the knowledge scores of staff with different confidence levels. Almost 25% of clinical staff and phlebotomists participated in the survey. There was an increase in confidence at the last venepuncture among staff of all categories. While doctors' scores were higher compared with healthcare assistants', p ≤ 0·001, nurses' were of wide range and lowest. There was no statistically significant difference (at the 5% level) in the total knowledge scores and confidence level at the last venepuncture, F(2,4·690) = 1·67, p = 0·31, among staff of all categories. Evidence-based measures are required to boost staff knowledge base of preanalytical blood sample haemolysis for a standardised and quality service. Monitoring and evaluation of the training, conducting and monitoring haemolysis rate are

7. Human error considerations and annunciator effects in determining optimal test intervals for periodically inspected standby systems

International Nuclear Information System (INIS)

McWilliams, T.P.; Martz, H.F.

1981-01-01

This paper incorporates the effects of four types of human error in a model for determining the optimal time between periodic inspections which maximizes the steady-state availability for standby safety systems. Such safety systems are characteristic of nuclear power plant operations. The system is modeled by means of an infinite state-space Markov chain. The purpose of the paper is to demonstrate techniques for computing the steady-state availability A and the optimal periodic inspection interval tau* for the system. The model can be used to investigate the effects of human error probabilities on optimal availability, study the benefits of annunciating the standby system, and determine optimal inspection intervals. Several examples which are representative of nuclear power plant applications are presented.
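The trade-off being optimized can be illustrated with a textbook standby-system model (not the paper's Markov chain with human errors): unavailability grows with the interval through failures that sit undetected between tests, but shrinks with it through less frequent test downtime, giving an interior optimum. The failure rate and test duration below are assumed values.

```python
import math

def unavailability(tau, lam, d):
    # simplified standby-system model: lam*tau/2 is the average fraction of
    # time a failure sits undetected between tests; d/tau is the fractional
    # downtime contributed by the test itself (duration d per interval tau)
    return lam * tau / 2 + d / tau

def optimal_interval(lam, d):
    # grid search; for this simple model the analytic optimum is sqrt(2*d/lam)
    grid = [t / 10 for t in range(1, 20001)]
    return min(grid, key=lambda t: unavailability(t, lam, d))
```

Human errors of the kind the paper models enter as additional terms in the unavailability expression, which is why they shift the optimal interval rather than merely rescaling the availability.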

8. Effects of human errors on the determination of surveillance test interval

International Nuclear Information System (INIS)

Chung, Dae Wook; Koo, Bon Hyun

1990-01-01

This paper incorporates the effects of human error relevant to the periodic test on the unavailability of the safety system as well as the component unavailability. Two types of possible human error during the test are considered. One is the possibility that a good safety system is inadvertently left in a bad state after the test (Type A human error), and the other is the possibility that a bad safety system is undetected upon the test (Type B human error). An event tree model is developed for the steady-state unavailability of the safety system to determine the effects of human errors on the component unavailability and the test interval. We perform a reliability analysis of the safety injection system (SIS) by applying the aforementioned two types of human error to the safety injection pumps. Results of various sensitivity analyses show that: (1) the appropriate test interval decreases and steady-state unavailability increases as the probabilities of both types of human errors increase, and they are far more sensitive to Type A human error than to Type B; and (2) the SIS unavailability increases slightly as the probability of Type B human error increases, and significantly as the probability of Type A human error increases. Therefore, to avoid underestimation, the effects of human error should be incorporated in system reliability analyses which aim at relaxation of surveillance test intervals, and Type A human error has the more important effect on the unavailability and the surveillance test interval.

9. AlphaCI: a computer program for computing confidence intervals around Cronbach's alpha coefficient

Directory of Open Access Journals (Sweden)

Rubén Ledesma

2004-06-01
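A bootstrap CI for Cronbach's alpha is one standard way to compute intervals of the kind a program like AlphaCI reports. The sketch below is a generic illustration (this record does not describe AlphaCI's actual methods); resampling is done over respondents, and the item data are hypothetical.

```python
import random

def cronbach_alpha(items):
    # items: one list of scores per item, all over the same respondents
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

def alpha_bootstrap_ci(items, n_boot=1000, level=0.05, seed=11):
    # percentile bootstrap over respondents: resample rows, recompute alpha
    rng = random.Random(seed)
    n = len(items[0])
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(cronbach_alpha([[item[j] for j in idx] for item in items]))
    stats.sort()
    return stats[int(level / 2 * n_boot)], stats[int((1 - level / 2) * n_boot) - 1]
```

Analytic alternatives exist (for example, F-distribution-based intervals for alpha), but the percentile bootstrap makes no distributional assumption about the item scores.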

10. Determining optimal preventive maintenance interval for component of Well Barrier Element in an Oil & Gas Company

Science.gov (United States)

Siswanto, A.; Kurniati, N.

2018-04-01

An oil and gas company has 2,268 oil and gas wells. A Well Barrier Element (WBE) is installed in a well to protect people, prevent asset damage and minimize harm to the environment. The primary WBE component is the Surface Controlled Subsurface Safety Valve (SCSSV). The secondary WBE component is the Christmas Tree Valves, which consist of four valves, i.e. Lower Master Valve (LMV), Upper Master Valve (UMV), Swab Valve (SV) and Wing Valve (WV). Current practice for the WBE Preventive Maintenance (PM) program follows the schedule suggested in the manual. A Corrective Maintenance (CM) program is conducted when a component fails unexpectedly. Both PM and CM incur cost and may cause production loss. This paper attempts to analyze the failure data and reliability based on historical data. The optimal PM interval is determined in order to minimize the total cost of maintenance per unit time. The optimal PM interval for the SCSSV is 730 days, for the LMV 985 days, for the UMV 910 days, for the SV 900 days and for the WV 780 days. Averaged over all components, the cost reduction from implementing the suggested intervals is 52%, while reliability is improved by 4% and availability is increased by 5%.
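The optimization described, choosing the PM interval that minimizes expected maintenance cost per unit time, can be sketched with a classical age-replacement model under an assumed Weibull lifetime. The shape, scale and cost values below are hypothetical, not the company's data.

```python
import math

def cost_rate(tau, beta, eta, cp, cf, steps=2000):
    # age-replacement policy with a Weibull(beta, eta) lifetime: replace
    # preventively at age tau (cost cp) or correctively at failure (cost cf);
    # expected cost per unit time = expected cycle cost / expected cycle length,
    # where expected cycle length is the integral of the survival function R
    R = lambda t: math.exp(-((t / eta) ** beta))
    dt = tau / steps
    cycle_len = sum((R(i * dt) + R((i + 1) * dt)) / 2 * dt for i in range(steps))
    return (cp * R(tau) + cf * (1.0 - R(tau))) / cycle_len

def best_interval(grid, beta, eta, cp, cf):
    # pick the candidate interval with the lowest expected cost rate
    return min(grid, key=lambda t: cost_rate(t, beta, eta, cp, cf))
```

With a wear-out lifetime (beta > 1) and corrective maintenance much costlier than preventive, the cost rate has an interior minimum well below the characteristic life, which mirrors the paper's finding that shortening some intervals relative to the manual reduces total cost.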

11. Reference interval determination of hemoglobin fractions in umbilical cord and placental blood by capillary electrophoresis.

Science.gov (United States)

Bó, Suzane Dal; de Oliveira Lemos, Fabiane Kreutz; Pedrazzani, Fabiane Spagnol; Cagliari, Cláudia Rosa; Scotti, Luciana

2016-04-01

Umbilical cord and placental blood (UCPB) is a rich source of hematopoietic stem cells widely used to treat diseases that did not have effective treatments until recently. Umbilical cord and placental blood banks (UCPBBs) need to be created to store UCPB. UCPB is collected immediately after birth, processed, and frozen until infusion. Detection of abnormal hemoglobins is one of the available UCPB screening tests. The objective of the present study was to determine the reference interval for HbA, HbF, and HbA2 in UCPB using capillary electrophoresis. Methods: An observational retrospective study of UCPB samples undergoing hemoglobin electrophoresis was performed between April 2012 and May 2013. We analyzed 273 UCPB samples. All cords met the criteria of BrasilCORD. We found 19.9% (10.5–36.7%) for HbA, 80.1% (62.7–89.4%) for HbF, and 0.1% (0.0–0.6%) for HbA2. Data were expressed as median (P2.5–P97.5). Establishing specific reference intervals is the best option for most tests because such ranges reflect the status of the population in which the tests will be applied. The use of appropriate reference intervals ensures that clinical labs provide reliable information, thus enabling clinicians to correctly interpret results and choose the best approach for the target population.

12. Determining diabetic retinopathy screening interval based on time from no retinopathy to laser therapy.

Science.gov (United States)

Hughes, Daniel; Nair, Sunil; Harvey, John N

2017-12-01

Objectives To determine the necessary screening interval for retinopathy in diabetic patients with no retinopathy based on time to laser therapy and to assess long-term visual outcome following screening. Methods In a population-based community screening programme in North Wales, 2917 patients were followed until death or for approximately 12 years. At screening, 2493 had no retinopathy; 424 had mostly minor degrees of non-proliferative retinopathy. Data on timing of first laser therapy and visual outcome following screening were obtained from local hospitals and ophthalmology units. Results Survival analysis showed that very few of the no retinopathy at screening group required laser therapy in the early years compared with the non-proliferative retinopathy group ( p retinopathy at screening group required laser therapy, and at three years 0.2% (cumulative), lower rates of treatment than have been suggested by analyses of sight-threatening retinopathy determined photographically. At follow-up (mean 7.8 ± 4.6 years), mild to moderate visual impairment in one or both eyes due to diabetic retinopathy was more common in those with retinopathy at screening (26% vs. 5%, p diabetes occurred in only 1 in 1000. Conclusions Optimum screening intervals should be determined from time to active treatment. Based on requirement for laser therapy, the screening interval for diabetic patients with no retinopathy can be extended to two to three years. Patients who attend for retinal screening and treatment who have no or non-proliferative retinopathy now have a very low risk of eventual blindness from diabetes.

13. Bootstrap confidence intervals for principal response curves

NARCIS (Netherlands)

Timmerman, Marieke E.; Ter Braak, Cajo J. F.

2008-01-01

The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the

14. Bootstrap Confidence Intervals for Principal Response Curves

NARCIS (Netherlands)

Timmerman, M.E.; Braak, ter C.J.F.

2008-01-01

The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the

15. Determination of the postmortem interval by Laser Induced Breakdown Spectroscopy using swine skeletal muscles

International Nuclear Information System (INIS)

Marín-Roldan, A.; Manzoor, S.; Moncayo, S.; Navarro-Villoslada, F.; Izquierdo-Hornillos, R.C.; Caceres, J.O.

2013-01-01

Skin and muscle samples are useful to discriminate individuals as well as their postmortem interval (PMI) in crime scenes and natural or caused disasters. In this study, a simple and fast method based on Laser Induced Breakdown Spectroscopy (LIBS) has been developed to estimate PMI using swine skeletal muscle samples. Environmental conditions (moisture, temperature, fauna, etc.) having strong influence on the PMI determination were considered. Time-dependent changes in the emission intensity ratio for Mg, Na, Hα and K were observed, as a result of the variations in their concentration due to chemical reactions in tissues and were correlated with PMI. This relationship, which has not been reported previously in the forensic literature, offers a simple and potentially valuable means of estimating the PMI. - Highlights: • LIBS has been applied for Postmortem Interval estimation. • Environmental and sample storage conditions have been considered. • Significant correlation of elemental emission intensity with PMI has been observed. • Pig skeletal muscle samples have been used

16. Determination of the postmortem interval by Laser Induced Breakdown Spectroscopy using swine skeletal muscles

Energy Technology Data Exchange (ETDEWEB)

Marín-Roldan, A.; Manzoor, S.; Moncayo, S.; Navarro-Villoslada, F.; Izquierdo-Hornillos, R.C.; Caceres, J.O., E-mail: jcaceres@quim.ucm.es

2013-10-01

Skin and muscle samples are useful to discriminate individuals as well as their postmortem interval (PMI) in crime scenes and natural or man-made disasters. In this study, a simple and fast method based on Laser Induced Breakdown Spectroscopy (LIBS) has been developed to estimate PMI using swine skeletal muscle samples. Environmental conditions (moisture, temperature, fauna, etc.) having a strong influence on the PMI determination were considered. Time-dependent changes in the emission intensity ratio for Mg, Na, Hα and K were observed as a result of variations in their concentration due to chemical reactions in tissues, and were correlated with PMI. This relationship, which has not been reported previously in the forensic literature, offers a simple and potentially valuable means of estimating the PMI. - Highlights: • LIBS has been applied for Postmortem Interval estimation. • Environmental and sample storage conditions have been considered. • Significant correlation of elemental emission intensity with PMI has been observed. • Pig skeletal muscle samples have been used.

17. Interpretando correctamente en salud pública estimaciones puntuales, intervalos de confianza y contrastes de hipótesis Accurate interpretation of point estimates, confidence intervals, and hypothesis tests in public health

Directory of Open Access Journals (Sweden)

Manuel G Scotto

2003-12-01

This essay seeks to clarify some concepts habitually used in public health research that are frequently interpreted incorrectly, among them point estimation, confidence intervals, and hypothesis testing. By drawing a parallel between these three concepts, their most important differences in interpretation can be seen, both from the classical standpoint and from the Bayesian perspective.

18. The INTERVAL trial to determine whether intervals between blood donations can be safely and acceptably decreased to optimise blood supply: study protocol for a randomised controlled trial.

Science.gov (United States)

Moore, Carmel; Sambrook, Jennifer; Walker, Matthew; Tolkien, Zoe; Kaptoge, Stephen; Allen, David; Mehenny, Susan; Mant, Jonathan; Di Angelantonio, Emanuele; Thompson, Simon G; Ouwehand, Willem; Roberts, David J; Danesh, John

2014-09-17

Ageing populations may demand more blood transfusions, but the blood supply could be limited by difficulties in attracting and retaining a decreasing pool of younger donors. One approach to increase blood supply is to collect blood more frequently from existing donors. If more donations could be safely collected in this manner at marginal cost, then it would be of considerable benefit to blood services. National Health Service (NHS) Blood and Transplant in England currently allows men to donate up to every 12 weeks and women to donate up to every 16 weeks. In contrast, some other European countries allow donations as frequently as every 8 weeks for men and every 10 weeks for women. The primary aim of the INTERVAL trial is to determine whether donation intervals can be safely and acceptably decreased to optimise blood supply whilst maintaining the health of donors. INTERVAL is a randomised trial of whole blood donors enrolled from all 25 static centres of NHS Blood and Transplant. Recruitment of about 50,000 male and female donors started in June 2012 and was completed in June 2014. Men have been randomly assigned to standard 12-week versus 10-week versus 8-week inter-donation intervals, while women have been assigned to standard 16-week versus 14-week versus 12-week inter-donation intervals. Sex-specific comparisons will be made by intention-to-treat analysis of outcomes assessed after two years of intervention. The primary outcome is the number of blood donations made. A key secondary outcome is donor quality of life, assessed using the Short Form Health Survey. Additional secondary endpoints include the number of 'deferrals' due to low haemoglobin (and other factors), iron status, cognitive function, physical activity, and donor attitudes. A comprehensive health economic analysis will be undertaken. The INTERVAL trial should yield novel information about the effect of inter-donation intervals on blood supply, acceptability, and donors' physical and mental well-being.

19. Simultaneous determination of radionuclides separable into natural decay series by use of time-interval analysis

International Nuclear Information System (INIS)

Hashimoto, Tetsuo; Sanada, Yukihisa; Uezu, Yasuhiro

2004-01-01

A delayed coincidence method, time-interval analysis (TIA), has been applied to successive α-α decay events on the millisecond time-scale. Such decay events are part of the 220Rn → 216Po (T1/2 = 145 ms) chain (Th series) and the 219Rn → 215Po (T1/2 = 1.78 ms) chain (Ac series). By using TIA in addition to measurement of 226Ra (U series) by α-spectrometry with liquid scintillation counting (LSC), two natural decay series could be identified and separated. The TIA detection efficiency was improved by using the pulse-shape discrimination technique (PSD) to reject β-pulses, by solvent extraction of Ra combined with simple chemical separation, and by purging the scintillation solution with dry N2 gas. The U and Th series, together with the Ac series, were determined from alpha spectra and from TIA carried out immediately after Ra extraction. Using the 221Fr → 217At (T1/2 = 32.3 ms) decay process as a tracer, overall yields were estimated by applying TIA to 225Ra (Np decay series) at the time of maximum growth. The present method has proven useful for the simultaneous determination of three radioactive decay series in environmental samples. (orig.)
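The TIA principle above rests on correlated parent-daughter α-α pairs being separated by exponentially distributed delays, which stand out against the much flatter interval distribution of uncorrelated background. A small simulation sketch; the 145 ms half-life is from the abstract, everything else is illustrative:

```python
import math
import random

random.seed(1)

T_HALF = 0.145                 # s; 216Po half-life in the 220Rn -> 216Po pair
TAU = T_HALF / math.log(2)     # mean lifetime of the exponential delay

# Each parent alpha decay is followed by a daughter alpha decay after an
# exponentially distributed delay; TIA exploits this short-range correlation.
delays = sorted(random.expovariate(1.0 / TAU) for _ in range(20000))

# For an exponential distribution the median equals the half-life, so the
# sample median of the pair delays recovers T1/2. Background events would
# add a nearly uniform interval distribution on top of this peak.
t_half_est = delays[len(delays) // 2]
```

In a real TIA measurement one histograms all intervals within a fixed window and fits the exponential excess over the flat random-coincidence floor, rather than assuming every interval is a true pair as this sketch does.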

20. Rapid determination of long-lived artificial alpha radionuclides using time interval analysis

International Nuclear Information System (INIS)

Uezu, Yasuhiro; Koarashi, Jun; Sanada, Yukihisa; Hashimoto, Tetsuo

2003-01-01

1. Determination of Kps and β1,H in a wide interval of initial concentrations of lutetium

International Nuclear Information System (INIS)

Lopez-G, H.; Jimenez R, M.; Solache R, M.; Rojas H, A.

2006-01-01

The solubility product constant and the first hydrolysis constant of lutetium were determined over the initial lutetium concentration interval 3.72 × 10⁻⁵ to 2.09 × 10⁻³ M, in 2 M NaClO4 media, at 303 K, and under CO2-free conditions. Solubility diagrams (pLu(ac) versus pCH) were obtained by a radiochemical method, and from these the pCH values delimiting the saturation and non-saturation zones of the solutions were established. These diagrams also allowed the solubility product constant of Lu(OH)3 to be calculated. The experimental data were fitted to the polynomial solubility equation, which allowed the values of the solubility product constant of Lu(OH)3 to be calculated and the first hydrolysis constant to be determined. The precipitation pCH value diminishes as the initial lutetium concentration increases, while the values of Kps and β1,H remain constant. (Author)

2. Optimal Testing Intervals in the Squatting Test to Determine Baroreflex Sensitivity

OpenAIRE

Ishitsuka, S.; Kusuyama, N.; Tanaka, M.

2014-01-01

The recently introduced “squatting test” (ST) utilizes a simple postural change to perturb the blood pressure and to assess baroreflex sensitivity (BRS). In our study, we estimated the reproducibility of the ST and the optimal testing interval between repeated STs in healthy volunteers. Thirty-four subjects free of cardiovascular disorders and taking no medication were instructed to perform the repeated ST at 30-sec, 1-min, and 3-min intervals in duplicate in a random sequence, while the systolic blood p...

3. Using interval maxima regression (IMR) to determine environmental optima controlling Microcystis spp. growth in Lake Taihu.

Science.gov (United States)

Li, Ming; Peng, Qiang; Xiao, Man

2016-01-01

Fortnightly investigations at 12 sampling sites in Meiliang Bay and Gonghu Bay of Lake Taihu (China) were carried out from June to early November 2010. The relationship between abiotic factors and cell density of different Microcystis species was analyzed using interval maxima regression (IMR) to determine the optimum temperature and nutrient concentrations for growth of different Microcystis species. Our results showed that cell density of all the Microcystis species increased along with the increase of water temperature, but Microcystis aeruginosa adapted to a wide range of temperatures. The optimum total dissolved nitrogen concentrations for M. aeruginosa, Microcystis wesenbergii, Microcystis ichthyoblabe, and unidentified Microcystis were 3.7, 2.0, 2.4, and 1.9 mg L⁻¹, respectively. The optimum total dissolved phosphorus concentrations for the different species were M. wesenbergii (0.27 mg L⁻¹) > M. aeruginosa (0.1 mg L⁻¹) > M. ichthyoblabe (0.06 mg L⁻¹) ≈ unidentified Microcystis, and the iron (Fe³⁺) concentrations were M. wesenbergii (0.73 mg L⁻¹) > M. aeruginosa (0.42 mg L⁻¹) > M. ichthyoblabe (0.35 mg L⁻¹) > unidentified Microcystis (0.09 mg L⁻¹). The above results suggest that if the phosphorus concentration were reduced to 0.06 mg L⁻¹ and/or the iron concentration were reduced to 0.35 mg L⁻¹ in Lake Taihu, the large colonial M. wesenbergii and M. aeruginosa would be replaced by small colonial M. ichthyoblabe and unidentified Microcystis. Thereafter, the intensity and frequency of the occurrence of Microcystis blooms would be reduced by changing the Microcystis species composition.
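The interval maxima regression idea can be sketched simply: bin the predictor range into intervals, keep the maximum response per interval (the upper envelope, on the reasoning that points below it are limited by other factors), and fit a regression to that envelope to read off the optimum. The synthetic data and the assumed optimum of 28 °C below are illustrative, not values from the study:

```python
import random

def interval_maxima(x, y, n_bins=8):
    """Keep the maximum response observed in each interval of the predictor."""
    lo, hi = min(x), max(x)
    w = (hi - lo) / n_bins
    best = {}
    for xi, yi in zip(x, y):
        b = min(int((xi - lo) / w), n_bins - 1)
        if b not in best or yi > best[b][1]:
            best[b] = (xi, yi)
    pts = sorted(best.values())
    return [p[0] for p in pts], [p[1] for p in pts]

def quad_vertex(x, y):
    """Least-squares parabola y = c0 + c1*x + c2*x^2; return the vertex."""
    s = [sum(xi ** k for xi in x) for k in range(5)]
    t = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    def repl(col):       # Cramer's rule: replace one column with the RHS
        m = [row[:] for row in A]
        for i in range(3):
            m[i][col] = t[i]
        return m
    c1, c2 = det3(repl(1)) / d, det3(repl(2)) / d
    return -c1 / (2 * c2)

random.seed(2)
temp = [random.uniform(15, 35) for _ in range(300)]
# True optimum assumed at 28 degrees; other limiting factors pull most
# observations below the envelope.
growth = [10 - 0.1 * (t - 28) ** 2 - random.uniform(0, 5) for t in temp]

tx, ty = interval_maxima(temp, growth)
optimum = quad_vertex(tx, ty)
```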

4. Determinants of birth interval in a rural Mediterranean population (La Alpujarra, Spain).

Science.gov (United States)

Polo, V; Luna, F; Fuster, V

2000-10-01

The fertility pattern, in terms of birth intervals, in a rural population not practicing contraception belonging to La Alta Alpujarra Oriental (southeast Spain) is analyzed. During the first half of the 20th century, this population experienced a considerable degree of geographical and cultural isolation. Because of this population's high variability in fertility and therefore in birth intervals, the analysis was limited to a homogeneous subsample of 154 families, each with at least five pregnancies. This limitation allowed us to analyze, among and within families, the effects of a set of variables on the interbirth pattern, and to avoid possible problems of pseudoreplication. Information on birth date of the mother, age at marriage, children's birth and death dates, birth order, and frequency of miscarriages was collected. Our results indicate that interbirth intervals depend on an exponential effect of maternal age, especially significant after the age of 35. This effect is probably related to the biological degenerative processes of female fertility with age. A linear increase of birth intervals with birth order within families was found, as well as a reduction of intervals among families experiencing an infant death. Our sample size was insufficient to detect a possible replacement behavior in the case of infant death. High natality and mortality rates, a secular decrease of natality rates, and log-normal birth-interval and family-size distributions suggest that La Alpujarra has been a natural fertility population following a demographic transition process.

5. Determinants of willingness-to-pay for water pollution abatement: a point and interval data payment card application.

Science.gov (United States)

Mahieu, Pierre-Alexandre; Riera, Pere; Giergiczny, Marek

2012-10-15

This paper shows a contingent valuation exercise of pollution abatement in remote lakes. In addition to estimating the usual interval data model, it applies a point and interval statistical approach allowing for uncensored data, left-censored data, right-censored data and left- and right-censored data to explore the determinants of willingness-to-pay in a payment card survey. Results suggest that the estimations between models may diverge under certain conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.
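The point-and-interval approach described above mixes four likelihood contributions: a density term for exact (point) responses and cumulative-probability terms for interval-, right-, and left-censored responses. A minimal sketch assuming normally distributed willingness-to-pay; the observation encodings, values, and grid search are hypothetical illustrations, not the paper's model:

```python
import math

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def loglik(mu, sigma, obs):
    """Log-likelihood mixing point and censored payment-card responses.
    obs items: ('point', v), ('interval', lo, hi), ('right', lo), ('left', hi)."""
    ll = 0.0
    for o in obs:
        if o[0] == 'point':        # uncensored: normal density
            z = (o[1] - mu) / sigma
            ll += -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))
        elif o[0] == 'interval':   # lo < WTP <= hi
            ll += math.log(norm_cdf((o[2] - mu) / sigma)
                           - norm_cdf((o[1] - mu) / sigma))
        elif o[0] == 'right':      # WTP > lo (chose the highest card)
            ll += math.log(1 - norm_cdf((o[1] - mu) / sigma))
        else:                      # 'left': WTP <= hi (chose the lowest card)
            ll += math.log(norm_cdf((o[1] - mu) / sigma))
    return ll

obs = [('point', 12.0), ('interval', 5.0, 10.0), ('right', 20.0), ('left', 5.0)]
# Crude grid search for the ML estimate of mean WTP (sigma fixed for brevity).
best = max(((loglik(mu, 8.0, obs), mu) for mu in range(0, 31)),
           key=lambda p: p[0])
mu_hat = best[1]
```

A full treatment would maximize over sigma as well (or use a survival-regression package) and add covariates to study the determinants of WTP.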

6. Determination of molecular markers associated with anthesis-silking interval in maize

International Nuclear Information System (INIS)

Simpson, J.

1998-01-01

Maize lines contrasting in anthesis-silking interval (ASI), a trait strongly linked to drought tolerance, have been analyzed under different water stress conditions in the field and with molecular markers. Correlation of marker and field data has revealed molecular markers strongly associated with flowering and yield traits. (author)

7. Determining the optimal screening interval for type 2 diabetes mellitus using a risk prediction model.

Directory of Open Access Journals (Sweden)

Andrei Brateanu

Progression to diabetes mellitus (DM) is variable and the screening time interval not well defined. The American Diabetes Association and US Preventive Services Task Force suggest screening every 3 years, but evidence is limited. The objective of the study was to develop a model to predict the probability of developing DM and suggest a risk-based screening interval. We included non-diabetic adult patients screened for DM in the Cleveland Clinic Health System if they had at least two measurements of glycated hemoglobin (HbA1c), an initial one less than 6.5% (48 mmol/mol) in 2008, and another between January 2009 and December 2013. Cox proportional hazards models were created. The primary outcome was DM, defined as HbA1c greater than 6.4% (46 mmol/mol). The optimal rescreening interval was chosen based on the predicted probability of developing DM. Of 5084 participants, 100 (4.4%) of the 2281 patients with normal HbA1c and 772 (27.5%) of the 2803 patients with prediabetes developed DM within 5 years. Factors associated with developing DM included HbA1c (HR per 0.1 unit increase 1.20; 95% CI, 1.13-1.27), family history (HR 1.31; 95% CI, 1.13-1.51), smoking (HR 1.18; 95% CI, 1.03-1.35), triglycerides (HR 1.01; 95% CI, 1.00-1.03), alanine aminotransferase (HR 1.07; 95% CI, 1.03-1.11), body mass index (HR 1.06; 95% CI, 1.01-1.11), age (HR 0.95; 95% CI, 0.91-0.99) and high-density lipoproteins (HR 0.93; 95% CI, 0.90-0.95). Five percent of patients in the highest risk tertile developed DM within 8 months, while it took 35 months for 5% of the middle tertile to develop DM. Only 2.4% of the patients in the lowest tertile developed DM within 5 years. A risk prediction model employing commonly available data can be used to guide screening intervals. Based on equal intervals for equal risk, patients in the highest risk category could be rescreened after 8 months, while those in the intermediate and lowest risk categories could be rescreened after 3 and 5 years, respectively.
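The "equal intervals for equal risk" logic can be sketched by calibrating a constant hazard to each risk group's 5-year probability of diabetes and solving for the time at which cumulative risk reaches 5%. The tertile probabilities below are illustrative values chosen so the high and middle groups roughly reproduce the 8-month and 35-month figures quoted in the abstract; they are not the study's estimates:

```python
import math

def rescreen_interval(p5y, threshold=0.05):
    """Months until cumulative probability of DM reaches `threshold`,
    assuming a constant hazard calibrated to the 5-year probability p5y.
    Survival model: S(t) = exp(-lam * t), with t in months."""
    lam = -math.log(1 - p5y) / 60.0          # monthly hazard over 60 months
    return -math.log(1 - threshold) / lam

# Illustrative 5-year DM probabilities per risk tertile.
intervals = {name: rescreen_interval(p)
             for name, p in [('high', 0.32), ('middle', 0.084), ('low', 0.024)]}
```

For the lowest-risk group the 5% threshold is never reached within the 5-year horizon, which is consistent with the abstract's recommendation to rescreen that group only after 5 years.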

8. Determination and identification of naturally occurring decay series using milli-second order pulse time interval analysis (TIA)

International Nuclear Information System (INIS)

Hashimoto, T.; Sanada, Y.; Uezu, Y.

2003-01-01

A delayed coincidence method, called time-interval analysis (TIA), has been successfully applied to the selective determination of correlated α-α decay events with millisecond-order lifetimes. The main decay process amenable to TIA treatment is 220Rn → 216Po (T1/2 = 145 ms) in the Th series. TIA is fundamentally based on the difference in time-interval distribution between correlated decay events and non-correlated events, such as background or random events, when the time-interval data are compiled within a fixed window (for example, a tenth of the half-lives concerned). The sensitivity of the TIA analysis of correlated α-α decay events was subsequently improved with respect to background elimination by using the pulse-shape discrimination technique (PSD, with a PERALS counter) to reject β/γ-pulses, by purging the extractive scintillator with nitrogen gas, and by applying solvent extraction of Ra. (author)

9. Determination of adenosine phosphates in rat gastrocnemius at various postmortem intervals using high performance liquid chromatography.

Science.gov (United States)

Huang, Hong; Yan, Youyi; Zuo, Zhong; Yang, Lin; Li, Bin; Song, Yu; Liao, Linchuan

2010-09-01

Although the change in adenosine phosphate levels in muscle may contribute to the development of rigor mortis, the relationship between these levels and the onset and development of rigor mortis has not been well elucidated. In the current study, the levels of the adenosine phosphates, including adenosine triphosphate (ATP), adenosine diphosphate (ADP), and adenosine monophosphate (AMP), in the gastrocnemius of 180 rats with different modes of death were measured at various postmortem intervals by high performance liquid chromatography. The results showed that the levels of ATP and ADP decreased significantly over the postmortem period regardless of the mode of death, whereas the AMP level remained the same. In addition, it was found that changes in muscle ATP levels after death correlated well with the development of rigor mortis. Therefore, the ATP level could serve as a reference parameter for the assessment of rigor mortis in forensic science.

10. Radial diffusion with outer boundary determined by geosynchronous measurements: Storm and post-storm intervals

Science.gov (United States)

Chu, F.; Haines, P.; Hudson, M.; Kress, B.; Freidel, R.; Kanekal, S.

2007-12-01

Work is underway by several groups to quantify diffusive radial transport of radiation belt electrons, including a model for pitch angle scattering losses to the atmosphere. The radial diffusion model conserves the first and second adiabatic invariants and breaks the third invariant. We have developed a radial diffusion code which uses the Crank-Nicolson method with a variable outer boundary condition. For the radial diffusion coefficient, DLL, we have several choices, including the Brautigam and Albert (JGR, 2000) diffusion coefficient parameterized by Kp, which provides an ad hoc measure of the power level at ULF wave frequencies in the range of electron drift (mHz), breaking the third invariant. Other diffusion coefficient models are Kp-independent, fixed in time but explicitly dependent on the first invariant, or energy at a fixed L, such as calculated by Elkington et al. (JGR, 2003) and Perry et al. (JGR, 2006) based on ULF wave model fields. We analyzed three periods of electron flux and phase space density (PSD) enhancements inside of geosynchronous orbit: March 31 - May 31, 1991, and the July 2004 and Nov 2004 storm intervals. The radial diffusion calculation is initialized with a computed phase space density profile for the 1991 interval using differential flux values from the CRRES High Energy Electron Fluxmeter instrument, covering 0.65 - 7.5 MeV. To calculate the initial phase space density, we convert Roederer L* to McIlwain's L-parameter using the ONERA-DESP program. A time averaged model developed by Vampola from the entire 14 month CRRES data set is applied to the July 2004 and Nov 2004 storms. The online CRRES data for specific orbits and the Vampola-model flux are both expressed in McIlwain L-shell, while conversion to L* conserves phase space density in a distorted non-dipolar magnetic field model. A Tsyganenko (T04) magnetic field model is used for conversion between L* and L. The outer boundary PSD is updated using LANL GEO satellite fluxes.

11. Application of Fuzzy Logic Inference System, Interval Numbers and Mapping Operator for Determination of Risk Level

Directory of Open Access Journals (Sweden)

Mohsen Omidvar

2015-12-01

Background & objective: Owing to features such as an intuitive graphical appearance, ease of perception, and straightforward applicability, the risk matrix has become one of the most widely used risk assessment tools. On the other hand, features such as the lack of precision in the classification of the risk index, as well as a subjective computational process, have limited its use. To address this problem, the current study used fuzzy logic inference systems and mathematical operators (interval numbers and a mapping operator). Methods: First, 10 risk scenarios in the excavation and piping process were selected; the outcomes of the risk assessment were then studied using four types of matrix: traditional (ORM), displaced-cell (RCM), extended (ERM), and fuzzy (FRM) risk matrices. Results: The results showed that the FRM and ERM matrices are preferable, owing to the high "Risk Tie Density" (RTD) and "Risk Level Density" (RLD) of the ORM and RCM matrices, and to the more accurate results presented by FRM and ERM in risk assessment. The FRM matrix provides the most reliable results, owing to the application of fuzzy membership functions. Conclusion: Using mathematical tools such as fuzzy sets, interval arithmetic, and mapping operators for risk assessment can improve the accuracy of the risk matrix and increase the reliability of the risk assessment results when accurate data are not available, or when data are available only within a limited range.
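A minimal sketch of a fuzzy risk matrix (FRM): triangular membership functions over likelihood and severity scales, Mamdani-style min-max inference over a rule table, and a winning output label. The labels, breakpoints, and rule table here are illustrative assumptions, not the study's actual configuration:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_risk(likelihood, severity):
    """Mamdani min-max inference on a 1-5 likelihood/severity scale."""
    sets = {'low': (0, 1, 3), 'med': (2, 3, 4), 'high': (3, 5, 6)}
    rules = {  # (likelihood label, severity label) -> risk label
        ('low', 'low'): 'low', ('low', 'med'): 'low', ('low', 'high'): 'med',
        ('med', 'low'): 'low', ('med', 'med'): 'med', ('med', 'high'): 'high',
        ('high', 'low'): 'med', ('high', 'med'): 'high', ('high', 'high'): 'high',
    }
    out = {'low': 0.0, 'med': 0.0, 'high': 0.0}
    for ll, lp in sets.items():
        for sl, sp in sets.items():
            # Rule firing strength: min of the two antecedent memberships.
            w = min(tri(likelihood, *lp), tri(severity, *sp))
            r = rules[(ll, sl)]
            out[r] = max(out[r], w)   # max-aggregation across rules
    return max(out, key=out.get)

level = fuzzy_risk(4.2, 4.5)
```

Because inputs near a category boundary partially activate adjacent sets, this smooths out the "risk tie" jumps that make the crisp ORM matrix coarse.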

12. Short-interval SMS wind vector determinations for a severe local storms area

Science.gov (United States)

Peslen, C. A.

1980-01-01

Short-interval SMS-2 visible digital image data are used to derive wind vectors from cloud tracking on time-lapsed sequences of geosynchronous satellite images. The cloud tracking areas are located in the Central Plains, where on May 6, 1975 hail-producing thunderstorms occurred ahead of a well defined dry line. Cloud tracking is performed on the Goddard Space Flight Center Atmospheric and Oceanographic Information Processing System. Lower tropospheric cumulus tracers are selected with the assistance of a cloud-top height algorithm. Divergence is derived from the cloud motions using a modified Cressman (1959) objective analysis technique which is designed to organize irregularly spaced wind vectors into uniformly gridded wind fields. The results demonstrate the feasibility of using satellite-derived wind vectors and their associated divergence fields in describing the conditions preceding severe local storm development. For this case, an area of convergence appeared ahead of the dry line and coincided with the developing area of severe weather. The magnitude of the maximum convergence varied between -10⁻⁵ and -10⁻⁴ per second. The number of satellite-derived wind vectors which were required to describe conditions of the low-level atmosphere was adequate before numerous cumulonimbus cells formed. This technique is limited in areas of advanced convection.
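The Cressman objective analysis referenced above grids irregularly spaced observations using a distance-dependent weight that falls to zero at an influence radius. A dependency-free sketch of the classic weight function applied to one grid point; the observation positions and u-wind values are illustrative:

```python
def cressman_weight(d2, r2):
    """Cressman (1959) weight for squared distance d2 within radius^2 r2."""
    return (r2 - d2) / (r2 + d2) if d2 < r2 else 0.0

def grid_value(obs, gx, gy, radius=1.5):
    """Weighted mean of irregular observations (x, y, value) at a grid point;
    returns None where no observation lies within the influence radius."""
    r2 = radius * radius
    num = den = 0.0
    for x, y, v in obs:
        w = cressman_weight((x - gx) ** 2 + (y - gy) ** 2, r2)
        num += w * v
        den += w
    return num / den if den > 0 else None

# Illustrative u-wind observations at irregular cloud-tracer positions.
obs = [(0.0, 0.0, 5.0), (1.0, 0.2, 6.0), (0.3, 1.1, 4.5), (2.5, 2.5, 9.0)]
u = grid_value(obs, 0.5, 0.5)
```

The full scheme iterates with successively smaller radii, correcting a first-guess field; divergence is then computed from finite differences of the gridded u and v components.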

13. Dependency of magnetocardiographically determined fetal cardiac time intervals on gestational age, gender and postnatal biometrics in healthy pregnancies

Directory of Open Access Journals (Sweden)

Geue Daniel

2004-04-01

Background: Magnetocardiography enables the precise determination of fetal cardiac time intervals (CTI) as early as the second trimester of pregnancy. It has been shown that fetal CTI change in the course of gestation. The aim of this work was to investigate the dependency of fetal CTI on gestational age, gender, and postnatal biometric data in a substantial sample of subjects during normal pregnancy. Methods: A total of 230 fetal magnetocardiograms were obtained from 47 healthy fetuses between the 15th and 42nd weeks of gestation. In each recording, after subtraction of the maternal cardiac artifact and the identification of fetal beats, fetal PQRST courses were signal-averaged. On the basis of the wave onsets and ends detected therein, the following CTI were determined: P wave, PR interval, PQ interval, QRS complex, ST segment, T wave, QT and QTc interval. Using regression analysis, the dependency of the CTI on gestational age, gender, and postnatal biometric data was examined. Results: Atrioventricular conduction and ventricular depolarization times could be determined dependably, whereas the T wave was often difficult to detect. Linear and nonlinear regression analysis established a strong dependency on age for the P wave and QRS complex (r² = 0.67 and r² = 0.66, respectively), with weaker dependencies for other intervals (r² = 0.21 and r² = 0.13); gender differences in QRS complex duration were apparent in later gestation. Conclusion: We conclude that (1) from approximately the 18th week to term, fetal CTI which quantify depolarization times can be reliably determined using magnetocardiography, (2) the P wave and QRS complex duration show a high dependency on age which to a large part reflects fetal growth, and (3) fetal gender plays a role in QRS complex duration in the third trimester. Fetal development is thus in part reflected in the CTI and may be useful in the identification of intrauterine growth retardation.

14. Determination of hematology and plasma chemistry reference intervals for 3 populations of captive Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus).

Science.gov (United States)

Matsche, Mark A; Arnold, Jill; Jenkins, Erin; Townsend, Howard; Rosemary, Kevin

2014-09-01

The imperiled status of Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus), a large, long-lived, anadromous fish found along the Atlantic coast of North America, has prompted efforts at captive propagation for research and stock enhancement. The purpose of this study was to establish hematology and plasma chemistry reference intervals of captive Atlantic sturgeon maintained under different culture conditions. Blood specimens were collected from a total of 119 fish at 3 hatcheries: Lamar, PA (n = 36, ages 10-14 years); Chalk Point, MD (n = 40, siblings of Lamar); and Horn Point, Cambridge, MD (n = 43, mixed population from Chesapeake Bay). Reference intervals (using robust techniques), median, mean, and standard deviations were determined for WBC, RBC, thrombocytes, PCV, HGB, MCV, MCH, MCHC, and absolute counts for lymphocytes (L), neutrophils (N), monocytes, and eosinophils. Chemistry analytes included concentrations of total proteins, albumin, glucose, urea, calcium, phosphate, sodium, potassium, chloride, and globulins, AST, CK, and LDH activities, and osmolality. Mean concentrations of total proteins, albumin, and glucose were at or below the analytic range. Statistical comparisons showed significant differences among hatcheries for each remaining plasma chemistry analyte and for PCV, RBC, MCHC, MCH, eosinophil and monocyte counts, and N:L ratio throughout all 3 groups. Therefore, reference intervals were calculated separately for each population. Reference intervals for fish maintained under differing conditions should be established per population. © 2014 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.

15. Determining the thermionic converter interval parameters according to volt-ampere characteristics

International Nuclear Information System (INIS)

Kajbyshev, V.Z.

1986-01-01

On the basis of the condition for independence of the plasma parameters from the collector emission, and of the expression for the ratio of the current from collector into plasma to the passing current at high collector emission, a technique is developed for determining the electron temperature near the collector, the collector voltage drop, and the collector work function from experimental current-voltage characteristics.

16. Determination of fat content in chicken hamburgers using NIR spectroscopy and the Successive Projections Algorithm for interval selection in PLS regression (iSPA-PLS)

Science.gov (United States)

Krepper, Gabriela; Romeo, Florencia; Fernandes, David Douglas de Sousa; Diniz, Paulo Henrique Gonçalves Dias; de Araújo, Mário César Ugulino; Di Nezio, María Susana; Pistonesi, Marcelo Fabián; Centurión, María Eugenia

2018-01-01

Determining fat content in hamburgers is very important to minimize or control the negative effects of fat on human health, effects such as cardiovascular diseases and obesity, which are caused by the high consumption of saturated fatty acids and cholesterol. This study proposed an alternative analytical method based on Near Infrared Spectroscopy (NIR) and the Successive Projections Algorithm for interval selection in Partial Least Squares regression (iSPA-PLS) for fat content determination in commercial chicken hamburgers. For this, 70 hamburger samples with a fat content ranging from 14.27 to 32.12 mg kg⁻¹ were prepared based on the upper limit recommended by the Argentinean Food Codex, which is 20% (w w⁻¹). NIR spectra were recorded and then preprocessed by applying different approaches: baseline correction, SNV, MSC, and Savitzky-Golay smoothing. For comparison, full-spectrum PLS and interval PLS were also used. The best performance for the prediction set was obtained with first-derivative Savitzky-Golay smoothing using a second-order polynomial and a window size of 19 points, achieving a correlation coefficient of 0.94, an RMSEP of 1.59 mg kg⁻¹, an REP of 7.69%, and an RPD of 3.02. The proposed methodology represents an excellent alternative to the conventional Soxhlet extraction method, since waste generation is avoided without the use of either chemical reagents or solvents, which follows the primary principles of Green Chemistry. The new method was successfully applied to chicken hamburger analysis, and the results agreed with reference values at a 95% confidence level, making it very attractive for routine analysis.
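The interval-selection idea behind iSPA-PLS can be sketched dependency-free: calibrate the analyte against each spectral interval in turn and keep the interval whose local model predicts best. Ordinary least squares on the interval-mean intensity stands in for PLS here, and the synthetic "spectra" (only channels 60-69 carry signal) are illustrative, not NIR data:

```python
import random
import statistics

random.seed(3)
n_samples, n_channels = 40, 100

# Synthetic spectra: channels 60-69 respond linearly to fat content;
# all other channels are pure noise.
fat = [random.uniform(14, 32) for _ in range(n_samples)]
spectra = [[random.gauss(0, 1) + (0.1 * f if 60 <= w < 70 else 0.0)
            for w in range(n_channels)] for f in fat]

def interval_rmse(start, width=10):
    """RMSE of a univariate calibration of fat against the mean intensity
    of one spectral interval (OLS stands in for PLS in this sketch)."""
    x = [statistics.fmean(s[start:start + width]) for s in spectra]
    mx, my = statistics.fmean(x), statistics.fmean(fat)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, fat)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return (sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, fat))
            / n_samples) ** 0.5

# Keep the interval whose local calibration predicts fat content best.
best_start = min(range(0, n_channels, 10), key=interval_rmse)
```

A real iSPA-PLS run would use latent-variable PLS per interval with cross-validation, and the Successive Projections Algorithm to pick minimally collinear interval combinations rather than a single winner.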

17. Application of derivative spectrophotometry under orthogonal polynomial at unequal intervals: determination of metronidazole and nystatin in their pharmaceutical mixture.

Science.gov (United States)

Korany, Mohamed A; Abdine, Heba H; Ragab, Marwa A A; Aboras, Sara I

2015-05-15

This paper discusses a general method for the use of orthogonal polynomials for unequal intervals (OPUI) to eliminate interferences in two-component spectrophotometric analysis. A new approach was developed by convoluting the first-derivative (D1) curve, instead of the absorbance curve, with the OPUI method for the determination of metronidazole (MTR) and nystatin (NYS) in their mixture. After derivative treatment of the absorption data, many maxima and minima appeared, giving a characteristic shape for each drug and allowing a different number of points to be selected for the OPUI method for each drug. This allows the specific and selective determination of each drug in the presence of the other and of any matrix interference. The method is particularly useful when the two absorption spectra overlap considerably. The results obtained are encouraging and suggest that the method can be widely applied to similar problems. Copyright © 2015 Elsevier B.V. All rights reserved.
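The OPUI construction, discrete polynomials orthogonal over unequally spaced abscissae, can be sketched via Gram-Schmidt; the convolution coefficient of a signal with polynomial p_j is then just the projection onto p_j, and a polynomial of suitable degree filters out interference of lower degree. The wavelength grid and "spectrum" below are illustrative:

```python
def orthonormal_polys(x, degree):
    """Gram-Schmidt discrete orthonormal polynomials on arbitrary
    (unequally spaced) abscissae x, up to the given degree."""
    basis = []
    for d in range(degree + 1):
        v = [xi ** d for xi in x]
        for b in basis:  # remove components along earlier polynomials
            c = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - c * bi for vi, bi in zip(v, b)]
        norm = sum(vi * vi for vi in v) ** 0.5
        basis.append([vi / norm for vi in v])
    return basis

x = [0.0, 0.7, 1.1, 2.0, 3.5, 4.2]        # unequal abscissae (illustrative)
y = [2.0 + 3.0 * xi for xi in x]           # purely linear "spectrum"
P = orthonormal_polys(x, 2)
# Convolution (projection) coefficients <y, p_j> for j = 0, 1, 2.
coeffs = [sum(yi * pi for yi, pi in zip(y, p)) for p in P]
```

Because y here is linear, its projection onto the quadratic polynomial vanishes, which is exactly the mechanism by which a chosen-degree OPUI coefficient suppresses lower-order baseline or interference contributions.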

18. Long-term maintenance of immediate or delayed extinction is determined by the extinction-test interval

OpenAIRE

Johnson, Justin S.; Escobar, Martha; Kimble, Whitney L.

2010-01-01

Short acquisition-extinction intervals (immediate extinction) can lead to either more or less spontaneous recovery than long acquisition-extinction intervals (delayed extinction). Using rat subjects, we observed less spontaneous recovery following immediate than delayed extinction (Experiment 1). However, this was the case only if a relatively long extinction-test interval was used; a relatively short extinction-test interval yielded the opposite result (Experiment 2). Previous data appear co...

19. Magnitude of cyantraniliprole residues in tomato following open field application: pre-harvest interval determination and risk assessment.

Science.gov (United States)

Malhat, Farag; Kasiotis, Konstantinos M; Shalaby, Shehata

2018-02-05

Cyantraniliprole is an anthranilic diamide insecticide, belonging to the ryanoid class, with a broad range of applications against several pests. In the present work, a reliable analytical technique employing high-performance liquid chromatography coupled with a photodiode array detector (HPLC-DAD) for analyzing cyantraniliprole residues in tomato was developed. The method was then applied to field-incurred tomato samples collected after applications under open field conditions, with the aim of ensuring the safe application of cyantraniliprole to tomato and contributing the derived residue data to risk assessment under field conditions. Sample preparation involved a single-step extraction with acetonitrile and sodium chloride for partitioning. The extract was purified using florisil as a cleanup reagent. The developed method was further evaluated by comparing the analytical results with those obtained using the QuEChERS technique. The new method outperformed QuEChERS with respect to matrix interferences, while meeting all guideline criteria: it showed excellent linearity over the assayed concentration range and yielded satisfactory recoveries in the range of 88.9 to 96.5%. The half-life of degradation of cyantraniliprole was determined to be 2.6 days. Based on the Codex MRL, the pre-harvest interval (PHI) for cyantraniliprole on tomato was 3 days after treatment at the recommended dose. To our knowledge, the present work provides the first record of PHI determination for cyantraniliprole in tomato under open field conditions in Egypt and the broader Mediterranean region.

20. Long-Term Maintenance of Immediate or Delayed Extinction Is Determined by the Extinction-Test Interval

Science.gov (United States)

Johnson, Justin S.; Escobar, Martha; Kimble, Whitney L.

2010-01-01

Short acquisition-extinction intervals (immediate extinction) can lead to either more or less spontaneous recovery than long acquisition-extinction intervals (delayed extinction). Using rat subjects, we observed less spontaneous recovery following immediate than delayed extinction (Experiment 1). However, this was the case only if a relatively…

1. The analytical determination of useful life and replacement intervals for equipment located in a non-harsh environment

International Nuclear Information System (INIS)

Glazman, J.S.; Ahluwalia, J.S.; Kneppel, D.S.; Harter, T.G.

1985-01-01

In order to establish useful life and replacement intervals for equipment located in a non-harsh environment, an analysis can be performed to show that either the thermal degradation of the equipment is insignificant over the life of the plant or that certain components must be replaced periodically. In these analyses it is necessary to calculate the thermal lives of the components based on their actual operating temperatures rather than at a single cabinet temperature. The Infrared Thermal Imaging Measurement System is a rapid technique for measuring the temperatures of all points on a board or cabinet side simultaneously. The infrared scan of the operating equipment is displayed on a monitor, analyzed and stored on videotape for future reference. This paper presents an approach to performing such an analysis using the example of a process analysis and display system. The equilibrium operating temperatures of the individual components in the above system were measured by the Infrared Thermal Imaging Measurement System and compared to a calculated maximum permitted service temperature determined by the Arrhenius methodology. Examples will be shown where it was possible to exempt entire assemblies from replacement by showing that no point on the assembly exceeds the calculated maximum permitted temperature
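
The Arrhenius life calculation described above can be sketched numerically. A minimal sketch, assuming a hypothetical component qualified for a known life at a reference temperature with a known activation energy; the function name and the example figures are illustrative, not taken from the paper:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_life(life_ref_years, t_ref_k, t_service_k, ea_ev):
    """Thermal life at a service temperature, scaled from a reference life
    with the Arrhenius model: life(T) = life_ref * exp((Ea/k)*(1/T - 1/T_ref))."""
    return life_ref_years * math.exp(
        (ea_ev / BOLTZMANN_EV) * (1.0 / t_service_k - 1.0 / t_ref_k)
    )

# Illustrative numbers: a part qualified for 10 years at 90 °C (363 K),
# activation energy 1.0 eV, operating at a measured 50 °C (323 K).
life = arrhenius_life(10.0, 363.0, 323.0, 1.0)
```

Because measured component temperatures (here, from infrared imaging) are lower than a worst-case cabinet temperature, the computed thermal life is correspondingly longer, which is how entire assemblies can be exempted from periodic replacement.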

2. Conductometric titration to determine total volatile basic nitrogen (TVB-N) for post-mortem interval (PMI).

Science.gov (United States)

Xia, Zhiyuan; Zhai, Xiandun; Liu, Beibei; Mo, Yaonan

2016-11-01

3. Confidant Relations in Italy

Directory of Open Access Journals (Sweden)

Jenny Isaacs

2015-02-01

Full Text Available Confidants are often described as the individuals with whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant, and if so, to describe various features of the relationship. The vast majority of participants (91%) reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture.

4. Statistical intervals a guide for practitioners

CERN Document Server

Hahn, Gerald J

2011-01-01

Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction, with numerous examples and simple math, on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on the probability of meeting a specified threshold value, and prediction intervals to include an observation in a future sample. Also has an appendix containing computer subroutines for nonparametric stati
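
As a concrete illustration of the first kind of interval constructed from sample data, here is a minimal sketch of a two-sided confidence interval for a population mean, using only the Python standard library. The caller supplies the critical t value for the desired confidence level and n-1 degrees of freedom (e.g. from a t table); the data values are made up for illustration:

```python
import math
import statistics

def mean_confidence_interval(sample, t_crit):
    """Two-sided CI for the mean: xbar +/- t * s/sqrt(n), where s is the
    sample standard deviation and t_crit the critical t value for the
    chosen confidence level and n-1 degrees of freedom."""
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return mean - t_crit * se, mean + t_crit * se

data = [4.9, 5.1, 5.0, 5.3, 4.8, 5.2, 5.0, 4.7, 5.1, 4.9]
lo, hi = mean_confidence_interval(data, 2.262)  # 95% level, df = 9
```

This is the "CI = point estimate ± margin of error" template in code form; only the critical value and the standard-error formula change for other statistics.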

5. Confidence Intervals for Omega Coefficient: Proposal for Calculus.

Science.gov (United States)

Ventura-León, José Luis

2018-01-01

Reliability is understood as a metric property of the scores of a measurement instrument. Recently, the omega coefficient (ω) has come into use for estimating reliability. However, measurement is never exact because of the influence of random error, and for that reason it is necessary to calculate and report the confidence interval (CI), which allows the true value to be located within a range of measurement. In this context, the article proposes a way to estimate the CI using the bootstrap method; to facilitate this procedure, R code (free software) is provided so that the calculations can be performed in a user-friendly way. It is hoped that the article will be of help to researchers in the health field.
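
The bootstrap procedure the article applies to ω is generic: resample the data with replacement, recompute the statistic on each resample, and take percentiles of the resulting distribution. A minimal percentile-bootstrap sketch in Python (the article itself supplies R code; this stand-in uses the sample mean as the statistic for brevity, and the score values are invented):

```python
import random
import statistics

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile-bootstrap CI: resample with replacement, compute the
    statistic on each resample, take the alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(sample)
    boots = sorted(
        stat([rng.choice(sample) for _ in range(n)]) for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]          # ~2.5th percentile
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]  # ~97.5th percentile
    return lo, hi

scores = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
lo, hi = bootstrap_ci(scores, statistics.fmean)
```

Replacing `statistics.fmean` with a function that computes ω from the resampled item scores yields the interval the article describes.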

6. Secure and Usable Bio-Passwords based on Confidence Interval

OpenAIRE

Aeyoung Kim; Geunshik Han; Seung-Hyun Seo

2017-01-01

The most popular user-authentication method is the password. Many authentication systems try to enhance their security by enforcing a strong password policy, and by using the password as the first factor, something you know, with the second factor being something you have. However, a strong password policy and a multi-factor authentication system can make it harder for a user to remember the password and log in. In this paper a bio-password-based scheme is proposed as a unique authenticatio...

7. Intervals of confidence: Uncertain accounts of global hunger

NARCIS (Netherlands)

Yates-Doerr, E.

2015-01-01

Global health policy experts tend to organize hunger through scales of ‘the individual’, ‘the community’ and ‘the global’. This organization configures hunger as a discrete, measurable object to be scaled up or down with mathematical certainty. This article offers a counter to this approach, using

8. A quick method to calculate QTL confidence interval

2011-08-19

Aug 19, 2011 ... experimental design and analysis to reveal the real molecular nature of the ... strap sample form the bootstrap distribution of QTL location. The 2.5 and ..... ative probability to harbour a true QTL, hence x-LOD rule is not stable ... Darvasi A. and Soller M. 1997 A simple method to calculate resolving power ...

9. Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients

Science.gov (United States)

2018-01-01

In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…

10. An approximate confidence interval for recombination fraction in ...

African Journals Online (AJOL)

user

2011-02-14

Feb 14, 2011 ... whose parents are not in the pedigree) and θ be the recombination fraction. P(x|g) is the penetrance probability, that is, the probability that an individual with genotype g has phenotype x. Let P(g_k | g_kf, g_km) be the transmission probability, that is, the probability that an individual having genotype k.

11. Reference intervals for mean platelet volume and immature platelet fraction determined on a Sysmex XE-5000 hematology analyzer

DEFF Research Database (Denmark)

Jørgensen, Mikala Klok; Bathum, L.

2016-01-01

Background New parameters describing the platelet population of the blood are mean platelet volume (MPV), which is a crude estimate of thrombocyte reactivity, and immature platelet fraction (IPF), which reflects megakaryopoietic activity. This study aimed to define reference intervals for MPV...... and IPF and to investigate whether separate reference intervals according to smoking status, age or sex are necessary. Methods Blood samples were obtained from subjects participating in The Danish General Suburban Population Study. MPV and IPF measurements were performed by the use of the Sysmex XE-5000...

12. Raising Confident Kids

Science.gov (United States)


13. Optimal preparation-to-colonoscopy interval in split-dose PEG bowel preparation determines satisfactory bowel preparation quality: an observational prospective study.

Science.gov (United States)

Seo, Eun Hee; Kim, Tae Oh; Park, Min Jae; Joo, Hee Rin; Heo, Nae Yun; Park, Jongha; Park, Seung Ha; Yang, Sung Yeon; Moon, Young Soo

2012-03-01

Several factors influence bowel preparation quality. Recent studies have indicated that the time interval between bowel preparation and the start of colonoscopy is also important in determining bowel preparation quality. To evaluate the influence of the preparation-to-colonoscopy (PC) interval (the interval of time between the last polyethylene glycol dose ingestion and the start of the colonoscopy) on bowel preparation quality in the split-dose method for colonoscopy. Prospective observational study. University medical center. A total of 366 consecutive outpatients undergoing colonoscopy. Split-dose bowel preparation and colonoscopy. The quality of bowel preparation was assessed by using the Ottawa Bowel Preparation Scale according to the PC interval, and other factors that might influence bowel preparation quality were analyzed. Colonoscopies with a PC interval of 3 to 5 hours had the best bowel preparation quality score in the whole, right, mid, and rectosigmoid colon according to the Ottawa Bowel Preparation Scale. In multivariate analysis, the PC interval (odds ratio [OR] 1.85; 95% CI, 1.18-2.86), the amount of PEG ingested (OR 4.34; 95% CI, 1.08-16.66), and compliance with diet instructions (OR 2.22; 95% CI, 1.33-3.70) were significant contributors to satisfactory bowel preparation. Nonrandomized controlled, single-center trial. The optimal time interval between the last dose of the agent and the start of colonoscopy is one of the important factors determining satisfactory bowel preparation quality in split-dose polyethylene glycol bowel preparation. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
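
Confidence intervals for odds ratios such as those reported above are conventionally built on the log scale and then exponentiated, since the sampling distribution of ln(OR) is approximately normal. A minimal sketch; the standard-error value used in the example is illustrative, not taken from the study:

```python
import math

def odds_ratio_ci(odds_ratio, se_log_or, z=1.96):
    """CI for an odds ratio: exponentiate ln(OR) +/- z * SE(ln OR).
    z = 1.96 gives the usual 95% interval."""
    log_or = math.log(odds_ratio)
    return (math.exp(log_or - z * se_log_or),
            math.exp(log_or + z * se_log_or))

# Illustrative: OR of 1.85 with an assumed SE(ln OR) of 0.22.
lo, hi = odds_ratio_ci(1.85, 0.22)
```

Note the resulting interval is asymmetric around the OR itself, which is why published OR intervals (e.g. 4.34 with 95% CI 1.08-16.66) look lopsided on the raw scale.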

14. Evaluating a Computer Flash-Card Sight-Word Recognition Intervention with Self-Determined Response Intervals in Elementary Students with Intellectual Disability

Science.gov (United States)

Cazzell, Samantha; Skinner, Christopher H.; Ciancio, Dennis; Aspiranti, Kathleen; Watson, Tiffany; Taylor, Kala; McCurdy, Merilee; Skinner, Amy

2017-01-01

A concurrent multiple-baseline across-tasks design was used to evaluate the effectiveness of a computer flash-card sight-word recognition intervention with elementary-school students with intellectual disability. This intervention allowed the participants to self-determine each response interval and resulted in both participants acquiring…

15. nigerian students' self-confidence in responding to statements

African Journals Online (AJOL)

Temechegn

Altogether the test is made up of 40 items covering students' ability to recall definitions ... confidence interval within which students have confidence in their choice of the .... is mentioned these equilibrium systems come to memory of the learner.

16. Impact of Short Interval SMS Digital Data on Wind Vector Determination for a Severe Local Storms Area

Science.gov (United States)

Peslen, C. A.

1979-01-01

The impact of 5 minute interval SMS-2 visible digital image data in analyzing severe local storms is examined using wind vectors derived from cloud tracking on a time-lapse sequence of geosynchronous satellite images. The cloud tracking areas are located in the Central Plains, where on 6 May 1975, hail-producing thunderstorms occurred ahead of a well-defined dry line. The results demonstrate that satellite-derived wind vectors and their associated divergence fields complement conventional meteorological analyses in describing the conditions preceding severe local storm development.

17. Degradation of Insecticides in Poultry Manure: Determining the Insecticidal Treatment Interval for Managing House Fly (Diptera: Muscidae) Populations in Poultry Farms.

Science.gov (United States)

Ong, Song-Quan; Ab Majid, Abdul Hafiz; Ahmad, Hamdan

2016-04-01

It is crucial to understand the degradation pattern of insecticides when designing a sustainable control program for the house fly, Musca domestica (L.), on poultry farms. The aim of this study was to determine the half-life and degradation rates of cyromazine, chlorpyrifos, and cypermethrin by spiking these insecticides into poultry manure and then quantitatively analyzing the insecticide residue using ultra-performance liquid chromatography. The insecticides were later tested in the field in order to study appropriate insecticidal treatment intervals. Manure samples were bio-assayed at 3, 7, 10, and 15 d for efficacy against susceptible house fly larvae. Degradation analysis demonstrated that cyromazine has the shortest half-life (3.01 d) compared with chlorpyrifos (4.36 d) and cypermethrin (3.75 d). Cyromazine also had a significantly greater degradation rate than chlorpyrifos and cypermethrin. For the field insecticidal treatment interval study, an interval of 10 d was determined for cyromazine due to its significantly lower residue; for ChCy (a mixture of chlorpyrifos and cypermethrin), the suggested interval was 7 d. Future work should focus on the effects of insecticide metabolites on targeted pests and the poultry manure environment.
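
The half-life figures above map directly onto first-order decay kinetics. A minimal sketch of that relationship; the half-lives are from the abstract, while the function names and the 10-day evaluation point are ours:

```python
import math

def degradation_rate(half_life_days):
    """First-order rate constant k from half-life: k = ln(2) / t_half."""
    return math.log(2) / half_life_days

def residue_fraction(k, t_days):
    """Fraction of the initial residue remaining after t days: e^(-k*t)."""
    return math.exp(-k * t_days)

# Half-lives reported in the abstract (days)
k_cyromazine = degradation_rate(3.01)    # fastest degradation
k_chlorpyrifos = degradation_rate(4.36)

# Fraction of cyromazine remaining at the suggested 10 d treatment interval
f_cyro_10d = residue_fraction(k_cyromazine, 10)
```

With a 3.01 d half-life, roughly 10% of the cyromazine residue remains after 10 days, which is consistent with choosing a longer retreatment interval for the faster-degrading compound.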

18. Methodology for building confidence measures

Science.gov (United States)

Bramson, Aaron L.

2004-04-01

This paper presents a generalized methodology for propagating known or estimated levels of individual source document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements for determining the probability of truth for the whole. The result is a measure of confidence in system output based on establishing links among the truth values of inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.

19. Predictable weathering of puparial hydrocarbons of necrophagous flies for determining the postmortem interval: a field experiment using Chrysomya rufifacies.

Science.gov (United States)

Zhu, Guang-Hui; Jia, Zheng-Jun; Yu, Xiao-Jun; Wu, Ku-Sheng; Chen, Lu-Shi; Lv, Jun-Yao; Eric Benbow, M

2017-05-01

Preadult development of necrophagous flies is commonly recognized as an accurate method for estimating the minimum postmortem interval (PMImin). However, once the PMImin exceeds the duration of preadult development, the method is less accurate. Recently, fly puparial hydrocarbons were found to significantly change with weathering time in the field, indicating their potential use for PMImin estimates. However, additional studies are required to demonstrate how the weathering varies among species. In this study, the puparia of Chrysomya rufifacies were placed in the field to experience natural weathering to characterize hydrocarbon composition change over time. We found that weathering of the puparial hydrocarbons was regular and highly predictable in the field. For most of the hydrocarbons, the abundance decreased significantly and could be modeled using a modified exponent function. In addition, the weathering rate was significantly correlated with the hydrocarbon classes. The weathering rate of 2-methyl alkanes was significantly lower than that of alkenes and internal methyl alkanes, and alkenes were higher than the other two classes. For mono-methyl alkanes, the rate was significantly and positively associated with carbon chain length and branch position. These results indicate that puparial hydrocarbon weathering is highly predictable and can be used for estimating long-term PMImin.
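
The "modified exponent function" in the abstract is not specified, but the general workflow, fitting a decay curve to hydrocarbon abundance versus weathering time and then inverting it to estimate elapsed time, can be sketched with a plain exponential. This is a simplification of whatever the authors fitted, and the synthetic data below are invented:

```python
import math

def fit_exponential_decay(times, abundances):
    """Least-squares fit of A(t) = A0 * exp(-b*t) by regressing ln(A)
    on t; returns (A0, b)."""
    n = len(times)
    ys = [math.log(a) for a in abundances]
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    b = -slope                       # decay constant
    a0 = math.exp(ybar + b * tbar)   # initial abundance
    return a0, b

def invert_for_time(a0, b, abundance):
    """Estimated weathering time from a measured abundance."""
    return math.log(a0 / abundance) / b

# Synthetic calibration data: ideal decay with A0 = 100, b = 0.05 per day
times = [0, 10, 20, 30, 40]
abundances = [100 * math.exp(-0.05 * t) for t in times]
a0, b = fit_exponential_decay(times, abundances)
```

Calibrating such a curve per hydrocarbon (and per species, since weathering rates differ by class and chain length) is what would let a measured puparial profile be translated into a long-term PMImin estimate.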

20. A strategy for determination of test intervals of k-out-of-n multi-channel systems

International Nuclear Information System (INIS)

Cho, S.; Jiang, J.

2007-01-01

State space models for determination of the optimal test frequencies for k-out-of-n multi-channel systems are developed in this paper. The analytic solutions for the optimal surveillance test frequencies are derived using the Markov process technique. The solutions show that an optimal test frequency which maximizes the target probability can be determined by decomposing the system states into three states based on the system configuration and success criteria. Examples of quantification of the state probabilities and the optimal test frequencies of a three-channel system and a four-channel system with different success criteria are presented. The strategy for finding the optimal test frequency developed in this paper is generally applicable to any k-out-of-n multi-channel standby system that involves complex testing schemes. (author)

1. The use of regression analysis in determining reference intervals for low hematocrit and thrombocyte count in multiple electrode aggregometry and platelet function analyzer 100 testing of platelet function.

Science.gov (United States)

Kuiper, Gerhardus J A J M; Houben, Rik; Wetzels, Rick J H; Verhezen, Paul W M; Oerle, Rene van; Ten Cate, Hugo; Henskens, Yvonne M C; Lancé, Marcus D

2017-11-01

Low platelet counts and hematocrit levels hinder whole blood point-of-care testing of platelet function. Thus far, no reference ranges for MEA (multiple electrode aggregometry) and PFA-100 (platelet function analyzer 100) devices exist for low ranges. Through dilution methods of volunteer whole blood, platelet function at low ranges of platelet count and hematocrit levels was assessed on MEA for four agonists and for PFA-100 in two cartridges. Using (multiple) regression analysis, 95% reference intervals were computed for these low ranges. Low platelet counts affected MEA in a positive correlation (all agonists showed r² ≥ 0.75) and PFA-100 in an inverse correlation (closure times were prolonged with lower platelet counts). Lowered hematocrit did not affect MEA testing, except for arachidonic acid activation (ASPI), which showed a weak positive correlation (r² = 0.14). Closure time on PFA-100 testing was inversely correlated with hematocrit for both cartridges. Regression analysis revealed different 95% reference intervals in comparison with originally established intervals for both MEA and PFA-100 in low platelet or hematocrit conditions. Multiple regression analysis of ASPI and both tests on the PFA-100 for combined low platelet and hematocrit conditions revealed that only PFA-100 testing should be adjusted for both thrombocytopenia and anemia. 95% reference intervals were calculated using multiple regression analysis. However, coefficients of determination of PFA-100 were poor, and some variance remained unexplained. Thus, in this pilot study using (multiple) regression analysis, we could establish reference intervals of platelet function in anemia and thrombocytopenia conditions on PFA-100 and in thrombocytopenia conditions on MEA.

2. Validation and determination of a reference interval for canine HbA1c using an immunoturbidimetric assay.

Science.gov (United States)

Goemans, Anne F; Spence, Susanna J; Ramsey, Ian K

2017-06-01

Hemoglobin A1c (HbA1c) provides a reliable measure of glycemic control over 2-3 months in human diabetes mellitus. In dogs, presence of HbA1c has been demonstrated, but there are no validated commercial assays. The purpose of the study was to validate a commercially available automated immunoturbidimetric assay for canine HbA1c and determine an RI in a hospital population. The specificity of the assay was assessed by inducing glycosylation in vitro using isolated canine hemoglobin, repeatability by measuring canine samples 5 times in succession, long term inter-assay imprecision by measuring supplied control materials, stability using samples stored at 4°C over 5 days and -20°C over 8 weeks, linearity by mixing samples of known HbA1c in differing proportions, and the effect of anticoagulants with paired samples. An RI was determined using EDTA-anticoagulated blood samples from 60 nondiabetic hospitalized animals of various ages and breeds. Hemoglobin A1c was also measured in 10 diabetic dogs. The concentration of HbA1c increased proportionally with glucose concentration in vitro. For repeat measurements, the CV was 4.08% (range 1.16-6.10%). Samples were stable for 5 days at 4°C. The assay was linear within the assessed range. Heparin- and EDTA-anticoagulated blood provided comparable results. The RI for HbA1c was 9-18.5 mmol/mol. There was no apparent effect of age or breed on HbA1c. In diabetic dogs, HbA1c ranged from 14 to 48 mmol/mol. The assay provides a reliable method for canine HbA1c measurement with good analytic performance. © 2017 American Society for Veterinary Clinical Pathology.

3. Reclaim your creative confidence.

Science.gov (United States)

Kelley, Tom; Kelley, David

2012-12-01

Most people are born creative. But over time, a lot of us learn to stifle those impulses. We become warier of judgment, more cautious, more analytical. The world seems to divide into "creatives" and "noncreatives," and too many people resign themselves to the latter category. And yet we know that creativity is essential to success in any discipline or industry. The good news, according to authors Tom Kelley and David Kelley of IDEO, is that we all can rediscover our creative confidence. The trick is to overcome the four big fears that hold most of us back: fear of the messy unknown, fear of judgment, fear of the first step, and fear of losing control. The authors use an approach based on the work of psychologist Albert Bandura in helping patients get over their snake phobias: You break challenges down into small steps and then build confidence by succeeding on one after another. Creativity is something you practice, say the authors, not just a talent you are born with.

4. Confidence in Numerical Simulations

International Nuclear Information System (INIS)

Hemez, Francois M.

2015-01-01

This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to "forecast," that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists "think." This thought process parallels the scientific method whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. "Confidence" derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

5. Confidence bands for inverse regression models

International Nuclear Information System (INIS)

Birke, Melanie; Bissantz, Nicolai; Holzmann, Hajo

2010-01-01

We construct uniform confidence bands for the regression function in inverse, homoscedastic regression models with convolution-type operators. Here, the convolution is between two non-periodic functions on the whole real line rather than between two periodic functions on a compact interval, since the former situation arguably arises more often in applications. First, following Bickel and Rosenblatt (1973 Ann. Stat. 1 1071–95) we construct asymptotic confidence bands which are based on strong approximations and on a limit theorem for the supremum of a stationary Gaussian process. Further, we propose bootstrap confidence bands based on the residual bootstrap and prove consistency of the bootstrap procedure. A simulation study shows that the bootstrap confidence bands perform reasonably well for moderate sample sizes. Finally, we apply our method to data from a gel electrophoresis experiment with genetically engineered neuronal receptor subunits incubated with rat brain extract

6. Predicting fecal coliform using the interval-to-interval approach and SWAT in the Miyun watershed, China.

Science.gov (United States)

Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu; Qiu, Jiali; Li, Yangyang

2017-06-01

Pathogens in manure can cause waterborne-disease outbreaks, serious illness, and even death in humans. Therefore, information about the transformation and transport of bacteria is crucial for determining their source. In this study, the Soil and Water Assessment Tool (SWAT) was applied to simulate fecal coliform bacteria load in the Miyun Reservoir watershed, China. The data for the fecal coliform were obtained at three sampling sites, Chenying (CY), Gubeikou (GBK), and Xiahui (XH). The calibration processes of the fecal coliform were conducted using the CY and GBK sites, and validation was conducted at the XH site. An interval-to-interval approach was designed and incorporated into the processes of fecal coliform calibration and validation. The 95% confidence interval of the predicted values and the 95% confidence interval of measured values were considered during calibration and validation in the interval-to-interval approach. Compared with the traditional point-to-point comparison, this method can improve simulation accuracy. The results indicated that the simulation of fecal coliform using the interval-to-interval approach was reasonable for the watershed. This method could provide a new research direction for future model calibration and validation studies.
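
The interval-to-interval comparison can be sketched as an overlap test between the 95% CI of the predictions and the 95% CI of the measurements at each comparison point. This is a simplification of the paper's calibration procedure, and the function names and example intervals are ours:

```python
def intervals_overlap(pred_ci, meas_ci):
    """True if two closed intervals (lo, hi) share at least one point."""
    return pred_ci[0] <= meas_ci[1] and meas_ci[0] <= pred_ci[1]

def interval_score(pred_cis, meas_cis):
    """Fraction of comparison points where the predicted and measured
    95% CIs overlap -- an interval-to-interval goodness-of-fit measure."""
    hits = sum(intervals_overlap(p, m) for p, m in zip(pred_cis, meas_cis))
    return hits / len(pred_cis)

# Two comparison points: the first pair of intervals overlaps, the second does not.
score = interval_score([(1.0, 3.0), (1.0, 2.0)], [(2.0, 4.0), (3.0, 4.0)])
```

Scoring interval overlap rather than point-to-point distance credits the model whenever the uncertainty bands agree, which is why the approach is more forgiving for noisy quantities such as bacterial loads.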

7. Globalization of consumer confidence

Directory of Open Access Journals (Sweden)

2017-01-01

Full Text Available The globalization of world economies and the importance of nowcasting analysis have been at the core of the recent literature. Nevertheless, these two strands of research are hardly coupled. This study aims to fill this gap through examining the globalization of the consumer confidence index (CCI by applying conventional and unconventional econometric methods. The US CCI is used as the benchmark in tests of comovement among the CCIs of several developing and developed countries, with the data sets divided into three sub-periods: global liquidity abundance, the Great Recession, and postcrisis. The existence and/or degree of globalization of the CCIs vary according to the period, whereas globalization in the form of coherence and similar paths is observed only during the Great Recession and, surprisingly, stronger in developing/emerging countries.

8. Self-Confidence in the Hospitality Industry

Directory of Open Access Journals (Sweden)

Michael Oshins

2014-02-01

Full Text Available Few industries rely on self-confidence to the extent that the hospitality industry does, because guests must feel welcome and that they are in capable hands. This article examines the results of hundreds of student interviews with industry professionals at all levels to determine where the majority of hospitality professionals get their self-confidence.

9. Analysing of 228Th, 232Th, 228Ra in human bone tissues for the purpose of determining the post mortal interval

International Nuclear Information System (INIS)

Kandlbinder, R.; Geissler, V.; Schupfner, R.; Wolfbeis, O.; Zinka, B.

2009-01-01

Bone tissues of thirteen deceased persons were analyzed to determine the activity concentration of the radionuclides 228Ra, 228Th, 232Th and 230Th. The activity ratios enable assessment of the post-mortem interval (PMI). The samples were prepared for analysis by incinerating and pulverizing. 228Ra was directly detected by γ-spectrometry. 228Th, 230Th and 232Th were detected by α-spectrometry after radiochemical purification and electrodeposition. It is shown that the method is principally suited to determine the PMI. A minimum of 300 g (wet weight) of human bone tissue is required for the analysis. Counting times are in the range of one to two weeks. (author)

10. Leadership by Confidence in Teams

OpenAIRE

Kobayashi, Hajime; Suehiro, Hideo

2008-01-01

We study endogenous signaling by analyzing a team production problem with endogenous timing. Each agent of the team is privately endowed with some level of confidence about team productivity. Each of them must then commit a level of effort in one of two periods. At the end of each period, each agent observes his partner's move in this period. Both agents are rewarded by a team output determined by team productivity and total invested effort. Each agent must personally incur the cost of effor...

11. Regional Competition for Confidence: Features of Formation

Directory of Open Access Journals (Sweden)

Irina Svyatoslavovna Vazhenina

2016-09-01

Full Text Available The increase in economic independence of the regions inevitably leads to an increase in the quality requirements of regional economic policy. The key to successful regional policy, both during its development and implementation, is the understanding of the necessity of gaining confidence (at all levels) and of the inevitable participation in the competition for confidence. The importance of confidence in the region is determined by its value as a competitive advantage in the struggle for partners, resources, tourists and investment. In today's environment the focus of governments, regions and companies on long-term cooperation is clearly expressed, which is impossible without a high level of confidence between partners. Therefore, the most important competitive advantages of territories are intangible assets such as an attractive image and a good reputation, which build up the confidence of the population and partners. The higher the confidence in the region is, the broader is the range of potential partners, the larger is the planning horizon of long-term concerted action, the better are the chances of acquiring investment, and the higher is the level of competitive immunity of the territory. The article defines competition for confidence as purposeful behavior of a market participant in the economic environment, aimed at acquiring a specific intangible competitive advantage: the confidence of the largest possible number of other market actors. The article also highlights the specifics of confidence as a competitive goal, presents factors contributing to the destruction of confidence, proposes a strategy for the fight for confidence as a program of four steps, considers the factors which integrate regional confidence, and offers several recommendations for effective regional competition for confidence.

12. Confidence in Numerical Simulations

Energy Technology Data Exchange (ETDEWEB)

Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

2015-02-23

This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to “forecast,” that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists “think.” This thought process parallels the scientific method whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. “Confidence” derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

13. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

Science.gov (United States)

de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

2018-05-01

This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.
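
As a rough illustration of the interval-selection idea, the sketch below ranks contiguous wavelength intervals of a synthetic data set by cross-validated prediction error. It deliberately substitutes plain least squares for the paper's iSPA-Kernel-PLS machinery, and the data, interval sizes and location of the informative band are all invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 100 samples x 60 wavelengths; only the band
# 20-29 carries information about the analyte concentration y.
n, p = 100, 60
y = rng.uniform(5, 20, n)                      # e.g. Brix-like values
X = rng.normal(0, 1, (n, p))
X[:, 20:30] += np.outer(y, np.linspace(0.5, 1.5, 10))

def rmse_cv(Xi, y, folds=5):
    """Cross-validated RMSE of an ordinary least-squares model on one interval."""
    idx = np.arange(len(y))
    errs = []
    for k in range(folds):
        test = idx % folds == k
        train = ~test
        A = np.column_stack([np.ones(train.sum()), Xi[train]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([np.ones(test.sum()), Xi[test]]) @ coef
        errs.append((y[test] - pred) ** 2)
    return np.sqrt(np.concatenate(errs).mean())

# Split the spectrum into 6 contiguous intervals of 10 variables and
# rank them by cross-validated error; the informative band should win.
intervals = [(s, s + 10) for s in range(0, p, 10)]
scores = [rmse_cv(X[:, a:b], y) for a, b in intervals]
best = intervals[int(np.argmin(scores))]
print(best)
```

In the actual algorithm, the Successive Projections Algorithm selects combinations of intervals and a Kernel-PLS model replaces the least-squares fit, but the idea of ranking candidate intervals by validation error is the same.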

14. Planning an Availability Demonstration Test with Consideration of Confidence Level

Directory of Open Access Journals (Sweden)

Frank Müller

2017-08-01

Full Text Available The full service life of a technical product or system is usually not completed after an initial failure. With appropriate measures, the system can be returned to a functional state. Availability is an important parameter for evaluating such repairable systems: failure and repair behaviors are required to determine this availability. These data are usually given as mean value distributions with a certain confidence level. Consequently, the availability value also needs to be expressed with a confidence level. This paper first highlights the bootstrap Monte Carlo simulation (BMCS) for availability demonstration and inference with confidence intervals based on limited failure and repair data. The BMCS enables point, steady-state and average availability to be determined with a confidence level based on the pure samples or mean value distributions in combination with the corresponding sample size of failure and repair behavior. Furthermore, the method enables individual sample sizes to be used. A sample calculation for a system with Weibull-distributed failure behavior and a sample of repair times is presented. Based on the BMCS, an extended, new procedure is introduced: the “inverse bootstrap Monte Carlo simulation” (IBMCS), to be used for availability demonstration tests with consideration of confidence levels. The IBMCS provides a test plan comprising the required number of failures and repair actions that must be observed to demonstrate a certain availability value. The concept can be applied to each type of availability and also to the pure samples or distribution functions of failure and repair behavior. It does not require special types of distribution; for example, a Weibull, a lognormal or an exponential distribution can all be considered as distribution functions of failure and repair behavior. After presenting the IBMCS, a sample calculation is carried out and the potential of the BMCS and the IBMCS…
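
A minimal sketch of the bootstrap idea behind availability inference (a simplification, not the authors' BMCS itself): resample observed failure and repair times with replacement, recompute the steady-state availability MTTF/(MTTF + MTTR) for each replicate, and read confidence limits off the bootstrap distribution. The Weibull and lognormal parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical field data: observed times-to-failure (h) and repair times (h).
ttf = rng.weibull(1.5, 30) * 400          # Weibull-distributed failure behaviour
ttr = rng.lognormal(1.0, 0.5, 30)         # lognormal repair times

def availability(ttf, ttr):
    # Steady-state availability = MTTF / (MTTF + MTTR)
    return ttf.mean() / (ttf.mean() + ttr.mean())

# Bootstrap: resample both samples with replacement, recompute availability.
B = 5000
boot = np.empty(B)
for b in range(B):
    boot[b] = availability(rng.choice(ttf, ttf.size), rng.choice(ttr, ttr.size))

lo, hi = np.percentile(boot, [5, 95])     # two-sided 90% interval
print(f"A = {availability(ttf, ttr):.4f}, 90% CI [{lo:.4f}, {hi:.4f}]")
```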

15. Picking Funds with Confidence

DEFF Research Database (Denmark)

Grønborg, Niels Strange; Lunde, Asger; Timmermann, Allan

We present a new approach to selecting active mutual funds that uses both holdings and return information to eliminate funds with predicted inferior performance through a sequence of pair-wise comparisons. Our methodology determines both the number of skilled funds and their identity, funds...... identified ex-ante as being superior earn substantially higher risk-adjusted returns than top funds identified by conventional alpha ranking methods. Importantly, we find strong evidence of variation in the breadth of the set of funds identified as superior, as well as fluctuations in the style and industry...... exposures of such funds over time and across different volatility states....

16. Confidence bounds for normal and lognormal distribution coefficients of variation

Science.gov (United States)

Steve Verrill

2003-01-01

This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
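
For contrast with the exact approach, a simple approximate interval for a coefficient of variation can be obtained by bootstrapping. The sample below is synthetic, and the percentile bootstrap shown is merely one approximate method, not the noncentral-t-based exact construction the paper evaluates:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(100, 15, 40)               # sample with true CV = 0.15

cv = x.std(ddof=1) / x.mean()

# Percentile bootstrap interval for the CV (an approximate approach).
B = 5000
boot = np.array([(s := rng.choice(x, x.size)).std(ddof=1) / s.mean()
                 for _ in range(B)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"CV = {cv:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Consistent with the paper's finding, approximations like this degrade for large CVs and small samples, which is where the exact approach earns its keep.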

17. Communication confidence in persons with aphasia.

Science.gov (United States)

Babbitt, Edna M; Cherney, Leora R

2010-01-01

Communication confidence is a construct that has not been explored in the aphasia literature. Recently, national and international organizations have endorsed broader assessment methods that address quality of life and include participation, activity, and impairment domains as well as psychosocial areas. Individuals with aphasia encounter difficulties in all these areas on a daily basis in living with a communication disorder. Improvements are often reflected in narratives that are not typically included in standard assessments. This article illustrates how a new instrument measuring communication confidence might fit into a broad assessment framework and discusses the interaction of communication confidence, autonomy, and self-determination for individuals living with aphasia.

18. Simultaneous determination of environmental α-radionuclides using liquid scintillation counting combined with time interval analysis (TIA) and pulse shape discrimination (PSD)

International Nuclear Information System (INIS)

Hashimoto, T.; Sato, K.; Yoneyama, Y.; Fukuyama, N.

1997-01-01

Some improvements in the detection sensitivity of pulse time interval analysis (TIA), which is based on the selective extraction of successively α–α correlated decay events on the millisecond scale from random or background events, were established through the use of PSD to reject β/γ-pulses from α-pulses, a simple chemical procedure for radium separation, and a well resolved scintillator. By applying the PSD, the contribution of β-decay events was completely eliminated from both the α-spectra and the TIA distribution curves, along with a clear improvement in energy resolution and an enhancement of the detection sensitivity of the TIA. As a result, TIA and α-spectrometric analysis of a 226Ra extract showed the existence of 223Ra (Ac-series) and of β/α-correlated events with a correlated life of 0.16 ms (due to 214Bi(β) -> 214Po(α)), along with a single well resolved α-peak useful for the determination of 226Ra (U-series). The difference in half-lives (145 and 1.78 ms) of 216Po and 215Po (direct daughters of 224Ra in the Th-series and 223Ra in the Ac-series, respectively) was also shown to allow the simultaneous determination of both correlated events using the TIA/PSD combined with chemical separation and the liquid scintillation counting method. Finally, the simultaneous determination of three natural decay series, including U-, Th- and Ac-series nuclides, has been conveniently carried out for some environmental samples using the present method combined with a 225Ra yield tracer (Np-series). (author)

19. The idiosyncratic nature of confidence.

Science.gov (United States)

Navajas, Joaquin; Hindocha, Chandni; Foda, Hebah; Keramati, Mehdi; Latham, Peter E; Bahrami, Bahador

2017-11-01

Confidence is the 'feeling of knowing' that accompanies decision making. Bayesian theory proposes that confidence is a function solely of the perceived probability of being correct. Empirical research has suggested, however, that different individuals may perform different computations to estimate confidence from uncertain evidence. To test this hypothesis, we collected confidence reports in a task where subjects made categorical decisions about the mean of a sequence. We found that for most individuals, confidence did indeed reflect the perceived probability of being correct. However, in approximately half of them, confidence also reflected a different probabilistic quantity: the perceived uncertainty in the estimated variable. We found that the contribution of both quantities was stable over weeks. We also observed that the influence of the perceived probability of being correct was stable across two tasks, one perceptual and one cognitive. Overall, our findings provide a computational interpretation of individual differences in human confidence.
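
The Bayesian notion that confidence equals the perceived probability of being correct can be written down directly for a sign-of-the-mean decision. Under the simplifying assumptions of a flat prior and a known noise level (assumptions made for this sketch, not details taken from the paper), the posterior probability of being correct is:

```python
from math import sqrt, erf

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def confidence_prob_correct(xbar, sigma, n):
    """
    Bayesian confidence for a categorical decision about the sign of the
    mean of a sequence: with a flat prior on the mean and known noise
    sigma, the posterior probability that the chosen category ("mean > 0"
    when xbar > 0) is correct is Phi(|xbar| * sqrt(n) / sigma).
    """
    return phi(abs(xbar) * sqrt(n) / sigma)

# Ten samples with observed mean 0.5 and unit noise.
print(round(confidence_prob_correct(0.5, 1.0, 10), 3))
```

The paper's point is that only about half of subjects report something like this quantity; the others mix in the perceived uncertainty of the estimate itself, which this formula does not capture.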

20. Determination of reference intervals and comparison of venous blood gas parameters using standard and non-standard collection methods in 24 cats.

Science.gov (United States)

Bachmann, Karin; Kutter, Annette Pn; Schefer, Rahel Jud; Marly-Voquer, Charlotte; Sigrist, Nadja

2017-08-01

Objectives The aim of this study was to determine in-house reference intervals (RIs) for venous blood analysis with the RAPIDPoint 500 blood gas analyser using blood gas syringes (BGSs) and to determine whether immediate analysis of venous blood collected into lithium heparin (LH) tubes can replace anaerobic blood sampling into BGSs. Methods Venous blood was collected from 24 healthy cats and directly transferred into a BGS and an LH tube. The BGS was immediately analysed on the RAPIDPoint 500, followed by the LH tube. The BGSs and LH tubes were compared using paired t-test or Wilcoxon matched-pairs signed-rank test, Bland-Altman and Passing-Bablok analysis. To assess clinical relevance, bias or percentage bias between BGSs and LH tubes was compared with the allowable total error (TEa) recommended for the respective parameter. Results Based on the values obtained from the BGSs, RIs were calculated for the evaluated parameters, including blood gases, electrolytes, glucose and lactate. Values derived from LH tubes showed no significant difference for standard bicarbonate, whole blood base excess, haematocrit, total haemoglobin, sodium, potassium, chloride, glucose and lactate, while pH, partial pressure of carbon dioxide and oxygen, actual bicarbonate, extracellular base excess, ionised calcium and anion gap were significantly different from the samples collected in BGSs (P <0.05). Clinically relevant decisions based on the parameters showing no significant difference, such as glucose and lactate, can therefore be made from blood collected in LH tubes and analysed within 5 mins. For pH, partial pressure of carbon dioxide and oxygen, extracellular base excess, anion gap and ionised calcium, the clinically relevant alterations have to be considered if analysed in LH tubes.
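
The Bland-Altman comparison used above reduces to a bias (mean difference between methods) and 95% limits of agreement. A sketch with fabricated paired measurements; the offset and noise values are invented for illustration and are not from this study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Paired measurements of the same quantity by two collection methods,
# e.g. sodium (mmol/L) from blood gas syringes vs lithium heparin tubes.
bgs = rng.normal(150, 3, 24)
lh = bgs + 0.4 + rng.normal(0, 0.5, 24)   # small systematic offset + noise

diff = lh - bgs
bias = diff.mean()
sd = diff.std(ddof=1)
loa_lo, loa_hi = bias - 1.96 * sd, bias + 1.96 * sd   # 95% limits of agreement
print(f"bias = {bias:.2f}, 95% limits of agreement [{loa_lo:.2f}, {loa_hi:.2f}]")
```

Clinical acceptability is then judged by comparing the bias and limits against the allowable total error for the parameter, as the study does with TEa.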

1. Diverse interpretations of confidence building

International Nuclear Information System (INIS)

Macintosh, J.

1998-01-01

This paper explores the variety of operational understandings associated with the term 'confidence building'. Collectively, these understandings constitute what should be thought of as a 'family' of confidence building approaches. This unacknowledged and generally unappreciated proliferation of operational understandings that function under the rubric of confidence building appears to be an impediment to effective policy. The paper's objective is to analyze these different understandings, stressing the important differences in their underlying assumptions. In the process, the paper underlines the need for the international community to clarify its collective thinking about what it means when it speaks of 'confidence building'. Without enhanced clarity, it will be unnecessarily difficult to employ the confidence building approach effectively due to the lack of consistent objectives and common operating assumptions. Although it is not the intention of this paper to promote a particular account of confidence building, dissecting existing operational understandings should help to identify whether there are fundamental elements that define what might be termed 'authentic' confidence building. Implicit here is the view that some operational understandings of confidence building may diverge too far from consensus models to count as meaningful members of the confidence building family. (author)

2. Reference intervals for thyreotropin and thyroid hormones for healthy adults based on the NOBIDA material and determined using a Modular E170

DEFF Research Database (Denmark)

Friis-Hansen, Lennart; Hilsted, Linda

2008-01-01

BACKGROUND: The aim of the present study was to establish Nordic reference intervals for thyreotropin (TSH) and the thyroid hormones in heparinized plasma. METHODS: We used 489 heparinized blood samples, collected in the morning, from the Nordic NOBIDA reference material, from healthy adults...... for the thyroid hormones, but not TSH, followed a Gaussian distribution. There were more TPO-ab and Tg-ab positive women than men. After exclusion of the TPO-ab and the Tg-ab positive individuals, the reference interval TSH was 0.64 (0.61-0.72) to 4.7 (4.4-5.0) mIU/L. The exclusion of these ab-positive samples...... also minimized the differences in TSH concentrations between the sexes and the different Nordic countries. For the thyroid hormones, there were only minor differences between the reference intervals between the Nordic populations and between men and women. These reference intervals were unaffected...
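
Reference intervals like these are conventionally taken as the central 95% of the healthy reference distribution. A sketch with simulated, lognormal-shaped "TSH-like" values; the distribution parameters are invented for illustration and are not the NOBIDA data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical TSH-like values (mIU/L) from healthy reference subjects;
# a lognormal shape mimics the skewed, non-Gaussian TSH distribution.
tsh = rng.lognormal(mean=0.4, sigma=0.45, size=489)

# Nonparametric 95% reference interval: central 2.5th-97.5th percentiles.
lo, hi = np.percentile(tsh, [2.5, 97.5])
print(f"reference interval: {lo:.2f} - {hi:.2f} mIU/L")
```

In practice, as in the study above, antibody-positive individuals are excluded before the percentiles are computed, since subclinical disease skews the upper limit.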

3. Energy and macronutrient content of familiar beverages interact with pre-meal intervals to determine later food intake, appetite and glycemic response in young adults.

Science.gov (United States)

Panahi, Shirin; Luhovyy, Bohdan L; Liu, Ting Ting; Akhavan, Tina; El Khoury, Dalia; Goff, H Douglas; Harvey Anderson, G

2013-01-01

The objective was to compare the effects of pre-meal consumption of familiar beverages on appetite, food intake, and glycemic response in healthy young adults. Two short-term experiments compared the effect of consumption at 30 (experiment 1) or 120 min (experiment 2) before a pizza meal of isovolumetric amounts (500 mL) of water (0 kcal), soy beverage (200 kcal), 2% milk (260 kcal), 1% chocolate milk (340 kcal), orange juice (229 kcal) and cow's milk-based infant formula (368 kcal) on food intake and subjective appetite and blood glucose before and after a meal. Pre-meal ingestion of chocolate milk and infant formula reduced food intake compared to water at 30 min, however, beverage type did not affect food intake at 2h. Pre-meal blood glucose was higher after chocolate milk than other caloric beverages from 0 to 30 min (experiment 1), and after chocolate milk and orange juice from 0 to 120 min (experiment 2). Only milk reduced post-meal blood glucose in both experiments, suggesting that its effects were independent of meal-time energy intake. Combined pre- and post-meal blood glucose was lower after milk compared to chocolate milk and orange juice, but did not differ from other beverages. Thus, beverage calorie content and inter-meal intervals are primary determinants of food intake in the short-term, but macronutrient composition, especially protein content and composition, may play the greater role in glycemic control. Copyright © 2012 Elsevier Ltd. All rights reserved.

4. Correct Bayesian and frequentist intervals are similar

International Nuclear Information System (INIS)

Atwood, C.L.

1986-01-01

This paper argues that Bayesians and frequentists will normally reach numerically similar conclusions, when dealing with vague data or sparse data. It is shown that both statistical methodologies can deal reasonably with vague data. With sparse data, in many important practical cases Bayesian interval estimates and frequentist confidence intervals are approximately equal, although with discrete data the frequentist intervals are somewhat longer. This is not to say that the two methodologies are equally easy to use: the construction of a frequentist confidence interval may require new theoretical development. Bayesian methods typically require numerical integration, perhaps over many variables. Also, Bayesians can easily fall into the trap of over-optimism about their amount of prior knowledge. But in cases where both intervals are found correctly, the two intervals are usually not very different. (orig.)
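
The binomial case illustrates both claims at once: the two intervals are similar, and the frequentist one is somewhat longer for discrete data. Below, a Clopper-Pearson 95% confidence interval is compared with an equal-tailed 95% credible interval under a uniform prior; the counts are arbitrary:

```python
from scipy import stats

# x successes in n trials -- sparse, discrete data.
x, n = 3, 20

# Frequentist: Clopper-Pearson "exact" 95% confidence interval,
# expressed via beta-distribution quantiles.
f_lo = stats.beta.ppf(0.025, x, n - x + 1)
f_hi = stats.beta.ppf(0.975, x + 1, n - x)

# Bayesian: equal-tailed 95% credible interval with a uniform Beta(1,1)
# prior, so the posterior is Beta(x+1, n-x+1).
b_lo = stats.beta.ppf(0.025, x + 1, n - x + 1)
b_hi = stats.beta.ppf(0.975, x + 1, n - x + 1)

print(f"frequentist [{f_lo:.3f}, {f_hi:.3f}]  bayesian [{b_lo:.3f}, {b_hi:.3f}]")
```

The Clopper-Pearson interval strictly contains the credible interval here, matching the paper's observation that the frequentist interval is the longer of the two for discrete data.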

5. Interval selection with machine-dependent intervals

OpenAIRE

Bohmova K.; Disser Y.; Mihalak M.; Widmayer P.

2013-01-01

We study an offline interval scheduling problem where every job has exactly one associated interval on every machine. To schedule a set of jobs, exactly one of the intervals associated with each job must be selected, and the intervals selected on the same machine must not intersect. We show that deciding whether all jobs can be scheduled is NP-complete already in various simple cases. In particular, by showing the NP-completeness for the case when all the intervals associated with the same job...

6. Nuclear power: restoring public confidence

International Nuclear Information System (INIS)

Arnold, L.

1986-01-01

The paper concerns a one day conference on nuclear power organised by the Centre for Science Studies and Science Policy, Lancaster, April 1986. Following the Chernobyl reactor accident, the conference concentrated on public confidence in nuclear power. Causes of lack of public confidence, public perceptions of risk, and the effect of Chernobyl in the United Kingdom, were all discussed. A Select Committee on the Environment examined the problems of radioactive waste disposal. (U.K.)

7. Confidence building - is science the only approach

International Nuclear Information System (INIS)

Bragg, K.

1990-01-01

The Atomic Energy Control Board (AECB) has begun to develop some simplified methods to determine if it is possible to provide confidence that dose, risk and environmental criteria can be respected without undue reliance on detailed scientific models. The progress to date will be outlined and the merits of this new approach will be compared to the more complex, traditional approach. Stress will be given to generating confidence in both technical and non-technical communities as well as the need to enhance communication between them. 3 refs., 1 tab

8. Parents' obesity-related behavior and confidence to support behavioral change in their obese child: data from the STAR study.

Science.gov (United States)

Arsenault, Lisa N; Xu, Kathleen; Taveras, Elsie M; Hacker, Karen A

2014-01-01

Successful childhood obesity interventions frequently focus on behavioral modification and involve parents or family members. Parental confidence in supporting behavior change may be an element of successful family-based prevention efforts. We aimed to determine whether parents' own obesity-related behaviors were related to their confidence in supporting their child's achievement of obesity-related behavioral goals. Cross-sectional analyses of data collected at baseline of a randomized control trial testing a treatment intervention for obese children (n = 787) in primary care settings (n = 14). Five obesity-related behaviors (physical activity, screen time, sugar-sweetened beverage, sleep duration, fast food) were self-reported by parents for themselves and their child. Behaviors were dichotomized on the basis of achievement of behavioral goals. Five confidence questions asked how confident the parent was in helping their child achieve each goal. Logistic regression modeling high confidence was conducted with goal achievement and demographics as independent variables. Parents achieving physical activity or sleep duration goals were significantly more likely to be highly confident in supporting their child's achievement of those goals (physical activity, odds ratio 1.76; 95% confidence interval 1.19-2.60; sleep, odds ratio 1.74; 95% confidence interval 1.09-2.79) independent of sociodemographic variables and child's current behavior. Parental achievements of TV watching and fast food goals were also associated with confidence, but significance was attenuated after child's behavior was included in models. Parents' own obesity-related behaviors are factors that may affect their confidence to support their child's behavior change. Providers seeking to prevent childhood obesity should address parent/family behaviors as part of their obesity prevention strategies. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
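
Odds ratios with 95% confidence intervals, as reported above, can be computed from a 2 x 2 table using Woolf's logit method. The counts below are fabricated for illustration and are not the STAR study data:

```python
from math import exp, log, sqrt

# Hypothetical 2x2 table: parents who met the physical-activity goal
# (exposed) vs not, cross-tabulated against high confidence (outcome).
#                 high confidence   low confidence
a, b = 120, 80           # goal met
c, d = 150, 200          # goal not met

or_ = (a * d) / (b * c)
se = sqrt(1/a + 1/b + 1/c + 1/d)          # Woolf's standard error of ln(OR)
lo, hi = exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The study itself used logistic regression so that the odds ratios could be adjusted for sociodemographics and the child's current behavior; the table method above gives only the crude, unadjusted estimate.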

9. Confidence in critical care nursing.

Science.gov (United States)

Evans, Jeanne; Bell, Jennifer L; Sweeney, Annemarie E; Morgan, Jennifer I; Kelly, Helen M

2010-10-01

The purpose of the study was to gain an understanding of the nursing phenomenon, confidence, from the experience of nurses in the nursing subculture of critical care. Leininger's theory of cultural care diversity and universality guided this qualitative descriptive study. Questions derived from the sunrise model were used to elicit nurses' perspectives about cultural and social structures that exist within the critical care nursing subculture and the influence that these factors have on confidence. Twenty-eight critical care nurses from a large Canadian healthcare organization participated in semistructured interviews about confidence. Five themes arose from the descriptions provided by the participants. The three themes, tenuously navigating initiation rituals, deliberately developing holistic supportive relationships, and assimilating clinical decision-making rules were identified as social and cultural factors related to confidence. The remaining two themes, preserving a sense of security despite barriers and accommodating to diverse challenges, were identified as environmental factors related to confidence. Practice and research implications within the culture of critical care nursing are discussed in relation to each of the themes.

10. Professional confidence: a concept analysis.

Science.gov (United States)

Holland, Kathlyn; Middleton, Lyn; Uys, Leana

2012-03-01

Professional confidence is a concept that is frequently used and or implied in occupational therapy literature, but often without specifying its meaning. Rodgers's Model of Concept Analysis was used to analyse the term "professional confidence". Published research obtained from a federated search in four health sciences databases was used to inform the concept analysis. The definitions, attributes, antecedents, and consequences of professional confidence as evidenced in the literature are discussed. Surrogate terms and related concepts are identified, and a model case of the concept provided. Based on the analysis, professional confidence can be described as a dynamic, maturing personal belief held by a professional or student. This includes an understanding of and a belief in the role, scope of practice, and significance of the profession, and is based on their capacity to competently fulfil these expectations, fostered through a process of affirming experiences. Developing and fostering professional confidence should be nurtured and valued to the same extent as professional competence, as the former underpins the latter, and both are linked to professional identity.

11. Targeting Low Career Confidence Using the Career Planning Confidence Scale

Science.gov (United States)

McAuliffe, Garrett; Jurgens, Jill C.; Pickering, Worth; Calliotte, James; Macera, Anthony; Zerwas, Steven

2006-01-01

The authors describe the development and validation of a test of career planning confidence that makes possible the targeting of specific problem issues in employment counseling. The scale, developed using a rational process and the authors' experience with clients, was tested for criterion-related validity against 2 other measures. The scale…

12. Convex Interval Games

NARCIS (Netherlands)

Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

2008-01-01

In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games and their relations with the interval core for...

13. Normal probability plots with confidence.

Science.gov (United States)

Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

2015-01-01

Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals provide therefore an objective mean to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
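
The construction can be sketched as follows, with the caveat that the bands computed here are pointwise, whereas the paper's contribution is simultaneous 1-α intervals, which must be wider to achieve joint coverage:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 50
x = np.sort(rng.normal(0, 1, n))

# Pointwise 95% intervals for each plotted point of a normal probability
# plot: the i-th order statistic of n U(0,1) draws follows Beta(i, n-i+1),
# so mapping its quantiles through the normal ppf bounds the i-th point.
i = np.arange(1, n + 1)
lo = stats.norm.ppf(stats.beta.ppf(0.025, i, n - i + 1))
hi = stats.norm.ppf(stats.beta.ppf(0.975, i, n - i + 1))

# For normal data each sorted point lies in its own band with probability
# 0.95, but not all points jointly -- hence the need for simultaneous bands.
inside = np.mean((x >= lo) & (x <= hi))
print(f"fraction of points inside their pointwise band: {inside:.2f}")
```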

14. Experimenting with musical intervals

Science.gov (United States)

Lo Presto, Michael C.

2003-07-01

When two tuning forks of different frequency are sounded simultaneously the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
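
The repetition frequency described here is simply the greatest common divisor of the two fork frequencies, since that is the fundamental of the harmonic series to which both belong:

```python
from math import gcd

def repetition_frequency(f1: int, f2: int) -> int:
    """Fundamental (Hz) of the harmonic series containing both frequencies."""
    return gcd(f1, f2)

# A perfect fifth: A4 (440 Hz) and E5 (660 Hz) share the fundamental
# 220 Hz, so the combined waveform repeats 220 times per second.
print(repetition_frequency(440, 660))  # → 220
```

This is the value students can compare against the fundamental measured from the captured waveform in the experiment.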

15. Alan Greenspan, the confidence strategy

Directory of Open Access Journals (Sweden)

Edwin Le Heron

2006-12-01

Full Text Available To evaluate the Greenspan era, we need to address three questions: Is his success due to talent or just luck? Does he have a system of monetary policy or is he himself the system? What will be his legacy? Greenspan was certainly lucky, but he was also clairvoyant. Above all, he developed a profoundly original monetary policy. His confidence strategy is clearly opposed to the credibility strategy developed in central banks and the academic milieu after 1980, and also to inflation targeting, which today constitutes the mainstream monetary policy regime. The question of his legacy seems more nuanced. Nevertheless, Greenspan will remain 'for a considerable period of time' a highly heterodox and original central banker. His political vision, his perception of an uncertain world, his pragmatism and his openness form the structure of a powerful alternative system, the confidence strategy, which will leave its mark on the history of monetary policy.

16. Graphical interpretation of confidence curves in rankit plots

DEFF Research Database (Denmark)

Hyltoft Petersen, Per; Blaabjerg, Ole; Andersen, Marianne

2004-01-01

A well-known transformation from the bell-shaped Gaussian (normal) curve to a straight line in the rankit plot is investigated, and a tool for evaluation of the distribution of reference groups is presented. It is based on the confidence intervals for percentiles of the calculated Gaussian distri...

17. Towards confidence in transport safety

International Nuclear Information System (INIS)

Robison, R.W.

1992-01-01

The U.S. Department of Energy (US DOE) plans to demonstrate to the public that high-level waste can be transported safely to the proposed repository. The author argues US DOE should begin now to demonstrate its commitment to safety by developing an extraordinary safety program for nuclear cargo it is now shipping. The program for current shipments should be developed with State, Tribal, and local officials. Social scientists should be involved in evaluating the effect of the safety program on public confidence. The safety program developed in cooperation with western states for shipments to the Waste Isolation Pilot plant is a good basis for designing that extraordinary safety program

18. Confidence limits for regional cerebral blood flow values obtained with circular positron system, using krypton-77

International Nuclear Information System (INIS)

Meyer, E.; Yamamoto, Y.L.; Thompson, C.J.

1978-01-01

The 90% confidence limits have been determined for regional cerebral blood flow (rCBF) values obtained in each cm² of a cross section of the human head after inhalation of radioactive krypton-77, using the MNI circular positron emission tomography system (Positome). CBF values for small brain tissue elements are calculated by linear regression analysis on the semi-logarithmically transformed clearance curve. A computer program displays CBF values and their estimated error in numeric and gray scale forms. The following typical results have been obtained on a control subject: mean CBF in the entire cross section of the head: 54.6 ± 5 ml/min/100 g tissue; rCBF for a small area of frontal gray matter: 75.8 ± 9 ml/min/100 g tissue. Confidence intervals for individual rCBF values varied between ±13% and ±55%, except for areas pertaining to the ventricular system, where particularly poor statistics were obtained. Knowledge of the confidence limits for rCBF values improves their diagnostic significance, particularly with respect to the assessment of reduced rCBF in stroke patients. A nomogram for convenient determination of 90% confidence limits for slope values obtained in linear regression analysis has been designed, with the number of fitted points (n) and the correlation coefficient (r) as parameters. (author)
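
The slope-with-confidence-limits computation described here can be sketched on synthetic clearance data. The tracer parameters and noise level are invented, and scipy's `linregress` stands in for the record's computer program and nomogram:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic tracer clearance: C(t) = C0 * exp(-k t) with measurement noise.
t = np.linspace(0, 10, 20)                  # minutes
k_true = 0.55                               # clearance rate, proportional to CBF
c = 100 * np.exp(-k_true * t) * rng.lognormal(0, 0.05, t.size)

# Linear regression on the semi-log transformed curve: slope = -k.
res = stats.linregress(t, np.log(c))
k = -res.slope

# 90% confidence limits for the slope from its standard error and the
# t distribution with n - 2 degrees of freedom.
tcrit = stats.t.ppf(0.95, t.size - 2)
lo, hi = k - tcrit * res.stderr, k + tcrit * res.stderr
print(f"k = {k:.3f}, 90% CI [{lo:.3f}, {hi:.3f}]")
```

The width of this interval depends on the number of fitted points and the correlation coefficient of the fit, which is exactly what the record's nomogram parameterizes.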

19. Workshop on confidence limits. Proceedings

International Nuclear Information System (INIS)

James, F.; Lyons, L.; Perrin, Y.

2000-01-01

The First Workshop on Confidence Limits was held at CERN on 17-18 January 2000. It was devoted to the problem of setting confidence limits in difficult cases: number of observed events is small or zero, background is larger than signal, background not well known, and measurements near a physical boundary. Among the many examples in high-energy physics are searches for the Higgs, searches for neutrino oscillations, Bs mixing, SUSY, compositeness, neutrino masses, and dark matter. Several different methods are on the market: the CLs method used by the LEP Higgs searches; Bayesian methods; Feldman-Cousins and modifications thereof; empirical and combined methods. The Workshop generated considerable interest, and attendance was finally limited by the seating capacity of the CERN Council Chamber where all the sessions took place. These proceedings contain all the papers presented, as well as the full text of the discussions after each paper and of course the last session which was a discussion session. The list of participants and the 'required reading', which was expected to be part of the prior knowledge of all participants, are also included. (orig.)

20. The Great Recession and confidence in homeownership

OpenAIRE

Anat Bracha; Julian Jamison

2013-01-01

Confidence in homeownership shifts for those who personally experienced real estate loss during the Great Recession. Older Americans are confident in the value of homeownership. Younger Americans are less confident.

1. Programming with Intervals

Science.gov (United States)

Matsakis, Nicholas D.; Gross, Thomas R.

Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

2. Development of free statistical software enabling researchers to calculate confidence levels, clinical significance curves and risk-benefit contours

International Nuclear Information System (INIS)

Shakespeare, T.P.; Mukherjee, R.K.; Gebski, V.J.

2003-01-01

Confidence levels, clinical significance curves, and risk-benefit contours are tools improving analysis of clinical studies and minimizing misinterpretation of published results; however, no software has been available for their calculation. The objective was to develop software to help clinicians utilize these tools. Excel 2000 spreadsheets were designed using only built-in functions, without macros. The workbook was protected and encrypted so that users can modify only input cells. The workbook has 4 spreadsheets for use in studies comparing two patient groups. Sheet 1 comprises instructions and graphic examples for use. Sheet 2 allows the user to input the main study results (e.g. survival rates) into a 2-by-2 table. Confidence intervals (95%), p-value and the confidence level for Treatment A being better than Treatment B are automatically generated. An additional input cell allows the user to determine the confidence associated with a specified level of benefit. For example, if the user wishes to know the confidence that Treatment A is at least 10% better than B, 10% is entered. Sheet 2 automatically displays clinical significance curves, graphically illustrating confidence levels for all possible benefits of one treatment over the other. Sheet 3 allows input of toxicity data, and calculates the confidence that one treatment is more toxic than the other. It also determines the confidence that the relative toxicity of the most effective arm does not exceed user-defined tolerability. Sheet 4 automatically calculates risk-benefit contours, displaying the confidence associated with a specified scenario of minimum benefit and maximum risk of one treatment arm over the other. The spreadsheet is freely downloadable at www.ontumor.com/professional/statistics.htm. A simple, self-explanatory, freely available spreadsheet calculator was developed using Excel 2000. The incorporated decision-making tools can be used for data analysis and improve the reporting of results of any
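The "confidence level for Treatment A being better than Treatment B" in Sheet 2 can be reproduced with a normal-approximation calculation on the difference of two proportions. A sketch under that assumption (function names and example rates are hypothetical, not from the spreadsheet):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def confidence_a_better(sa, na, sb, nb, margin=0.0):
    """Confidence that the true benefit of A over B exceeds `margin`,
    using the normal approximation to the difference of proportions."""
    pa, pb = sa / na, sb / nb
    diff = pa - pb
    se = math.sqrt(pa * (1 - pa) / na + pb * (1 - pb) / nb)
    return phi((diff - margin) / se), diff, se

# Hypothetical 2-by-2 table: 45/60 survive on A, 35/60 on B
conf_any, diff, se = confidence_a_better(45, 60, 35, 60)        # any benefit
conf_10pct, _, _ = confidence_a_better(45, 60, 35, 60, 0.10)    # >= 10% benefit
ci95 = (diff - 1.96 * se, diff + 1.96 * se)                     # 95% CI for the difference
```

Sweeping `margin` over a range of benefits and plotting `confidence_a_better` against it yields exactly the clinical significance curve the abstract describes.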

3. An Extended Step-Wise Weight Assessment Ratio Analysis with Symmetric Interval Type-2 Fuzzy Sets for Determining the Subjective Weights of Criteria in Multi-Criteria Decision-Making Problems

Directory of Open Access Journals (Sweden)

Mehdi Keshavarz-Ghorabaee

2018-03-01

Determination of subjective weights, which are based on the opinions and preferences of decision-makers, is one of the most important matters in the process of multi-criteria decision-making (MCDM). Step-wise Weight Assessment Ratio Analysis (SWARA) is an efficient method for obtaining the subjective weights of criteria in MCDM problems. On the other hand, decision-makers may express their opinions with a degree of uncertainty. Using symmetric interval type-2 fuzzy sets enables us not only to capture the uncertainty of information flexibly but also to perform computations simply. In this paper, we propose an extended SWARA method with symmetric interval type-2 fuzzy sets to determine the weights of criteria based on the opinions of a group of decision-makers. The weights determined by the proposed approach involve the uncertainty of decision-makers’ preferences, and the symmetric form of the weights makes them more interpretable. To show the procedure of the proposed approach, it is used to determine the importance of intellectual capital dimensions and components in a company. The results show that the proposed approach is efficient in determining the subjective weights of criteria and capturing the uncertainty of information.
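The crisp (non-fuzzy) core of SWARA that the paper extends is a short recursion: criteria are ranked by importance, each lower-ranked criterion gets a comparative-importance value s_j relative to the one above it, and weights follow from k_j = 1 + s_j, q_j = q_{j-1}/k_j, w_j = q_j/Σq. A sketch with hypothetical s_j values (the paper replaces these crisp numbers with symmetric interval type-2 fuzzy sets):

```python
def swara_weights(s):
    """Crisp SWARA weights. s[j] is the comparative importance of the
    criterion ranked j+2 relative to the one ranked j+1 (most important
    criterion has no s value and starts the recursion with q = 1)."""
    q = [1.0]
    for sj in s:
        q.append(q[-1] / (1.0 + sj))   # k_j = 1 + s_j, q_j = q_{j-1} / k_j
    total = sum(q)
    return [qj / total for qj in q]    # normalize so weights sum to 1

# Four criteria, ranked most -> least important; s values are hypothetical
w = swara_weights([0.30, 0.15, 0.45])
```

By construction the weights are non-increasing down the ranking, which is what makes SWARA weights easy to elicit and interpret.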

4. Confidence-Based Learning in Investment Analysis

Science.gov (United States)

Serradell-Lopez, Enric; Lara-Navarra, Pablo; Castillo-Merino, David; González-González, Inés

5. QT interval in healthy dogs: which method of correcting the QT interval in dogs is appropriate for use in small animal clinics?

Directory of Open Access Journals (Sweden)

Maira S. Oliveira

2014-05-01

The electrocardiographic (ECG) QT interval is influenced by fluctuations in heart rate (HR), which may lead to misinterpretation of its length. Considering that alterations in QT interval length reflect abnormalities of ventricular repolarisation which predispose to the occurrence of arrhythmias, this variable must be properly evaluated. The aim of this work is to determine which method of correcting the QT interval is the most appropriate for dogs regarding different ranges of normal HR (different breeds). Healthy adult dogs (n=130; German Shepherd, Boxer, Pit Bull Terrier, and Poodle) were submitted to ECG examination and QT intervals were determined in triplicate from the bipolar limb lead II and corrected for the effects of HR through the application of three published formulae involving quadratic, cubic or linear regression. The mean corrected QT values (QTc) obtained using the diverse formulae were significantly different (p<0.05), while those derived according to the equation QTcV = QT + 0.087(1 − RR) were the most consistent (linear regression). QTcV values were strongly correlated (r=0.83) with the QT interval and showed a coefficient of variation of 8.37% and a 95% confidence interval of 0.22-0.23 s. Owing to its simplicity and reliability, QTcV was considered the most appropriate correction of the QT interval in dogs.
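The preferred correction, QTcV = QT + 0.087(1 − RR) with QT and RR in seconds, is a one-line linear adjustment toward an RR of 1 s (HR of 60 bpm). A sketch with hypothetical canine values:

```python
def qtc_v(qt_s, rr_s):
    """Linear QT correction from the abstract: QTcV = QT + 0.087 * (1 - RR).
    Both QT and RR are in seconds; at RR = 1 s no correction is applied."""
    return qt_s + 0.087 * (1.0 - rr_s)

# Hypothetical example: QT = 0.21 s at HR = 120 bpm, i.e. RR = 60/120 = 0.5 s
qtc = qtc_v(0.21, 0.50)
```

Because the adjustment is linear in RR, it avoids the over- or under-correction at canine heart rates that the quadratic (Bazett-style) and cubic formulae can introduce.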

6. Public confidence and nuclear energy

International Nuclear Information System (INIS)

1990-01-01

Today in France there are 54 nuclear power units in operation at 18 sites. They supply 75% of all electricity produced, 12% of which is exported to neighbouring countries, and play an important role in the French economy. For the French, nuclear power is a fact of life, and most accept it. However, the Chernobyl accident has made public opinion more sensitive, and public relations work has had to be reconsidered carefully with a view to increasing the confidence of the French public in nuclear power, anticipating media crises and being equipped to deal with such crises. The three main approaches are: keeping the public better informed, providing clear information at times of crisis, and international activities.

7. Knowledge, Self Confidence and Courage

DEFF Research Database (Denmark)

Selberg, Hanne; Steenberg Holtzmann, Jette; Hovedskov, Jette

Knowledge, self-confidence and courage – long-lasting learning outcomes through simulation in a clinical context. The significance and methodology of the research: the study focuses on simulation alongside the clinical practice and linked … Results: the students identified their major learning outcomes as transfer of operational skills, experiencing self-efficacy and enhanced understanding of the patients' perspective. Involving simulated patients in the training of technical skills contributed to the development of the students' communication …

8. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making.

Science.gov (United States)

Aitchison, Laurence; Bang, Dan; Bahrami, Bahador; Latham, Peter E

2015-10-01

Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.
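For the simplest version of such a task — two stimulus classes at ±μ observed through Gaussian sensory noise of width σ, with equal priors — the Bayes-optimal confidence has a closed form: the posterior probability of the chosen class is a logistic function of the evidence. This toy generative model is an illustrative assumption, not the paper's exact model:

```python
import math

def bayes_confidence(x, mu, sigma):
    """P(decision correct | sensory sample x) when the observer picks the
    sign-matching stimulus from {+mu, -mu} under Gaussian noise N(0, sigma^2).
    Derived from the posterior P(+mu | x) = 1 / (1 + exp(-2*mu*x/sigma^2))."""
    return 1.0 / (1.0 + math.exp(-2.0 * mu * abs(x) / sigma ** 2))

# Weak evidence gives confidence near chance; strong evidence approaches 1
c_weak = bayes_confidence(0.2, mu=1.0, sigma=1.0)
c_strong = bayes_confidence(2.0, mu=1.0, sigma=1.0)
```

In this simple model both the Bayes-optimal rule and the "magnitude of sensory data" heuristic are monotone in |x|; the designs in the paper are what pull the two apart empirically.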

9. Determination of cosmic ray (CR) ionization path and iono/atmospheric cut-off energy from CR intervals III, IV and V in the planetary environments

International Nuclear Information System (INIS)

Velinov, P.

2001-01-01

In this paper are determined the ionization path and cut-off energies of the cosmic ray (CR) nuclei in relation to the general interaction model 'CR - ionosphere-middle atmosphere'. Here the ionization path and the iono/atmospheric cut-off energies of the galactic CR, solar CR and anomalous CR are separately considered in each energetic range, without taking into account the particle transfer from one range in another. This more general approach will be the object of a further paper

10. Determination of the nuclear level densities and radiative strength function for 43 nuclei in the mass interval 28≤A≤200

Science.gov (United States)

Knezevic, David; Jovancevic, Nikola; Sukhovoj, Anatoly M.; Mitsyna, Ludmila V.; Krmar, Miodrag; Cong, Vu D.; Hambsch, Franz-Josef; Oberstedt, Stephan; Revay, Zsolt; Stieghorst, Christian; Dragic, Aleksandar

2018-03-01

The determination of nuclear level densities and radiative strength functions is one of the most important tasks in low-energy nuclear physics. Accurate experimental values of these parameters are critical for the study of the fundamental properties of nuclear structure. A step-like structure in the dependence of the level density ρ on the excitation energy Eex of the nucleus is observed in two-step gamma cascade measurements for nuclei in the 28 ≤ A ≤ 200 mass region. This characteristic structure can be explained only if the co-existence of quasi-particles and phonons, as well as their interaction in the nucleus, is taken into account in the process of gamma decay. Here we present a new improvement to the Dubna practical model for the determination of nuclear level densities and radiative strength functions. The new practical model guarantees a good description of the available intensities of two-step gamma cascades, comparable to the experimental data accuracy.

11. Confidence limits for parameters of Poisson and binomial distributions

International Nuclear Information System (INIS)

Arnett, L.M.

1976-04-01

The confidence limits for the frequency in a Poisson process and for the proportion of successes in a binomial process were calculated and tabulated for the situations in which the observed values of the frequency or proportion and an a priori distribution of these parameters are available. Methods are used that produce limits with exactly the stated confidence levels. The confidence interval [a,b] is calculated so that Pr[a ≤ λ ≤ b | c, μ] equals the stated confidence level, where c is the observed value of the parameter and μ is the a priori hypothesis of the distribution of this parameter. A Bayesian type of analysis is used. The intervals calculated are narrower than, and appreciably different from, results, known to be conservative, that are often used in problems of this type. Pearson and Hartley recognized the characteristics of their methods and contemplated that exact methods could someday be used. The calculation of the exact intervals requires involved numerical analyses readily implemented only on digital computers not available to Pearson and Hartley. A Monte Carlo experiment was conducted to verify a selected interval from those calculated. This numerical experiment confirmed the results of the analytical methods and the prediction of Pearson and Hartley that their published tables give conservative results.
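For comparison with the prior-based intervals above, the classical exact (Garwood) Poisson interval — the "conservative" competitor the abstract alludes to — follows from gamma quantiles: lower = the α/2 gamma quantile with shape c, upper = the 1−α/2 quantile with shape c+1. A stdlib-only sketch, with the incomplete gamma function computed by series expansion and inverted by bisection (adequate for small counts):

```python
import math

def gamma_cdf(x, k):
    """Regularized lower incomplete gamma P(k, x), by series expansion."""
    if x <= 0.0:
        return 0.0
    term = math.exp(k * math.log(x) - x - math.lgamma(k + 1.0))  # n = 0 term
    total = term
    n = 0
    while term > 1e-15 * total and n < 10000:
        n += 1
        term *= x / (k + n)
        total += term
    return min(total, 1.0)

def gamma_quantile(p, k):
    """Invert gamma_cdf by bisection over a range wide enough for small k."""
    lo, hi = 0.0, k + 10.0 * math.sqrt(k) + 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if gamma_cdf(mid, k) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def poisson_exact_ci(c, conf=0.95):
    """Garwood exact CI for a Poisson mean given an observed count c."""
    alpha = 1.0 - conf
    lower = 0.0 if c == 0 else gamma_quantile(alpha / 2.0, c)
    upper = gamma_quantile(1.0 - alpha / 2.0, c + 1)
    return lower, upper

lo10, hi10 = poisson_exact_ci(10)   # classical 95% CI for an observed count of 10
```

The Bayesian intervals in the paper replace these frequentist quantiles with quantiles of the posterior under the a priori distribution μ, which is what makes them narrower.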

12. CIMP status of interval colon cancers: another piece to the puzzle.

Science.gov (United States)

Arain, Mustafa A; Sawhney, Mandeep; Sheikh, Shehla; Anway, Ruth; Thyagarajan, Bharat; Bond, John H; Shaukat, Aasma

2010-05-01

Colon cancers diagnosed in the interval after a complete colonoscopy may occur due to limitations of colonoscopy or due to the development of new tumors, possibly reflecting molecular and environmental differences in tumorigenesis resulting in rapid tumor growth. In a previous study from our group, interval cancers (colon cancers diagnosed within 5 years of a complete colonoscopy) were almost four times more likely to demonstrate microsatellite instability (MSI) than non-interval cancers. In this study we extended our molecular analysis to compare the CpG island methylator phenotype (CIMP) status of interval and non-interval colorectal cancers and investigate the relationship between the CIMP and MSI pathways in the pathogenesis of interval cancers. We searched our institution's cancer registry for interval cancers, defined as colon cancers that developed within 5 years of a complete colonoscopy. These were frequency matched in a 1:2 ratio by age and sex to patients with non-interval cancers (defined as colon cancers diagnosed on a patient's first recorded colonoscopy). Archived cancer specimens for all subjects were retrieved and tested for CIMP gene markers. The MSI status of subjects identified between 1989 and 2004 was known from our previous study. Tissue specimens of newly identified cases and controls (between 2005 and 2006) were tested for MSI. There were 1,323 cases of colon cancer diagnosed over the 17-year study period, of which 63 were identified as having interval cancer and matched to 131 subjects with non-interval cancer. Study subjects were almost all Caucasian men. CIMP was present in 57% of interval cancers compared to 33% of non-interval cancers (P=0.004). As shown previously, interval cancers were more likely than non-interval cancers to occur in the proximal colon (63% vs. 39%; P=0.002) and to have MSI (29% vs. 11%; P=0.004). In a multivariable logistic regression model, proximal location (odds ratio (OR) 1.85; 95% confidence interval (CI) 1
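The unadjusted OR and 95% CI for CIMP status can be illustrated with the Woolf (log) method on a 2-by-2 table. The counts below are approximate back-calculations from the reported percentages (57% of 63 ≈ 36; 33% of 131 ≈ 43) and are for illustration only, not the study's adjusted estimates:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-scale) 95% CI for a 2x2 table
    [[a, b], [c, d]] = [[exposed cases, exposed non-cases],
                        [unexposed cases, unexposed non-cases]]."""
    orr = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    log_or = math.log(orr)
    return orr, math.exp(log_or - z * se_log), math.exp(log_or + z * se_log)

# Approximate reconstruction: CIMP+ in 36/63 interval vs 43/131 non-interval cancers
orr, lo, hi = odds_ratio_ci(36, 27, 43, 88)
```

A confidence interval whose lower limit stays above 1 corresponds to the reported P < 0.05 association between interval status and CIMP.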

13. Confidence building in safety assessments

International Nuclear Information System (INIS)

Grundfelt, Bertil

1999-01-01

Future generations should be adequately protected from damage caused by the present disposal of radioactive waste. This presentation discusses the core of safety and performance assessment: The demonstration and building of confidence that the disposal system meets the safety requirements stipulated by society. The major difficulty is to deal with risks in the very long time perspective of the thousands of years during which the waste is hazardous. Concern about these problems has stimulated the development of the safety assessment discipline. The presentation concentrates on two of the elements of safety assessment: (1) Uncertainty and sensitivity analysis, and (2) validation and review. Uncertainty is associated both with respect to what is the proper conceptual model and with respect to parameter values for a given model. A special kind of uncertainty derives from the variation of a property in space. Geostatistics is one approach to handling spatial variability. The simplest way of doing a sensitivity analysis is to offset the model parameters one by one and observe how the model output changes. The validity of the models and data used to make predictions is central to the credibility of safety assessments for radioactive waste repositories. There are several definitions of model validation. The presentation discusses it as a process and highlights some aspects of validation methodologies
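The one-at-a-time sensitivity analysis described above — offset each parameter in turn and observe the change in model output — can be sketched generically. The toy release-rate model and its parameter names are purely illustrative, not any actual assessment code:

```python
def one_at_a_time(model, params, rel_step=0.10):
    """OAT sensitivity: perturb each parameter by +rel_step (10% by default)
    and record the resulting change in model output."""
    base = model(params)
    sensitivity = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + rel_step)
        sensitivity[name] = model(perturbed) - base
    return sensitivity

# Hypothetical toy model: release rate ~ solubility * flow / retardation
toy_model = lambda p: p["solubility"] * p["flow"] / p["retardation"]
s = one_at_a_time(toy_model, {"solubility": 1e-6, "flow": 2.0, "retardation": 50.0})
```

The sign and magnitude of each entry show which parameters drive the output, which is exactly the screening information the simple OAT approach provides before more elaborate (e.g. geostatistical) uncertainty treatment.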

14. Confidence bounds for nonlinear dose-response relationships

DEFF Research Database (Denmark)

Baayen, C; Hougaard, P

2015-01-01

An important aim of drug trials is to characterize the dose-response relationship of a new compound. Such a relationship can often be described by a parametric (nonlinear) function that is monotone in dose. If such a model is fitted, it is useful to know the uncertainty of the fitted curve … intervals for the dose-response curve. These confidence bounds have better coverage than Wald intervals and are more precise and generally faster than bootstrap methods. Moreover, if monotonicity is assumed, the profile likelihood approach takes this automatically into account. The approach is illustrated …

15. Parturition lines in modern human wisdom tooth roots: do they exist, can they be characterized and are they useful for retrospective determination of age at first reproduction and/or inter-birth intervals?

Science.gov (United States)

2014-01-01

Parturition lines have been described in the teeth of a number of animals, including primates, but never in modern humans. These accentuated lines in dentine are comprised of characteristic dark and light component zones. The aim of this study was to review the physiology underlying these lines and to ask if parturition lines exist in the third molar tooth roots of mothers known to have had one or more children during their teenage years. Brief retrospective oral medical obstetric histories were taken from four mothers and compared with histological estimates for the timing of accentuated markings visible in longitudinal ground sections of their wisdom teeth. Evidence of accentuated markings in M3 root dentine matched the age of the mother at the time their first child was born reasonably well. However, the dates calculated for inter-birth intervals did not match well. Parturition lines corresponding to childbirth during the teenage years can exist in human M3 roots, but may not always do so. Without a written medical history it would not be possible to say with confidence that an accentuated line in M3 root dentine was caused by stress, illness or was a parturition line.

16. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

KAUST Repository

Smadi, Mahmoud A.; Ghaeb, Jasim A.; Jazzar, Saleh; Saraereh, Omar A.

2012-01-01

In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate

17. Optimizing lengths of confidence intervals: fourth-order efficiency in location models

NARCIS (Netherlands)

Klaassen, C.; Venetiaan, S.

2010-01-01

Under regularity conditions the maximum likelihood estimator of the location parameter in a location model is asymptotically efficient among translation equivariant estimators. Additional regularity conditions warrant third- and even fourth-order efficiency, in the sense that no translation

18. Confidence Intervals for System Reliability and Availability of Maintained Systems Using Monte Carlo Techniques

Science.gov (United States)

1981-12-01

Thesis presented to the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Operations Research, December 1981. Approved for public release. (Remainder of scanned title page illegible.)

19. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

Science.gov (United States)

Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

2011-01-01

Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

20. Technical Report: Benchmarking for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

Energy Technology Data Exchange (ETDEWEB)

McLoughlin, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

2016-01-22

The software application “MetaQuant” was developed by our group at Lawrence Livermore National Laboratory (LLNL). It is designed to profile microbial populations in a sample using data from whole-genome shotgun (WGS) metagenomic DNA sequencing. Several other metagenomic profiling applications have been described in the literature. We ran a series of benchmark tests to compare the performance of MetaQuant against that of a few existing profiling tools, using real and simulated sequence datasets. This report describes our benchmarking procedure and results.

1. Bayesian Methods and Confidence Intervals for Automatic Target Recognition of SAR Canonical Shapes

Science.gov (United States)

2014-03-27

and DirectX [22]. The CUDA platform was developed by the NVIDIA Corporation to allow programmers access to the computational capabilities of the...were used for the intense repetitive computations. Developing CUDA software requires writing code for specialized compilers provided by NVIDIA and

2. Statistical Significance, Effect Size Reporting, and Confidence Intervals: Best Reporting Strategies

Science.gov (United States)

Capraro, Robert M.

2004-01-01

With great interest the author read the May 2002 editorial in the "Journal for Research in Mathematics Education (JRME)" (King, 2002) regarding changes to the 5th edition of the "Publication Manual of the American Psychological Association" (APA, 2001). Of special note to him, and of great import to the field of mathematics education research, are…

3. Interpregnancy intervals: impact of postpartum contraceptive effectiveness and coverage.

Science.gov (United States)

Thiel de Bocanegra, Heike; Chang, Richard; Howell, Mike; Darney, Philip

2014-04-01

4. High Confidence Software and Systems Research Needs

Data.gov (United States)

Networking and Information Technology Research and Development, Executive Office of the President — This White Paper presents a survey of high confidence software and systems research needs. It has been prepared by the High Confidence Software and Systems...

5. Confidence Building Strategies in the Public Schools.

Science.gov (United States)

Achilles, C. M.; And Others

1985-01-01

Data from the Phi Delta Kappa Commission on Public Confidence in Education indicate that "high-confidence" schools make greater use of marketing and public relations strategies. Teacher attitudes were ranked first and administrator attitudes second by 409 respondents for both gain and loss of confidence in schools. (MLF)

6. Overconfidence in Interval Estimates

Science.gov (United States)

Soll, Jack B.; Klayman, Joshua

2004-01-01

Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…

7. New interval forecast for stationary autoregressive models ...

African Journals Online (AJOL)

In this paper, we propose a new forecasting interval for stationary autoregressive, AR(p), models using the Akaike information criterion (AIC) function. Ordinarily, the AIC function is used to determine the order of an AR(p) process. In this study, however, the AIC forecast interval compared favorably with the theoretical forecast ...

8. Probability Distribution for Flowing Interval Spacing

International Nuclear Information System (INIS)

Kuzio, S.

2001-01-01

The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term "flowing interval spacing" as opposed to fracture spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but do not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled "Probability Distribution for Flowing Interval Spacing" (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) "Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses" (CRWMS M and O 1999a) and (2) "Incorporation of Heterogeneity in SZ Flow and Transport Analyses" (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. Because each flowing interval probably has more than one fracture contributing to it, the true flowing interval spacing could be

9. The Development of Confidence Limits for Fatigue Strength Data

International Nuclear Information System (INIS)

SUTHERLAND, HERBERT J.; VEERS, PAUL S.

1999-01-01

Over the past several years, extensive databases have been developed for the S-N behavior of various materials used in wind turbine blades, primarily fiberglass composites. These data are typically presented both in their raw form and curve fit to define their average properties. For design, confidence limits must be placed on these descriptions. In particular, most designs call for the 95/95 design values; namely, with a 95% level of confidence, the designer is assured that 95% of the material will meet or exceed the design value. For such material properties as the ultimate strength, the procedure for estimating its value at a particular confidence level is well defined if the measured values follow a normal or a log-normal distribution. Namely, based upon the number of sample points and their standard deviation, a commonly found table may be used to determine the survival percentage at a particular confidence level with respect to the mean value. The same is true for fatigue data at a constant stress level (the number of cycles to failure, N, at stress level S1). However, when the stress level is allowed to vary, as with a typical S-N fatigue curve, the procedures for determining confidence limits are not as well defined. This paper outlines techniques for determining confidence limits of fatigue data. Different approaches to estimating the 95/95 level are compared. Data from the MSU/DOE and the FACT fatigue databases are used to illustrate typical results.
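Under the normal assumption, the "commonly found table" for the 95/95 design value is a table of one-sided tolerance factors k, with design value = mean − k·s. Exact k values come from the noncentral t distribution; a closed-form Natrella/Howe-type approximation (an approximation, not the tabled exact values) is sketched below:

```python
import math

Z95 = 1.6449  # standard normal quantile for 95%

def k_factor(n, z_content=Z95, z_conf=Z95):
    """Approximate one-sided normal tolerance factor for n samples:
    with confidence ~95%, at least ~95% of the population exceeds mean - k*s.
    Natrella/Howe-type closed-form approximation to the noncentral-t exact value."""
    a = 1.0 - z_conf ** 2 / (2.0 * (n - 1))
    b = z_content ** 2 - z_conf ** 2 / n
    return (z_content + math.sqrt(z_content ** 2 - a * b)) / a

def design_value_9595(mean, sd, n):
    """95/95 lower tolerance bound on a normally distributed strength."""
    return mean - k_factor(n) * sd

k20 = k_factor(20)     # roughly 2.4 for n = 20 (exact tabled value ~2.396)
k100 = k_factor(100)   # shrinks toward Z95 as n grows
```

The factor exceeds the naive 1.645 because it must cover both the sampling error of the mean and that of the standard deviation; as n → ∞ it converges to the plain normal quantile.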

10. Applications of interval computations

CERN Document Server

1996-01-01

Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: the result of a single arithmetic operation is the set of all possible results as the o...

11. The integrated model of sport confidence: a canonical correlation and mediational analysis.

Science.gov (United States)

Koehn, Stefan; Pearce, Alan J; Morris, Tony

2013-12-01

The main purpose of the study was to examine crucial parts of Vealey's (2001) integrated framework hypothesizing that sport confidence is a mediating variable between sources of sport confidence (including achievement, self-regulation, and social climate) and athletes' affect in competition. The sample consisted of 386 athletes, who completed the Sources of Sport Confidence Questionnaire, Trait Sport Confidence Inventory, and Dispositional Flow Scale-2. Canonical correlation analysis revealed a confidence-achievement dimension underlying flow. Bias-corrected bootstrap confidence intervals in AMOS 20.0 were used in examining mediation effects between source domains and dispositional flow. Results showed that sport confidence partially mediated the relationship between achievement and self-regulation domains and flow, whereas no significant mediation was found for social climate. On a subscale level, full mediation models emerged for achievement and flow dimensions of challenge-skills balance, clear goals, and concentration on the task at hand.
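The bias-corrected bootstrap used in AMOS has a simpler percentile cousin that conveys the same idea: resample the data, refit the a (source → confidence) and b (confidence → flow) paths each time, and take percentiles of the a·b products as the interval for the indirect effect. This sketch uses simulated data and the plain percentile method (a simplification of the bias-corrected version in the paper):

```python
import random

def slope(xs, ys):
    """OLS slope of ys on xs."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    sxx = sum((v - xb) ** 2 for v in xs)
    sxy = sum((u - xb) * (v - yb) for u, v in zip(xs, ys))
    return sxy / sxx

def partial_slope_m(x, m, y):
    """Slope of M in the OLS regression of Y on (M, X) - the b path."""
    n = len(x)
    xb, mb, yb = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((v - xb) ** 2 for v in x)
    smm = sum((v - mb) ** 2 for v in m)
    sxm = sum((u - xb) * (v - mb) for u, v in zip(x, m))
    smy = sum((u - mb) * (v - yb) for u, v in zip(m, y))
    sxy = sum((u - xb) * (v - yb) for u, v in zip(x, y))
    return (smy * sxx - sxy * sxm) / (smm * sxx - sxm ** 2)

def boot_indirect_ci(x, m, y, n_boot=2000, conf=0.95, seed=7):
    """Percentile bootstrap CI for the indirect effect a*b."""
    rng = random.Random(seed)
    n = len(x)
    effects = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx, bm, by = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
        effects.append(slope(bx, bm) * partial_slope_m(bx, bm, by))
    effects.sort()
    cut = int((1.0 - conf) / 2.0 * n_boot)
    return effects[cut], effects[n_boot - 1 - cut]

# Simulated mediation: x -> m (a = 0.5) -> y (b = 0.7), Gaussian noise
rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(60)]
m = [0.5 * xi + rng.gauss(0, 0.5) for xi in x]
y = [0.7 * mi + rng.gauss(0, 0.5) for mi in m]
lo, hi = boot_indirect_ci(x, m, y)
```

An interval excluding zero supports mediation; the partial mediation reported in the paper corresponds to both this indirect path and the direct path being non-zero.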

12. Food skills confidence and household gatekeepers' dietary practices.

Science.gov (United States)

Burton, Melissa; Reid, Mike; Worsley, Anthony; Mavondo, Felix

2017-01-01

Household food gatekeepers have the potential to influence the food attitudes and behaviours of family members, as they are mainly responsible for food-related tasks in the home. The aim of this study was to determine the role of gatekeepers' confidence in food-related skills and nutrition knowledge on food practices in the home. An online survey was completed by 1059 Australian dietary gatekeepers selected from the Global Market Insite (GMI) research database. Participants responded to questions about food acquisition and preparation behaviours, the home eating environment, perceptions and attitudes towards food, and demographics. Two-step cluster analysis was used to identify groups based on confidence regarding food skills and nutrition knowledge. Chi-square tests and one-way ANOVAs were used to compare the groups on the dependent variables. Three groups were identified: low confidence, moderate confidence and high confidence. Gatekeepers in the highest confidence group were significantly more likely to report lower body mass index (BMI), and indicate higher importance of fresh food products, vegetable prominence in meals, product information use, meal planning, perceived behavioural control and overall diet satisfaction. Gatekeepers in the lowest confidence group were significantly more likely to indicate more perceived barriers to healthy eating, report more time constraints and more impulse purchasing practices, and higher convenience ingredient use. Other smaller associations were also found. Household food gatekeepers with high food skills confidence were more likely to engage in several healthy food practices, while those with low food skills confidence were more likely to engage in unhealthy food practices. Food education strategies aimed at building food-skills and nutrition knowledge will enable current and future gatekeepers to make healthier food decisions for themselves and for their families. Copyright Â© 2016 Elsevier Ltd. All rights reserved.

13. Surveillance test interval optimization

International Nuclear Information System (INIS)

Cepin, M.; Mavko, B.

1995-01-01

Technical specifications have been developed on the bases of deterministic analyses, engineering judgment, and expert opinion. This paper introduces our risk-based approach to surveillance test interval (STI) optimization. This approach consists of three main levels. The first level is the component level, which serves as a rough estimation of the optimal STI and can be calculated analytically by a differentiating equation for mean unavailability. The second and third levels give more representative results. They take into account the results of probabilistic risk assessment (PRA) calculated by a personal computer (PC) based code and are based on system unavailability at the system level and on core damage frequency at the plant level

14. Application of Interval Arithmetic in the Evaluation of Transfer Capabilities by Considering the Sources of Uncertainty

Directory of Open Access Journals (Sweden)

Prabha Umapathy

2009-01-01

Full Text Available Total transfer capability (TTC) is an important index in a power system with large volume of inter-area power exchanges. This paper proposes a novel technique to determine the TTC and its confidence intervals in the system by considering the uncertainties in the load and line parameters. The optimal power flow (OPF) method is used to obtain the TTC. Variations in the load and line parameters are incorporated using the interval arithmetic (IA) method. The IEEE 30 bus test system is used to illustrate the proposed methodology. Various uncertainties in the line, load and both line and load are incorporated in the evaluation of total transfer capability. From the results, it is observed that the solutions obtained through the proposed method provide much wider information in terms of closed interval form, which is more useful in ensuring secured operation of the interconnected system in the presence of uncertainties in load and line parameters.
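
A minimal sketch of the interval arithmetic underlying such enclosures, where each operation returns an interval guaranteed to contain every possible pointwise result (the load and loss figures below are invented, not taken from the paper):

```python
class Interval:
    """Closed interval [lo, hi]; each operation returns an enclosure of
    all results obtainable from points in the operand intervals."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        # subtract the opposite endpoints to cover the worst cases
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

load = Interval(95.0, 105.0)   # hypothetical load with uncertainty
loss = Interval(0.02, 0.03)    # hypothetical per-unit loss coefficient
transfer = load - load * loss  # enclosure of the deliverable power
print(transfer)
```

Note that naive interval arithmetic can overestimate: [x] - [x] encloses but does not equal [0, 0] because the two operands are treated as independent, a known source of conservatism in interval methods.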

15. The prognostic value of the QT interval and QT interval dispersion in all-cause and cardiac mortality and morbidity in a population of Danish citizens.

Science.gov (United States)

Elming, H; Holm, E; Jun, L; Torp-Pedersen, C; Køber, L; Kircshoff, M; Malik, M; Camm, J

1998-09-01

To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality, as well as in cardiac morbidity, in a general population. The QT interval was measured in all leads from a standard 12-lead ECG in a random sample of 1658 women and 1797 men aged 30-60 years. QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads. All-cause mortality over 13 years, and cardiovascular mortality as well as cardiac morbidity over 11 years, were the main outcome parameters. Subjects with a prolonged QT interval (430 ms or more) or prolonged QT interval dispersion (80 ms or more) were at higher risk of cardiovascular death and cardiac morbidity than subjects whose QT interval was less than 360 ms, or whose QT interval dispersion was less than 30 ms. Cardiovascular death relative risk ratios, adjusted for age, gender, myocardial infarct, angina pectoris, diabetes mellitus, arterial hypertension, smoking habits, serum cholesterol level, and heart rate were 2.9 for the QT interval (95% confidence interval 1.1-7.8) and 4.4 for QT interval dispersion (95% confidence interval 1.0-19.1). Fatal and non-fatal cardiac morbidity relative risk ratios were similar, at 2.7 (95% confidence interval 1.4-5.5) for the QT interval and 2.2 (95% confidence interval 1.1-4.0) for QT interval dispersion. Prolongation of the QT interval and QT interval dispersion independently affected the prognosis of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years.
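
Confidence intervals for relative risk ratios such as those above are conventionally computed on the log scale, where the estimator is approximately normal; a sketch assuming the standard normal-approximation formula (the paper does not state its exact method):

```python
import math

def rr_ci(rr, se_log, z=1.96):
    """95% CI for a relative risk: exponentiate log(RR) +/- z * SE(log RR)."""
    return (math.exp(math.log(rr) - z * se_log),
            math.exp(math.log(rr) + z * se_log))

def se_from_ci(lo, hi, z=1.96):
    """Recover SE(log RR) from a reported 95% CI (log-symmetric assumption)."""
    return (math.log(hi) - math.log(lo)) / (2 * z)

# Reported in the record: RR 2.9, 95% CI 1.1-7.8 for a prolonged QT interval
se = se_from_ci(1.1, 7.8)
lo, hi = rr_ci(2.9, se)
print(round(se, 3), round(lo, 2), round(hi, 2))
```

The recovered interval does not exactly match the published one because the reported point estimate is not the exact geometric midpoint of the reported bounds, which is typical of rounded published figures.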

16. Chaos on the interval

CERN Document Server

Ruette, Sylvie

2017-01-01

The aim of this book is to survey the relations between the various kinds of chaos and related notions for continuous interval maps from a topological point of view. The papers on this topic are numerous and widely scattered in the literature; some of them are little known, difficult to find, or originally published in Russian, Ukrainian, or Chinese. Dynamical systems given by the iteration of a continuous map on an interval have been broadly studied because they are simple but nevertheless exhibit complex behaviors. They also allow numerical simulations, which enabled the discovery of some chaotic phenomena. Moreover, the "most interesting" part of some higher-dimensional systems can be of lower dimension, which allows, in some cases, boiling it down to systems in dimension one. Some of the more recent developments such as distributional chaos, the relation between entropy and Li-Yorke chaos, sequence entropy, and maps with infinitely many branches are presented in book form for the first time. The author gi...

17. Consumer confidence or the business cycle

DEFF Research Database (Denmark)

Møller, Stig Vinther; Nørholm, Henrik; Rangvid, Jesper

2014-01-01

Answer: The business cycle. We show that consumer confidence and the output gap both predict excess returns on stocks in many European countries: When the output gap is positive (the economy is doing well), expected returns are low, and when consumer confidence is high, expected returns are also low...

18. Financial Literacy, Confidence and Financial Advice Seeking

NARCIS (Netherlands)

Kramer, Marc M.

2016-01-01

We find that people with higher confidence in their own financial literacy are less likely to seek financial advice, but no relation between objective measures of literacy and advice seeking. The negative association between confidence and advice seeking is more pronounced among wealthy households.

19. Aging and Confidence Judgments in Item Recognition

Science.gov (United States)

Voskuilen, Chelsea; Ratcliff, Roger; McKoon, Gail

2018-01-01

We examined the effects of aging on performance in an item-recognition experiment with confidence judgments. A model for confidence judgments and response time (RTs; Ratcliff & Starns, 2013) was used to fit a large amount of data from a new sample of older adults and a previously reported sample of younger adults. This model of confidence…

20. Organic labelling systems and consumer confidence

OpenAIRE

Sønderskov, Kim Mannemar; Daugbjerg, Carsten

2009-01-01

A research analysis suggests that a state certification and labelling system creates confidence in organic labelling systems and consequently green consumerism. Danish consumers have higher levels of confidence in the labelling system than consumers in countries where the state plays a minor role in labelling and certification.

1. Self-confidence and metacognitive processes

Directory of Open Access Journals (Sweden)

Kleitman Sabina

2005-01-01

Full Text Available This paper examines the status of the Self-confidence trait. Two studies strongly suggest that Self-confidence is a component of metacognition. In the first study, participants (N=132) were administered measures of Self-concept, a newly devised Memory and Reasoning Competence Inventory (MARCI), and a Verbal Reasoning Test (VRT). The results indicate a significant relationship between confidence ratings on the VRT and the Reasoning component of MARCI. The second study (N=296) employed an extensive battery of cognitive tests and several metacognitive measures. Results indicate the presence of robust Self-confidence and Metacognitive Awareness factors, and a significant correlation between them. Self-confidence taps not only processes linked to performance on items that have correct answers, but also beliefs about events that may never occur.

2. Trust versus confidence: Microprocessors and personnel monitoring

International Nuclear Information System (INIS)

Chiaro, P.J. Jr.

1993-01-01

Due to recent technological advances, substantial improvements have been made in personnel contamination monitoring. In all likelihood, these advances will close out the days of manually frisking personnel for radioactive contamination. Unfortunately, as microprocessor-based monitors become more widely used, not only at commercial power reactors but also at government facilities, questions concerning their trustworthiness arise. Algorithms make decisions that were previously made by technicians. Trust is placed not in technicians but in machines. In doing this it is assumed that the machine never misses. Inevitably, this trust drops, due largely to "false alarms". This is especially true when monitoring for alpha contamination. What is a "false alarm"? Do these machines and their algorithms that we put our trust in make mistakes? An analysis was performed on half-body and hand-and-foot monitors at Oak Ridge National Laboratory (ORNL) in order to justify the suggested confidence level used for alarm point determination. Sources used in this analysis had activities approximating ORNL's contamination limits

5. Interval methods: An introduction

DEFF Research Database (Denmark)

Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj

2006-01-01

This chapter contains selected papers presented at the Minisymposium on Interval Methods of the PARA'04 Workshop "State-of-the-Art in Scientific Computing". The emphasis of the workshop was on high-performance computing (HPC). The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. A main concern of HPC is the development of software that optimizes the performance of a given computer. An important characteristic of the computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different...

6. Multichannel interval timer

International Nuclear Information System (INIS)

Turko, B.T.

1983-10-01

A CAMAC based modular multichannel interval timer is described. The timer comprises twelve high resolution time digitizers with a common start enabling twelve independent stop inputs. Ten time ranges from 2.5 μs to 1.3 μs can be preset. Time can be read out in twelve 24-bit words either via CAMAC Crate Controller or an external FIFO register. LSB time calibration is 78.125 ps. An additional word reads out the operational status of twelve stop channels. The system consists of two modules. The analog module contains a reference clock and 13 analog time stretchers. The digital module contains counters, logic and interface circuits. The timer has an excellent differential linearity, thermal stability and crosstalk free performance

7. Interpregnancy interval and risk of autistic disorder.

Science.gov (United States)

Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla

2013-11-01

A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined interpregnancy interval as the time from birth of the first-born child to conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with interpregnancy intervals autistic disorder, compared with 0.13% in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with a previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.

8. Assessing QT interval prolongation and its associated risks with antipsychotics

DEFF Research Database (Denmark)

Nielsen, Jimmi; Graff, Claus; Kanters, Jørgen K.

2011-01-01

markers for TdP have been developed but none of them is clinically implemented yet and QT interval prolongation is still considered the most valid surrogate marker. Although automated QT interval determination may offer some assistance, QT interval determination is best performed by a cardiologist skilled...

9. We will be champions: Leaders' confidence in 'us' inspires team members' team confidence and performance.

Science.gov (United States)

Fransen, K; Steffens, N K; Haslam, S A; Vanbeselaere, N; Vande Broek, G; Boen, F

2016-12-01

10. Confidence building in implementation of geological disposal

International Nuclear Information System (INIS)

Umeki, Hiroyuki

2004-01-01

Long-term safety of the disposal system should be demonstrated to the satisfaction of the stakeholders. Convincing arguments are therefore required that instil in the stakeholders confidence in the safety of a particular concept for the siting and design of a geological disposal, given the uncertainties that inevitably exist in its a priori description and in its evolution. The step-wise approach associated with making safety case at each stage is a key to building confidence in the repository development programme. This paper discusses aspects and issues on confidence building in the implementation of HLW disposal in Japan. (author)

11. Confidence rating of marine eutrophication assessments

DEFF Research Database (Denmark)

Murray, Ciarán; Andersen, Jesper Harbo; Kaartokallio, Hermanni

2011-01-01

This report presents the development of a methodology for assessing confidence in eutrophication status classifications. The method can be considered as a secondary assessment, supporting the primary assessment of eutrophication status. The confidence assessment is based on a transparent scoring of the 'value' of the indicators on which the primary assessment is made. Such secondary assessment of confidence represents a first step towards linking status classification with information regarding their accuracy and precision and ultimately a tool for improving or targeting actions to improve the health...

12. Advanced Interval Management: A Benefit Analysis

Science.gov (United States)

Timer, Sebastian; Peters, Mark

2016-01-01

This document is the final report for the NASA Langley Research Center (LaRC)- sponsored task order 'Possible Benefits for Advanced Interval Management Operations.' Under this research project, Architecture Technology Corporation performed an analysis to determine the maximum potential benefit to be gained if specific Advanced Interval Management (AIM) operations were implemented in the National Airspace System (NAS). The motivation for this research is to guide NASA decision-making on which Interval Management (IM) applications offer the most potential benefit and warrant further research.

13. An Exact Confidence Region in Multivariate Calibration

OpenAIRE

Mathew, Thomas; Kasala, Subramanyam

1994-01-01

In the multivariate calibration problem using a multivariate linear model, an exact confidence region is constructed. It is shown that the region is always nonempty and is invariant under nonsingular transformations.

14. Weighting Mean and Variability during Confidence Judgments

Science.gov (United States)

de Gardelle, Vincent; Mamassian, Pascal

2015-01-01

Humans can not only perform some visual tasks with great precision, they can also judge how good they are in these tasks. However, it remains unclear how observers produce such metacognitive evaluations, and how these evaluations might be dissociated from the performance in the visual task. Here, we hypothesized that some stimulus variables could affect confidence judgments above and beyond their impact on performance. In a motion categorization task on moving dots, we manipulated the mean and the variance of the motion directions, to obtain a low-mean low-variance condition and a high-mean high-variance condition with matched performances. Critically, in terms of confidence, observers were not indifferent between these two conditions. Observers exhibited marked preferences, which were heterogeneous across individuals, but stable within each observer when assessed one week later. Thus, confidence and performance are dissociable and observers’ confidence judgments put different weights on the stimulus variables that limit performance. PMID:25793275

15. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence

OpenAIRE

Chua, Elizabeth F.; Hannula, Deborah E.; Ranganath, Charan

2012-01-01

It is generally believed that accuracy and confidence in one’s memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors which contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movem...

16. How do regulators measure public confidence?

International Nuclear Information System (INIS)

Schmitt, A.; Besenyei, E.

2006-01-01

The conclusions and recommendations of this session can be summarized this way. - There are some important elements of confidence: visibility, satisfaction, credibility and reputation. The latter can consist of trust, positive image and knowledge of the role the organisation plays. A good reputation is hard to achieve but easy to lose. - There is a need to define what public confidence is and what to measure. The difficulty is that confidence is a matter of perception of the public, so what we try to measure is the perception. - It is controversial how to take into account the results of confidence measurement because of the influence of the context. It is not an exact science, results should be examined cautiously and surveys should be conducted frequently, at least every two years. - Different experiences were explained: - Quantitative surveys - among the general public or more specific groups like the media; - Qualitative research - with test groups and small panels; - Semi-quantitative studies - among stakeholders who have regular contacts with the regulatory body. It is not clear if the results should be shared with the public or just with other authorities and governmental organisations. - Efforts are needed to increase visibility, which is a prerequisite for confidence. - A practical example of organizing an emergency exercise and an information campaign without taking into account the real concerns of the people was given to show how public confidence can be decreased. - We learned about a new method - the so-called socio-drama - which addresses another issue also connected to confidence - the notion of understanding between stakeholders around a nuclear site. It is another way of looking at confidence in a more restricted group. (authors)

17. Confidence in leadership among the newly qualified.

Science.gov (United States)

Bayliss-Pratt, Lisa; Morley, Mary; Bagley, Liz; Alderson, Steven

2013-10-23

The Francis report highlighted the importance of strong leadership from health professionals but it is unclear how prepared those who are newly qualified feel to take on a leadership role. We aimed to assess the confidence of newly qualified health professionals working in the West Midlands in the different competencies of the NHS Leadership Framework. Most respondents felt confident in their abilities to demonstrate personal qualities and work with others, but less so at managing or improving services or setting direction.

18. [Sources of leader's confidence in organizations].

Science.gov (United States)

Ikeda, Hiroshi; Furukawa, Hisataka

2006-04-01

19. Errors and Predictors of Confidence in Condom Use amongst Young Australians Attending a Music Festival

OpenAIRE

Hall, Karina M.; Brieger, Daniel G.; De Silva, Sukhita H.; Pfister, Benjamin F.; Youlden, Daniel J.; John-Leader, Franklin; Pit, Sabrina W.

2016-01-01

Objectives. To determine the confidence and ability to use condoms correctly and consistently and the predictors of confidence in young Australians attending a festival. Methods. 288 young people aged 18 to 29 attending a mixed-genre music festival completed a survey measuring demographics, self-reported confidence using condoms, ability to use condoms, and issues experienced when using condoms in the past 12 months. Results. Self-reported confidence using condoms was high (77%). Multivariate...

20. Nearest unlike neighbor (NUN): an aid to decision confidence estimation

Science.gov (United States)

Dasarathy, Belur V.

1995-09-01

The concept of nearest unlike neighbor (NUN), proposed and explored previously in the design of nearest neighbor (NN) based decision systems, is further exploited in this study to develop a measure of confidence in the decisions made by NN-based decision systems. This measure of confidence, on the basis of comparison with a user-defined threshold, may be used to determine the acceptability of the decision provided by the NN-based decision system. The concepts, associated methodology, and some illustrative numerical examples using the now classical Iris data to bring out the ease of implementation and effectiveness of the proposed innovations are presented.
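
A minimal sketch of an NUN-based confidence measure, assuming Euclidean distance and one plausible distance-ratio score compared against a user threshold (the exact measure used in the paper may differ):

```python
import math

def nn_confidence(query, data):
    """Classify by nearest neighbor (NN) and score confidence from the
    nearest unlike neighbor (NUN): the farther the NUN is relative to
    the NN, the more confident the decision. Assumes at least two
    classes are present. data: list of (point, label) pairs."""
    nn_pt, nn_lab = min(data, key=lambda s: math.dist(query, s[0]))
    d_nn = math.dist(query, nn_pt)
    # nearest sample carrying a different label than the decision
    d_nun = min(math.dist(query, p) for p, lab in data if lab != nn_lab)
    confidence = d_nun / (d_nn + d_nun)  # in (0.5, 1]; compare to a threshold
    return nn_lab, confidence

# Invented toy data in place of the Iris set used in the paper
data = [((0.0, 0.0), "a"), ((0.0, 1.0), "a"), ((5.0, 5.0), "b")]
label, conf = nn_confidence((0.1, 0.2), data)
print(label, round(conf, 3))
```

A query deep inside a class region yields a score near 1, while a query near the decision boundary yields a score near 0.5, so thresholding the score flags decisions that may need rejection or review.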

1. Chosen interval methods for solving linear interval systems with special type of matrix

Science.gov (United States)

Szyszka, Barbara

2013-10-01

The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; the presented linear interval systems therefore contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they have no errors of method. All calculations were performed in floating-point interval arithmetic.

2. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

Directory of Open Access Journals (Sweden)

ILEANA BRUDIU

2009-05-01

Full Text Available Parameter estimation with confidence intervals and statistical hypothesis testing are both used in statistical analysis to draw conclusions about a population from a sample extracted from it. Through the case study presented, the paper aims to highlight the importance of the sample volume taken in the study and how it is reflected in the results obtained when using confidence intervals and hypothesis testing. While statistical hypothesis testing only gives a "yes" or "no" answer to some questions, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and from findings that are "marginally significant" or "almost significant" (p very close to 0.05).
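
The case study's point about sample volume can be reproduced numerically: the same values give a much wider interval at a small n. A sketch using the normal-approximation CI for a mean, with invented data:

```python
import math
import statistics

def mean_ci(sample, z=1.96):
    """Normal-approximation 95% CI for a mean: x_bar +/- z * s / sqrt(n)."""
    n = len(sample)
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return m - z * se, m + z * se

small = [1.2, 3.4, 0.8, 2.9, 1.7, 3.1]   # invented data, n = 6
large = small * 8                         # same values repeated, n = 48
print(mean_ci(small))   # wide interval: high uncertainty
print(mean_ci(large))   # same mean, much narrower interval
```

The width shrinks roughly with the square root of n, which is why a "marginally significant" finding from a small sample often comes with an interval too wide to support a firm conclusion.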

3. Maximum-confidence discrimination among symmetric qudit states

International Nuclear Information System (INIS)

Jimenez, O.; Solis-Prosser, M. A.; Delgado, A.; Neves, L.

2011-01-01

We study the maximum-confidence (MC) measurement strategy for discriminating among nonorthogonal symmetric qudit states. Restricting to linearly dependent and equally likely pure states, we find the optimal positive operator valued measure (POVM) that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results. The physical realization of this POVM is completely determined and it is shown that after an inconclusive outcome, the input states may be mapped into a new set of equiprobable symmetric states, restricted, however, to a subspace of the original qudit Hilbert space. By applying the MC measurement again onto this new set, we can still gain some information about the input states, although with less confidence than before. This leads us to introduce the concept of sequential maximum-confidence (SMC) measurements, where the optimized MC strategy is iterated in as many stages as allowed by the input set, until no further information can be extracted from an inconclusive result. Within each stage of this measurement our confidence in identifying the input states is the highest possible, although it decreases from one stage to the next. In addition, the more stages we accomplish within the maximum allowed, the higher will be the probability of correct identification. We will discuss an explicit example of the optimal SMC measurement applied in the discrimination among four symmetric qutrit states and propose an optical network to implement it.

Science.gov (United States)

Phillips, Marla; Kashyap, Vishal; Cheung, Mee-Shew

2015-01-01

Leaders in the pharmaceutical, medical device, and food industries expressed a unilateral concern over product confidence throughout the total product lifecycle, an unsettling fact for these leaders to manage given that their products affect the lives of millions of people each year. Fueled by the heparin incident of intentional adulteration in 2008, initial efforts for increasing product confidence were focused on improving the confidence of incoming materials, with a belief that supplier performance must be the root cause. As in the heparin case, concern over supplier performance extended deep into the supply chain to include suppliers of the suppliers-which is often a blind spot for pharmaceutical, device, and food manufacturers. Resolved to address the perceived lack of supplier performance, these U.S. Food and Drug Administration (FDA)-regulated industries began to adopt the supplier relationship management strategy, developed by the automotive industry, that emphasizes "management" of suppliers for the betterment of the manufacturers. Current product and supplier management strategies, however, have not led to a significant improvement in product confidence. As a result of the enduring concern by industry leaders over the lack of product confidence, Xavier University launched the Integrity of Supply Initiative in 2012 with a team of industry leaders and FDA officials. Through a methodical research approach, data generated by the pharmaceutical, medical device, and food manufacturers surprisingly pointed to themselves as a source of the lack of product confidence, and revealed that manufacturers either unknowingly increase the potential for error or can control/prevent many aspects of product confidence failure. It is only through this paradigm shift that manufacturers can work collaboratively with their suppliers as equal partners, instead of viewing their suppliers as "lesser" entities needing to be controlled. The basis of this shift provides manufacturers

5. Collagen degradation as a possibility to determine the post-mortem interval (PMI) of animal bones: a validation study referring to an original study of Boaks et al. (2014).

Science.gov (United States)

Jellinghaus, Katharina; Hachmann, Carolin; Hoeland, Katharina; Bohnert, Michael; Wittwer-Backofen, Ursula

2018-05-01

Estimation of the post-mortem interval (PMI) of unknown skeletal remains is a common forensic task. Boaks and colleagues demonstrated a new method for PMI estimation in 2014, showing a reduction of the collagen to non-collagen (Co/NCo) ratio in porcine bones after a PMI of 12 months using the Sirius Red/Fast Green Collagen Staining Kit from Chondrex (Boaks et al. Forensic Sci Int 240: 104-110, 2014). The aim of our study was to reproduce this method and to investigate whether it could be used for forensic purposes. Sixteen fresh porcine bones were placed in prepared boxes, where they were treated regularly with distilled water or with water from hay infusions. For determining the Co/NCo ratio, we used the Sirius Red/Fast Green Collagen Staining Kit from Chondrex, which stains collagenous (Co) proteins red and non-collagenous (NCo) proteins green (Chondrex Inc., 2008). After a PMI of 1-3 months, thin sections of the porcine bones were analysed both by spectrophotometry and by stereomicroscopy. Using spectrophotometry, we got low and partially negative Co/NCo ratios, up to 100-fold lower than the results we expected. The data obtained by stereomicroscopy, extracting the red and green content with the software MATLAB and calculating the Co/NCo ratio from it, showed a correlation between PMI and the Co/NCo ratio in the porcine bone samples. Regular addition of distilled water or water from a hay infusion did not produce any significant differences, so an increased presence of microorganisms evidently had no influence on collagen degradation.
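The stereomicroscopy analysis reduces to extracting the red (collagen) and green (non-collagen) content of each stained section image and forming their ratio. The study used MATLAB for this step; the following is a minimal sketch of the same idea in Python. The image data and the simple channel-sum estimator are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def co_nco_ratio(image):
    """Estimate the collagen/non-collagen (Co/NCo) ratio of a stained
    thin-section image from its red and green channel intensities.

    `image` is an (H, W, 3) RGB array; Sirius Red stains collagenous
    proteins red, Fast Green stains non-collagenous proteins green.
    """
    red = image[..., 0].astype(float).sum()
    green = image[..., 1].astype(float).sum()
    if green == 0:
        raise ValueError("no green signal in image")
    return red / green

# Synthetic example: an image twice as red as it is green.
img = np.zeros((4, 4, 3))
img[..., 0] = 0.8   # red channel
img[..., 1] = 0.4   # green channel
print(co_nco_ratio(img))  # → 2.0
```

A real pipeline would additionally segment bone tissue from background before summing channels; the ratio itself is the quantity correlated with PMI in the abstract.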

Energy Technology Data Exchange (ETDEWEB)

Cleminson, F.R. [Dept. of Foreign Affairs and International Trade, Verification, Non-Proliferation, Arms Control and Disarmament Div (IDA), Ottawa, Ontario (Canada)

1998-07-01

Confidence-building has come into its own as a 'tool of choice' in facilitating the non-proliferation, arms control and disarmament (NACD) agenda, whether regional or global. From the Middle East Peace Process (MEPP) to the ASEAN Intersessional Group on Confidence-Building (ARF ISG on CBMS), confidence-building has assumed a central profile in regional terms. In the Four Power Talks begun in Geneva on December 9, 1997, the United States identified confidence-building as one of two subject areas for initial discussion as part of a structured peace process between North and South Korea. Thus, with CBMs assuming such a high profile internationally, it seems prudent for Canadians to pause and take stock of the significant role which Canada has already played in the conceptual development of the process over the last two decades. Since the Helsinki accords of 1975, Canada has developed a significant expertise in this area through an unbroken series of original, basic research projects. These have contributed to defining the process internationally from concept to implementation. Today, these studies represent a solid and unique Departmental investment in basic research from which to draw in meeting Canada's current commitments to multilateral initiatives in the area of confidence-building and to provide a 'step up' in terms of future-oriented leadership. (author)

7. Confidence Leak in Perceptual Decision Making.

Science.gov (United States)

Rahnev, Dobromir; Koizumi, Ai; McCurdy, Li Yan; D'Esposito, Mark; Lau, Hakwan

2015-11-01

People live in a continuous environment in which the visual scene changes on a slow timescale. It has been shown that to exploit such environmental stability, the brain creates a continuity field in which objects seen seconds ago influence the perception of current objects. What is unknown is whether a similar mechanism exists at the level of metacognitive representations. In three experiments, we demonstrated a robust intertask confidence leak, that is, confidence in one's response on a given task or trial influencing confidence on the following task or trial. This confidence leak could not be explained by response priming or attentional fluctuations. Better ability to modulate confidence leak predicted higher capacity for metacognition as well as greater gray matter volume in the prefrontal cortex. A model based on normative principles from Bayesian inference explained the results by postulating that observers subjectively estimate the perceptual signal strength in a stable environment. These results point to the existence of a novel metacognitive mechanism mediated by regions in the prefrontal cortex. © The Author(s) 2015.

8. ADAM SMITH: THE INVISIBLE HAND OR CONFIDENCE

Directory of Open Access Journals (Sweden)

Fernando Luis, Gache

2010-01-01

Full Text Available In 1776 Adam Smith proposed that an invisible hand moves markets toward efficiency. In the present paper we raise the hypothesis that this invisible hand is in fact the confidence each person feels when doing business: it is unique, because it differs from the confidence of others, and it is a nonlinear variable tied essentially to each person's history. We take as a basis the paper by Leopoldo Abadía (2009) on the financial crisis of 2007-2008 to show how confidence operates. The contribution we hope to make with this paper is therefore to emphasize that the confidence level of the different actors is what really moves the markets (and therefore the economy), and that the subprime mortgage crisis is a confidence crisis at the world-wide level.

International Nuclear Information System (INIS)

Cleminson, F.R.

1998-01-01

Confidence-building has come into its own as a 'tool of choice' in facilitating the non-proliferation, arms control and disarmament (NACD) agenda, whether regional or global. From the Middle East Peace Process (MEPP) to the ASEAN Intersessional Group on Confidence-Building (ARF ISG on CBMS), confidence-building has assumed a central profile in regional terms. In the Four Power Talks begun in Geneva on December 9, 1997, the United States identified confidence-building as one of two subject areas for initial discussion as part of a structured peace process between North and South Korea. Thus, with CBMs assuming such a high profile internationally, it seems prudent for Canadians to pause and take stock of the significant role which Canada has already played in the conceptual development of the process over the last two decades. Since the Helsinki accords of 1975, Canada has developed a significant expertise in this area through an unbroken series of original, basic research projects. These have contributed to defining the process internationally from concept to implementation. Today, these studies represent a solid and unique Departmental investment in basic research from which to draw in meeting Canada's current commitments to multilateral initiatives in the area of confidence-building and to provide a 'step up' in terms of future-oriented leadership. (author)

10. Magnetic Resonance Imaging in the measurement of whole body muscle mass: A comparison of interval gap methods

International Nuclear Information System (INIS)

Hellmanns, K.; McBean, K.; Thoirs, K.

2015-01-01

Purpose: Magnetic Resonance Imaging (MRI) is commonly used in body composition research to measure whole body skeletal muscle mass (SM). MRI calculation methods of SM can vary by analysing the images at different slice intervals (or interval gaps) along the length of the body. This study compared SM measurements made from MRI images of apparently healthy individuals using different interval gap methods to determine the error associated with each technique. It was anticipated that the results would inform researchers of optimum interval gap measurements to detect a predetermined minimum change in SM. Methods: A method comparison study was used to compare eight interval gap methods (interval gaps of 40, 50, 60, 70, 80, 100, 120 and 140 mm) against a reference 10 mm interval gap method for measuring SM from twenty MRI image sets acquired from apparently healthy participants. Pearson product-moment correlation analysis was used to determine the association between methods. Total error was calculated as the sum of the bias (systematic error) and the random error (limits of agreement) of the mean differences. Percentage error was used to demonstrate proportional error. Results: Pearson product-moment correlation analysis between the reference method and all interval gap methods demonstrated strong and significant associations (r > 0.99, p < 0.0001). The 40 mm interval gap method was comparable with the 10 mm interval reference method and had a low error (total error 0.95 kg, −3.4%). Analysis methods using wider interval gap techniques demonstrated larger errors than reported for dual-energy x-ray absorptiometry (DXA), a technique which is more available, less expensive, and less time consuming than MRI analysis of SM. Conclusions: Researchers using MRI to measure SM can be confident in using a 40 mm interval gap technique when analysing the images to detect minimum changes less than 1 kg. The use of wider intervals will introduce error that is no better
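The error terms defined in this abstract (systematic bias, random error as 95% limits of agreement, and their sum as "total error") follow the standard Bland-Altman method-comparison approach. A sketch under hypothetical data; the SM values below are invented for illustration, not the study's measurements.

```python
import numpy as np

def agreement_stats(reference, test):
    """Bland-Altman-style comparison of a test method against a
    reference method. Returns the bias (mean difference), the
    half-width of the 95% limits of agreement (random error), and
    their sum, the "total error" as defined in the abstract."""
    d = np.asarray(test, float) - np.asarray(reference, float)
    bias = d.mean()
    random_error = 1.96 * d.std(ddof=1)   # 95% limits of agreement
    return bias, random_error, abs(bias) + random_error

# Hypothetical SM measurements (kg): 10 mm reference vs 40 mm gap method.
ref = [28.1, 31.4, 25.9, 29.8, 33.2]
gap40 = [28.3, 31.2, 26.1, 30.0, 33.5]
bias, loa, total = agreement_stats(ref, gap40)
print(f"bias={bias:.2f} kg, total error={total:.2f} kg")
```

With this definition, a wider slice interval that merely adds scatter (larger limits of agreement) inflates the total error even when the bias stays near zero, which is why the wider gap methods fared worse.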

11. High confidence in falsely recognizing prototypical faces.

Science.gov (United States)

Sampaio, Cristina; Reinke, Victoria; Mathews, Jeffrey; Swart, Alexandra; Wallinger, Stephen

2018-06-01

We applied a metacognitive approach to investigate confidence in recognition of prototypical faces. Participants were presented with sets of faces constructed digitally as deviations from prototype/base faces. Participants were then tested with a simple recognition task (Experiment 1) or a multiple-choice task (Experiment 2) for old and new items plus new prototypes, and they showed a high rate of confident false alarms to the prototypes. The confidence-accuracy relationship in this face recognition paradigm was found to be positive for standard items but negative for the prototypes; thus, it was contingent on the nature of the items used. The data have implications for lineups that employ match-to-suspect strategies.

12. Reference intervals for selected serum biochemistry analytes in cheetahs Acinonyx jubatus.

Science.gov (United States)

Hudson-Lamb, Gavin C; Schoeman, Johan P; Hooijberg, Emma H; Heinrich, Sonja K; Tordiffe, Adrian S W

2016-02-26

Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs and even more for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity-status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L - 166 mmol/L), potassium (3.9 mmol/L - 5.2 mmol/L), magnesium (0.8 mmol/L - 1.2 mmol/L), chloride (97 mmol/L - 130 mmol/L), urea (8.2 mmol/L - 25.1 mmol/L) and creatinine (88 µmol/L - 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.
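The two statistical steps described here, a non-parametric reference interval and a 90% bootstrap confidence interval for each reference limit, can be sketched as follows. The sodium values are simulated for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def reference_interval(values, low=2.5, high=97.5):
    """Non-parametric reference interval: the central 95% of values."""
    return np.percentile(values, [low, high])

def bootstrap_ci_of_limits(values, n_boot=2000, level=90):
    """Bootstrap confidence intervals (default 90%) for each reference
    limit, obtained by resampling the data with replacement and
    recomputing the limits, as in the non-parametric bootstrap method
    the abstract refers to."""
    values = np.asarray(values, float)
    lows, highs = [], []
    for _ in range(n_boot):
        sample = rng.choice(values, size=values.size, replace=True)
        lo, hi = reference_interval(sample)
        lows.append(lo)
        highs.append(hi)
    tail = (100 - level) / 2
    return (np.percentile(lows, [tail, 100 - tail]),
            np.percentile(highs, [tail, 100 - tail]))

# Hypothetical serum sodium values (mmol/L) from 66 cheetahs.
sodium = rng.normal(147, 9, size=66)
lo_hi = reference_interval(sodium)
ci_lo, ci_hi = bootstrap_ci_of_limits(sodium)
print("reference interval:", lo_hi)
print("90% CI of lower limit:", ci_lo, " upper limit:", ci_hi)
```

The width of the bootstrap CIs around each limit indicates how stable the reference limits are at this sample size, which is why guidelines ask for them alongside the interval itself.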

13. Reference intervals for selected serum biochemistry analytes in cheetahs (Acinonyx jubatus)

Directory of Open Access Journals (Sweden)

Gavin C. Hudson-Lamb

2016-02-01

Full Text Available Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs and even more for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity-status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L – 166 mmol/L), potassium (3.9 mmol/L – 5.2 mmol/L), magnesium (0.8 mmol/L – 1.2 mmol/L), chloride (97 mmol/L – 130 mmol/L), urea (8.2 mmol/L – 25.1 mmol/L) and creatinine (88 µmol/L – 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.

14. A systematic review of maternal confidence for physiologic birth: characteristics of prenatal care and confidence measurement.

Science.gov (United States)

Avery, Melissa D; Saftner, Melissa A; Larson, Bridget; Weinfurter, Elizabeth V

2014-01-01

Because a focus on physiologic labor and birth has reemerged in recent years, care providers have the opportunity in the prenatal period to help women increase confidence in their ability to give birth without unnecessary interventions. However, most research has only examined support for women during labor. The purpose of this systematic review was to examine the research literature for information about prenatal care approaches that increase women's confidence for physiologic labor and birth and tools to measure that confidence. Studies were reviewed that explored any element of a pregnant woman's interaction with her prenatal care provider that helped build confidence in her ability to labor and give birth. Timing of interaction with pregnant women included during pregnancy, labor and birth, and the postpartum period. In addition, we looked for studies that developed a measure of women's confidence related to labor and birth. Outcome measures included confidence or similar concepts, descriptions of components of prenatal care contributing to maternal confidence for birth, and reliability and validity of tools measuring confidence. The search of MEDLINE, CINAHL, PsycINFO, and Scopus databases provided a total of 893 citations. After removing duplicates and articles that did not meet inclusion criteria, 6 articles were included in the review. Three relate to women's confidence for labor during the prenatal period, and 3 describe tools to measure women's confidence for birth. Research about enhancing women's confidence for labor and birth was limited to qualitative studies. Results suggest that women desire information during pregnancy and want to use that information to participate in care decisions in a relationship with a trusted provider. Further research is needed to develop interventions to help midwives and physicians enhance women's confidence in their ability to give birth and to develop a tool to measure confidence for use during prenatal care. © 2014 by

15. Intact interval timing in circadian CLOCK mutants.

Science.gov (United States)

Cordes, Sara; Gallistel, C R

2008-08-28

While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval-timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/- and -/- mutant male mice in a peak-interval procedure with 10-s and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing.

16. Self Confidence Spillovers and Motivated Beliefs

DEFF Research Database (Denmark)

Banerjee, Ritwik; Gupta, Nabanita Datta; Villeval, Marie Claire

Is success in a task used strategically by individuals to motivate their beliefs prior to taking action in a subsequent, unrelated, task? Also, is the distortion of beliefs reinforced for individuals who have lower status in society? Conducting an artefactual field experiment in India, we show that success when competing in a task increases the performers' self-confidence and competitiveness in the subsequent task. We also find that such spillovers affect the self-confidence of low-status individuals more than that of high-status individuals. Receiving good news under Affirmative Action, however…

17. Magnetic Resonance Fingerprinting with short relaxation intervals.

Science.gov (United States)

Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter

2017-09-01

The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially
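Independently of the relaxation-interval question, the core MRF reconstruction step is matching each measured signal evolution against a precomputed dictionary of simulated fingerprints, typically by maximum normalized inner product. A toy sketch of that matching step; the exponential "fingerprints" and (T1, T2) values below are illustrative stand-ins, not an MRF sequence simulation.

```python
import numpy as np

def match_fingerprint(signal, dictionary, params):
    """Match a measured signal evolution to the closest dictionary
    entry by maximum normalized inner product; return the (T1, T2)
    pair of the best-matching entry."""
    sig = signal / np.linalg.norm(signal)
    dic = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    best = int(np.argmax(dic @ sig))
    return params[best]

# Toy dictionary: simple exponential curves for a few (T1, T2) pairs (ms).
params = [(500, 50), (1000, 100), (2000, 200)]
t = np.arange(1, 51) * 10.0    # 50 time points
dictionary = np.array([np.exp(-t / t1) + 0.5 * np.exp(-t / t2)
                       for t1, t2 in params])

# A noisy "measurement" generated from the middle entry.
measured = dictionary[1] + 0.005 * np.random.default_rng(1).normal(size=t.size)
print(match_fingerprint(measured, dictionary, params))  # → (1000, 100)
```

The stationary-fingerprint idea in the abstract changes how the dictionary entries are computed (accounting for non-relaxed initial states), not this matching step itself.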

18. Confident Communication: Speaking Tips for Educators.

Science.gov (United States)

Parker, Douglas A.

This resource book seeks to provide the building blocks needed for public speaking while eliminating the fear factor. The book explains how educators can perfect their oratorical capabilities as well as enjoy the security, confidence, and support needed to create and deliver dynamic speeches. Following an Introduction: A Message for Teachers,…

19. Principles of psychological confidence of NPP operators

International Nuclear Information System (INIS)

Alpeev, A.S.

1994-01-01

The problems of operator interaction with the subsystems supporting his activity are discussed from the point of view of building his psychological confidence on the basis of the capabilities of intelligent automation. The functions of the operator-activity support subsystems are derived whose implementation would greatly reduce the share of NPP accidents associated with erroneous operator actions. 6 refs

20. Growing confidence, building skills | IDRC - International ...

International Development Research Centre (IDRC) Digital Library (Canada)

In 2012 Rashid explored the influence of think tanks on policy in Bangladesh, as well as their relationships with international donors and media. In 2014, he explored two-way student exchanges between Canadian and ... his IDRC experience “gave me the confidence to conduct high quality research in social sciences.”.

1. Detecting Disease in Radiographs with Intuitive Confidence

Directory of Open Access Journals (Sweden)

Stefan Jaeger

2015-01-01

Full Text Available This paper argues in favor of a specific type of confidence for use in computer-aided diagnosis and disease classification, namely, sine/cosine values of angles represented by points on the unit circle. The paper shows how this confidence is motivated by Chinese medicine and how sine/cosine values are directly related with the two forces Yin and Yang. The angle for which sine and cosine are equal (45°) represents the state of equilibrium between Yin and Yang, which is a state of nonduality that indicates neither normality nor abnormality in terms of disease classification. The paper claims that the proposed confidence is intuitive and can be readily understood by physicians. The paper underpins this thesis with theoretical results in neural signal processing, stating that a sine/cosine relationship between the actual input signal and the perceived (learned) input is key to neural learning processes. As a practical example, the paper shows how to use the proposed confidence values to highlight manifestations of tuberculosis in frontal chest X-rays.
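The proposed confidence is easy to state concretely: a classification outcome is an angle on the unit circle, and the sine/cosine pair gives two complementary confidences that sum in quadrature to one. A minimal sketch; the class labels ("abnormal"/"normal") are assumptions for illustration.

```python
import math

def yin_yang_confidence(angle_deg):
    """Map an angle in [0, 90] degrees on the unit circle to the paired
    confidences proposed in the paper: sine for one class (here
    "abnormal"), cosine for the other ("normal"). At 45 degrees the two
    are equal: the equilibrium state indicating neither class."""
    a = math.radians(angle_deg)
    return math.sin(a), math.cos(a)

abnormal, normal = yin_yang_confidence(45.0)
assert abs(abnormal - normal) < 1e-12        # equilibrium at 45°
assert abs(abnormal**2 + normal**2 - 1) < 1e-12  # point stays on unit circle
```

Unlike a probability pair that sums to one linearly, this pair satisfies sin² + cos² = 1, which is the geometric property the paper exploits.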

2. Current Developments in Measuring Academic Behavioural Confidence

Science.gov (United States)

Sander, Paul

2009-01-01

Using published findings and by further analyses of existing data, the structure, validity and utility of the Academic Behavioural Confidence scale (ABC) is critically considered. Validity is primarily assessed through the scale's relationship with other existing scales as well as by looking for predicted differences. The utility of the ABC scale…

3. Evaluating Measures of Optimism and Sport Confidence

Science.gov (United States)

Fogarty, Gerard J.; Perera, Harsha N.; Furst, Andrea J.; Thomas, Patrick R.

2016-01-01

The psychometric properties of the Life Orientation Test-Revised (LOT-R), the Sport Confidence Inventory (SCI), and the Carolina SCI (CSCI) were examined in a study involving 260 athletes. The study aimed to test the dimensional structure, convergent and divergent validity, and invariance over competition level of scores generated by these…

4. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

Science.gov (United States)

Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

2012-01-01

It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence but not accuracy. We used eye movements to separately measure the effects of fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

5. Building Public Confidence in Nuclear Activities

International Nuclear Information System (INIS)

Isaacs, T

2002-01-01

Achieving public acceptance has become a central issue in discussions regarding the future of nuclear power and associated nuclear activities. Effective public communication and public participation are often put forward as the key building blocks in garnering public acceptance. A recent international workshop in Finland provided insights into other features that might also be important to building and sustaining public confidence in nuclear activities. The workshop was held in Finland in close cooperation with Finnish stakeholders. This was most appropriate because of the recent successes in achieving positive decisions at the municipal, governmental, and Parliamentary levels, allowing the Finnish high-level radioactive waste repository program to proceed, including the identification and approval of a proposed candidate repository site. Much of the workshop discussion appropriately focused on the roles of public participation and public communications in building public confidence. It was clear that well constructed and implemented programs of public involvement and communication and a sense of fairness were essential in building the extent of public confidence needed to allow the repository program in Finland to proceed. It was also clear that there were a number of other elements beyond public involvement that contributed substantially to the success in Finland to date. And, in fact, it appeared that these other factors were also necessary to achieving the Finnish public acceptance. In other words, successful public participation and communication were necessary but not sufficient. What else was important? Culture, politics, and history vary from country to country, providing differing contexts for establishing and maintaining public confidence. What works in one country will not necessarily be effective in another. Nonetheless, there appear to be certain elements that might be common to programs that are successful in sustaining public confidence and some of

7. Probability Distribution for Flowing Interval Spacing

International Nuclear Information System (INIS)

S. Kuzio

2004-01-01

Fracture spacing is a key hydrologic parameter in analyses of matrix diffusion. Although the individual fractures that transmit flow in the saturated zone (SZ) cannot be identified directly, it is possible to determine the fractured zones that transmit flow from flow meter survey observations. The fractured zones that transmit flow as identified through borehole flow meter surveys have been defined in this report as flowing intervals. The flowing interval spacing is measured between the midpoints of each flowing interval. The determination of flowing interval spacing is important because the flowing interval spacing parameter is a key hydrologic parameter in SZ transport modeling, which impacts the extent of matrix diffusion in the SZ volcanic matrix. The output of this report is input to the ''Saturated Zone Flow and Transport Model Abstraction'' (BSC 2004 [DIRS 170042]). Specifically, the analysis of data and development of a data distribution reported herein is used to develop the uncertainty distribution for the flowing interval spacing parameter for the SZ transport abstraction model. Figure 1-1 shows the relationship of this report to other model reports that also pertain to flow and transport in the SZ. Figure 1-1 also shows the flow of key information among the SZ reports. It should be noted that Figure 1-1 does not contain a complete representation of the data and parameter inputs and outputs of all SZ reports, nor does it show inputs external to this suite of SZ reports. Use of the developed flowing interval spacing probability distribution is subject to the limitations of the assumptions discussed in Sections 5 and 6 of this analysis report. The number of fractures in a flowing interval is not known. Therefore, the flowing intervals are assumed to be composed of one flowing zone in the transport simulations. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be
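The spacing definition used above, the distance between the midpoints of consecutive flowing intervals identified in a borehole flow-meter survey, can be sketched as follows. The borehole intervals are hypothetical values for illustration.

```python
import numpy as np

def flowing_interval_spacings(intervals):
    """Spacing between consecutive flowing intervals, measured between
    interval midpoints as defined in the report.

    `intervals` is a list of (top, bottom) depths, in metres, of the
    fractured zones that transmit flow in a borehole."""
    midpoints = np.array([(top + bottom) / 2.0 for top, bottom in intervals])
    return np.diff(np.sort(midpoints))

# Hypothetical flowing intervals from one borehole (depths in m).
intervals = [(410.0, 418.0), (431.0, 435.0), (466.0, 470.0)]
print(flowing_interval_spacings(intervals))  # → [19. 35.]
```

Pooling such spacings across boreholes is what produces the empirical distribution from which the report develops the uncertainty distribution for the spacing parameter.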

8. Confidence and Use of Communication Skills in Medical Students

OpenAIRE

Mahnaz Jalalvandi; Akhtar Jamali; Ali Taghipoor-Zahir; Mohammad-Reza Sohrabi

2014-01-01

Background: Well-designed interventions can improve the communication skills of physicians. Since an understanding of the current situation is essential for designing effective interventions, this study was performed to determine medical interns' confidence in and use of communication skills. Materials and Methods: This descriptive-analytical study was performed in spring 2013 within 3 branches of Islamic Azad University (Tehran, Mashhad, and Yazd), on 327 randomly selected interns. Data gatheri...

9. The radiographic acromiohumeral interval is affected by arm and radiographic beam position

Energy Technology Data Exchange (ETDEWEB)

Fehringer, Edward V.; Rosipal, Charles E.; Rhodes, David A.; Lauder, Anthony J.; Feschuk, Connie A.; Mormino, Matthew A.; Hartigan, David E. [University of Nebraska Medical Center, Department of Orthopaedic Surgery and Rehabilitation, Omaha, NE (United States); Puumala, Susan E. [Nebraska Medical Center, Department of Preventive and Societal Medicine, Omaha, NE (United States)

2008-06-15

The objective was to determine whether arm and radiographic beam positional changes affect the acromiohumeral interval (AHI) in radiographs of healthy shoulders. Controlling for participant height and position as well as radiographic beam height and angle, four antero-posterior (AP) radiographic views in defined positions were obtained from each of 30 right shoulders of right-handed males without shoulder problems. Three independent, blinded physicians measured the AHI to the nearest millimeter in 120 randomized radiographs. Mean differences between measurements were calculated, along with a 95% confidence interval. Controlling for observer effect, there was a significant difference between AHI measurements on different views (p<0.01). All pair-wise differences were statistically significant after adjusting for multiple comparisons (all p values <0.01). Even in healthy shoulders, small changes in arm position and radiographic beam orientation affect the AHI in radiographs. (orig.)
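
The 95% confidence interval reported here follows the generic recipe from the head of this compilation, CI = point estimate ± critical value × standard error. A minimal stdlib sketch for a mean of paired differences; the numbers are illustrative placeholders, not the study's data:

```python
import math
import statistics

def mean_ci(values, z=1.96):
    """Generic interval: point estimate +/- z * standard error of the mean."""
    n = len(values)
    m = statistics.mean(values)
    se = statistics.stdev(values) / math.sqrt(n)   # s / sqrt(n)
    return m - z * se, m + z * se

# Hypothetical paired AHI differences (mm) between two views -- illustrative only.
diffs = [1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3]
lo, hi = mean_ci(diffs)
```

For small samples like this one, the z critical value 1.96 would normally be replaced by the appropriate t critical value; the structure of the calculation is unchanged.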

10. Frontline nurse managers' confidence and self-efficacy.

Science.gov (United States)

Van Dyk, Jennifer; Siedlecki, Sandra L; Fitzpatrick, Joyce J

2016-05-01

This study was focused on determining relationships between confidence levels and self-efficacy among nurse managers. Frontline nurse managers have a pivotal role in delivering high-quality patient care while managing the associated costs and resources. The competency and skill of nurse managers affect every aspect of patient care and staff well-being as nurse managers are largely responsible for creating work environments in which clinical nurses are able to provide high-quality, patient-centred, holistic care. A descriptive, correlational survey design was used; 85 nurse managers participated. Years in a formal leadership role and confidence scores were found to be significant predictors of self-efficacy scores. Experience as a nurse manager is an important component of confidence and self-efficacy. There is a need to develop educational programmes for nurse managers to enhance their self-confidence and self-efficacy, and to maintain experienced nurse managers in the role. © 2016 John Wiley & Sons Ltd.

11. Challenge for reconstruction of public confidence

International Nuclear Information System (INIS)

Matsuura, S.

2001-01-01

Past incidents and scandals that have had a large influence on damaging public confidence in nuclear energy safety are presented. Radiation leak on nuclear-powered ship 'Mutsu' (1974), the T.M.I. incident in 1979, Chernobyl accident (1986), the sodium leak at the Monju reactor (1995), fire and explosion at a low level waste asphalt solidification facility (1997), J.C.O. incident (Tokai- MURA, 1999), are so many examples that have created feelings of distrust and anxiety in society. In order to restore public confidence there is no other course but to be prepared for difficulty and work honestly to our fullest ability, with all steps made openly and accountably. (N.C.)

12. Tables of Confidence Limits for Proportions

Science.gov (United States)

1990-09-01

[Table residue: fragments of tabulated upper confidence limits for proportions at several confidence levels; the table layout is not recoverable from the extracted text.]
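
Printed tables of confidence limits for proportions typically list exact (Clopper-Pearson) limits. These can be reproduced with only the standard library by bisection on the binomial CDF; the function names and example counts below are mine, a sketch rather than a reproduction of the original tables:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), by exact summation."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, conf=0.95):
    """Exact (Clopper-Pearson) two-sided confidence limits for a
    proportion k/n, located by bisection on the binomial CDF."""
    alpha = 1 - conf

    def solve(f):
        lo, hi = 0.0, 1.0          # f is decreasing in p on [0, 1]
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # lower limit solves P(X >= k | p) = alpha/2; upper solves P(X <= k | p) = alpha/2
    lower = 0.0 if k == 0 else solve(lambda p: alpha / 2 - (1 - binom_cdf(k - 1, n, p)))
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) - alpha / 2)
    return lower, upper

# e.g. 5 successes out of 10 trials at the 95% level
lo, hi = clopper_pearson(5, 10)
```

The edge cases k = 0 and k = n are handled separately because the corresponding one-sided equation has no interior root there.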

13. Social media sentiment and consumer confidence

OpenAIRE

Daas, Piet J.H.; Puts, Marco J.H.

2014-01-01

Changes in the sentiment of Dutch public social media messages were compared with changes in monthly consumer confidence over a period of three-and-a-half years, revealing that the two were highly correlated (up to r = 0.9) and that the series were cointegrated. This phenomenon is predominantly affected by changes in the sentiment of all Dutch public Facebook messages. The inclusion of various selections of public Twitter messages improved this association and the response to changes in sentiment. G...

14. Confidence, Visual Research, and the Aesthetic Function

Directory of Open Access Journals (Sweden)

Stan Ruecker

2007-05-01

The goal of this article is to identify and describe one of the primary purposes of aesthetic quality in the design of computer interfaces and visualization tools. We suggest that humanists can derive advantages in visual research by acknowledging, in their efforts to advance aesthetic quality, that a significant function of aesthetics in this context is to inspire the user's confidence. This confidence typically serves to create a sense of trust in the provider of the interface or tool. In turn, this increased trust may result in an increased willingness to engage with the object, on the basis that it demonstrates an attention to detail that promises to reward increased engagement. In addition to confidence, the aesthetic may also contribute to a heightened degree of satisfaction with having spent time using or investigating the object. In the realm of interface design and visualization research, we propose that these aesthetic functions have implications not only for the quality of interactions, but also for the results of the standard measures of performance and preference.

15. Predictor sort sampling and one-sided confidence bounds on quantiles

Science.gov (United States)

Steve Verrill; Victoria L. Herian; David W. Green

2002-01-01

Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable that we are investigating. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non-random...

16. Reference Interval and Subject Variation in Excretion of Urinary Metabolites of Nicotine from Non-Smoking Healthy Subjects in Denmark

DEFF Research Database (Denmark)

Hansen, Å. M.; Garde, A. H.; Christensen, J. M.

2001-01-01

BACKGROUND: Passive smoking has been found to be a respiratory health hazard in humans. The present study describes the calculation of a reference interval for urinary nicotine metabolites, calculated as cotinine equivalents, on the basis of 72 non-smokers exposed to tobacco smoke less than 25... A parametric reference interval for excretion of nicotine metabolites in urine from non-smokers was established according to International Union of Pure and Applied Chemistry (IUPAC) and International Federation for Clinical Chemistry (IFCC) recommendations, for use in risk assessment of exposure to tobacco smoke... A method comparison for determination of cotinine was carried out on 27 samples from non-smokers and smokers. Results obtained from the RIA method were 2.84 [confidence interval (CI): 2.50; 3.18] times higher than those from the GC-MS method. A linear correlation between the two methods was demonstrated (rho = 0.96). CONCLUSION...

17. Exact nonparametric confidence bands for the survivor function.

Science.gov (United States)

Matthews, David

2013-10-12

A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the Van Wijngaarden-Decker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.
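
The exact bands above are obtained by inverting a test built from the Kaplan-Meier estimator of the survivor function. The estimator itself can be sketched in a few lines; this is not the authors' band construction (which needs Noe recursions and root finding), only the underlying estimate:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survivor function S(t).
    times: observation times; events: 1 = event observed, 0 = right-censored.
    Returns (time, S(t)) steps at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        ties = sum(1 for tt, _ in data if tt == t)
        if deaths:
            s *= 1 - deaths / n_at_risk   # product-limit update
            steps.append((t, s))
        n_at_risk -= ties                  # drop events and censored alike
        i += ties
    return steps

# toy data: events at t = 1, 2, 4; one observation censored at t = 2
steps = kaplan_meier([1, 2, 2, 4], [1, 1, 0, 1])
```

Censored observations leave the survivor estimate unchanged but still reduce the number at risk, which is what distinguishes this from the plain empirical survival fraction.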

18. POSTMORTAL CHANGES AND ASSESSMENT OF POSTMORTEM INTERVAL

Directory of Open Access Journals (Sweden)

Edin Šatrović

2013-01-01

This paper describes in a simple way the changes that occur in the body after death. They develop in a specific order, and the speed of their development and their expression are strongly influenced by various endogenous and exogenous factors. The aim of the authors is to describe the characteristics of the postmortem changes and their significance in establishing the time since death, which can be established precisely within 72 hours. Accurate evaluation of the age of the corpse based on the common changes is not possible over longer postmortem intervals, so entomological findings become the most significant evidence on the corpse for determination of the postmortem interval (PMI).

19. Interval Continuous Plant Identification from Value Sets

Directory of Open Access Journals (Sweden)

R. Hernández

2012-01-01

This paper shows how to obtain the values of the numerator and denominator Kharitonov polynomials of an interval plant from its value set at a given frequency. Moreover, it is proven that given a value set, all the assigned polynomials of the vertices can be determined if and only if there is a complete edge or a complete arc lying on a quadrant. This algorithm is nonconservative in the sense that if the value-set boundary of an interval plant is exactly known, and particularly its vertices, then the Kharitonov rectangles are exactly those used to obtain these value sets.

20. Interval stability for complex systems

Science.gov (United States)

2018-04-01

Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of perturbation capable of disrupting the stable regime for a given interval of time. The suggested measures provide important information about the system's susceptibility to external perturbations, which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various nature, such as power grids and neural networks.
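
The IBS idea can be illustrated on a toy bistable system of my own choosing (dx/dt = x - x^3, with attractors at x = ±1), not one of the paper's examples: perturb the state, integrate for a finite horizon T, and count the fraction of perturbations that return to the attractor in time:

```python
def interval_basin_stability(perturbations, x_star=1.0, T=10.0, dt=0.01, tol=0.05):
    """Fraction of perturbed initial states that return to within `tol` of
    the attractor x* = 1 of dx/dt = x - x**3 within time T (forward Euler).
    A toy stand-in for the interval basin stability (IBS) measure."""
    returned = 0
    for d in perturbations:
        x = x_star + d
        t = 0.0
        while t < T:
            x += dt * (x - x ** 3)   # one Euler step of the flow
            t += dt
            if abs(x - x_star) < tol:
                returned += 1
                break
    return returned / len(perturbations)

# deterministic grid of perturbations in [-0.5, 0.5], all inside the basin of x* = 1
grid = [-0.5 + 0.1 * i for i in range(11)]
ibs = interval_basin_stability(grid)
```

Shrinking T turns this into a genuinely finite-time statement: a perturbation that eventually returns but too slowly counts as a failure, which is exactly how IBS differs from asymptotic basin stability.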

1. Confidence limits for data mining models of options prices

Science.gov (United States)

Healy, J. V.; Dixon, M.; Read, B. J.; Cai, F. F.

2004-12-01

Non-parametric methods such as artificial neural nets can successfully model prices of financial options, out-performing the Black-Scholes analytic model (Eur. Phys. J. B 27 (2002) 219). However, the accuracy of such approaches is usually expressed only by a global fitting/error measure. This paper describes a robust method for determining prediction intervals for models derived by non-linear regression. We have demonstrated it by application to a standard synthetic example (29th Annual Conference of the IEEE Industrial Electronics Society, Special Session on Intelligent Systems, pp. 1926-1931). The method is used here to obtain prediction intervals for option prices using market data for LIFFE “ESX” FTSE 100 index options (http://www.liffe.com/liffedata/contracts/month_onmonth.xls). We avoid special neural net architectures and use standard regression procedures to determine local error bars. The method is appropriate for target data with non-constant variance (or volatility).
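
The local error-bar idea, wider prediction intervals where the data are noisier, can be sketched with an ordinary linear regression and bin-wise residual spread. This is a deliberately simplified stand-in for the paper's neural-net procedure; the function names and toy data are mine:

```python
import statistics

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def local_prediction_interval(xs, ys, x0, n_bins=4, z=1.96):
    """Prediction interval at x0 using a bin-local residual standard
    deviation, so the bars widen wherever the data are noisier."""
    a, b = fit_line(xs, ys)
    residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
    lo_x, hi_x = min(xs), max(xs)
    width = (hi_x - lo_x) / n_bins
    k = min(int((x0 - lo_x) / width), n_bins - 1)   # bin containing x0
    in_bin = [r for x, r in zip(xs, residuals)
              if lo_x + k * width <= x <= lo_x + (k + 1) * width]
    s = statistics.stdev(in_bin) if len(in_bin) > 1 else statistics.stdev(residuals)
    y0 = a + b * x0
    return y0 - z * s, y0 + z * s

# noiseless toy data: the interval collapses onto the fitted line
xs = [float(i) for i in range(8)]
ys = [2.0 * x for x in xs]
pi_lo, pi_hi = local_prediction_interval(xs, ys, 2.0)
```

With heteroscedastic data, the bin containing x0 drives the interval width, which is the point of local rather than global error measures.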

2. Transparency as an element of public confidence

International Nuclear Information System (INIS)

Kim, H.K.

2007-01-01

In modern society, there are increasing demands for greater transparency. It has been discussed with respect to corruption and ethics issues in social science. The need for greater openness and transparency in nuclear regulation is widely recognised as public expectations on the regulator grow. It is also related to the digital and information technology that enables disclosure of every activity and piece of information of individuals and organisations, characterised by numerous 'small brothers'. Transparency has become a key word in this ubiquitous era. Transparency in regulatory activities needs to be understood in the following contexts. First, transparency is one of the elements that build public confidence in the regulator and eventually achieve the regulatory goal of providing the public with satisfaction with nuclear safety. Transparency about the competence, independence, ethics and integrity of the working process of the regulatory body would enhance public confidence. Second, activities transmitting information on nuclear safety and preparedness to be accessed are different types of transparency. Communication is an active method of transparency; with the increasing use of web-sites, 'digital transparency' is also discussed as a passive one. Transparency in the regulatory process may be more important than that of its contents. Simply providing more information is of little value, and specific information may need to be protected for security reasons. Third, transparency should be discussed from international, national and organisational perspectives. It has been demanded through international instruments; for each country, transparency is demanded by residents, the public, NGOs, the media and other stakeholders. Employees also demand more transparency in operating and regulatory organisations; whistle-blowers may appear unless they are satisfied. Fourth, pursuing transparency may cause undue social cost or adverse effects. Over-transparency may decrease public confidence and the process for transparency may also hinder

3. Asymptotically Honest Confidence Regions for High Dimensional

DEFF Research Database (Denmark)

Caner, Mehmet; Kock, Anders Bredahl

While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...

4. National Debate and Public Confidence in Sweden

International Nuclear Information System (INIS)

Lindquist, Ted

2014-01-01

Ted Lindquist, coordinator of the Association of Swedish Municipalities with Nuclear Facilities (KSO), closed the first day of conferences. He described the nuclear landscape in Sweden, noting in particular that over time there has been rather good support from the population. He explained that the reason could be the public's confidence in the national debate. On a more local scale, Ted Lindquist showed how overwhelmingly strong the support was in towns where the industry would like to operate long-term storage facilities

5. Transmission line sag calculations using interval mathematics

Energy Technology Data Exchange (ETDEWEB)

Shaalan, H. [Institute of Electrical and Electronics Engineers, Washington, DC (United States); US Merchant Marine Academy, Kings Point, NY (United States)]

2007-07-01

Electric utilities are facing the need for additional generating capacity, new transmission systems and more efficient use of existing resources. As such, there are several uncertainties associated with utility decisions. These uncertainties include future load growth, construction times and costs, and performance of new resources. Regulatory and economic environments also present uncertainties. Uncertainty can be modeled based on a probabilistic approach where probability distributions for all of the uncertainties are assumed. Another approach to modeling uncertainty is referred to as unknown but bounded. In this approach, the upper and lower bounds on the uncertainties are assumed without probability distributions. Interval mathematics is a tool for the practical use and extension of the unknown but bounded concept. In this study, the calculation of transmission line sag was used as an example to demonstrate the use of interval mathematics. The objective was to determine the change in cable length, based on a fixed span and an interval of cable sag values for a range of temperatures. The resulting change in cable length was an interval corresponding to the interval of cable sag values. It was shown that there is a small change in conductor length due to variation in sag based on the temperature ranges used in this study. 8 refs.
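
The sag-to-length step can be sketched with the standard parabolic approximation L = S + 8D²/(3S) (S = span, D = sag). Since L is monotone increasing in D, an interval of sag values maps directly onto an interval of lengths, which is the unknown-but-bounded idea in miniature. The span and sag numbers below are hypothetical, not the study's:

```python
def cable_length(span, sag):
    """Standard parabolic approximation: L = S + 8*D**2 / (3*S)."""
    return span + 8.0 * sag ** 2 / (3.0 * span)

def cable_length_interval(span, sag_lo, sag_hi):
    """Unknown-but-bounded sag [sag_lo, sag_hi] -> interval of cable lengths.
    L is monotone increasing in D for D >= 0, so the interval endpoints
    map directly -- interval arithmetic in its simplest form."""
    return cable_length(span, sag_lo), cable_length(span, sag_hi)

# hypothetical values: 300 m span, sag bounded between 6 and 7 m over the temperature range
length_lo, length_hi = cable_length_interval(300.0, 6.0, 7.0)
```

The resulting length interval is only about a tenth of a metre wide for a metre of sag uncertainty, consistent with the study's conclusion that the change in conductor length is small.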

6. Reducing public communication apprehension by boosting self confidence on communication competence

Directory of Open Access Journals (Sweden)

Eva Rachmi

2012-07-01

A medical doctor should be competent in communicating with others. Some students at the medical faculty of Universitas Mulawarman tend to be silent during public communication training, and this is thought to be influenced by communication anxiety. This study aimed to analyze whether self-confidence in communication competence and communication skills are risk factors for communication apprehension. Methods: This study was conducted on 55 students at the medical faculty of Universitas Mulawarman. Public communication apprehension was measured using the Personal Report of Communication Apprehension (PRCA-24). Confidence in communication competence was determined by the Self Perceived Communication Competence scale (SPCC). Communication skills were based on the instructor's score during the communication training program. Data were analyzed by linear regression to identify dominant factors using STATA 9.0. Results: The study showed a negative association between public communication apprehension and students' self-confidence in communication competence [regression coefficient (CR) = -0.13; p = 0.000; 95% confidence interval (CI) = -0.20; -0.52]. However, it was not related to communication skills (p = 0.936). Among twelve traits of self-confidence in communication competence, students who had the confidence to talk to a group of strangers had lower public communication apprehension (adjusted CR = -0.13; CI = -0.21; 0.05; p = 0.002). Conclusions: Increased confidence in their communication competence will reduce the degree of public communication apprehension among students. Therefore, the faculty should provide more opportunities for students to practice public communication, in particular talking to a group of strangers more frequently. (Health Science Indones 2010; 1: 37-42)

7. Primary care physicians' perceptions about and confidence in deciding which patients to refer for total joint arthroplasty of the hip and knee.

Science.gov (United States)

Waugh, E J; Badley, E M; Borkhoff, C M; Croxford, R; Davis, A M; Dunn, S; Gignac, M A; Jaglal, S B; Sale, J; Hawker, G A

2016-03-01

8. Diagnosing Anomalous Network Performance with Confidence

Energy Technology Data Exchange (ETDEWEB)

Settlemyer, Bradley W [ORNL; Hodson, Stephen W [ORNL; Kuehn, Jeffery A [ORNL; Poole, Stephen W [ORNL

2011-04-01

Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly parallel applications running on parallel machines typically require consistently high levels of performance to adequately leverage the massive amounts of available computing power. Performance analysts have usually quantified network performance using traditional summary statistics that assume the observational data is sampled from a normal distribution. In our examinations of network performance, we have found this method of analysis often provides too little data to understand anomalous network performance. Our tool, Confidence, instead uses an empirically derived probability distribution to characterize network performance. In this paper we describe several instances where the Confidence toolkit allowed us to understand and diagnose network performance anomalies that we could not adequately explore with the simple summary statistics provided by traditional measurement tools. In particular, we examine a multi-modal performance scenario encountered with an Infiniband interconnection network and we explore the performance repeatability on the custom Cray SeaStar2 interconnection network after a set of software and driver updates.

9. The relationship between confidence in charitable organizations and volunteering revisited

NARCIS (Netherlands)

Bekkers, René H.F.P.; Bowman, Woods

2009-01-01

Confidence in charitable organizations (charitable confidence) would seem to be an important prerequisite for philanthropic behavior. Previous research relying on cross-sectional data has suggested that volunteering promotes charitable confidence and vice versa. This research note, using new

10. Experimental uncertainty estimation and statistics for data having interval uncertainty.

Energy Technology Data Exchange (ETDEWEB)

Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

2007-05-01

This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
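
The simplest of the statistics the report discusses is the sample mean: for interval-valued data its bounds are just the means of the endpoints. A minimal sketch with made-up interval measurements (the report's harder cases, such as variance bounds over heavily overlapping intervals, need considerably more machinery):

```python
def interval_mean(data):
    """Bounds on the sample mean for interval-valued data [lo, hi]:
    the mean ranges over [mean of lower endpoints, mean of upper endpoints],
    because the mean is monotone in each observation."""
    n = len(data)
    return (sum(lo for lo, _ in data) / n,
            sum(hi for _, hi in data) / n)

# hypothetical interval measurements (e.g. readings with +/- instrument bounds)
data = [(1.0, 1.2), (0.9, 1.5), (1.1, 1.3), (0.8, 1.0)]
m_lo, m_hi = interval_mean(data)
```

The width of the resulting interval directly reflects the measurement imprecision, which is the tradeoff between precision and sample size that the report explores.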

11. Confidence assessment. Site descriptive modelling SDM-Site Forsmark

International Nuclear Information System (INIS)

2008-09-01

The objective of this report is to assess the confidence that can be placed in the Forsmark site descriptive model, based on the information available at the conclusion of the surface-based investigations (SDM-Site Forsmark). In this exploration, an overriding question is whether remaining uncertainties are significant for repository engineering design or long-term safety assessment, and could successfully be further reduced by more surface-based investigations or more usefully by explorations underground made during construction of the repository. The confidence in the Forsmark site descriptive model, based on the data available at the conclusion of the surface-based site investigations, has been assessed by exploring: confidence in the site characterisation data base; key remaining issues and their handling; handling of alternative models; consistency between disciplines; and main reasons for confidence and lack of confidence in the model. It is generally found that the key aspects of importance for safety assessment and repository engineering of the Forsmark site descriptive model are associated with a high degree of confidence. Because of the robust geological model that describes the site, the overall confidence in the Forsmark site descriptive model is judged to be high. While some aspects have lower confidence, this lack of confidence is handled by providing wider uncertainty ranges, bounding estimates and/or alternative models. Most, but not all, of the low-confidence aspects have little impact on repository engineering design or long-term safety. Poor precision in the measured data is judged to have limited impact on uncertainties in the site descriptive model, with the exceptions of inaccuracy in determining the position of some boreholes at depth in 3-D space, the poor precision of the orientation of BIPS images in some boreholes, and the poor precision of stress data determined by overcoring at the locations where the pre

12. Comparing interval estimates for small sample ordinal CFA models.

Science.gov (United States)

Natesan, Prathiba

2015-01-01

Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased; this can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied, along with two Bayesian prior specifications, informative and relatively less informative. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the statistical uncertainty that comes with the data (e.g., small samples). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
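
Coverage of an interval procedure can always be checked by simulation: generate many samples from a known truth and count how often the interval contains the true value. A generic Monte Carlo sketch (a z-interval for a normal mean, my choice of example rather than the paper's CFA setting), which also makes the undercoverage phenomenon visible:

```python
import math
import random
import statistics

def coverage(n_reps=2000, n=10, mu=0.0, sigma=1.0, z=1.96, seed=42):
    """Monte Carlo coverage of the z-based 95% CI for a normal mean:
    the fraction of simulated samples whose CI contains the true mean.
    With n = 10 the z-interval undercovers, since the t critical value
    (about 2.262 at 9 df) would be needed for nominal 95% coverage."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_reps):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        m = statistics.mean(xs)
        se = statistics.stdev(xs) / math.sqrt(n)
        if m - z * se <= mu <= m + z * se:
            hits += 1
    return hits / n_reps

rate = coverage()   # typically lands a few points below the nominal 0.95
```

The same loop, with the interval construction swapped out, measures the coverage of any procedure, which is exactly the kind of systematic check the study argues for.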

13. Teachers and Counselors: Building Math Confidence in Schools

Directory of Open Access Journals (Sweden)

Joseph M. Furner

2017-08-01

Mathematics teachers need to take on the role of counselors in addressing the math-anxious in today's math classrooms. This paper looks at the impact math anxiety has on the future of young adults in our high-tech society. Teachers and professional school counselors are encouraged to work together to prevent and reduce math anxiety. It is important that all students feel confident in their ability to do mathematics in an age that relies so heavily on problem solving, technology, science, and mathematics. It really is a school's obligation to see that its students value and feel confident in their ability to do math, because ultimately a child's life, all the decisions they will make and their career choices, may be determined by their disposition toward mathematics. This paper raises some interesting questions and provides some strategies (see Appendix A) for teachers and counselors for addressing the issue of math anxiety, while discussing the importance of developing mathematically confident young people for a high-tech world of STEM.

14. Confidence crisis of results in biomechanics research.

Science.gov (United States)

Knudson, Duane

2017-11-01

Many biomechanics studies have small sample sizes and incorrect statistical analyses, so reporting of inaccurate inferences and inflated magnitude of effects are common in the field. This review examines these issues in biomechanics research and summarises potential solutions from research in other fields to increase the confidence in the experimental effects reported in biomechanics. Authors, reviewers and editors of biomechanics research reports are encouraged to improve sample sizes and the resulting statistical power, improve reporting transparency, improve the rigour of statistical analyses used, and increase the acceptance of replication studies to improve the validity of inferences from data in biomechanics research. The application of sports biomechanics research results would also improve if a larger percentage of unbiased effects and their uncertainty were reported in the literature.

15. Technology in a crisis of confidence

Energy Technology Data Exchange (ETDEWEB)

Damodaran, G R

1979-04-01

The power that technological progress has given to engineers is examined to see if there has been a corresponding growth in human happiness. A credit/debit approach is discussed, whereby technological advancement is measured against the criteria of social good. The credit side includes medicine, agriculture, and energy use, while the debit side lists pollution, unequal distribution of technology and welfare, modern weaponry, resource depletion, and a possible decline in the quality of life. The present anti-technologists claim the debit side is now predominant, but the author challenges this position by examining the role of technology and the engineer in the society. He sees a need for renewed self-confidence and a sense of direction among engineers, but is generally optimistic that technology and civilization will continue to be intertwined. (DCK)

16. Considering public confidence in developing regulatory programs

International Nuclear Information System (INIS)

Collins, S.J.

2001-01-01

In the area of public trust, as in any investment, planning and strategy are important. While it is accepted in the United States that an essential part of our mission is to leverage our resources to improve public confidence, this performance goal must be planned for, managed and measured. As with our premier performance goal of maintaining safety, a strategy must be developed and integrated not only with our external stakeholders but with internal regulatory staff as well. To do that, business must be conducted in an open environment, the basis for regulatory decisions must be available through public documents and public meetings, and communication must be in clear and consistent terms. (N.C.)

17. Caregiver Confidence: Does It Predict Changes in Disability among Elderly Home Care Recipients?

Science.gov (United States)

Li, Lydia W.; McLaughlin, Sara J.

2012-01-01

Purpose of the study: The primary aim of this investigation was to determine whether caregiver confidence in their care recipients' functional capabilities predicts changes in the performance of activities of daily living (ADL) among elderly home care recipients. A secondary aim was to explore how caregiver confidence and care recipient functional…

18. Chinese Management Research Needs Self-Confidence but not Over-confidence

DEFF Research Database (Denmark)

Li, Xin; Ma, Li

2018-01-01

Chinese management research aims to contribute to global management knowledge by offering rigorous and innovative theories and practical recommendations both for managing in China and outside. However, two seemingly opposite directions that researchers are taking could prove detrimental to the healthy development of Chinese management research. We argue that the two directions share a common ground that lies in the mindset regarding the confidence in the work on and from China. One direction of simply following the American mainstream on academic rigor demonstrates a lack of self-confidence, limiting theoretical innovation and practical relevance. Yet going in the other direction of overly indigenous research reflects over-confidence, often isolating Chinese management research from the mainstream academia and, at times, even becoming anti-science. A more integrated approach of conducting…

19. Haemostatic reference intervals in pregnancy

DEFF Research Database (Denmark)

Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna

2010-01-01

Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period using only the uncomplicated pregnancies were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, and free protein S. Most parameters remained largely unchanged during pregnancy, delivery, and postpartum and were within non-pregnant reference intervals. However, levels of fibrinogen, D-dimer, and coagulation factors VII, VIII, and IX increased markedly. Protein S activity decreased substantially, while free protein S decreased slightly and total…

20. Inverse Interval Matrix: A Survey

Czech Academy of Sciences Publication Activity Database

2011-01-01

Roč. 22, - (2011), s. 704-719 E-ISSN 1081-3810 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords: interval matrix * inverse interval matrix * NP-hardness * enclosure * unit midpoint * inverse sign stability * nonnegative invertibility * absolute value equation * algorithm Subject RIV: BA - General Mathematics Impact factor: 0.808, year: 2010 http://www.math.technion.ac.il/iic/ela/ela-articles/articles/vol22_pp704-719.pdf

1. A new model for cork weight estimation in Northern Portugal with methodology for construction of confidence intervals

Science.gov (United States)

Teresa J.F. Fonseca; Bernard R. Parresol

2001-01-01

Cork, a unique biological material, is a highly valued non-timber forest product. Portugal is the leading producer of cork with 52 percent of the world production. Tree cork weight models have been developed for Southern Portugal, but there are no representative published models for Northern Portugal. Because cork trees may have a different form between Northern and...

2. A Validation Study of the Rank-Preserving Structural Failure Time Model: Confidence Intervals and Unique, Multiple, and Erroneous Solutions.

Science.gov (United States)

Ouwens, Mario; Hauch, Ole; Franzén, Stefan

2018-05-01

The rank-preserving structural failure time model (RPSFTM) is used for health technology assessment submissions to adjust for switching patients from reference to investigational treatment in cancer trials. It uses counterfactual survival (survival when only reference treatment would have been used) and assumes that, at randomization, the counterfactual survival distribution for the investigational and reference arms is identical. Previous validation reports have assumed that patients in the investigational treatment arm stay on therapy throughout the study period. The objective of this study was to evaluate the validity of the RPSFTM at various levels of crossover in situations in which patients are taken off the investigational drug in the investigational arm. The RPSFTM was applied to simulated datasets differing in percentage of patients switching, time of switching, underlying acceleration factor, and number of patients, using exponential distributions for the time on investigational and reference treatment. There were multiple scenarios in which two solutions were found: one corresponding to identical counterfactual distributions, and the other to two different crossing counterfactual distributions. The same was found for the hazard ratio (HR). Unique solutions were observed only when switching patients were on investigational treatment for <40% of the time that patients in the investigational arm were on treatment. Distributions other than exponential could have been used for time on treatment. An HR equal to 1 is a necessary but not always sufficient condition to indicate acceleration factors associated with equal counterfactual survival. Further assessment to distinguish crossing counterfactual curves from equal counterfactual curves is especially needed when the time that switchers stay on investigational treatment is relatively long compared to the time direct starters stay on investigational treatment.

3. Robust Coefficients Alpha and Omega and Confidence Intervals with Outlying Observations and Missing Data: Methods and Software

Science.gov (United States)

Zhang, Zhiyong; Yuan, Ke-Hai

2016-01-01

Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation…

5. Meta-analysis to refine map position and reduce confidence intervals for delayed canopy wilting QTLs in soybean

Science.gov (United States)

Slow canopy wilting in soybean has been identified as a potentially beneficial trait for ameliorating drought effects on yield. Previous research identified QTLs for slow wilting from two different bi-parental populations and this information was combined with data from three other populations to id...

6. Noise annoyance from stationary sources: Relationships with exposure metric day-evening-night level (DENL) and their confidence intervals

NARCIS (Netherlands)

Miedema, H.M.E.; Vos, H.

2004-01-01

Relationships between exposure to noise [metric: day-evening-night levels (DENL)] from stationary sources (shunting yards, a seasonal industry, and other industries) and annoyance are presented. Curves are presented for expected annoyance score, the percentage "highly annoyed" (%HA, cutoff at 72 on

7. Derivation of confidence intervals of service measures in a base-stock inventory control system with low-frequent demand

DEFF Research Database (Denmark)

Larsen, Christian

We explore a base-stock system with backlogging where the demand process is a compound renewal process and the compound element is a delayed geometric distribution. For this setting it is proven in [4] that the long-run average service measures order fill rate (OFR) and volume fill rate (VFR) are...

8. Derivation of confidence intervals of service measures in a base-stock inventory control system with low-frequent demand

DEFF Research Database (Denmark)

Larsen, Christian

2011-01-01

We explore a base-stock system with backlogging where the demand process is a compound renewal process and the compound element is a delayed geometric distribution. For this setting it holds that the long-run average service measures order fill rate (OFR) and volume fill rate (VFR) are equal in v...

9. Criteria to determine the depth of the production interval in wells of the Cerro Prieto geothermal field, Mexico; Criterios para determinar la profundidad del intervalo productor en pozos del campo geotermico de Cerro Prieto, Mexico

Energy Technology Data Exchange (ETDEWEB)

Leon Vivar, Jesus Saul de [Comision Federal de Electricidad, Residencia General de Cerro Prieto, Mexicali, B.C. (Mexico)]. E-mail: jesus.deleon@cfe.gob.mx

2006-07-15

Ways to select the depth of the production interval or to complete wells in the Cerro Prieto geothermal field have changed during the development of the field. From 1961, when drilling began, to the middle of 2005, a total of 325 wells were drilled. The paper compares the approaches used in the past with those of the last ten years. The Cerro Prieto system has been classified as liquid-dominated and high-temperature. Today, after 33 years of commercial exploitation, it has experienced a series of thermal and geochemical fluid changes, making it necessary to modify the ways to select the depth of the well production intervals according to the observed behavior of the reservoir. The new criteria include the thermal approach, the geological approach, the geochemical approach and a comparative approach with neighboring wells. If most of these criteria are interpreted correctly, the success of a well is ensured.

10. The theory of confidence-building measures

International Nuclear Information System (INIS)

Darilek, R.E.

1992-01-01

This paper discusses the theory of Confidence-Building Measures (CBMs) in two ways. First, it employs a top-down, deductively oriented approach to explain CBM theory in terms of the arms control goals and objectives to be achieved, the types of measures to be employed, and the problems or limitations likely to be encountered when applying CBMs to conventional or nuclear forces. The paper as a whole asks how various types of CBMs might function during a political-military escalation from peacetime to a crisis and beyond (i.e., including conflict), as well as how they might operate in a de-escalatory environment. In pursuit of these overarching issues, the second section raises a fundamental but complicating question: how might the next all-out war actually come about - by unpremeditated escalation resulting from misunderstanding or miscalculation, or by premeditation resulting in a surprise attack? It addresses this question, explores its various implications for CBMs, and suggests the potential contribution of different types of CBMs toward successful resolution of the issues involved

11. Dynamic Properties of QT Intervals

Czech Academy of Sciences Publication Activity Database

Halámek, Josef; Jurák, Pavel; Vondra, Vlastimil; Lipoldová, J.; Leinveber, Pavel; Plachý, M.; Fráňa, P.; Kára, T.

2009-01-01

Roč. 36, - (2009), s. 517-520 ISSN 0276-6574 R&D Projects: GA ČR GA102/08/1129; GA MŠk ME09050 Institutional research plan: CEZ:AV0Z20650511 Keywords : QT Intervals * arrhythmia diagnosis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering http://cinc.mit.edu/archives/2009/pdf/0517.pdf

12. Haemostatic reference intervals in pregnancy

DEFF Research Database (Denmark)

Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna

2010-01-01

Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Eight hundred one women with expected normal pregnancies were included in the study. Of these women, 391 had no complications during pregnancy, vaginal delivery, or postpartum period. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period using only the uncomplicated pregnancies were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S…

13. Interval matrices: Regularity generates singularity

Czech Academy of Sciences Publication Activity Database

Rohn, Jiří; Shary, S.P.

2018-01-01

Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords: interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016

14. Chaotic dynamics from interspike intervals

DEFF Research Database (Denmark)

Pavlov, A N; Sosnovtseva, Olga; Mosekilde, Erik

2001-01-01

Considering two different mathematical models describing chaotic spiking phenomena, namely, an integrate-and-fire and a threshold-crossing model, we discuss the problem of extracting dynamics from interspike intervals (ISIs) and show that the possibilities of computing the largest Lyapunov expone...

15. Determining

Directory of Open Access Journals (Sweden)

Bahram Andarzian

2015-06-01

Full Text Available Wheat production in the south of Khuzestan, Iran is constrained by heat stress for late sowing dates. For optimization of yield, sowing at the appropriate time to fit the cultivar maturity length and growing season is critical. Crop models could be used to determine the optimum sowing window for a locality. The objectives of this study were to evaluate the Cropping System Model (CSM-CERES-Wheat) for its ability to simulate growth, development, and grain yield of wheat in the tropical regions of Iran, and to study the impact of different sowing dates on wheat performance. The genetic coefficients of cultivar Chamran were calibrated for the CSM-CERES-Wheat model and crop model performance was evaluated with experimental data. Wheat cultivar Chamran was sown on different dates, ranging from 5 November to 9 January during 5 years of field experiments that were conducted in the Khuzestan province, Iran, under full and deficit irrigation conditions. The model was run for 8 sowing dates starting on 25 October and repeated every 10 days until 5 January using long-term historical weather data from the Ahvaz, Behbehan, Dezful and Izeh locations. The seasonal analysis program of DSSAT was used to determine the optimum sowing window for different locations as well. Evaluation with the experimental data showed that performance of the model was reasonable as indicated by fairly accurate simulation of crop phenology, biomass accumulation and grain yield against measured data. The normalized RMSEs were 3%, 2%, 11.8%, and 3.4% for anthesis date, maturity date, grain yield and biomass, respectively. The optimum sowing window was different among locations. It was opened and closed on 5 November and 5 December for Ahvaz; 5 November and 15 December for Behbehan and Dezful; and 1 November and 15 December for Izeh, respectively. The CERES-Wheat model could be used as a tool to evaluate the effect of sowing date on wheat performance in Khuzestan conditions. Further model evaluations

16. A Confidence Paradigm for Classification Systems

Science.gov (United States)

2008-09-01

methodology to determine how much confidence one should have in a classifier output. This research proposes a framework to determine the level of...theoretical framework that attempts to unite the viewpoints of the classification system developer (or engineer) and the classification system user (or...operating point. An algorithm is developed that minimizes a “confidence” measure called Binned Error in the Posterior (BEP). Then, we prove that training a

17. Method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations

CSIR Research Space (South Africa)

Kirton, A

2010-08-01

Full Text Available A method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations. A. Kirton, B. Scholes, S. Archibald, CSIR Ecosystem Processes and Dynamics, Natural Resources and the Environment, P.O. Box 395, Pretoria, 0001, South Africa. The report shows how prediction intervals (confidence intervals for predicted values) for allometric estimates can be obtained using an example of estimating tree biomass from stem diameter. It explains how to deal with relationships which are in the power function form - a common form...
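The log-log fitting and back-transformation behind such power-function prediction intervals can be sketched in a generic form. This is an illustration with made-up data, not the authors' method or dataset; the normal critical value 1.96 stands in for the exact t quantile, which is only appropriate for reasonably large samples:

```python
import math

def loglog_fit(diams, masses):
    """OLS fit of ln(mass) = a + b*ln(diam), i.e. the power function
    mass = exp(a) * diam**b fitted on the log-log scale."""
    xs = [math.log(d) for d in diams]
    ys = [math.log(m) for m in masses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    # residual variance on the log scale (n - 2 degrees of freedom)
    s2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    return a, b, s2, mx, sxx, n

def predict_interval(fit, d, z=1.96):
    """Approximate 95% prediction interval for a new tree of diameter d,
    built on the log scale and back-transformed. Uses the normal critical
    value z in place of the exact t quantile (large-sample simplification)."""
    a, b, s2, mx, sxx, n = fit
    x = math.log(d)
    se_pred = math.sqrt(s2 * (1 + 1 / n + (x - mx) ** 2 / sxx))
    yhat = a + b * x
    return math.exp(yhat - z * se_pred), math.exp(yhat), math.exp(yhat + z * se_pred)

# made-up diameters (cm) and masses (kg) following mass = 0.1 * d**2.4
diams = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
masses = [0.1 * d ** 2.4 for d in diams]
fit = loglog_fit(diams, masses)
lo, mid, hi = predict_interval(fit, 12.0)
print(f"predicted mass at d=12: {mid:.2f} kg, 95% PI ({lo:.2f}, {hi:.2f})")
```

Because the interval is computed on the log scale and then exponentiated, it is asymmetric around the point estimate, which matches the multiplicative error structure typical of allometric data.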

18. T(peak)T(end) interval in long QT syndrome

DEFF Research Database (Denmark)

Kanters, Jørgen Kim; Haarmark, Christian; Vedel-Larsen, Esben

2008-01-01

BACKGROUND: The T(peak)T(end) (T(p)T(e)) interval is believed to reflect the transmural dispersion of repolarization. Accordingly, it should be a risk factor in long QT syndrome (LQTS). The aim of the study was to determine the effect of genotype on T(p)T(e) interval and test whether it was relat...

19. Reference intervals for serum total cholesterol, HDL cholesterol and ...

African Journals Online (AJOL)

Reference intervals of total cholesterol, HDL cholesterol and non-HDL cholesterol concentrations were determined on 309 blood donors from an urban and peri-urban population of Botswana. Using non-parametric methods to establish 2.5th and 97.5th percentiles of the distribution, the intervals were: total cholesterol 2.16 ...
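The non-parametric approach mentioned here (taking the empirical 2.5th and 97.5th percentiles) is straightforward to sketch. The data below are simulated stand-ins, not the Botswana donor measurements, and the closest-ranks interpolation rule is one common convention among several:

```python
import random

def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
    """Non-parametric reference interval: the empirical 2.5th and 97.5th
    percentiles, with linear interpolation between closest ranks."""
    xs = sorted(values)
    n = len(xs)
    def pct(p):
        k = (n - 1) * p / 100.0
        f = int(k)
        c = min(f + 1, n - 1)
        return xs[f] + (k - f) * (xs[c] - xs[f])
    return pct(lower_pct), pct(upper_pct)

random.seed(0)
# simulated total-cholesterol-like values (mmol/L); illustrative only,
# not the actual donor data
sample = [random.gauss(4.5, 0.9) for _ in range(309)]
lo, hi = reference_interval(sample)
print(f"reference interval: {lo:.2f}-{hi:.2f} mmol/L")
```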

20. Examining Belief and Confidence in Schizophrenia

Science.gov (United States)

Joyce, Dan W.; Averbeck, Bruno B.; Frith, Chris D.; Shergill, Sukhwinder S.

2018-01-01

Background: People with psychoses often report fixed, delusional beliefs that are sustained even in the presence of unequivocal contrary evidence. Such delusional beliefs are the result of integrating new and old evidence inappropriately in forming a cognitive model. We propose and test a cognitive model of belief formation using experimental data from an interactive “Rock Paper Scissors” game. Methods: Participants (33 controls and 27 people with schizophrenia) played a competitive, time-pressured interactive two-player game (Rock, Paper, Scissors). Participants’ behavior was modeled by a generative computational model using leaky-integrator and temporal difference methods. This model describes how new and old evidence is integrated to form both a playing strategy to beat the opponent and a mechanism for reporting confidence in one’s playing strategy to win against the opponent. Results: People with schizophrenia fail to appropriately model their opponent’s play despite consistent (rather than random) patterns that can be exploited in the simulated opponent’s play. This is manifest as a failure to weigh existing evidence appropriately against new evidence. Further, participants with schizophrenia show a ‘jumping to conclusions’ bias, reporting successful discovery of a winning strategy with insufficient evidence. Conclusions: The model presented suggests two tentative mechanisms in delusional belief formation: i) one for modeling patterns in others’ behavior, where people with schizophrenia fail to use old evidence appropriately, and ii) a meta-cognitive mechanism for ‘confidence’ in such beliefs, where people with schizophrenia overweight recent reward history in deciding on the value of beliefs about the opponent. PMID:23521846
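The paper's fitted generative model cannot be reproduced from the abstract, but the leaky-integrator idea it builds on can be shown in miniature. The leak rates below are arbitrary assumptions for illustration:

```python
def leaky_integrate(evidence, leak=0.2, x0=0.0):
    """Leaky-integrator belief update: x <- (1 - leak) * x + e.
    A larger leak discounts old evidence faster, so recent observations
    dominate the integrated belief."""
    x = x0
    trace = []
    for e in evidence:
        x = (1.0 - leak) * x + e
        trace.append(x)
    return trace

# constant evidence of 1.0: the belief converges toward 1 / leak
slow = leaky_integrate([1.0] * 50, leak=0.2)   # asymptote 5.0
fast = leaky_integrate([1.0] * 50, leak=0.8)   # asymptote 1.25
print(round(slow[-1], 3), round(fast[-1], 3))
```

A high leak rate is one way to capture "overweighting recent reward history": the integrator forgets older evidence almost immediately.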

1. Balance confidence is related to features of balance and gait in individuals with chronic stroke

Science.gov (United States)

Schinkel-Ivy, Alison; Wong, Jennifer S.; Mansfield, Avril

2016-01-01

Reduced balance confidence is associated with impairments in features of balance and gait in individuals with sub-acute stroke. However, an understanding of these relationships in individuals at the chronic stage of stroke recovery is lacking. This study aimed to quantify relationships between balance confidence and specific features of balance and gait in individuals with chronic stroke. Participants completed a balance confidence questionnaire and clinical balance assessment (quiet standing, walking, and reactive stepping) at 6 months post-discharge from inpatient stroke rehabilitation. Regression analyses were performed using balance confidence as a predictor variable and quiet standing, walking, and reactive stepping outcome measures as the dependent variables. Walking velocity was positively correlated with balance confidence, while medio-lateral centre of pressure excursion (quiet standing) and double support time, step width variability, and step time variability (walking) were negatively correlated with balance confidence. This study provides insight into the relationships between balance confidence and balance and gait measures in individuals with chronic stroke, suggesting that individuals with low balance confidence exhibited impaired control of quiet standing as well as walking characteristics associated with cautious gait strategies. Future work should identify the direction of these relationships to inform community-based stroke rehabilitation programs for individuals with chronic stroke, and determine the potential utility of incorporating interventions to improve balance confidence into these programs. PMID:27955809

2. Interval Mathematics Applied to Critical Point Transitions

Directory of Open Access Journals (Sweden)

2012-03-01

Full Text Available The determination of critical points of mixtures is important for both practical and theoretical reasons in the modeling of phase behavior, especially at high pressure. The equations that describe the behavior of complex mixtures near critical points are highly nonlinear and admit multiple solutions to the critical point equations. Interval arithmetic can be used to reliably locate all the critical points of a given mixture. The method also verifies the nonexistence of a critical point if a mixture of a given composition does not have one. This study uses an interval Newton/Generalized Bisection algorithm that provides a mathematical and computational guarantee that all mixture critical points are located. The technique is illustrated using several example problems. These problems involve cubic equation of state models; however, the technique is general purpose and can be applied in connection with other nonlinear problems.
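A full interval Newton/generalized bisection solver is beyond a short sketch, but the core enclose-and-discard idea can be illustrated with plain bisection and a naive interval extension of a polynomial. This is a simplified stand-in for the authors' algorithm, and the cubic below is an arbitrary example rather than a critical-point equation:

```python
def interval_eval(coeffs, lo, hi):
    """Naive interval extension of the polynomial sum(c_i * x**i): returns
    (flo, fhi) guaranteed to enclose the true range of f on [lo, hi]."""
    flo = fhi = 0.0
    for i, c in enumerate(coeffs):
        candidates = [c * lo ** i, c * hi ** i]
        if i >= 2 and i % 2 == 0 and lo < 0.0 < hi:
            candidates.append(0.0)  # even powers attain their minimum at 0
        flo += min(candidates)
        fhi += max(candidates)
    return flo, fhi

def bisect_roots(coeffs, lo, hi, tol=1e-6):
    """Generalized bisection: discard boxes whose enclosure excludes zero;
    keep subdividing the rest. No root in [lo, hi] can be lost, because the
    enclosure always contains the true range of the polynomial."""
    boxes, roots = [(lo, hi)], []
    while boxes:
        a, b = boxes.pop()
        flo, fhi = interval_eval(coeffs, a, b)
        if flo > 0.0 or fhi < 0.0:
            continue  # zero not in the enclosure: no root in this box
        if b - a < tol:
            roots.append((a, b))
        else:
            m = 0.5 * (a + b)
            boxes += [(a, m), (m, b)]
    return roots

# all real roots of x**3 - 2x - 5 on [-10, 10] (single real root near 2.0946)
roots = bisect_roots([-5.0, -2.0, 0.0, 1.0], -10.0, 10.0)
print(roots)
```

The guarantee comes from the enclosure property, not from the tightness of the extension: a looser extension only means more boxes survive to be subdivided, never that a root is missed.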

3. Graduating general surgery resident operative confidence: perspective from a national survey.

Science.gov (United States)

Fonseca, Annabelle L; Reddy, Vikram; Longo, Walter E; Gusberg, Richard J

2014-08-01

General surgical training has changed significantly over the last decade with work hour restrictions, increasing subspecialization, the expanding use of minimally invasive techniques, and nonoperative management for solid organ trauma. Given these changes, this study was undertaken to assess the confidence of graduating general surgery residents in performing open surgical operations and to determine factors associated with increased confidence. A survey was developed and sent to general surgery residents nationally. We queried them regarding demographics and program characteristics, asked them to rate their confidence (on a 1-5 Likert scale) in performing open surgical procedures, and compared those who indicated confidence with those who did not. We received 653 responses from fifth year (postgraduate year 5) surgical residents: 69% male, 68% from university programs, and 51% from programs affiliated with a Veterans Affairs hospital; 22% from small programs, 34% from medium programs, and 44% from large programs. Anticipated postresidency operative confidence was 72%. More than 25% of residents reported a lack of confidence in performing eight of the 13 operations they were queried about. Training at a university program, a large program, dedicated research years, future fellowship plans, and training at a program that performed a large percentage of operations laparoscopically were associated with decreased confidence in performing a number of open surgical procedures. Increased surgical volume was associated with increased operative confidence. Confidence in performing open surgery also varied regionally. Graduating surgical residents indicated a significant lack of confidence in performing a variety of open surgical procedures. This decreased confidence was associated with age, operative volume as well as type, and location of training program. Analyzing and addressing this confidence deficit merits further study.

4. 75 FR 81037 - Waste Confidence Decision Update

Science.gov (United States)

2010-12-23

... radioactive wastes produced by NPPs can be safely disposed of, to determine when such disposal or offsite... safe permanent disposal of high-level radioactive waste (HLW) would be available when they were needed... proceedings designed to assess the degree of assurance that radioactive wastes generated by nuclear power...

5. Working Memory Capacity, Confidence and Scientific Thinking

Science.gov (United States)

2009-01-01

Working memory capacity is now well established as a rate determining factor in much learning and assessment, especially in the sciences. Most of the research has focussed on performance in tests and examinations in subject areas. This paper outlines some exploratory work in which other outcomes are related to working memory capacity. Confidence…

6. Fully automatized apparatus for determining speed of sound for liquids in the temperature and pressure interval (283.15–343.15) K and (0.1–95) MPa

International Nuclear Information System (INIS)

Yebra, Francisco; Troncoso, Jacobo; Romaní, Luis

2017-01-01

Highlights: • An apparatus for measuring speed of sound of liquids is described. • Pressure and temperature control is fully automatized. • Uncertainty of the measurements is estimated at 0.1%. • Comparison with literature data confirms the reliability of the methodology. - Abstract: An instrument for determining the speed of sound as a function of temperature and pressure for liquids is described. It was totally automatized: pressure and temperature values are controlled and time-of-flight data for the ultrasonic wave were acquired using a digital system which automatically made all required actions. The instrument calibration was made only at atmospheric pressure using high quality data of water and methanol. For higher pressures, the calibration parameters were predicted using a model for the high pressure cell, through finite-element calculations (FEM), in order to realistically determine the changes in the cell induced by the compression. Uncertainties in temperature and pressure were 20 mK and 0.1 MPa, respectively, and the uncertainty in speed of sound was estimated to be about 0.1%. The speeds of sound for water, methanol, hexane, heptane, octane, toluene, ethanol and 1-propanol were determined in the temperature and pressure ranges (283.15–343.15) K and (0.1–95) MPa. Comparison with literature data reveals the high reliability of the experimental procedure.

7. Sources of sport confidence, imagery type and performance among competitive athletes: the mediating role of sports confidence.

Science.gov (United States)

Levy, A R; Perry, J; Nicholls, A R; Larkin, D; Davies, J

2015-01-01

This study explored the mediating role of sport confidence upon (1) the sources of sport confidence-performance relationship and (2) the imagery-performance relationship. Participants were 157 competitive athletes who completed state measures of confidence level/sources, imagery type and performance within one hour after competition. Among the current sample, confirmatory factor analysis revealed appropriate support for the nine-factor SSCQ and the five-factor SIQ. Mediational analysis revealed that sport confidence had a mediating influence upon the achievement source of confidence-performance relationship. In addition, both cognitive and motivational imagery types were found to be important sources of confidence, as sport confidence mediated the imagery type-performance relationship. Findings indicated that athletes who construe confidence from their own achievements and report multiple images on a more frequent basis are likely to benefit from enhanced levels of state sport confidence and subsequent performance.

8. Vaccination Confidence and Parental Refusal/Delay of Early Childhood Vaccines.

Directory of Open Access Journals (Sweden)

Melissa B Gilkey

Full Text Available To support efforts to address parental hesitancy towards early childhood vaccination, we sought to validate the Vaccination Confidence Scale using data from a large, population-based sample of U.S. parents. We used weighted data from 9,354 parents who completed the 2011 National Immunization Survey. Parents reported on the immunization history of a 19- to 35-month-old child in their households. Healthcare providers then verified children's vaccination status for vaccines including measles, mumps, and rubella (MMR), varicella, and seasonal flu. We used separate multivariable logistic regression models to assess associations between parents' mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. A substantial minority of parents reported a history of vaccine refusal (15%) or delay (27%). Vaccination confidence was negatively associated with refusal of any vaccine (odds ratio [OR] = 0.58; 95% confidence interval [CI], 0.54-0.63) as well as refusal of MMR, varicella, and flu vaccines specifically. Negative associations between vaccination confidence and measures of vaccine delay were more moderate, including delay of any vaccine (OR = 0.81; 95% CI, 0.76-0.86). Vaccination confidence was positively associated with having received vaccines, including MMR (OR = 1.53; 95% CI, 1.40-1.68), varicella (OR = 1.54; 95% CI, 1.42-1.66), and flu vaccines (OR = 1.32; 95% CI, 1.23-1.42). Vaccination confidence was consistently associated with early childhood vaccination behavior across multiple vaccine types. Our findings support expanding the application of the Vaccination Confidence Scale to measure vaccination beliefs among parents of young children.
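The odds ratios above come from multivariable logistic models that cannot be reproduced from the abstract, but the generic calculation behind an OR with a 95% CI can be sketched from a 2x2 table using the Woolf (log-normal) interval. The counts below are hypothetical, not the survey data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for the 2x2 table [[a, b], [c, d]] with a Woolf
    (log-normal) confidence interval; z = 1.96 gives a 95% CI."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# hypothetical counts: vaccine refusers vs non-refusers split by
# low vs high vaccination confidence (not data from the study above)
or_, lo, hi = odds_ratio_ci(30, 70, 15, 85)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The interval is built on the log scale because ln(OR) is approximately normal; exponentiating the endpoints gives the asymmetric CI reported in papers.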

9. The statistical significance of error probability as determined from decoding simulations for long codes

Science.gov (United States)

Massey, J. L.

1976-01-01

The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
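Massey's point, that the usual interval is useless when only a couple of errors are observed, is easy to demonstrate numerically. The sketch below contrasts the Wald ("usual") interval with the Wilson score interval, one standard alternative for rare events; this is an illustration of the failure mode, not the paper's own construction:

```python
import math

def wald_ci(k, n, z=1.96):
    # Normal-approximation interval: p_hat +/- z*sqrt(p_hat(1-p_hat)/n)
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(k, n, z=1.96):
    # Wilson score interval: stays inside (0, 1) even for very few successes
    p = k / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# Two decoding errors observed in a million simulated trials
wald_lo, wald_hi = wald_ci(2, 1_000_000)
wilson_lo, wilson_hi = wilson_ci(2, 1_000_000)
# wald_lo falls below zero (nonsensical for a probability);
# wilson_lo remains positive.
```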

10. Alternative confidence measure for local matching stereo algorithms

CSIR Research Space (South Africa)

Ndhlovu, T

2009-11-01

Full Text Available The authors present a confidence measure applied to individual disparity estimates in local matching stereo correspondence algorithms. It aims at identifying textureless areas, where most local matching algorithms fail. The confidence measure works...

11. Effects of Training and Feedback on Accuracy of Predicting Rectosigmoid Neoplastic Lesions and Selection of Surveillance Intervals by Endoscopists Performing Optical Diagnosis of Diminutive Polyps.

Science.gov (United States)

Vleugels, Jasper L A; Dijkgraaf, Marcel G W; Hazewinkel, Yark; Wanders, Linda K; Fockens, Paul; Dekker, Evelien

2018-05-01

Real-time differentiation of diminutive polyps (1-5 mm) during endoscopy could replace histopathology analysis. According to guidelines, implementation of optical diagnosis into routine practice would require it to identify rectosigmoid neoplastic lesions with a negative predictive value (NPV) of more than 90%, using histologic findings as a reference, and agreement with histology-based surveillance intervals for more than 90% of cases. We performed a prospective study with 39 endoscopists accredited to perform colonoscopies on participants with positive results from fecal immunochemical tests in the Bowel Cancer Screening Program at 13 centers in the Netherlands. Endoscopists were trained in optical diagnosis using a validated module (Workgroup serrAted polypS and Polyposis). After meeting predefined performance thresholds in the training program, the endoscopists started a 1-year program (continuation phase) in which they performed narrow band imaging analyses during colonoscopies of participants in the screening program and predicted histological findings with confidence levels. The endoscopists were randomly assigned to groups that received feedback or no feedback on the accuracy of their predictions. Primary outcome measures were endoscopists' abilities to identify rectosigmoid neoplastic lesions (using histology as a reference) with NPVs of 90% or more, and selecting surveillance intervals that agreed with those determined by histology for at least 90% of cases. Of 39 endoscopists initially trained, 27 (69%) completed the training program. During the continuation phase, these 27 endoscopists performed 3144 colonoscopies in which 4504 diminutive polyps were removed. The endoscopists identified neoplastic lesions with a pooled NPV of 90.8% (95% confidence interval 88.6-92.6); their proposed surveillance intervals agreed with those determined by histologic analysis for 95.4% of cases (95% confidence interval 94.0-96.6). Findings did not differ between the group

12. Simultaneous confidence bands for the integrated hazard function

OpenAIRE

Dudek, Anna; Gocwin, Maciej; Leskow, Jacek

2006-01-01

The construction of the simultaneous confidence bands for the integrated hazard function is considered. The Nelson-Aalen estimator is used. The simultaneous confidence bands based on bootstrap methods are presented. Two methods of construction of such confidence bands are proposed. The weird bootstrap method is used for resampling. Simulations are made to compare the actual coverage probability of the bootstrap and the asymptotic simultaneous confidence bands. It is shown that the equal--tai...
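The Nelson-Aalen estimator that these bands are built around is simple to compute: the cumulative hazard is the running sum of d_i/n_i (events at each time over the number still at risk). A minimal sketch on toy right-censored data; the bootstrap banding itself is omitted:

```python
def nelson_aalen(times, events):
    """Nelson-Aalen cumulative-hazard estimate.
    times[i]: observation time; events[i]: 1 = event, 0 = censored.
    Returns [(t, H(t))] at each distinct event time."""
    pairs = sorted(zip(times, events))
    n_risk = len(pairs)          # everyone is at risk initially
    H, curve = 0.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = at_t = 0
        while i < len(pairs) and pairs[i][0] == t:
            d += pairs[i][1]     # events at time t
            at_t += 1            # all subjects leaving the risk set at t
            i += 1
        if d:
            H += d / n_risk
            curve.append((t, H))
        n_risk -= at_t
    return curve

# Toy data: events at t=2, 3, 5; censoring at t=3 and t=8
curve = nelson_aalen([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```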

13. 49 CFR 1103.23 - Confidences of a client.

Science.gov (United States)

2010-10-01

... 49 Transportation 8 2010-10-01 2010-10-01 false Confidences of a client. 1103.23 Section 1103.23... Responsibilities Toward A Client § 1103.23 Confidences of a client. (a) The practitioner's duty to preserve his client's confidence outlasts the practitioner's employment by the client, and this duty extends to the...

14. Contrasting Academic Behavioural Confidence in Mexican and European Psychology Students

Science.gov (United States)

Ochoa, Alma Rosa Aguila; Sander, Paul

2012-01-01

Introduction: Research with the Academic Behavioural Confidence scale using European students has shown that students have high levels of confidence in their academic abilities. It is generally accepted that people in more collectivist cultures have more realistic confidence levels in contrast to the overconfidence seen in individualistic European…

15. Dijets at large rapidity intervals

CERN Document Server

Pope, B G

2001-01-01

Inclusive dijet production at large pseudorapidity intervals (Δη) between the two jets has been suggested as a regime for observing BFKL dynamics. We have measured the dijet cross section for large Δη in pp̄ collisions at √s = 1800 and 630 GeV using the DØ detector. The partonic cross section increases strongly with the size of Δη. The observed growth is even stronger than expected on the basis of BFKL resummation in the leading logarithmic approximation. The growth of the partonic cross section can be accommodated with an effective BFKL intercept of α_BFKL(20 GeV) = 1.65 ± 0.07.

16. Variational collocation on finite intervals

International Nuclear Information System (INIS)

Amore, Paolo; Cervantes, Mayra; Fernandez, Francisco M

2007-01-01

In this paper, we study a set of functions, defined on an interval of finite width, which are orthogonal and which reduce to the sinc functions when the appropriate limit is taken. We show that these functions can be used within a variational approach to obtain accurate results for a variety of problems. We have applied them to the interpolation of functions on finite domains and to the solution of the Schroedinger equation, and we have compared the performance of the present approach with others

17. Analysis of methods to determine the latency of online movement adjustments

NARCIS (Netherlands)

Oostwoud Wijdenes, L.; Brenner, E.; Smeets, J.B.J.

2014-01-01

When studying online movement adjustments, one of the interesting parameters is their latency. We set out to compare three different methods of determining the latency: the threshold, confidence interval, and extrapolation methods. We simulated sets of movements with different movement times and

18. Comprehensive Plan for Public Confidence in Nuclear Regulator

International Nuclear Information System (INIS)

Choi, Kwang Sik; Choi, Young Sung; Kim, Ho ki

2008-01-01

Public confidence in nuclear regulators has been discussed internationally. Public trust or confidence is needed to achieve the regulatory goal of assuring nuclear safety to a level that is acceptable to the public, or of providing public ease about nuclear safety. In Korea, public ease or public confidence has been suggested as a major policy goal in the annually announced 'Nuclear regulatory policy direction'. This paper reviews the theory of trust and its definitions, and defines the elements of public trust or public confidence in nuclear safety regulation, developed on the basis of the studies conducted so far. The public ease model developed and ten measures for ensuring public confidence are also presented, and future study directions are suggested

19. CLUSTERING OF THE COUNTRIES ACCORDING TO CONSUMER CONFIDENCE INDEX AND EVALUATING WITH HUMAN DEVELOPMENT INDEX

Directory of Open Access Journals (Sweden)

Seda BAĞDATLI KALKAN

2018-01-01

Full Text Available The consumer confidence index is a national indicator of current and future expectations about economic conditions. With the consumer confidence index, the aim is to determine the trends and expectations of consumers regarding their general economic situation, employment opportunities, their financial situation and developments in the markets. Another parameter is the Human Development Index (HDI), an indicator that examines the development of countries both economically and socially. Countries are ranked by these two indices, which are treated as basic parameters on international platforms. The purpose of this study is to group the selected countries according to the consumer confidence index, reveal the features of the groups, and then determine the position of the grouped countries with the Human Development Index. According to the results of the cluster analysis, India, China, Sweden and the USA have the highest total consumer confidence, employment, expectation and investment indices.

20. Effects of postidentification feedback on eyewitness identification and nonidentification confidence.

Science.gov (United States)

Semmler, Carolyn; Brewer, Neil; Wells, Gary L

2004-04-01

Two experiments investigated new dimensions of the effect of confirming feedback on eyewitness identification confidence using target-absent and target-present lineups and (previously unused) unbiased witness instructions (i.e., "offender not present" option highlighted). In Experiment 1, participants viewed a crime video and were later asked to try to identify the thief from an 8-person target-absent photo array. Feedback inflated witness confidence for both mistaken identifications and correct lineup rejections. With target-present lineups in Experiment 2, feedback inflated confidence for correct and mistaken identifications and lineup rejections. Although feedback had no influence on the confidence-accuracy correlation, it produced clear overconfidence. Confidence inflation varied with the confidence measure reference point (i.e., retrospective vs. current confidence) and identification response latency.

1. Effects of confidence and anxiety on flow state in competition.

Science.gov (United States)

Koehn, Stefan

2013-01-01

Confidence and anxiety are important variables that underlie the experience of flow in sport. Specifically, research has indicated that confidence displays a positive relationship and anxiety a negative relationship with flow. The aim of this study was to assess potential direct and indirect effects of confidence and anxiety dimensions on flow state in tennis competition. A sample of 59 junior tennis players completed measures of Competitive State Anxiety Inventory-2d and Flow State Scale-2. Following predictive analysis, results showed significant positive correlations between confidence (intensity and direction) and anxiety symptoms (only directional perceptions) with flow state. Standard multiple regression analysis indicated confidence as the only significant predictor of flow. The results confirmed a protective function of confidence against debilitating anxiety interpretations, but there were no significant interaction effects between confidence and anxiety on flow state.

2. Women's empowerment in India: assessment of women's confidence before and after training as a lay provider

OpenAIRE

Megan Storm; Alan Xi; Ayesha Khan

2018-01-01

Background: Gender is the main social determinant of health in India and affects women's health outcomes even before birth. As women mature into adulthood, lack of education and empowerment increases health inequities, acting as a barrier to seeking medical care and to making medical choices. Although the process of women's empowerment is complex to measure, one indicator is confidence in ability. We sought to increase the confidence of rural Indian women in their abilities by training them a...

3. Confidence and trust: empirical investigations for the Netherlands and the financial sector

OpenAIRE

Mosch, Robert; Prast, Henriëtte

2010-01-01

This paper reviews the state of confidence and trust in the Netherlands, with special attention to the financial sector. An attempt has been made to identify the factors that determine individual trust and confidence and to uncover connections between the various variables. Based on surveys over the period 2003-2006, the data show that interpersonal trust in the Netherlands - the extent to which the Dutch trust each other - is high from both an international and an historical perspective. Peo...

4. Some Characterizations of Convex Interval Games

NARCIS (Netherlands)

Brânzei, R.; Tijs, S.H.; Alparslan-Gok, S.Z.

2008-01-01

This paper focuses on new characterizations of convex interval games using the notions of exactness and superadditivity. We also relate big boss interval games with concave interval games and obtain characterizations of big boss interval games in terms of exactness and subadditivity.

5. Beyond hypercorrection: remembering corrective feedback for low-confidence errors.

Science.gov (United States)

Griffiths, Lauren; Higham, Philip A

2018-02-01

Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.

6. Factors affecting midwives' confidence in intrapartum care: a phenomenological study.

Science.gov (United States)

Bedwell, Carol; McGowan, Linda; Lavender, Tina

2015-01-01

Midwives are frequently the lead providers of care for women throughout labour and birth. In order to perform their role effectively and provide women with the choices they require, midwives need to be confident in their practice. This study explores factors which may affect midwives' confidence in their practice. Hermeneutic phenomenology formed the theoretical basis for the study. Prospective longitudinal data collection was completed using diaries and semi-structured interviews. Twelve midwives providing intrapartum care in a variety of settings were recruited to ensure a variety of experiences in different contexts were captured. The principal factor affecting workplace confidence, both positively and negatively, was the influence of colleagues. Perceived autonomy and a sense of familiarity could also enhance confidence. However, conflict in the workplace was a critical factor in reducing midwives' confidence. Confidence was an important, but fragile, phenomenon to midwives and they used a variety of coping strategies, emotional intelligence and presentation management to maintain it. This is the first study to highlight both the factors influencing midwives' workplace confidence and the strategies midwives employed to maintain their confidence. Confidence is important in maintaining well-being, and workplace culture may play a role in explaining the current low morale within the midwifery workforce. This may have implications for women's choices and care. Support, effective leadership and education may help midwives develop and sustain a positive sense of confidence. Copyright © 2014 Elsevier Ltd. All rights reserved.

7. An optimal dynamic interval preventive maintenance scheduling for series systems

International Nuclear Information System (INIS)

Gao, Yicong; Feng, Yixiong; Zhang, Zixian; Tan, Jianrong

2015-01-01

This paper studies preventive maintenance (PM) with a dynamic interval for a multi-component system. Unlike an equal-interval scheme, the PM period in the proposed dynamic interval model is not a fixed constant but varies between a lower bound (interval-down) and an upper bound (interval-up). Compared with a periodic PM scheme, controlling the maintenance frequency in this way helps reduce outage losses from frequently repaired parts and avoids under-maintenance of the equipment. Following the definition of the dynamic interval, system reliability is analysed from the failure mechanisms of the components and the differing effects of non-periodic PM actions on component reliability. Building on the proposed reliability model, a novel framework for solving the non-periodic PM schedule with a dynamic interval, based on a multi-objective genetic algorithm, is proposed. The framework includes four strategies, updating, deleting, inserting and moving, which correct invalid individuals in the algorithm's population. The values of the dynamic interval and the selection of PM actions for the components at every PM stage are determined by achieving a certain level of system availability with the minimum total PM-related cost. Finally, a typical rotary table system of an NC machine tool is used as an example to illustrate the proposed method. - Highlights: • A non-periodic preventive maintenance scheduling model is proposed. • A framework for solving the non-periodic PM schedule problem is developed. • The interval of non-periodic PM is flexible and the schedule can be better adjusted. • A dynamic interval leads to more efficient solutions than a fixed interval does

8. Geometric Least Square Models for Deriving [0,1]-Valued Interval Weights from Interval Fuzzy Preference Relations Based on Multiplicative Transitivity

Directory of Open Access Journals (Sweden)

Xuan Yang

2015-01-01

Full Text Available This paper presents a geometric least square framework for deriving [0,1]-valued interval weights from interval fuzzy preference relations. By analyzing the relationship among [0,1]-valued interval weights, multiplicatively consistent interval judgments, and planes, a geometric least square model is developed to derive a normalized [0,1]-valued interval weight vector from an interval fuzzy preference relation. Based on the difference ratio between two interval fuzzy preference relations, a geometric average difference ratio between one interval fuzzy preference relation and the others is defined and employed to determine the relative importance weights for individual interval fuzzy preference relations. A geometric least square based approach is further put forward for solving group decision making problems. An individual decision numerical example and a group decision making problem with the selection of enterprise resource planning software products are furnished to illustrate the effectiveness and applicability of the proposed models.

9. Errors and Predictors of Confidence in Condom Use amongst Young Australians Attending a Music Festival.

Science.gov (United States)

Hall, Karina M; Brieger, Daniel G; De Silva, Sukhita H; Pfister, Benjamin F; Youlden, Daniel J; John-Leader, Franklin; Pit, Sabrina W

2016-01-01

Objectives. To determine the confidence and ability to use condoms correctly and consistently, and the predictors of confidence, in young Australians attending a festival. Methods. 288 young people aged 18 to 29 attending a mixed-genre music festival completed a survey measuring demographics, self-reported confidence using condoms, ability to use condoms, and issues experienced when using condoms in the past 12 months. Results. Self-reported confidence using condoms was high (77%). Multivariate analyses showed confidence was associated with being male (P < 0.001) and having had five or more lifetime sexual partners (P = 0.038). Reading packet instructions was associated with increased condom use confidence (P = 0.011). Amongst participants who had used a condom in the last year, 37% had experienced the condom breaking, 48% the condom slipping off during intercourse, and 51% the condom slipping off when withdrawing the penis after sex. Conclusion. This population of young people is experiencing high rates of condom failures and is using condoms inconsistently or incorrectly, demonstrating the need to improve attitudes, behaviour, and knowledge about correct and consistent condom usage. There is a need to empower young Australians, particularly females, with knowledge and confidence in order to improve condom use self-efficacy.

10. Can confidence indicators forecast the probability of expansion in Croatia?

Directory of Open Access Journals (Sweden)

Mirjana Čižmešija

2016-04-01

Full Text Available The aim of this paper is to investigate how reliable confidence indicators are in forecasting the probability of expansion. We consider three Croatian Business Survey indicators: the Industrial Confidence Indicator (ICI), the Construction Confidence Indicator (BCI) and the Retail Trade Confidence Indicator (RTCI). The quarterly data used in the research cover the period from 1999/Q1 to 2014/Q1. The empirical analysis consists of two parts. First, the non-parametric Bry-Boschan algorithm is used to distinguish periods of expansion from periods of recession in the Croatian economy. Then, various nonlinear probit models are estimated. The models differ with respect to the regressors (confidence indicators) and the time lags. The positive signs of the estimated parameters suggest that the probability of expansion increases with an increase in the confidence indicators. Based on the obtained results, we conclude that the ICI is the most powerful predictor of the probability of expansion in Croatia.

11. Confidence mediates the sex difference in mental rotation performance.

Science.gov (United States)

Estes, Zachary; Felker, Sydney

2012-06-01

On tasks that require the mental rotation of 3-dimensional figures, males typically exhibit higher accuracy than females. Using the most common measure of mental rotation (i.e., the Mental Rotations Test), we investigated whether individual variability in confidence mediates this sex difference in mental rotation performance. In each of four experiments, the sex difference was reliably elicited and eliminated by controlling or manipulating participants' confidence. Specifically, confidence predicted performance within and between sexes (Experiment 1), rendering confidence irrelevant to the task reliably eliminated the sex difference in performance (Experiments 2 and 3), and manipulating confidence significantly affected performance (Experiment 4). Thus, confidence mediates the sex difference in mental rotation performance and hence the sex difference appears to be a difference of performance rather than ability. Results are discussed in relation to other potential mediators and mechanisms, such as gender roles, sex stereotypes, spatial experience, rotation strategies, working memory, and spatial attention.

12. Coping skills: role of trait sport confidence and trait anxiety.

Science.gov (United States)

Cresswell, Scott; Hodge, Ken

2004-04-01

The current research assesses relationships among coping skills, trait sport confidence, and trait anxiety. Two samples (n=47 and n=77) of international competitors from surf life saving (M=23.7 yr.) and touch rugby (M=26.2 yr.) completed the Athletic Coping Skills Inventory, Trait Sport Confidence Inventory, and Sport Anxiety Scale. Analysis yielded significant correlations amongst trait anxiety, sport confidence, and coping. Specifically, confidence scores were positively associated with coping-with-adversity scores, and anxiety scores were negatively associated. These findings support the inclusion of the personality characteristics of confidence and anxiety within the coping model presented by Hardy, Jones, and Gould. Researchers should be aware that confidence and anxiety may influence the coping processes of athletes.

13. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

Science.gov (United States)

Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

2010-08-06

Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses ( Baron & Kenny, 1986 ; Sobel, 1982 ) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution of the product methods, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method-the partial posterior p value-slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended; in contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
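The bootstrapped percentile interval that performs well in this simulation can be sketched for a single-mediator model: estimate the a-path (m ~ x) and b-path (y ~ m + x), resample rows with replacement, and take percentiles of the resampled a*b estimates. Synthetic data, pure Python, for illustration only (not the article's code):

```python
import random

def ols2(y, x1, x2):
    """Coefficient of x1 in the regression y ~ x1 + x2 (with intercept),
    solved from the centered 2x2 normal equations."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - m2) * (b - my) for a, b in zip(x2, y))
    return (s1y * s22 - s2y * s12) / (s11 * s22 - s12 ** 2)

def indirect(x, m, y):
    # a-path: slope of m ~ x; b-path: coefficient of m in y ~ m + x
    n = len(x)
    mx, mm = sum(x) / n, sum(m) / n
    a = sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a * ols2(y, m, x)

random.seed(1)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
m = [xi + random.gauss(0, 0.5) for xi in x]   # strong a-path
y = [mi + random.gauss(0, 0.5) for mi in m]   # strong b-path

ab_hat = indirect(x, m, y)
boots = []
for _ in range(1000):                         # resample rows with replacement
    idx = [random.randrange(n) for _ in range(n)]
    boots.append(indirect([x[i] for i in idx],
                          [m[i] for i in idx],
                          [y[i] for i in idx]))
boots.sort()
lo, hi = boots[25], boots[974]                # 2.5th and 97.5th percentiles
```

If the percentile interval (lo, hi) excludes zero, the null hypothesis of no indirect effect is rejected.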

14. Is consumer confidence an indicator of JSE performance?

OpenAIRE

Kamini Solanki; Yudhvir Seetharam

2014-01-01

While most studies examine the impact of business confidence on market performance, we instead focus on the consumer because consumer spending habits are a natural extension of trading activity on the equity market. This particular study examines investor sentiment as measured by the Consumer Confidence Index in South Africa and its effect on the Johannesburg Stock Exchange (JSE). We employ Granger causality tests to investigate the relationship across time between the Consumer Confidence Ind...

15. Establishment of reference intervals for plasma protein electrophoresis in Indo-Pacific green sea turtles, Chelonia mydas.

Science.gov (United States)

Flint, Mark; Matthews, Beren J; Limpus, Colin J; Mills, Paul C

2015-01-01

Biochemical and haematological parameters are increasingly used to diagnose disease in green sea turtles. Specific clinical pathology tools, such as plasma protein electrophoresis analysis, are now being used more frequently to improve our ability to diagnose disease in the live animal. Plasma protein reference intervals were calculated from 55 clinically healthy green sea turtles using pulsed field electrophoresis to determine pre-albumin, albumin, α-, β- and γ-globulin concentrations. The estimated reference intervals were then compared with data profiles from clinically unhealthy turtles admitted to a local wildlife hospital to assess the validity of the derived intervals and identify the clinically useful plasma protein fractions. Eighty-six per cent {19 of 22 [95% confidence interval (CI) 65-97]} of clinically unhealthy turtles had values outside the derived reference intervals, including the following: total protein [six of 22 turtles or 27% (95% CI 11-50%)], pre-albumin [two of five, 40% (95% CI 5-85%)], albumin [13 of 22, 59% (95% CI 36-79%)], total albumin [13 of 22, 59% (95% CI 36-79%)], α- [10 of 22, 45% (95% CI 24-68%)], β- [two of 10, 20% (95% CI 3-56%)], γ- [one of 10, 10% (95% CI 0.3-45%)] and β-γ-globulin [one of 12, 8% (95% CI 0.2-38%)] and total globulin [five of 22, 23% (8-45%)]. Plasma protein electrophoresis shows promise as an accurate adjunct tool to identify a disease state in marine turtles. This study presents the first reference interval for plasma protein electrophoresis in the Indo-Pacific green sea turtle.
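Nonparametric reference intervals of this kind take the central 95% of the healthy sample (2.5th to 97.5th percentiles) and flag values falling outside. A toy sketch with invented numbers, not the turtle data, using a deliberately crude index-truncation percentile rule:

```python
def reference_interval(healthy, central=0.95):
    """Nonparametric reference interval: the central `central` fraction
    of the healthy sample, located by simple index truncation."""
    s = sorted(healthy)
    n = len(s)
    lo_idx = int((1 - central) / 2 * (n - 1))
    hi_idx = int((1 + central) / 2 * (n - 1))
    return s[lo_idx], s[hi_idx]

# Hypothetical "healthy" analyte values from 55 animals (evenly spaced
# here purely for reproducibility)
healthy = [40 + i * 0.5 for i in range(55)]
lo, hi = reference_interval(healthy)

# Flag hypothetical patient values that fall outside the interval
outside = [v for v in [35.0, 50.0, 70.0] if not (lo <= v <= hi)]
```

Clinical guidelines typically refine this with interpolated percentiles and CIs on the interval limits themselves; this sketch shows only the core idea.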

16. Building confidence and credibility amid growing model and computing complexity

Science.gov (United States)

Evans, K. J.; Mahajan, S.; Veneziani, C.; Kennedy, J. H.

2017-12-01

As global Earth system models are developed to answer an ever-wider range of science questions, software products that provide robust verification, validation, and evaluation must evolve in tandem. Measuring the degree to which these new models capture past behavior, predict the future, and provide the certainty of predictions is becoming ever more challenging for reasons that are generally well known yet still difficult to address. Two specific and divergent needs for analysis of the Accelerated Climate Model for Energy (ACME) model - but with a similar software philosophy - are presented to show how a model developer-based focus can address analysis needs during expansive model changes to provide greater fidelity and execute on multi-petascale computing facilities. A-PRIME is a python script-based quick-look overview of a fully-coupled global model configuration to determine quickly if it captures specific behavior before significant computer time and expense is invested. EVE is an ensemble-based software framework that focuses on verification of performance-based ACME model development, such as compiler or machine settings, to determine the equivalence of relevant climate statistics. The challenges and solutions for analysis of multi-petabyte output data are highlighted from the aspect of the scientist using the software, with the aim of fostering discussion and further input from the community about improving developer confidence and community credibility.

17. Preservice teachers' perceived confidence in teaching school violence prevention.

Science.gov (United States)

Kandakai, Tina L; King, Keith A

2002-01-01

To examine preservice teachers' perceived confidence in teaching violence prevention and the potential effect of violence-prevention training on preservice teachers' confidence in teaching violence prevention. Six Ohio universities participated in the study. More than 800 undergraduate and graduate students completed surveys. Violence-prevention training, area of certification, and location of student-teaching placement significantly influenced preservice teachers' perceived confidence in teaching violence prevention. Violence-prevention training positively influences preservice teachers' confidence in teaching violence prevention. The results suggest that such training should be considered as a requirement for teacher preparation programs.

18. The antecedents and belief-polarized effects of thought confidence.

Science.gov (United States)

Chou, Hsuan-Yi; Lien, Nai-Hwa; Liang, Kuan-Yu

2011-01-01

This article investigates 2 possible antecedents of thought confidence and explores the effects of confidence induced before or during ad exposure. The results of the experiments indicate that both consumers' dispositional optimism and spokesperson attractiveness have significant effects on consumers' confidence in thoughts that are generated after viewing the advertisement. Higher levels of thought confidence will influence the quality of the thoughts that people generate, lead to either positively or negatively polarized message processing, and therefore induce better or worse advertising effectiveness, depending on the valence of thoughts. The authors posit the belief-polarization hypothesis to explain these findings.

19. The second birth interval in Egypt: the role of contraception

OpenAIRE

Baschieri, Angela

2004-01-01

The paper discusses problems of model specification in birth interval analysis. Using Bongaarts's conceptual framework on the proximate determinants of fertility, the paper tests the hypothesis that all important variation in fertility is captured by differences in marriage, breastfeeding, contraception and induced abortion. The paper applies a discrete time hazard model to study the second birth interval using data from the Egyptian Demographic and Health Survey 2000 (EDHS), and the month by...

20. Natural markings and their use in determining calving intervals in ...

African Journals Online (AJOL)

1987-10-06

Since 1979, 245 right whales (excluding calves) have been individually identified in aerial ... From a simple model it is shown that ... were exposed, and notes taken of associated information ... predominantly male, or that their female component is ... In order to compare the incidence of partial albinism.

1. Probability and Confidence Trade-space (PACT) Evaluation: Accounting for Uncertainty in Sparing Assessments

Science.gov (United States)

Anderson, Leif; Box, Neil; Carter, Katrina; DiFilippo, Denise; Harrington, Sean; Jackson, David; Lutomski, Michael

2012-01-01

There are two general shortcomings to the current annual sparing assessment: (1) vehicle functions are currently assessed against confidence targets, which can be misleading, either overly conservative or overly optimistic; and (2) the current confidence levels are arbitrarily determined and do not account for epistemic uncertainty (lack of knowledge) in the ORU failure rate. Two major categories of uncertainty impact a sparing assessment: (a) aleatory uncertainty, the natural variability in the distribution of actual failures around a Mean Time Between Failures (MTBF); and (b) epistemic uncertainty, the lack of knowledge about the true value of an Orbital Replacement Unit's (ORU) MTBF. We propose an approach that revises confidence targets and accounts for both categories of uncertainty, which we call Probability and Confidence Trade-space (PACT) evaluation.
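The two uncertainty layers described above are naturally handled by a nested Monte Carlo simulation. The sketch below is an illustrative reading of the problem, not the PACT method itself; the lognormal MTBF distribution, function names, and all numbers are assumptions. The outer loop samples the uncertain MTBF (epistemic layer); the inner loop draws Poisson failure counts for that MTBF (aleatory layer) and checks the spares pool:

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's inversion method; fine for the small means used here."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def prob_spares_sufficient(spares, mission_hours, mtbf_samples,
                           n_aleatory=500, seed=1):
    """Nested Monte Carlo: outer loop over sampled MTBF values (epistemic),
    inner loop over Poisson failure counts for that MTBF (aleatory)."""
    rng = random.Random(seed)
    ok = trials = 0
    for mtbf in mtbf_samples:
        lam = mission_hours / mtbf  # expected failures over the mission
        for _ in range(n_aleatory):
            ok += poisson_draw(lam, rng) <= spares
            trials += 1
    return ok / trials

# Epistemic layer: we do not know the true ORU MTBF, so sample it
rng = random.Random(0)
mtbf_samples = [rng.lognormvariate(math.log(5000), 0.3) for _ in range(100)]
p = prob_spares_sufficient(spares=2, mission_hours=1000,
                           mtbf_samples=mtbf_samples)
```

Marginalizing over the MTBF distribution in this way yields a single probability of sufficiency that reflects both failure randomness and limited knowledge of the failure rate.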

2. Biomass Thermogravimetric Analysis: Uncertainty Determination Methodology and Sampling Maps Generation

Science.gov (United States)

Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín

2010-01-01

The objective of this study was to develop a methodology for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG), including moisture, volatile matter, fixed carbon and ash content. The sampling procedure of the TG analysis was of particular interest and was conducted with care. The results of the present study were compared to those of a prompt analysis, and no correlation between the mean values and maximum sampling errors of the two methods was observed. In general, low and acceptable levels of uncertainty and error were obtained, demonstrating that the properties evaluated by TG analysis were representative of the overall fuel composition. The accurate determination of the thermal properties of biomass with precise confidence intervals is of particular interest in energetic biomass applications. PMID:20717532
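The "maximum sampling error" reported for a property such as ash content can be read as the half-width of a confidence interval for a sample mean. A minimal sketch (the ash-content values and function name below are invented for illustration, not the study's data):

```python
import math
from statistics import mean, stdev

def mean_ci(samples, t_crit):
    """Two-sided confidence interval for a mean: x_bar +/- t * s / sqrt(n).
    t_crit must match the chosen confidence level and n-1 degrees of
    freedom (e.g. 2.262 for 95% with n = 10)."""
    n = len(samples)
    m = mean(samples)
    half_width = t_crit * stdev(samples) / math.sqrt(n)  # max sampling error
    return m - half_width, m + half_width

# Hypothetical replicate ash-content measurements (% dry basis)
ash = [4.9, 5.1, 5.0, 5.3, 4.8, 5.2, 5.0, 4.9, 5.1, 5.0]
lo, hi = mean_ci(ash, t_crit=2.262)
```

The half-width shrinks with the square root of the number of replicates, which is why the sampling procedure and replicate count matter as much as instrument precision.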

3. Experiential Education Builds Student Self-Confidence in Delivering Medication Therapy Management

Directory of Open Access Journals (Sweden)

Wendy M. Parker

2017-07-01

To determine the impact of advanced pharmacy practice experiences (APPE) on student self-confidence related to medication therapy management (MTM), fourth-year pharmacy students were surveyed pre/post APPE to: identify exposure to MTM learning opportunities, assess knowledge of the MTM core components, and assess self-confidence performing MTM services. An anonymous electronic questionnaire administered pre/post APPE captured demographics, factors predicted to impact student self-confidence (grade point average (GPA), work experience, exposure to MTM learning opportunities), MTM knowledge, and self-confidence conducting MTM on a 5-point Likert scale (1 = Not at all Confident; 5 = Extremely Confident). Sixty-two students (26% response rate) responded to the pre-APPE questionnaire and 44 (18%) to the post-APPE questionnaire. Over 90% demonstrated MTM knowledge and 68.2% completed MTM learning activities. APPE experiences significantly improved students' overall self-confidence (pre-APPE = 3.27 (0.85 SD), post-APPE = 4.02 (0.88), p < 0.001). Students engaging in MTM learning opportunities had higher self-confidence post-APPE (4.20 (0.71)) than those not reporting MTM learning opportunities (3.64 (1.08), p = 0.05). Post-APPE, fewer students reported that MTM was patient-centric or anticipated engaging in MTM post-graduation. APPE learning opportunities increased student self-confidence to provide MTM services. However, the reduction in anticipated engagement in MTM post-graduation, and in sensing the patient-centric nature of MTM practice, may reveal a gap between practice expectations and reality.

4. Reference Value Advisor: a new freeware set of macroinstructions to calculate reference intervals with Microsoft Excel.

Science.gov (United States)

Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine

2011-03-01

International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
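The nonparametric route the macros take when n ≥ 40 can be approximated roughly as follows. This is an illustrative Python sketch, not Reference Value Advisor's Excel algorithm; real implementations use interpolated rank formulas and exact rank tables for the confidence intervals of the limits, and the synthetic data here are invented:

```python
import random

def reference_interval(values):
    """Central 95% nonparametric reference interval (2.5th/97.5th
    percentiles by simple ranks; recommended only when n >= 40)."""
    s = sorted(values)
    n = len(s)
    return s[int(0.025 * n)], s[min(n - 1, int(0.975 * n))]

def bootstrap_limit_cis(values, n_boot=500, seed=42):
    """Approximate 90% CIs for each reference limit by bootstrapping,
    analogous to the CIs reported alongside the limits."""
    rng = random.Random(seed)
    lows, highs = [], []
    for _ in range(n_boot):
        resample = [rng.choice(values) for _ in values]
        rlo, rhi = reference_interval(resample)
        lows.append(rlo)
        highs.append(rhi)
    lows.sort()
    highs.sort()
    q = lambda xs, p: xs[int(p * (len(xs) - 1))]
    return (q(lows, 0.05), q(lows, 0.95)), (q(highs, 0.05), q(highs, 0.95))

# Synthetic analyte values from 120 healthy reference individuals
rng = random.Random(7)
values = [rng.gauss(100.0, 10.0) for _ in range(120)]
lo, hi = reference_interval(values)
```

The width of the CIs around each limit makes the point in the abstract concrete: with small reference sample groups, the limits themselves are estimated imprecisely, and no computation can compensate for an inadequately sized or poorly selected reference group.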

5. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

Science.gov (United States)

Raykov, Tenko; Marcoulides, George A.

2015-01-01

A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

6. Point and interval estimation of pollinator importance: a study using pollination data of Silene caroliniana.

Science.gov (United States)

Reynolds, Richard J; Fenster, Charles B

2008-05-01

Pollinator importance, the product of visitation rate and pollinator effectiveness, is a descriptive parameter of the ecology and evolution of plant-pollinator interactions. Naturally, sources of its variation should be investigated, but the SE of pollinator importance has never been properly reported. Here, a Monte Carlo simulation study and a result from mathematical statistics on the variance of the product of two random variables are used to estimate the mean and confidence limits of pollinator importance for three visitor species of the wildflower, Silene caroliniana. Both methods provided similar estimates of mean pollinator importance and its interval when the sample sizes of the visitation and effectiveness datasets were comparatively large. These approaches allowed us to determine that bumblebee importance was significantly greater than that of the clearwing hawkmoth, which in turn was significantly greater than that of the beefly. The methods could be used to statistically quantify temporal and spatial variation in the pollinator importance of particular visitor species. The approaches may be extended to estimate the variance of a product of more than two random variables. However, unless the distribution function of the resulting statistic is known, the simulation approach is preferable for calculating the parameter's confidence limits.
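The mathematical-statistics result referred to is commonly stated, for independent X and Y, as Var(XY) = Var(X)Var(Y) + Var(X)E[Y]^2 + Var(Y)E[X]^2 (Goodman's formula). A quick sketch checking it against simulation; the variable names and numbers are invented and do not reproduce the paper's data:

```python
import random
from statistics import pvariance

def var_product_independent(mean_x, var_x, mean_y, var_y):
    """Exact variance of a product of independent random variables:
    Var(XY) = Vx*Vy + Vx*my**2 + Vy*mx**2."""
    return var_x * var_y + var_x * mean_y ** 2 + var_y * mean_x ** 2

# X ~ visitation rate, Y ~ per-visit effectiveness (illustrative values)
rng = random.Random(3)
xs = [rng.gauss(10.0, 2.0) for _ in range(20000)]
ys = [rng.gauss(0.5, 0.1) for _ in range(20000)]
mc_var = pvariance([x * y for x, y in zip(xs, ys)])  # Monte Carlo estimate
exact = var_product_independent(10.0, 4.0, 0.5, 0.01)  # analytic value
```

The two estimates agree closely at this sample size, mirroring the abstract's finding that the analytic and simulation approaches converge when the underlying datasets are large.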

7. An analysis of confidence limit calculations used in AAPM Task Group No. 119

International Nuclear Information System (INIS)

Knill, Cory; Snyder, Michael

2011-01-01

Purpose: The report issued by AAPM Task Group No. 119 outlined a procedure for evaluating the effectiveness of IMRT commissioning. The procedure involves measuring gamma pass-rate indices for IMRT plans of standard phantoms and determining if the results fall within a confidence limit set by assuming normally distributed data. As stated in the TG report, the assumption of normally distributed gamma pass rates is a convenient approximation for commissioning purposes, but may not accurately describe the data. Here the authors attempt to better describe gamma pass-rate data by fitting it to different distributions. The authors then calculate updated confidence limits using those distributions and compare them to those derived using TG No. 119 method. Methods: Gamma pass-rate data from 111 head and neck patients are fitted using the TG No. 119 normal distribution, a truncated normal distribution, and a Weibull distribution. Confidence limits to 95% are calculated for each and compared. A more general analysis of the expected differences between the TG No. 119 method of determining confidence limits and a more time-consuming curve fitting method is performed. Results: The TG No. 119 standard normal distribution does not fit the measured data. However, due to the small range of measured data points, the inaccuracy of the fit has only a small effect on the final value of the confidence limits. The confidence limits for the 111 patient plans are within 0.1% of each other for all distributions. The maximum expected difference in confidence limits, calculated using TG No. 119's approximation and a truncated distribution, is 1.2%. Conclusions: A three-parameter Weibull probability distribution more accurately fits the clinical gamma index pass-rate data than the normal distribution adopted by TG No. 119. However, the sensitivity of the confidence limit on distribution fit is low outside of exceptional circumstances.
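For intuition, the normal-assumption confidence limit and a distribution-free alternative can be compared on synthetic pass-rate data. The numbers below are invented, and this sketch reduces the TG No. 119 procedure to a lower limit of mean minus 1.96 sigma on gamma pass rates; it is not the report's full formula or the paper's fitting code:

```python
from statistics import mean, stdev

def tg119_confidence_limit(pass_rates):
    """Normality-based limit: mean - 1.96*sigma gives the pass rate that
    ~95% of plans are expected to exceed, if the data are normal."""
    return mean(pass_rates) - 1.96 * stdev(pass_rates)

def empirical_confidence_limit(pass_rates, q=0.05):
    """Distribution-free alternative: the empirical 5th percentile."""
    s = sorted(pass_rates)
    return s[int(q * (len(s) - 1))]

# Hypothetical gamma pass rates (%) from 20 commissioning plans
pass_rates = [99.2, 98.7, 99.5, 97.9, 99.0, 98.4, 99.8, 98.9, 99.1, 98.2,
              99.6, 98.8, 99.3, 97.5, 99.4, 98.6, 99.0, 98.1, 99.7, 98.5]
```

On tightly clustered data like this, the two limits land within a fraction of a percent of each other, which is the paper's conclusion: the distribution fit matters little when the spread of measured pass rates is small.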

8. Direct Interval Forecasting of Wind Power

DEFF Research Database (Denmark)

Wan, Can; Xu, Zhao; Pinson, Pierre

2013-01-01

This letter proposes a novel approach to directly formulate the prediction intervals of wind power generation based on extreme learning machine and particle swarm optimization, where prediction intervals are generated through direct optimization of both the coverage probability and sharpness...

9. A note on birth interval distributions

International Nuclear Information System (INIS)

Shrestha, G.

1989-08-01

A considerable amount of work has been done on birth interval analysis in mathematical demography. This paper reviews some probability models related to inter-live birth intervals proposed by different researchers. (author). 14 refs

10. Identifying the bad guy in a lineup using confidence judgments under deadline pressure.

Science.gov (United States)

Brewer, Neil; Weber, Nathan; Wootton, David; Lindsay, D Stephen

2012-10-01

11. Optimal Data Interval for Estimating Advertising Response

OpenAIRE

Gerard J. Tellis; Philip Hans Franses

2006-01-01

The abundance of highly disaggregate data (e.g., at five-second intervals) raises the question of the optimal data interval to estimate advertising carryover. The literature assumes that (1) the optimal data interval is the interpurchase time, (2) too disaggregate data causes a disaggregation bias, and (3) recovery of true parameters requires assumption of the underlying advertising process. In contrast, we show that (1) the optimal data interval is what we call , (2) too disaggregate data do...

12. Dynamic visual noise reduces confidence in short-term memory for visual information.

Science.gov (United States)

2012-05-01

Previous research has shown effects of the visual interference technique, dynamic visual noise (DVN), on visual imagery, but not on visual short-term memory, unless retention of precise visual detail is required. This study tested the prediction that DVN does also affect retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention interval interference conditions (DVN, static visual noise and no interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.

13. Understanding public confidence in government to prevent terrorist attacks.

Energy Technology Data Exchange (ETDEWEB)

Baldwin, T. E.; Ramaprasad, A.; Samsa, M. E.; Decision and Information Sciences; Univ. of Illinois at Chicago

2008-04-02

A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode its confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the principal metrics used to assess the consequences of terrorist attacks. Hence, a detailed understanding of the variations in public confidence among individuals, terrorist event types, and as a function of time is critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data was collected from three groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery explosion attack, and cyber intrusions on financial institutions, resulting in identity theft. Our findings are: (a) although the aggregate confidence level is low, there are optimists and pessimists; (b) the subjects are discriminating in interpreting the nature of a terrorist attack, the time horizon, and its impact; (c) confidence recovery after a terrorist event has an incubation period; and (d) the patterns of recovery of confidence of the optimists and the pessimists are different. These findings can affect the strategy and policies to manage public confidence after a terrorist event.

14. Animal Spirits and Extreme Confidence: No Guts, No Glory?

NARCIS (Netherlands)

M.G. Douwens-Zonneveld (Mariska)

2012-01-01

textabstractThis study investigates to what extent extreme confidence of either management or security analysts may impact financial or operating performance. We construct a multidimensional degree of company confidence measure from a wide range of corporate decisions. We empirically test this

15. Trust, confidence, and the 2008 global financial crisis.

Science.gov (United States)

Earle, Timothy C

2009-06-01

The 2008 global financial crisis has been compared to a "once-in-a-century credit tsunami," a disaster in which the loss of trust and confidence played key precipitating roles and the recovery from which will require the restoration of these crucial factors. Drawing on the analogy between the financial crisis and environmental and technological hazards, recent research on the role of trust and confidence in the latter is used to provide a perspective on the former. Whereas "trust" and "confidence" are used interchangeably and without explicit definition in most discussions of the financial crisis, this perspective uses the TCC model of cooperation to clearly distinguish between the two and to demonstrate how this distinction can lead to an improved understanding of the crisis. The roles of trust and confidence-both in precipitation and in possible recovery-are discussed for each of the three major sets of actors in the crisis, the regulators, the banks, and the public. The roles of trust and confidence in the larger context of risk management are also examined; trust being associated with political approaches, confidence with technical. Finally, the various stances that government can take with regard to trust-such as supportive or skeptical-are considered. Overall, it is argued that a clear understanding of trust and confidence and a close examination of the specific, concrete circumstances of a crisis-revealing when either trust or confidence is appropriate-can lead to useful insights for both recovery and prevention of future occurrences.

16. True and False Memories, Parietal Cortex, and Confidence Judgments

Science.gov (United States)

Urgolites, Zhisen J.; Smith, Christine N.; Squire, Larry R.

2015-01-01

Recent studies have asked whether activity in the medial temporal lobe (MTL) and the neocortex can distinguish true memory from false memory. A frequent complication has been that the confidence associated with correct memory judgments (true memory) is typically higher than the confidence associated with incorrect memory judgments (false memory).…

17. The Metamemory Approach to Confidence: A Test Using Semantic Memory

Science.gov (United States)

Brewer, William F.; Sampaio, Cristina

2012-01-01

The metamemory approach to memory confidence was extended and elaborated to deal with semantic memory tasks. The metamemory approach assumes that memory confidence is based on the products and processes of a completed memory task, as well as metamemory beliefs that individuals have about how their memory products and processes relate to memory…

18. Confidence Sharing in the Vocational Counselling Interview: Emergence and Repercussions

Science.gov (United States)

Olry-Louis, Isabelle; Bremond, Capucine; Pouliot, Manon

2012-01-01

Confidence sharing is an asymmetrical dialogic episode to which both parties consent, in which one reveals something personal to the other who participates in the emergence and unfolding of the confidence. We describe how this is achieved at a discursive level within vocational counselling interviews. Based on a corpus of 64 interviews, we analyse…

19. A scale for consumer confidence in the safety of food

NARCIS (Netherlands)

Jonge, de J.; Trijp, van J.C.M.; Lans, van der I.A.; Renes, R.J.; Frewer, L.J.

2008-01-01

The aim of this study was to develop and validate a scale to measure general consumer confidence in the safety of food. Results from exploratory and confirmatory analyses indicate that general consumer confidence in the safety of food consists of two distinct dimensions, optimism and pessimism,

20. Confidence Scoring of Speaking Performance: How Does Fuzziness become Exact?

Science.gov (United States)

Jin, Tan; Mak, Barley; Zhou, Pei

2012-01-01

The fuzziness of assessing second language speaking performance raises two difficulties in scoring speaking performance: "indistinction between adjacent levels" and "overlap between scales". To address these two problems, this article proposes a new approach, "confidence scoring", to deal with such fuzziness, leading to "confidence" scores between…

1. Monitoring consumer confidence in food safety: an exploratory study

NARCIS (Netherlands)

Jonge, de J.; Frewer, L.J.; Trijp, van J.C.M.; Renes, R.J.; Wit, de W.; Timmers, J.C.M.

2004-01-01

In response to the potential for negative economic and societal effects resulting from a low level of consumer confidence in food safety, it is important to know how confidence is potentially influenced by external events. The aim of this article is to describe the development of a monitor

2. Modeling Confidence and Response Time in Recognition Memory

Science.gov (United States)

Ratcliff, Roger; Starns, Jeffrey J.

2009-01-01

A new model for confidence judgments in recognition memory is presented. In the model, the match between a single test item and memory produces a distribution of evidence, with better matches corresponding to distributions with higher means. On this match dimension, confidence criteria are placed, and the areas between the criteria under the…

3. Music educators : their artistry and self-confidence

NARCIS (Netherlands)

Lion-Slovak, Brigitte; Stöger, Christine; Smilde, Rineke; Malmberg, Isolde; de Vugt, Adri

2013-01-01

How does artistic identity influence the self-confidence of music educators? What is the interconnection between the artistic and the teacher identity? What is actually meant by artistic identity in music education? What is a fruitful environment for the development of artistic self-confidence of

4. To protect and serve: Restoring public confidence in the SAPS ...

African Journals Online (AJOL)

Persistent incidents of brutality, criminal behaviour and abuse of authority by members of South Africa's police agencies have serious implications for public trust and confidence in the police. A decline in trust and confidence in the police is inevitably harmful to the ability of the government to reduce crime and improve public ...

5. Improved realism of confidence for an episodic memory event

Directory of Open Access Journals (Sweden)

Sandra Buratti

2012-09-01

6. Variance misperception explains illusions of confidence in simple perceptual decisions

NARCIS (Netherlands)

Zylberberg, Ariel; Roelfsema, Pieter R.; Sigman, Mariano

2014-01-01

Confidence in a perceptual decision is a judgment about the quality of the sensory evidence. The quality of the evidence depends not only on its strength ('signal') but critically on its reliability ('noise'), but the separate contribution of these quantities to the formation of confidence judgments

7. On-line confidence monitoring during decision making.

Science.gov (United States)

Dotan, Dror; Meyniel, Florent; Dehaene, Stanislas

2018-02-01

Humans can readily assess their degree of confidence in their decisions. Two models of confidence computation have been proposed: post hoc computation using post-decision variables and heuristics, versus online computation using continuous assessment of evidence throughout the decision-making process. Here, we arbitrate between these theories by continuously monitoring finger movements during a manual sequential decision-making task. Analysis of finger kinematics indicated that subjects kept separate online records of evidence and confidence: finger deviation continuously reflected the ongoing accumulation of evidence, whereas finger speed continuously reflected the momentary degree of confidence. Furthermore, end-of-trial finger speed predicted the post-decisional subjective confidence rating. These data indicate that confidence is computed on-line, throughout the decision process. Speed-confidence correlations were previously interpreted as a post-decision heuristic, whereby slow decisions decrease subjective confidence, but our results suggest an adaptive mechanism that involves the opposite causality: by slowing down when unconfident, participants gain time to improve their decisions. Copyright © 2017 Elsevier B.V. All rights reserved.

8. A simultaneous confidence band for sparse longitudinal regression

KAUST Repository

Ma, Shujie; Yang, Lijian; Carroll, Raymond J.

2012-01-01

Functional data analysis has received considerable recent attention and a number of successful applications have been reported. In this paper, asymptotically simultaneous confidence bands are obtained for the mean function of the functional regression model, using piecewise constant spline estimation. Simulation experiments corroborate the asymptotic theory. The confidence band procedure is illustrated by analyzing CD4 cell counts of HIV infected patients.

9. What are effective techniques for improving public confidence or restoring lost confidence in a regulator?

International Nuclear Information System (INIS)

Harbitz, O.; Isaksson, R.

2006-01-01

The conclusions and recommendations of this session can be summarized as follows. On restoring lost confidence: it is a hard, long-lasting process; the strategy should be maximum transparency; listen, be open, give phone numbers, etc.; rebuild trust through frequent communication, being there, and being open and transparent; do not be too defensive, and if things could be done better, say so; technical staff and public affairs staff should work together from the beginning and answer all questions; classifications, actions and instructions that differ greatly from earlier ones must be well explained and motivated, and still cause many problems; issues may turn out to be political; communicative work at an early stage saves work later; communication experts must work shoulder to shoulder with other staff. On handling emergencies in general, some recipes proposed are: it is better to over-react than to under-react; do not avoid extreme actions (hit hard, hit fast); base your decisions on strict principles, the first being public safety first; when you are realizing plan A, you must have a plan B in your pocket; be transparent from the beginning; in crisis communication, communicate early and frequently; people need to see political leaders, someone who is making decisions; technical experts are needed but are not enough. On how to involve stakeholders and the public in decision making, the recommendations are: this requires a new kind of thinking, which is demanding for an organisation; go to the local level, meet local people, and speak a language people understand; you have to start from the very beginning by introducing yourself, telling who you are and why you are there. (authors)

10. An Adequate First Order Logic of Intervals

DEFF Research Database (Denmark)

Chaochen, Zhou; Hansen, Michael Reichhardt

1998-01-01

This paper introduces left and right neighbourhoods as primitive interval modalities to define other unary and binary modalities of intervals in a first order logic with interval length. A complete first order logic for the neighbourhood modalities is presented. It is demonstrated how the logic can support formal specification and verification of liveness and fairness, and also of various notions of real analysis.

11. Consistency and refinement for Interval Markov Chains

DEFF Research Database (Denmark)

Delahaye, Benoit; Larsen, Kim Guldstrand; Legay, Axel

2012-01-01

Interval Markov Chains (IMC), or Markov Chains with probability intervals in the transition matrix, are the base of a classic specification theory for probabilistic systems [18]. The standard semantics of IMCs assigns to a specification the set of all Markov Chains that satisfy its interval...

12. Multivariate interval-censored survival data

DEFF Research Database (Denmark)

Hougaard, Philip

2014-01-01

Interval censoring means that an event time is only known to lie in an interval (L,R], with L the last examination time before the event, and R the first after. In the univariate case, parametric models are easily fitted, whereas for non-parametric models, the mass is placed on some intervals, de...

13. Family Health Histories and Their Impact on Retirement Confidence.

Science.gov (United States)

Zick, Cathleen D; Mayer, Robert N; Smith, Ken R

2015-08-01

Retirement confidence is a key social barometer. In this article, we examine how personal and parental health histories relate to working-age adults' feelings of optimism or pessimism about their overall retirement prospects. This study links survey data on retirement planning with information on respondents' own health histories and those of their parents. The multivariate models control for the respondents' socio-demographic and economic characteristics along with past retirement planning activities when estimating the relationships between family health histories and retirement confidence. Retirement confidence is inversely related to parental history of cancer and cardiovascular disease but not to personal health history. In contrast, retirement confidence is positively associated with both parents being deceased. As members of the public become increasingly aware of how genetics and other family factors affect intergenerational transmission of chronic diseases, it is likely that the link between family health histories and retirement confidence will intensify. © The Author(s) 2015.

14. Multivoxel neurofeedback selectively modulates confidence without changing perceptual performance

Science.gov (United States)

Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Kawato, Mitsuo; Lau, Hakwan

2016-01-01

A central controversy in metacognition studies concerns whether subjective confidence directly reflects the reliability of perceptual or cognitive processes, as suggested by normative models based on the assumption that neural computations are generally optimal. This view enjoys popularity in the computational and animal literatures, but it has also been suggested that confidence may depend on a late-stage estimation dissociable from perceptual processes. Yet, at least in humans, experimental tools have lacked the power to resolve these issues convincingly. Here, we overcome this difficulty by using the recently developed method of decoded neurofeedback (DecNef) to systematically manipulate multivoxel correlates of confidence in a frontoparietal network. We report that bi-directional changes in confidence do not affect perceptual accuracy. Further psychophysical analyses rule out accounts based on simple shifts in reporting strategy. Our results provide clear neuroscientific evidence for the systematic dissociation between confidence and perceptual performance, and thereby challenge current theoretical thinking. PMID:27976739

15. Profiling of RNA degradation for estimation of post mortem [corrected] interval.

Directory of Open Access Journals (Sweden)

Fernanda Sampaio-Silva

An estimation of the post mortem interval (PMI) is frequently touted as the Holy Grail of forensic pathology. During the first hours after death, PMI estimation is dependent on the rate of physically observable modifications, including algor, rigor and livor mortis. However, these assessment methods are still largely unreliable and inaccurate. Alternatively, RNA has been put forward as a valuable tool in forensic pathology, namely to identify body fluids, estimate the age of biological stains and study the mechanism of death. Nevertheless, attempts to find a correlation between RNA degradation and PMI have been unsuccessful. The aim of this study was to characterize RNA degradation in different post mortem tissues in order to develop a mathematical model that can be used as a coadjuvant method for more accurate PMI determination. For this purpose, we performed an eleven-hour kinetic analysis of total RNA extracted from murine visceral and muscle tissues. The degradation profile of total RNA and the expression levels of several reference genes were analyzed by quantitative real-time PCR. A quantitative analysis of normalized transcript levels in the former tissues allowed the identification of four quadriceps muscle genes (Actb, Gapdh, Ppia and Srp72) that were found to significantly correlate with PMI. These results allowed us to develop a mathematical model with predictive value for estimation of the PMI (confidence interval of ±51 minutes at 95%) that can become an important complementary tool to traditional methods.

16. Flexible regression models for estimating postmortem interval (PMI) in forensic medicine.

Science.gov (United States)

Muñoz Barús, José Ignacio; Febrero-Bande, Manuel; Cadarso-Suárez, Carmen

2008-10-30

Correct determination of time of death is an important goal in forensic medicine. Numerous methods have been described for estimating postmortem interval (PMI), but most are imprecise, poorly reproducible and/or have not been validated with real data. In recent years, however, some progress in PMI estimation has been made, notably through the use of new biochemical methods for quantifying relevant indicator compounds in the vitreous humour. The best, but unverified, results have been obtained with [K+] and hypoxanthine [Hx], using simple linear regression (LR) models. The main aim of this paper is to offer more flexible alternatives to LR, such as generalized additive models (GAMs) and support vector machines (SVMs) in order to obtain improved PMI estimates. The present study, based on detailed analysis of [K+] and [Hx] in more than 200 vitreous humour samples from subjects with known PMI, compared classical LR methodology with GAM and SVM methodologies. Both proved better than LR for estimation of PMI. SVM showed somewhat greater precision than GAM, but GAM offers a readily interpretable graphical output, facilitating understanding of findings by legal professionals; there are thus arguments for using both types of models. R code for these methods is available from the authors, permitting accurate prediction of PMI from vitreous humour [K+], [Hx] and [U], with confidence intervals and graphical output provided. Copyright 2008 John Wiley & Sons, Ltd.
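The authors' GAM and SVM code is available on request and is not reproduced here. As a stand-in illustration of why a flexible fit can beat simple linear regression when the biomarker-PMI relation is nonlinear, here is a toy comparison of an OLS line against a crude k-nearest-neighbour smoother on synthetic data (neither the authors' method nor their data):

```python
import math
import random

random.seed(1)
# Synthetic nonlinear relation between a biomarker level and the response.
xs = [i / 50 for i in range(100)]
ys = [10 * math.sqrt(x) + random.gauss(0, 0.3) for x in xs]
data = list(zip(xs, ys))
n = len(xs)

# Ordinary least-squares straight line.
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def local_mean(x0, k=9):
    """Crude flexible fit: mean response of the k nearest x-values."""
    nearest = sorted(data, key=lambda p: abs(p[0] - x0))[:k]
    return sum(y for _, y in nearest) / k

mse_lr = sum((y - (a + b * x)) ** 2 for x, y in data) / n
mse_flex = sum((y - local_mean(x)) ** 2 for x, y in data) / n
```

On curved data the local smoother tracks the shape that the straight line systematically misses, which is the same motivation the paper gives for preferring GAMs and SVMs over linear regression.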

17. Maternal Confidence for Physiologic Childbirth: A Concept Analysis.

Science.gov (United States)

Neerland, Carrie E

2018-06-06

Confidence is a term often used in research literature and consumer media in relation to birth, but maternal confidence has not been clearly defined, especially as it relates to physiologic labor and birth. The aim of this concept analysis was to define maternal confidence in the context of physiologic labor and childbirth. Rodgers' evolutionary method was used to identify attributes, antecedents, and consequences of maternal confidence for physiologic birth. Databases searched included Ovid MEDLINE, CINAHL, PsycINFO, and Sociological Abstracts from the years 1995 to 2015. A total of 505 articles were retrieved, using the search terms pregnancy, obstetric care, prenatal care, and self-efficacy and the keyword confidence. Articles were identified for in-depth review and inclusion based on whether the term confidence was used or assessed in relationship to labor and/or birth. In addition, a hand search of the reference lists of the selected articles was performed. Twenty-four articles were reviewed in this concept analysis. We define maternal confidence for physiologic birth as a woman's belief that physiologic birth can be achieved, based on her view of birth as a normal process and her belief in her body's innate ability to birth, which is supported by social support, knowledge, and information founded on a trusted relationship with a maternity care provider in an environment where the woman feels safe. This concept analysis advances the concept of maternal confidence for physiologic birth and provides new insight into how women's confidence for physiologic birth might be enhanced during the prenatal period. Further investigation of confidence for physiologic birth across different cultures is needed to identify cultural differences in constructions of the concept. © 2018 by the American College of Nurse-Midwives.

18. Complete Blood Count Reference Intervals for Healthy Han Chinese Adults

Science.gov (United States)

Mu, Runqing; Guo, Wei; Qiao, Rui; Chen, Wenxiang; Jiang, Hong; Ma, Yueyun; Shang, Hong

2015-01-01

Background Complete blood count (CBC) reference intervals are important to diagnose diseases, screen blood donors, and assess overall health. However, current reference intervals established by older instruments and technologies and those from American and European populations are not suitable for Chinese samples due to ethnic, dietary, and lifestyle differences. The aim of this multicenter collaborative study was to establish CBC reference intervals for healthy Han Chinese adults. Methods A total of 4,642 healthy individuals (2,136 males and 2,506 females) were recruited from six clinical centers in China (Shenyang, Beijing, Shanghai, Guangzhou, Chengdu, and Xi’an). Blood samples collected in K2EDTA anticoagulant tubes were analyzed. Analysis of variance was performed to determine differences in consensus intervals according to the use of data from the combined sample and selected samples. Results Median and mean platelet counts from the Chengdu center were significantly lower than those from other centers. Red blood cell count (RBC), hemoglobin (HGB), and hematocrit (HCT) values were higher in males than in females at all ages. Other CBC parameters showed no significant instrument-, region-, age-, or sex-dependent difference. Thalassemia carriers were found to affect the lower or upper limit of different RBC profiles. Conclusion We were able to establish consensus intervals for CBC parameters in healthy Han Chinese adults. RBC, HGB, and HCT intervals were established for each sex. The reference interval for platelets for the Chengdu center should be established independently. PMID:25769040

19. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

Science.gov (United States)

2014-05-01

To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement.
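The audit's power figures can be reproduced in spirit with a standard normal-approximation power calculation for a two-sided, two-sample test; the sample size used below (n = 30 per group) is an arbitrary illustration, not a value from the review:

```python
import math
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test for standardized
    effect size d (Cohen's d), using the normal approximation."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = d * math.sqrt(n_per_group / 2)   # noncentrality parameter
    return (1 - z.cdf(z_crit - ncp)) + z.cdf(-z_crit - ncp)

# Power at n = 30 per group for small/medium/large effects (d = .2, .5, .8):
powers = {d: round(power_two_sample(d, 30), 2) for d in (0.2, 0.5, 0.8)}
```

With 30 per group, power is high for large effects but poor for small ones, which mirrors the review's finding of median power .40 for small effects versus 1.00 for large effects.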

20. On the form of ROCs constructed from confidence ratings.

Science.gov (United States)

Malmberg, Kenneth J

2002-03-01

A classical question for memory researchers is whether memories vary in an all-or-nothing, discrete manner (e.g., stored vs. not stored, recalled vs. not recalled), or whether they vary along a continuous dimension (e.g., strength, similarity, or familiarity). For yes-no classification tasks, continuous- and discrete-state models predict nonlinear and linear receiver operating characteristics (ROCs), respectively (D. M. Green & J. A. Swets, 1966; N. A. Macmillan & C. D. Creelman, 1991). Recently, several authors have assumed that these predictions are generalizable to confidence ratings tasks (J. Qin, C. L. Raye, M. K. Johnson, & K. J. Mitchell, 2001; S. D. Slotnick, S. A. Klein, C. S. Dodson, & A. P. Shimamura, 2000; and A. P. Yonelinas, 1999). This assumption is shown to be unwarranted: discrete-state ratings models predict both linear and nonlinear ROCs. The critical factor determining the form of the discrete-state ROC is the response strategy adopted by the classifier.
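The continuous-state side of this debate can be made concrete. Under the equal-variance Gaussian signal detection model, each confidence criterion yields one ROC point; the probability ROC is curvilinear, while the z-transformed ROC is a straight line with unit slope. A minimal sketch (the d' and criterion values are arbitrary illustrations):

```python
from statistics import NormalDist

# Equal-variance Gaussian SDT: each confidence criterion c yields one ROC
# point (false-alarm rate, hit rate). The probability ROC is curved, but the
# z-transformed ROC is a straight line with slope 1 and intercept d'.
d_prime = 1.0
criteria = [-1.0, -0.5, 0.0, 0.5, 1.0]   # hypothetical rating cut-points
z = NormalDist()
roc = [(1 - z.cdf(c), 1 - z.cdf(c - d_prime)) for c in criteria]

z_roc = [(z.inv_cdf(fa), z.inv_cdf(hit)) for fa, hit in roc]
slopes = [(h2 - h1) / (f2 - f1)
          for (f1, h1), (f2, h2) in zip(z_roc, z_roc[1:])]
```

The paper's point is that a discrete-state model with probabilistic rating strategies can mimic either form, so ROC shape alone does not adjudicate between the accounts.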

1. STUDY ON THE LEVEL OF CONFIDENCE THAT ROMANIAN CONSUMERS HAVE REGARDING THE ORGANIC PRODUCTS

Directory of Open Access Journals (Sweden)

Narcis Alexandru BOZGA

2015-04-01

Full Text Available Organic agriculture is a domain that is growing rapidly, both worldwide and in Romania. However, few studies use a methodology able to offer a definite and appropriate image of the Romanian market for organic products. In this respect, we considered it relevant to conduct market research offering a broad picture of that market. The present study briefly presents findings from this research concerning the level of confidence that Romanian consumers have in organic products and the way in which this confidence may influence purchasing behavior. Among the most important conclusions: a large number of Romanian consumers have a low level of confidence in organic products; the decision to buy organic products is strongly influenced by the confidence expressed by the consumer; and a lack of confidence in organic products is one of the main reasons for not buying them, in some cases weighing more heavily than the high price. After deeper analysis, the final conclusion is that, at least partially, the low level of confidence in organic products is driven by confusion and low levels of information on the one hand, and on the other by some producers' practices that do not seem to comply with community certification norms.

2. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

Science.gov (United States)

Dobolyi, David G; Dodson, Chad S

2013-12-01

Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.

3. Disconnections Between Teacher Expectations and Student Confidence in Bioethics

Science.gov (United States)

Hanegan, Nikki L.; Price, Laura; Peterson, Jeremy

2008-09-01

This study examines how student practice of scientific argumentation using socioscientific bioethics issues affects both teacher expectations of students’ general performance and student confidence in their own work. When teachers use bioethical issues in the classroom students can gain not only biology content knowledge but also important decision-making skills. Learning bioethics through scientific argumentation gives students opportunities to express their ideas, formulate educated opinions and value others’ viewpoints. Research has shown that science teachers’ expectations of student success and knowledge directly influence student achievement and confidence levels. Our study analyzes pre-course and post-course surveys completed by students enrolled in a university-level bioethics course (n = 111) and by faculty in the College of Biology and Agriculture (n = 34) based on their perceptions of student confidence. Additionally, student data were collected from classroom observations and interviews. Data analysis showed a disconnect between faculty and student perceptions of confidence for both knowledge and the use of science argumentation. Student reports of their confidence levels regarding various bioethical issues were higher than faculty reports. A further disconnect showed up between students’ preferred learning styles and the general faculty’s common teaching methods; students learned more by practicing scientific argumentation than listening to traditional lectures. Students who completed a bioethics course that included practice in scientific argumentation significantly increased their confidence levels. This study suggests that professors’ expectations and teaching styles influence student confidence levels in both knowledge and scientific argumentation.

4. Sex differences in confidence influence patterns of conformity.

Science.gov (United States)

Cross, Catharine P; Brown, Gillian R; Morgan, Thomas J H; Laland, Kevin N

2017-11-01

Lack of confidence in one's own ability can increase the likelihood of relying on social information. Sex differences in confidence have been extensively investigated in cognitive tasks, but implications for conformity have not been directly tested. Here, we tested the hypothesis that, in a task that shows sex differences in confidence, an indirect effect of sex on social information use will also be evident. Participants (N = 168) were administered a mental rotation (MR) task or a letter transformation (LT) task. After providing an answer, participants reported their confidence before seeing the responses of demonstrators and being allowed to change their initial answer. In the MR, but not the LT, task, women showed lower levels of confidence than men, and confidence mediated an indirect effect of sex on the likelihood of switching answers. These results provide novel, experimental evidence that confidence is a general explanatory mechanism underpinning susceptibility to social influences. Our results have implications for the interpretation of the wider literature on sex differences in conformity. © 2016 The British Psychological Society.

5. Confidence in Alternative Dispute Resolution: Experience from Switzerland

Directory of Open Access Journals (Sweden)

Christof Schwenkel

2014-06-01

Full Text Available Alternative Dispute Resolution plays a crucial role in the justice system of Switzerland. With the unified Swiss Code of Civil Procedure, it is required that each litigation session shall be preceded by an attempt at conciliation before a conciliation authority. However, there has been little research on conciliation authorities and the public's perception of the authorities. This paper looks at public confidence in conciliation authorities and provides results of a survey conducted with more than 3,400 participants. This study found that public confidence in Swiss conciliation authorities is generally high, exceeds the ratings for confidence in cantonal governments and parliaments, but is lower than confidence in courts. Since the institutional models of the conciliation authorities (meaning the organization of the authorities and the selection of the conciliators) differ widely between the 26 Swiss cantons, the influence of the institutional models on public confidence is analyzed. Contrary to assumptions based on New Institutionalism approaches, this study reports that the institutional models do not impact public confidence. Also, the relationship between participation in an election of justices of the peace or conciliators and public confidence in these authorities is found to be at most very limited (and negative). Similar to common findings on courts, the results show that general contacts with conciliation authorities decrease public confidence in these institutions whereas a positive experience with a conciliation authority leads to more confidence. The study was completed as part of the research project 'Basic Research into Court Management in Switzerland', supported by the Swiss National Science Foundation (SNSF). Christof Schwenkel is a PhD student at the University of Lucerne and a research associate and project manager at Interface Policy Studies. A first version of this article was presented at the 2013 European Group for Public

6. A Poisson process approximation for generalized K-5 confidence regions

Science.gov (United States)

Arsham, H.; Miller, D. R.

1982-01-01

One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
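The paper's generalized Kolmogorov-Smirnov bands have variable width and Poisson-approximated critical values, which are not reproduced here. As a simpler, related construction, the one-sided Dvoretzky-Kiefer-Wolfowitz (DKW) inequality gives a fixed-width 95% lower confidence band for a CDF from the empirical CDF:

```python
import math
import random

random.seed(0)
sample = sorted(random.random() for _ in range(200))   # 200 draws from U(0, 1)
n = len(sample)
alpha = 0.05
# One-sided DKW bound: with probability >= 1 - alpha, F(t) >= ecdf(t) - eps
# holds simultaneously for all t.
eps = math.sqrt(math.log(1 / alpha) / (2 * n))

def ecdf(t):
    """Empirical CDF: fraction of observations at or below t."""
    return sum(x <= t for x in sample) / n

# Lower confidence band evaluated on a grid over [0, 1].
band = [(t / 10, max(0.0, ecdf(t / 10) - eps)) for t in range(11)]
```

Unlike the paper's construction, this band has constant width everywhere; the generalized K-S distance is precisely what lets the band narrow in the tails.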

7. Serial binary interval ratios improve rhythm reproduction.

Science.gov (United States)

Wu, Xiang; Westanmo, Anders; Zhou, Liang; Pan, Junhao

2013-01-01

Musical rhythm perception is a natural human ability that involves complex cognitive processes. Rhythm refers to the organization of events in time, and musical rhythms have an underlying hierarchical metrical structure. The metrical structure induces the feeling of a beat and the extent to which a rhythm induces the feeling of a beat is referred to as its metrical strength. Binary ratios are the most frequent interval ratio in musical rhythms. Rhythms with hierarchical binary ratios are better discriminated and reproduced than rhythms with hierarchical non-binary ratios. However, it remains unclear whether a superiority of serial binary over non-binary ratios in rhythm perception and reproduction exists. In addition, how different types of serial ratios influence the metrical strength of rhythms remains to be elucidated. The present study investigated serial binary vs. non-binary ratios in a reproduction task. Rhythms formed with exclusively binary (1:2:4:8), non-binary integer (1:3:5:6), and non-integer (1:2.3:5.3:6.4) ratios were examined within a constant meter. The results showed that the 1:2:4:8 rhythm type was more accurately reproduced than the 1:3:5:6 and 1:2.3:5.3:6.4 rhythm types, and the 1:2.3:5.3:6.4 rhythm type was more accurately reproduced than the 1:3:5:6 rhythm type. Further analyses showed that reproduction performance was better predicted by the distribution pattern of event occurrences within an inter-beat interval, than by the coincidence of events with beats, or the magnitude and complexity of interval ratios. Whereas rhythm theories and empirical data emphasize the role of the coincidence of events with beats in determining metrical strength and predicting rhythm performance, the present results suggest that rhythm processing may be better understood when the distribution pattern of event occurrences is taken into account. These results provide new insights into the mechanisms underlying musical rhythm perception.

9. Incidence of interval cancers in faecal immunochemical test colorectal screening programmes in Italy.

Science.gov (United States)

Giorgi Rossi, Paolo; Carretta, Elisa; Mangone, Lucia; Baracco, Susanna; Serraino, Diego; Zorzi, Manuel

2018-03-01

Objective In Italy, colorectal screening programmes using the faecal immunochemical test from ages 50 to 69 every two years have been in place since 2005. We aimed to measure the incidence of interval cancers in the two years after a negative faecal immunochemical test, and compare this with the pre-screening incidence of colorectal cancer. Methods Using data on colorectal cancers diagnosed in Italy from 2000 to 2008 collected by cancer registries in areas with active screening programmes, we identified cases that occurred within 24 months of negative screening tests. We used the number of tests with a negative result as a denominator, grouped by age and sex. Proportional incidence was calculated for the first and second year after screening. Results Among 579,176 and 226,738 persons with negative test results followed up at 12 and 24 months, respectively, we identified 100 interval cancers in the first year and 70 in the second year. The proportional incidence was 13% (95% confidence interval 10-15) and 23% (95% confidence interval 18-25), respectively. The estimate for the two-year incidence is 18%, which was slightly higher in females (22%; 95% confidence interval 17-26), and for proximal colon (22%; 95% confidence interval 16-28). Conclusion The incidence of interval cancers in the two years after a negative faecal immunochemical test in routine population-based colorectal cancer screening was less than one-fifth of the expected incidence. This is direct evidence that the faecal immunochemical test-based screening programme protocol has high sensitivity for cancers that will become symptomatic.
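Proportional incidence is an observed/expected ratio. Assuming Poisson-distributed counts, and back-computing the expected count from the reported 13% first-year figure (an assumption made only for this illustration; the paper's exact denominators differ), a log-normal approximate 95% CI can be sketched as:

```python
import math

# Observed interval cancers in year 1 (from the abstract) and the expected
# count implied by the reported 13% proportional incidence (back-computed).
observed = 100
expected = observed / 0.13

ratio = observed / expected
# Poisson log-normal approximation for the CI of an observed/expected ratio:
se_log = 1 / math.sqrt(observed)
lo = ratio * math.exp(-1.96 * se_log)
hi = ratio * math.exp(+1.96 * se_log)
```

Under these assumptions the interval lands close to the 10-15% reported in the abstract, though the published limits come from the study's own data.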

10. Decoded fMRI neurofeedback can induce bidirectional confidence changes within single participants.

Science.gov (United States)

Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Lau, Hakwan; Kawato, Mitsuo

2017-04-01

Neurofeedback studies using real-time functional magnetic resonance imaging (rt-fMRI) have recently incorporated the multi-voxel pattern decoding approach, allowing for fMRI to serve as a tool to manipulate fine-grained neural activity embedded in voxel patterns. Because of its tremendous potential for clinical applications, certain questions regarding decoded neurofeedback (DecNef) must be addressed. Specifically, can the same participants learn to induce neural patterns in opposite directions in different sessions? If so, how does previous learning affect subsequent induction effectiveness? These questions are critical because neurofeedback effects can last for months, but the short- to mid-term dynamics of such effects are unknown. Here we employed a within-subjects design, where participants underwent two DecNef training sessions to induce behavioural changes of opposing directionality (up or down regulation of perceptual confidence in a visual discrimination task), with the order of training counterbalanced across participants. Behavioral results indicated that the manipulation was strongly influenced by the order and the directionality of neurofeedback training. We applied nonlinear mathematical modeling to parametrize four main consequences of DecNef: main effect of change in confidence, strength of down-regulation of confidence relative to up-regulation, maintenance of learning effects, and anterograde learning interference. Modeling results revealed that DecNef successfully induced bidirectional confidence changes in different sessions within single participants. Furthermore, the effect of up- compared to down-regulation was more prominent, and confidence changes (regardless of the direction) were largely preserved even after a week-long interval. Lastly, the effect of the second session was markedly diminished as compared to the effect of the first session, indicating strong anterograde learning interference. These results are interpreted in the framework

11. The Perceived Importance of Youth Educator's Confidence in Delivering Leadership Development Programming

Science.gov (United States)

Brumbaugh, Laura; Cater, Melissa

2016-01-01

A successful component of programs designed to deliver youth leadership development programming is youth educators who understand the importance of utilizing research-based information and seeking professional development opportunities. The purpose of this study was to determine youth educators' perceived confidence in leading youth leadership…

12. Medical Students' Confidence Judgments Using a Factual Database and Personal Memory: A Comparison.

Science.gov (United States)

O'Keefe, Karen M.; Wildemuth, Barbara M.; Friedman, Charles P.

1999-01-01

This study examined the quality of medical students' confidence estimates in answering questions in bacteriology based on personal knowledge alone and what they retrieved from a factual database in microbiology, in order to determine whether medical students can recognize when an information need has been fulfilled and when it has not. (Author/LRW)

13. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

Science.gov (United States)

Kumar, Sricharan; Srivastava, Ashok N.

2012-01-01

Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
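A minimal residual-bootstrap sketch of this idea, using a k-nearest-neighbour smoother as the non-parametric regressor (an illustrative choice; the paper does not prescribe a specific model, and all data here are synthetic):

```python
import random

random.seed(42)
xs = [i / 100 for i in range(100)]
ys = [2 * x + random.gauss(0, 0.1) for x in xs]
data = list(zip(xs, ys))

def knn_predict(x0, k=7):
    """Non-parametric regression: mean response of the k nearest x-values."""
    nearest = sorted(data, key=lambda p: abs(p[0] - x0))[:k]
    return sum(y for _, y in nearest) / k

resid = [y - knn_predict(x) for x, y in data]

def prediction_interval(x0, n_boot=500, alpha=0.05):
    """Residual bootstrap: add resampled residuals to the point prediction."""
    base = knn_predict(x0)
    preds = sorted(base + random.choice(resid) for _ in range(n_boot))
    return preds[int(n_boot * alpha / 2)], preds[int(n_boot * (1 - alpha / 2)) - 1]

lo, hi = prediction_interval(0.5)
# Flag an observed output as anomalous if it falls outside the interval:
is_anomaly = not (lo <= 1.0 <= hi)
```

The anomaly-detection use is exactly as the abstract describes: an observation outside the interval for its input is flagged, conditioned on that input.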

14. An appraisal of statistical procedures used in derivation of reference intervals.

Science.gov (United States)

Ichihara, Kiyoshi; Boyd, James C

2010-11-01

When conducting studies to derive reference intervals (RIs), various statistical procedures are commonly applied at each step, from the planning stages to final computation of RIs. Determination of the necessary sample size is an important consideration, and evaluation of at least 400 individuals in each subgroup has been recommended to establish reliable common RIs in multicenter studies. Multiple regression analysis allows identification of the most important factors contributing to variation in test results, while accounting for possible confounding relationships among these factors. Of the various approaches proposed for judging the necessity of partitioning reference values, nested analysis of variance (ANOVA) is the likely method of choice owing to its ability to handle multiple groups and being able to adjust for multiple factors. Box-Cox power transformation often has been used to transform data to a Gaussian distribution for parametric computation of RIs. However, this transformation occasionally fails. Therefore, the non-parametric method based on determination of the 2.5 and 97.5 percentiles following sorting of the data, has been recommended for general use. The performance of the Box-Cox transformation can be improved by introducing an additional parameter representing the origin of transformation. In simulations, the confidence intervals (CIs) of reference limits (RLs) calculated by the parametric method were narrower than those calculated by the non-parametric approach. However, the margin of difference was rather small owing to additional variability in parametrically-determined RLs introduced by estimation of parameters for the Box-Cox transformation. The parametric calculation method may have an advantage over the non-parametric method in allowing identification and exclusion of extreme values during RI computation.
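The recommended non-parametric computation (sort the data, then take the 2.5th and 97.5th percentiles) and a bootstrap CI for a reference limit can be sketched as follows; the Gaussian data are simulated purely for illustration:

```python
import random

random.seed(7)
# Simulated analyte results from 400 reference individuals (illustrative).
values = sorted(random.gauss(100, 10) for _ in range(400))
n = len(values)

def percentile_limit(sorted_vals, p):
    """Rank-based percentile estimate: value at rank p * (n + 1), 1-indexed."""
    rank = int(p * (len(sorted_vals) + 1))
    return sorted_vals[rank - 1]

lower_rl = percentile_limit(values, 0.025)   # lower reference limit
upper_rl = percentile_limit(values, 0.975)   # upper reference limit

# Percentile-bootstrap 95% CI for the lower reference limit.
boot = sorted(
    percentile_limit(sorted(random.choice(values) for _ in range(n)), 0.025)
    for _ in range(200)
)
ci_lo, ci_hi = boot[4], boot[194]
```

The n = 400 sample size matches the recommendation quoted in the abstract; with smaller samples the bootstrap CI of each limit widens noticeably, which is the motivation for that minimum.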

15. Confidence and the stock market: an agent-based approach.

Science.gov (United States)

Bertella, Mario A; Pires, Felipe R; Feng, Ling; Stanley, Harry Eugene

2014-01-01

Using a behavioral finance approach we study the impact of behavioral bias. We construct an artificial market consisting of fundamentalists and chartists to model the decision-making process of various agents. The agents differ in their strategies for evaluating stock prices, and exhibit differing memory lengths and confidence levels. When we increase the heterogeneity of the strategies used by the agents, in particular the memory lengths, we observe excess volatility and kurtosis, in agreement with real market fluctuations--indicating that agents in real-world financial markets exhibit widely differing memory lengths. We incorporate the behavioral traits of adaptive confidence and observe a positive correlation between average confidence and return rate, indicating that market sentiment is an important driver in price fluctuations. The introduction of market confidence increases price volatility, reflecting the negative effect of irrationality in market behavior.

16. CERN confident of LHC start-up in 2007

CERN Document Server

2007-01-01

"Delegates attending the 140th meeting of CERN Council heard a confident report from the Laboratory about the scheduled start-up of the world's highest energy particle accelerator, the Large Hadron Collider (LHC), in 2007." (1 page)

17. Confidence Measurement in the Light of Signal Detection Theory

Directory of Open Access Journals (Sweden)

Sébastien Massoni

2014-12-01

Full Text Available We compare three alternative methods for eliciting retrospective confidence in the context of a simple perceptual task: the Simple Confidence Rating (a direct report on a numerical scale), the Quadratic Scoring Rule (a post-wagering procedure) and the Matching Probability (a generalization of the no-loss gambling method). We systematically compare the results obtained with these three rules to the theoretical confidence levels that can be inferred from performance in the perceptual task using Signal Detection Theory. We find that the Matching Probability provides better results in that respect. We conclude that Matching Probability is particularly well suited for studies of confidence that use Signal Detection Theory as a theoretical framework.
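
For readers unfamiliar with the elicitation rules being compared, the sketch below illustrates the general form of a quadratic (Brier-type) scoring rule and the matching-probability switch point. The payoff scaling and the switch-point framing are textbook conventions assumed here, not details taken from the study:

```python
def quadratic_scoring_rule(report, correct):
    """Brier-type payoff for reporting confidence `report` in [0, 1]
    that one's answer is correct: truthful reporting maximizes the
    expected payoff (payoff scaling is an assumption, not the paper's)."""
    return 1.0 - (1.0 - report) ** 2 if correct else 1.0 - report ** 2

def matching_probability_choice(confidence, lottery_prob):
    """Matching-probability idea: the subject prefers betting on her own
    answer over a lottery winning with probability `lottery_prob` exactly
    when her confidence exceeds it; the switch point reveals confidence."""
    return "bet on answer" if confidence > lottery_prob else "take lottery"

payoff_sure_right = quadratic_scoring_rule(1.0, True)   # full confidence, correct
payoff_sure_wrong = quadratic_scoring_rule(1.0, False)  # full confidence, wrong
```

The incentive-compatibility of the quadratic rule is easy to verify: with true accuracy p, the expected payoff p·(1-(1-r)²) + (1-p)·(1-r²) is maximized at r = p.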

18. Confidence-building measures in the Asia-Pacific region

International Nuclear Information System (INIS)

Qin Huasun

1991-01-01

The regional confidence-building, security and disarmament issues in the Asia-Pacific region, and in particular support for the non-proliferation regime and the establishment of nuclear-weapon-free zones, are reviewed.

19. Building Supervisory Confidence--A Key to Transfer of Training

Science.gov (United States)

Byham, William C.; Robinson, James

1977-01-01

A training concept is described which suggests that efforts toward maintaining and/or building the confidence of the participants in supervisory training programs can increase their likelihood of using the skills on the job. (TA)

20. Confidence assessment. Site-descriptive modelling SDM-Site Laxemar

International Nuclear Information System (INIS)

2009-06-01

The objective of this report is to assess the confidence that can be placed in the Laxemar site descriptive model, based on the information available at the conclusion of the surface-based investigations (SDM-Site Laxemar). In this exploration, an overriding question is whether remaining uncertainties are significant for repository engineering design or long-term safety assessment and could successfully be further reduced by more surface-based investigations or more usefully by explorations underground made during construction of the repository. Procedures for this assessment have been progressively refined during the course of the site descriptive modelling, and applied to all previous versions of the Forsmark and Laxemar site descriptive models. They include assessment of whether all relevant data have been considered and understood, identification of the main uncertainties and their causes, possible alternative models and their handling, and consistency between disciplines. The assessment then forms the basis for an overall confidence statement. The confidence in the Laxemar site descriptive model, based on the data available at the conclusion of the surface-based site investigations, has been assessed by exploring: confidence in the site characterization database; remaining issues and their handling; handling of alternatives; consistency between disciplines; and the main reasons for confidence and lack of confidence in the model. Generally, the site investigation database is of high quality, as assured by the quality procedures applied. It is judged that the Laxemar site descriptive model has an overall high level of confidence. Because of the relatively robust geological model that describes the site, the overall confidence in the Laxemar Site Descriptive model is judged to be high, even though details of the spatial variability remain unknown. The overall reason for this confidence is the wide spatial distribution of the data and the consistency between

1. Confidence assessment. Site-descriptive modelling SDM-Site Laxemar

Energy Technology Data Exchange (ETDEWEB)

2008-12-15

The objective of this report is to assess the confidence that can be placed in the Laxemar site descriptive model, based on the information available at the conclusion of the surface-based investigations (SDM-Site Laxemar). In this exploration, an overriding question is whether remaining uncertainties are significant for repository engineering design or long-term safety assessment and could successfully be further reduced by more surface-based investigations or more usefully by explorations underground made during construction of the repository. Procedures for this assessment have been progressively refined during the course of the site descriptive modelling, and applied to all previous versions of the Forsmark and Laxemar site descriptive models. They include assessment of whether all relevant data have been considered and understood, identification of the main uncertainties and their causes, possible alternative models and their handling, and consistency between disciplines. The assessment then forms the basis for an overall confidence statement. The confidence in the Laxemar site descriptive model, based on the data available at the conclusion of the surface-based site investigations, has been assessed by exploring: confidence in the site characterization database; remaining issues and their handling; handling of alternatives; consistency between disciplines; and the main reasons for confidence and lack of confidence in the model. Generally, the site investigation database is of high quality, as assured by the quality procedures applied. It is judged that the Laxemar site descriptive model has an overall high level of confidence. Because of the relatively robust geological model that describes the site, the overall confidence in the Laxemar Site Descriptive model is judged to be high, even though details of the spatial variability remain unknown. The overall reason for this confidence is the wide spatial distribution of the data and the consistency between

2. Sequential Interval Estimation of a Location Parameter with Fixed Width in the Nonregular Case

OpenAIRE

Koike, Ken-ichi

2007-01-01

For a location-scale parameter family of distributions with finite support, a sequential confidence interval with fixed width is obtained for the location parameter, and its asymptotic consistency and efficiency are shown. Some comparisons with the Chow-Robbins procedure are also made.

3. The Sense of Confidence during Probabilistic Learning: A Normative Account.

Directory of Open Access Journals (Sweden)

Florent Meyniel

2015-06-01

Full Text Available Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable "feeling of knowing" or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics, it seems
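
The abstract does not spell out the normative model, but its core idea, that an ideal observer's confidence tracks the precision of its posterior over the transition probability, can be illustrated with a simple Beta-Bernoulli update. This is a stand-in that ignores the paper's change-point machinery for volatile environments:

```python
from scipy.stats import beta

def update(a, b, observed_transition):
    """One-step Bayes update of a Beta(a, b) belief about a transition
    probability (simplified: no environmental change-points)."""
    return (a + 1.0, b) if observed_transition else (a, b + 1.0)

a, b = 1.0, 1.0  # flat prior over the transition probability
for obs in [True, True, False, True, True, True]:
    a, b = update(a, b, obs)

mean_estimate = a / (a + b)        # reported probability estimate
confidence = 1.0 / beta.var(a, b)  # posterior precision as a confidence index
```

In a stable period each new observation shrinks the posterior variance, so this confidence index rises with the number of observations, matching the behavioural pattern reported above.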

4. Confidence limits for small numbers of events in astrophysical data

Science.gov (United States)

Gehrels, N.

1986-01-01

The calculation of limits for small numbers of astronomical counts is based on standard equations derived from Poisson and binomial statistics; although the equations are straightforward, their direct use is cumbersome, involving both table interpolations and several mathematical operations. Convenient tables and approximate formulae are presented here for confidence limits based on such Poisson and binomial statistics. The limits in the tables are given for all confidence levels commonly used in astrophysics.
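
For reference, the exact single-sided Poisson limits of the kind tabulated here follow directly from the chi-square quantile function, which sidesteps the table interpolation the abstract mentions; a short sketch:

```python
from scipy.stats import chi2

def poisson_limits(n, cl=0.8413):
    """Exact single-sided Poisson confidence limits for n observed counts
    at confidence level `cl` (0.8413 corresponds to Gaussian 1-sigma):
    lower solves P(X >= n | lam) = 1 - cl, upper solves P(X <= n | lam) = 1 - cl."""
    lower = 0.5 * chi2.ppf(1.0 - cl, 2 * n) if n > 0 else 0.0
    upper = 0.5 * chi2.ppf(cl, 2 * (n + 1))
    return lower, upper

# Zero observed counts: the 84.13% upper limit is the classic ~1.84
lo0, up0 = poisson_limits(0)
lo10, up10 = poisson_limits(10)
```

The n = 0 upper limit of about 1.84 is a frequently quoted value for "no events detected" in astrophysical count data.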

5. Non-Asymptotic Confidence Sets for Circular Means

Directory of Open Access Journals (Sweden)

Thomas Hotz

2016-10-01

Full Text Available The mean of data on the unit circle is defined as the minimizer of the average squared Euclidean distance to the data. Based on Hoeffding’s mass concentration inequalities, non-asymptotic confidence sets for circular means are constructed which are universal in the sense that they require no distributional assumptions. These are then compared with asymptotic confidence sets in simulations and for a real data set.
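
The circular mean defined above, the minimizer of the average squared Euclidean distance, is obtained from the resultant of the unit vectors, which avoids the wrap-around failure of the arithmetic mean of angles; a small illustration:

```python
import math

def circular_mean(angles):
    """Mean direction of angles in radians: the direction of the vector
    sum of the unit vectors, i.e. the minimizer of the average squared
    Euclidean distance to the data on the unit circle."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c)

m_sym = circular_mean([0.1, -0.1, 0.2, -0.2])            # symmetric about 0
m_wrap = circular_mean([0.05, 2 * math.pi - 0.05])       # straddles the cut
naive = (0.05 + 2 * math.pi - 0.05) / 2.0                # arithmetic mean: ~pi
```

Both circular means are near 0, while the naive arithmetic mean of the straddling pair lands near pi, on the opposite side of the circle.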

6. Learning style and confidence: an empirical investigation of Japanese employees

OpenAIRE

Yoshitaka Yamazaki

2012-01-01

This study aims to examine how learning styles relate to employees' confidence through a view of Kolb's experiential learning theory. For this aim, an empirical investigation was conducted using the sample of 201 Japanese employees who work for a Japanese multinational corporation. Results illustrated that the learning style group of acting orientation described a significantly higher level of job confidence than that of reflecting orientation, whereas the two groups of feeling and thinking o...

7. Learning to make collective decisions: the impact of confidence escalation.

Science.gov (United States)

2013-01-01

Little is known about how people learn to take into account others' opinions in joint decisions. To address this question, we combined computational and empirical approaches. Human dyads made individual and joint visual perceptual decisions and rated their confidence in those decisions (data previously published). We trained a reinforcement (temporal difference) learning agent that received the participants' confidence levels and learned to arrive at a dyadic decision by finding the policy that either maximized the accuracy of the model decisions or maximally conformed to the empirical dyadic decisions. When confidences were shared visually without verbal interaction, the RL agents successfully captured social learning. When participants exchanged confidences visually and interacted verbally, no collective benefit was achieved and the model failed to predict the dyadic behaviour. Behaviourally, dyad members' confidence increased progressively, and verbal interaction accelerated this escalation. The success of the model in drawing collective benefit from dyad members was inversely related to the confidence escalation rate. The findings show that an automated learning agent can, in principle, combine individual opinions and achieve collective benefit, but the same agent cannot discount the escalation, suggesting that one cognitive component of collective decision-making in humans may involve discounting of overconfidence arising from interactions.

8. Lack of an Effect of Standard and Supratherapeutic Doses of Linezolid on QTc Interval Prolongation▿†

Science.gov (United States)

Damle, Bharat; LaBadie, Robert R.; Cuozzo, Cheryl; Alvey, Christine; Choo, Heng Wee; Riley, Steve; Kirby, Deborah

2011-01-01

A double-blind, placebo-controlled, four-way crossover study was conducted in 40 subjects to assess the effect of linezolid on corrected QT (QTc) interval prolongation. Time-matched, placebo-corrected QT intervals were determined predose and at 0.5, 1 (end of infusion), 2, 4, 8, 12, and 24 h after intravenous dosing of linezolid 600 and 1,200 mg. Oral moxifloxacin at 400 mg was used as an active control. The pharmacokinetic profile of linezolid was also evaluated. At each time point, the upper bounds of the 90% confidence intervals (CIs) for placebo-corrected QTcF values (i.e., QTc values adjusted for ventricular rate using the correction method of Fridericia) for the linezolid 600- and 1,200-mg doses were <10 ms. For moxifloxacin, the lower bound of the 90% CI was >5 ms, indicating that the study was adequately sensitive to assess QTc prolongation. The pharmacokinetic profile of linezolid at 600 mg was consistent with previous observations. Systemic exposure to linezolid increased in a slightly more than dose-proportional manner at supratherapeutic doses, but the degree of nonlinearity was small. At a supratherapeutic single dose of 1,200 mg of linezolid, no treatment-related increase in adverse events was seen compared to 600 mg of linezolid, and no clinically meaningful effects on vital signs or safety laboratory evaluations were noted. PMID:21709083
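
Fridericia's correction used for the QTcF values simply scales the measured QT by the cube root of the RR interval in seconds; a one-function sketch:

```python
def qtcf(qt_ms, rr_ms):
    """Fridericia-corrected QT: QTcF = QT / (RR in seconds)**(1/3).
    Both inputs in milliseconds; result in milliseconds."""
    return qt_ms / (rr_ms / 1000.0) ** (1.0 / 3.0)

q_tachy = qtcf(400.0, 800.0)   # RR 800 ms (75 bpm): correction raises QTc
q_neutral = qtcf(400.0, 1000.0)  # RR 1000 ms (60 bpm): no correction applied
```

At 60 bpm the RR interval is exactly 1 s, so QTcF equals the raw QT; at faster rates the cube-root divisor is below 1 and QTcF exceeds QT.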

9. Restricted Interval Valued Neutrosophic Sets and Restricted Interval Valued Neutrosophic Topological Spaces

Directory of Open Access Journals (Sweden)

Anjan Mukherjee

2016-08-01

Full Text Available In this paper we introduce the concept of restricted interval valued neutrosophic sets (RIVNS in short). Some basic operations and properties of RIVNS are discussed. The concept of restricted interval valued neutrosophic topology is also introduced, together with restricted interval valued neutrosophic finer and restricted interval valued neutrosophic coarser topology. We also define the restricted interval valued neutrosophic interior and closure of a restricted interval valued neutrosophic set. Some theorems and examples are cited. Restricted interval valued neutrosophic subspace topology is also studied.

10. Identification of atrial fibrillation using electrocardiographic RR-interval difference

Science.gov (United States)

Eliana, M.; Nuryani, N.

2017-11-01

Automated detection of atrial fibrillation (AF) is an interesting topic. AF is dangerous not only as a trigger of embolic stroke but also through its association with other chronic diseases. In this study, we detect the presence of AF by quantifying the irregularity of RR intervals. We use interval comparison to measure the degree of irregularity of the RR intervals within a defined segment. The RR-interval series is divided into segments of 10 intervals, and every interval in a segment is compared with every other interval in that segment. A threshold is then applied to classify each pairwise difference as low or high (δ). A segment is labelled AF or normal sinus rhythm according to the number of high differences, using a tolerance (β) on the count of high δ. We tested this method on data from 23 patients from the MIT-BIH database. Using this approach and the clinical data, we find an accuracy, sensitivity, and specificity of 84.98%, 91.99%, and 77.85%, respectively.
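
The segment-classification rule described above can be sketched as follows. The δ and β values, and the RR series themselves, are illustrative placeholders, not the thresholds tuned in the study:

```python
def classify_segment(rr, delta=0.03, beta=0.7):
    """Classify one segment of RR intervals (seconds) as AF or normal
    sinus rhythm (NSR). Every pair of intervals in the segment is
    compared; an absolute pairwise difference above `delta` counts as a
    'high' difference, and the segment is labelled AF when the fraction
    of high differences exceeds the tolerance `beta`.
    delta/beta here are illustrative, not the paper's tuned values."""
    pairs = [(i, j) for i in range(len(rr)) for j in range(i + 1, len(rr))]
    high = sum(1 for i, j in pairs if abs(rr[i] - rr[j]) > delta)
    return "AF" if high / len(pairs) > beta else "NSR"

# Hypothetical 10-interval segments (seconds)
regular = [0.80, 0.81, 0.80, 0.79, 0.80, 0.81, 0.80, 0.80, 0.79, 0.80]
irregular = [0.62, 0.95, 0.70, 1.10, 0.55, 0.88, 0.74, 1.02, 0.60, 0.91]
```

A 10-interval segment yields 45 pairwise comparisons; the regular segment produces no high differences, while almost every pair in the irregular one exceeds δ.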

11. Extended score interval in the assessment of basic surgical skills.

Science.gov (United States)

Acosta, Stefan; Sevonius, Dan; Beckman, Anders

2015-01-01

The Basic Surgical Skills course uses an assessment score interval of 0-3. An extended score interval, 1-6, was proposed by the Swedish steering committee of the course. The aim of this study was to analyze the trainee scores in the current 0-3 scored version compared to a proposed 1-6 scored version. Sixteen participants, seven females and nine males, were evaluated in the current and proposed assessment forms by instructors, observers, and the learners themselves during the first and second days. In each assessment form, 17 tasks were assessed. The inter-rater reliability between the current and the proposed score sheets was evaluated with intraclass correlation (ICC) with 95% confidence intervals (CI). The distribution of scores for 'knot tying' at the last time point and 'bowel anastomosis side to side' given by the instructors in the current assessment form showed that the highest score was given in 31 and 62% of cases, respectively. No ceiling effects were found in the proposed assessment form. The overall ICC between the current and proposed score sheets after assessment by the instructors increased from 0.38 (95% CI 0.77-0.78) on Day 1 to 0.83 (95% CI 0.51-0.94) on Day 2. A clear ceiling effect of scores was demonstrated in the current assessment form, questioning its validity. The proposed score sheet provides more accurate scores and seems to be a better feedback instrument for learning technical surgical skills in the Basic Surgical Skills course.

12. Haematological and biochemical reference intervals for free-ranging brown bears (Ursus arctos) in Sweden

DEFF Research Database (Denmark)

Græsli, Anne Randi; Fahlman, Åsa; Evans, Alina L.

2014-01-01

BackgroundEstablishment of haematological and biochemical reference intervals is important to assess health of animals on individual and population level. Reference intervals for 13 haematological and 34 biochemical variables were established based on 88 apparently healthy free-ranging brown bears...... and marking for ecological studies. For each of the variables, the reference interval was described based on the 95% confidence interval, and differences due to host characteristics sex and age were included if detected. To our knowledge, this is the first report of reference intervals for free-ranging brown...... and the differences due to host factors age and gender can be useful for evaluation of health status in free-ranging European brown bears....

13. Prolonged corrected QT interval is predictive of future stroke events even in subjects without ECG-diagnosed left ventricular hypertrophy.

Science.gov (United States)

Ishikawa, Joji; Ishikawa, Shizukiyo; Kario, Kazuomi

2015-03-01

We attempted to evaluate whether subjects who exhibit prolonged corrected QT (QTc) interval (≥440 ms in men and ≥460 ms in women) on ECG, with and without ECG-diagnosed left ventricular hypertrophy (ECG-LVH; Cornell product, ≥244 mV×ms), are at increased risk of stroke. Among the 10 643 subjects, there were a total of 375 stroke events during the follow-up period (128.7±28.1 months; 114 142 person-years). The subjects with prolonged QTc interval (hazard ratio, 2.13; 95% confidence interval, 1.22-3.73) had an increased risk of stroke even after adjustment for ECG-LVH (hazard ratio, 1.71; 95% confidence interval, 1.22-2.40). When we stratified the subjects into those with neither a prolonged QTc interval nor ECG-LVH, those with a prolonged QTc interval but without ECG-LVH, and those with ECG-LVH, multivariate-adjusted Cox proportional hazards analysis demonstrated that the subjects with prolonged QTc intervals but not ECG-LVH (1.2% of all subjects; incidence, 10.7%; hazard ratio, 2.70, 95% confidence interval, 1.48-4.94) and those with ECG-LVH (incidence, 7.9%; hazard ratio, 1.83; 95% confidence interval, 1.31-2.57) had an increased risk of stroke events, compared with those with neither a prolonged QTc interval nor ECG-LVH. In conclusion, prolonged QTc interval was associated with stroke risk even among patients without ECG-LVH in the general population. © 2014 American Heart Association, Inc.
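
The three-way stratification used in the analysis above is straightforward to encode. In this sketch the Cornell-product ECG-LVH criterion (≥244 mV×ms) is reduced to a boolean input:

```python
def stroke_risk_group(qtc_ms, sex, ecg_lvh):
    """Stratification from the study: sex-specific prolonged QTc
    (>=440 ms in men, >=460 ms in women) crossed with ECG-LVH status.
    `ecg_lvh` stands in for the Cornell product >= 244 mV*ms criterion."""
    prolonged = qtc_ms >= (440.0 if sex == "M" else 460.0)
    if ecg_lvh:
        return "ECG-LVH"
    return "prolonged QTc, no ECG-LVH" if prolonged else "neither"

g1 = stroke_risk_group(450.0, "M", False)  # prolonged for a man, no LVH
g2 = stroke_risk_group(450.0, "F", False)  # same QTc is normal for a woman
g3 = stroke_risk_group(430.0, "M", True)
```

This makes the reported hazard ratios easy to read: the "prolonged QTc, no ECG-LVH" group (1.2% of subjects) carried the highest stroke risk relative to the "neither" group.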

14. Conditional prediction intervals of wind power generation

DEFF Research Database (Denmark)

Pinson, Pierre; Kariniotakis, Georges

2010-01-01

A generic method for the providing of prediction intervals of wind power generation is described. Prediction intervals complement the more common wind power point forecasts, by giving a range of potential outcomes for a given probability, their so-called nominal coverage rate. Ideally they inform...... on the characteristics of prediction errors for providing conditional interval forecasts. By simultaneously generating prediction intervals with various nominal coverage rates, one obtains full predictive distributions of wind generation. Adapted resampling is applied here to the case of an onshore Danish wind farm...... to the case of a large number of wind farms in Europe and Australia among others is finally discussed....

15. PMICALC: an R code-based software for estimating post-mortem interval (PMI) compatible with Windows, Mac and Linux operating systems.

Science.gov (United States)

Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel

2010-01-30

In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with Linear Regression, but the results are questionable. In this paper we present PMICALC, an R code-based freeware package which estimates PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humor using two different regression models: Additive Models (AM) and Support Vector Machine (SVM), which offer more flexibility than the previously used Linear Regression. The results from both models are better than those published to date and can give numerical expression of PMI with confidence intervals and graphic support within 20 min. The program also takes into account the cause of death. 2009 Elsevier Ireland Ltd. All rights reserved.
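
PMICALC itself is an R package; as a language-neutral illustration of the underlying idea, that a calibration model inverts a vitreous-humour analyte into a PMI estimate reported with a confidence interval, here is a Python sketch. A simple linear calibration with a bootstrap CI stands in for the AM/SVM models, and all data are synthetic:

```python
import numpy as np

# Synthetic calibration data (illustrative only, not the PMICALC data):
# vitreous potassium [K+] rises roughly linearly with PMI early on.
rng = np.random.default_rng(42)
pmi_h = rng.uniform(2.0, 40.0, size=200)                       # hours
k_mmol = 5.0 + 0.24 * pmi_h + rng.normal(0.0, 0.5, size=200)   # mmol/L

def estimate_pmi(k_obs, n_boot=2000):
    """Point estimate and bootstrap 95% CI for PMI at an observed [K+].
    Linear calibration is a stand-in for the AM/SVM models of the paper."""
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(pmi_h), len(pmi_h))      # resample cases
        slope, intercept = np.polyfit(k_mmol[idx], pmi_h[idx], 1)
        preds.append(slope * k_obs + intercept)
    preds = np.asarray(preds)
    return (float(np.mean(preds)),
            (float(np.percentile(preds, 2.5)),
             float(np.percentile(preds, 97.5))))

est, (ci_lo, ci_hi) = estimate_pmi(10.0)  # PMI given [K+] = 10 mmol/L
```

The AM and SVM models in PMICALC relax the linearity assumption made here, which is one reason the paper reports better results than linear regression.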

16. Correlates of emergency response interval and mortality from ...

African Journals Online (AJOL)

A retrospective study to determine the influence of blood transfusion emergency response interval on Mortality from childhood severe anemia was carried out. An admission record of all children with severe anemia over a 5-year period was reviewed. Those who either died before transfusion or got discharged against ...

17. Time Interval to Initiation of Contraceptive Methods Following ...

African Journals Online (AJOL)

Objectives: The objectives of the study were to determine factors affecting the interval between a woman's last childbirth and the initiation of contraception. Materials and Methods: This was a retrospective study. Family planning clinic records of the Barau Dikko Teaching Hospital Kaduna from January 2000 to March 2014 ...

18. Prognostic Significance Of QT Interval Prolongation In Adult ...

African Journals Online (AJOL)

Prognostic survival studies for heart-rate corrected QT interval in patients with chronic heart failure are few; although these patients are known to have a high risk of sudden cardiac death. This study was aimed at determining the mortality risk associated with prolonged QTc in Nigerians with heart failure. Ninety-six ...

19. Confidence-based learning CME: overcoming barriers in irritable bowel syndrome with constipation.

Science.gov (United States)

Cash, Brooks; Mitchner, Natasha A; Ravyn, Dana

2011-01-01

Performance of health care professionals depends on both medical knowledge and the certainty with which they possess it. Conventional continuing medical education interventions assess the correctness of learners' responses but do not determine the degree of confidence with which they hold incorrect information. This study describes the use of confidence-based learning (CBL) in an activity designed to enhance learners' knowledge, confidence in their knowledge, and clinical competence with regard to constipation-predominant IBS (IBS-C), a frequently underdiagnosed and misdiagnosed condition. The online CBL activity included multiple-choice questions in 2 modules: Burden of Care (BOC; 28 questions) and Patient Scenarios (PS; 9 case-based questions). After formative assessment, targeted feedback was provided, and the learner focused on material with demonstrated knowledge and/or confidence gaps. The process was repeated until 85% of questions were answered correctly and confidently (ie, mastery was attained). Of 275 participants (24% internal medicine, 13% gastroenterology, 32% family medicine, and 31% other), 249 and 167 completed the BOC and PS modules, respectively. Among all participants, 61.8% and 98.2% achieved mastery in the BOC and PS modules, respectively. Baseline mastery levels between specialties were significantly different in the BOC module (p = 0.002); no significant differences were evident between specialties in final mastery levels. Approximately one-third of learners were confident and wrong in topics of epidemiology, defining IBS and constipation, and treatments in the first iteration. No significant difference was observed between specialties for the PS module in either the first or last iterations. Learners achieved mastery in topics pertaining to IBS-C regardless of baseline knowledge or specialty. These data indicate that CME activities employing CBL can be used to address knowledge and confidence gaps. Copyright © 2010 The Alliance for

20. Effect of immersive workplace experience on undergraduate nurses' mental health clinical confidence.

Science.gov (United States)

Patterson, Christopher; Moxham, Lorna; Taylor, Ellie K; Perlman, Dana; Brighton, Renee; Sumskis, Susan; Heffernan, Tim; Lee-Bates, Benjamin

2017-12-01

Preregistration education needs to ensure that student nurses are properly trained with the required skills and knowledge, and have the confidence to work with people who have a mental illness. With increased attention on non-traditional mental health clinical placements, further research is required to determine the effects of non-traditional mental health clinical placements on mental health clinical confidence. The aim of the present study was to investigate the impact of a non-traditional mental health clinical placement on mental health nursing clinical confidence compared to nursing students undergoing traditional clinical placements. Using the Mental Health Nursing Clinical Confidence Scale, the study investigated the relative effects of two placement programmes on the mental health clinical confidence of 79 nursing students. The two placement programmes included a non-traditional clinical placement of Recovery Camp and a comparison group that attended traditional clinical placements. Overall, the results indicated that, for both groups, mental health placement had a significant effect on improving mean mental health clinical confidence, both immediately upon conclusion of placement and at the 3-month follow-up. Students who attended Recovery Camp reported a significant positive difference, compared to the comparison group, for ratings related to communicating effectively with clients with a mental illness, having a basic knowledge of antipsychotic medications and their side-effects, and providing client education regarding the effects and side-effects of medications. The findings suggest that a unique clinical placement, such as Recovery Camp, can improve and maintain facets of mental health clinical confidence for students of nursing. © 2017 Australian College of Mental Health Nurses Inc.