#### Sample records for tailored confidence intervals

1. Using the confidence interval confidently.

Science.gov (United States)

Hazra, Avijit

2017-10-01

Biomedical research is seldom done with entire populations but rather with samples drawn from a population. Although we work with samples, our goal is to describe and draw inferences regarding the underlying population. It is possible to use a sample statistic and estimates of error in the sample to get a fair idea of the population parameter, not as a single value, but as a range of values. This range is the confidence interval (CI), which is estimated on the basis of a desired confidence level. Calculation of the CI of a sample statistic takes the general form: CI = Point estimate ± Margin of error, where the margin of error is the product of a critical value (z), derived from the standard normal curve, and the standard error of the point estimate. Calculation of the standard error varies depending on whether the sample statistic of interest is a mean, proportion, odds ratio (OR), and so on. The factors affecting the width of the CI are the desired confidence level, the sample size and the variability in the sample. Although the 95% CI is most often used in biomedical research, a CI can be calculated for any level of confidence. A 99% CI will be wider than a 95% CI for the same sample. Conflict between clinical importance and statistical significance is an important issue in biomedical research. Clinical importance is best inferred from the effect size, that is, how large the actual change or difference is. Statistical significance in terms of P, however, only indicates whether there is any difference in probability terms. Use of the CI supplements the P value by providing an estimate of the actual clinical effect. Of late, clinical trials are being designed specifically as superiority, non-inferiority or equivalence studies. The conclusions from these alternative trial designs are based on CI values rather than the P value from intergroup comparison.
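The general form described in this abstract, CI = point estimate ± (critical value × standard error), can be sketched in a few lines of Python; the sample data below are illustrative, not from the paper:

```python
import math
import statistics

# Illustrative sample; z = 1.96 is the standard normal critical value for 95% confidence.
sample = [4.9, 5.1, 5.3, 4.7, 5.0, 5.2, 4.8, 5.4, 5.1, 4.9]

mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error of the mean

z = 1.96
lower, upper = mean - z * se, mean + z * se  # CI = point estimate ± margin of error
print(f"95% CI for the mean: ({lower:.3f}, {upper:.3f})")
```

For a 99% interval the critical value 2.576 replaces 1.96, which widens the interval for the same sample, as the abstract notes.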

2. Robust misinterpretation of confidence intervals

NARCIS (Netherlands)

Hoekstra, Rink; Morey, Richard; Rouder, Jeffrey N.; Wagenmakers, Eric-Jan

2014-01-01

Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more

3. Robust misinterpretation of confidence intervals.

Science.gov (United States)

Hoekstra, Rink; Morey, Richard D; Rouder, Jeffrey N; Wagenmakers, Eric-Jan

2014-10-01

Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students, all in the field of psychology, were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers' performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.

4. Interpretation of Confidence Interval Facing the Conflict

Science.gov (United States)

Andrade, Luisa; Fernández, Felipe

2016-01-01

As literature has reported, it is usual that university students in statistics courses, and even statistics teachers, interpret the confidence level associated with a confidence interval as the probability that the parameter value will be between the lower and upper interval limits. To confront this misconception, class activities have been…

5. Confidence Interval Approximation For Treatment Variance In ...

African Journals Online (AJOL)

In a random effects model with a single factor, variation is partitioned into two as residual error variance and treatment variance. While a confidence interval can be imposed on the residual error variance, it is not possible to construct an exact confidence interval for the treatment variance. This is because the treatment ...

6. Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

Science.gov (United States)

Padilla, Miguel A.; Divers, Jasmin

2013-01-01

The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interest were nonnormal Likert-type and binary items.…

7. Understanding Confidence Intervals With Visual Representations

OpenAIRE

Navruz, Bilgin; Delen, Erhan

2014-01-01

In the present paper, we showed how confidence intervals (CIs) are valuable and useful in research studies when they are used in the correct form with correct interpretations. The sixth edition of the APA (2010) Publication Manual strongly recommended reporting CIs in research studies, and it was described as “the best reporting strategy” (p. 34). Misconceptions and correct interpretations of CIs were presented from several textbooks. In addition, limitations of the null hypothesis statistica...

8. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

Directory of Open Access Journals (Sweden)

Ionut Bebu

2016-06-01

Full Text Available For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates.
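The idea of simulating pivotal quantities can be illustrated on a case simpler than those in the paper: a fiducial-style interval for a risk difference between two proportions, using Beta(x + 1/2, n − x + 1/2) fiducial quantities for each proportion. This is a rough sketch under invented data; the paper's exact construction may differ:

```python
import random

# Hypothetical two-group binary data (not from the paper).
random.seed(7)
x1, n1 = 30, 100   # events in control group
x2, n2 = 18, 100   # events in treated group

# Draw fiducial quantities for each proportion and form the risk difference.
draws = sorted(
    random.betavariate(x1 + 0.5, n1 - x1 + 0.5)
    - random.betavariate(x2 + 0.5, n2 - x2 + 0.5)
    for _ in range(10000)
)
lo, hi = draws[249], draws[9749]          # empirical 2.5% and 97.5% quantiles
print(f"approx. 95% fiducial-style interval for the risk difference: ({lo:.3f}, {hi:.3f})")
```

When the interval excludes zero, an interval for the NNT follows by inverting the endpoints (NNT = 1/risk difference).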

9. Confidence intervals for the lognormal probability distribution

International Nuclear Information System (INIS)

Smith, D.L.; Naberejnev, D.G.

2004-01-01

The present communication addresses the topic of symmetric confidence intervals for the lognormal probability distribution. This distribution is frequently utilized to characterize inherently positive, continuous random variables that are selected to represent many physical quantities in applied nuclear science and technology. The basic formalism is outlined herein and a contrived numerical example is provided for illustration. It is demonstrated that when the uncertainty reflected in a lognormal probability distribution is large, the use of a confidence interval provides much more useful information about the variable used to represent a particular physical quantity than can be had by adhering to the notion that the mean value and standard deviation of the distribution ought to be interpreted as best value and corresponding error, respectively. Furthermore, it is shown that if the uncertainty is very large a disturbing anomaly can arise when one insists on interpreting the mean value and standard deviation as the best value and corresponding error, respectively. Reliance on the mode and median as alternative parameters to represent the best available knowledge of a variable with large uncertainties is also shown to entail limitations. Finally, a realistic physical example involving the decay of radioactivity over a time period that spans many half-lives is presented and analyzed to further illustrate the concepts discussed in this communication.
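The kind of anomaly alluded to in this abstract can be reproduced numerically: for a lognormal variable X = exp(N(mu, sigma²)) with large sigma, the range mean ± standard deviation extends below zero even though X is strictly positive, while a quantile-based interval stays positive. The parameter values below are illustrative, not taken from the paper:

```python
import math

# Illustrative lognormal with large uncertainty: X = exp(N(mu, sigma^2)).
mu, sigma = 0.0, 1.5

mean   = math.exp(mu + sigma**2 / 2)                  # arithmetic mean of X
median = math.exp(mu)                                 # median of X
mode   = math.exp(mu - sigma**2)                      # mode of X
sd     = mean * math.sqrt(math.exp(sigma**2) - 1.0)   # standard deviation of X

# Central 95% probability interval from the lognormal quantiles:
lo, hi = math.exp(mu - 1.96 * sigma), math.exp(mu + 1.96 * sigma)

print(f"mean={mean:.2f}, sd={sd:.2f}, median={median:.2f}, mode={mode:.2f}")
print(f"mean - sd = {mean - sd:.2f}  (negative, although X > 0 always)")
print(f"central 95% interval: ({lo:.3f}, {hi:.2f})  (strictly positive, strongly asymmetric)")
```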

10. Confidence Intervals from Normalized Data: A correction to Cousineau (2005)

Directory of Open Access Journals (Sweden)

Richard D. Morey

2008-09-01

Full Text Available Presenting confidence intervals around means is a common method of expressing uncertainty in data. Loftus and Masson (1994) describe confidence intervals for means in within-subjects designs. These confidence intervals are based on the ANOVA mean squared error. Cousineau (2005) presents an alternative to the Loftus and Masson method, but his method produces confidence intervals that are smaller than those of Loftus and Masson. I show why this is the case and offer a simple correction that makes the expected size of Cousineau confidence intervals the same as that of Loftus and Masson confidence intervals.
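A minimal sketch of the normalization and correction discussed in this abstract, assuming hypothetical within-subjects data with J conditions: subtract each subject's mean, add back the grand mean (the Cousineau normalization), then inflate the normalized standard error by sqrt(J/(J − 1)) (the Morey correction):

```python
import math
import statistics

# Hypothetical within-subjects data: rows = subjects, columns = J conditions.
data = [
    [10.0, 12.0, 15.0],
    [11.0, 13.0, 14.0],
    [ 9.0, 12.0, 16.0],
    [10.0, 14.0, 15.0],
]
J = len(data[0])
grand = statistics.mean(v for row in data for v in row)

# Cousineau (2005) normalization: remove each subject's mean, add back the grand mean.
normalized = [[v - statistics.mean(row) + grand for v in row] for row in data]

# Morey (2008) correction: inflate the normalized standard error by sqrt(J / (J - 1)).
factor = math.sqrt(J / (J - 1))
for j in range(J):
    col = [row[j] for row in normalized]
    se = factor * statistics.stdev(col) / math.sqrt(len(col))
    m = statistics.mean(col)
    print(f"condition {j}: mean={m:.2f}, corrected 95% CI = ({m - 1.96*se:.2f}, {m + 1.96*se:.2f})")
```

Note that the normalization removes between-subject variability without changing the condition means, which is why the uncorrected Cousineau intervals come out too narrow.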

11. Learning about confidence intervals with software R

Directory of Open Access Journals (Sweden)

Gariela Gonçalves

2013-08-01

Full Text Available This work studied the feasibility of implementing a teaching method that employs software in a Computational Mathematics course, involving students and teachers through the use of the statistical software R in carrying out practical work, as a complement to traditional teaching. Statistical inference, namely the determination of confidence intervals, was the content selected for this experience. The aim was to show, first of all, that it is possible to promote, through the proposed methodology, the acquisition of basic skills in statistical inference and to foster positive relationships between teachers and students. The paper also presents a comparative study, across several indicators, of the methodologies used and their quantitative and qualitative results in two consecutive school years. The data used in the study were obtained from the students' answers to exam questions in the years 2010/2011 and 2011/2012, from the work of a student group in 2011/2012 and from the responses to a questionnaire (optional and anonymous) also applied in 2011/2012. In terms of results, we emphasize a better performance of students on the examination questions in 2011/2012, the year in which students used the software R, and a very favourable student perspective about…

12. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

Directory of Open Access Journals (Sweden)

Dominic Beaulieu-Prévost

2006-03-01

Full Text Available For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on whether the null hypothesis is rejected. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CIs) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to one based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to an NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power with an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic undetermination). The demonstration includes a complete example.

13. Differentially Private Confidence Intervals for Empirical Risk Minimization

OpenAIRE

Wang, Yue; Kifer, Daniel; Lee, Jaewoo

2018-01-01

The process of data mining with differential privacy produces results that are affected by two types of noise: sampling noise due to data collection and privacy noise that is designed to prevent the reconstruction of sensitive information. In this paper, we consider the problem of designing confidence intervals for the parameters of a variety of differentially private machine learning models. The algorithms can provide confidence intervals that satisfy differential privacy (as well as the mor...

14. Confidence intervals for correlations when data are not normal.

Science.gov (United States)

Bishara, Anthony J; Hittner, James B

2017-02-01

With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, the rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval; for example, a nominal 95% confidence interval could have actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
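The Fisher z' interval that this abstract evaluates is straightforward to compute: transform r with the inverse hyperbolic tangent, build a normal interval on the z scale with standard error 1/sqrt(n − 3), then back-transform. The r and n below are illustrative:

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """95% Fisher z' confidence interval for a sample correlation r with sample size n."""
    z = math.atanh(r)                 # Fisher z' transform of r
    se = 1.0 / math.sqrt(n - 3)       # approximate standard error on the z scale
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = fisher_ci(r=0.45, n=50)
print(f"95% Fisher z' CI for r=0.45, n=50: ({lo:.3f}, {hi:.3f})")
```

The back-transformed interval is asymmetric about r, which is expected since the correlation is bounded at ±1.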

15. Estimation and interpretation of keff confidence intervals in MCNP

International Nuclear Information System (INIS)

Urbatsch, T.J.

1995-11-01

MCNP's criticality methodology and some basic statistics are reviewed. Confidence intervals are discussed, as well as how to build them and their importance in the presentation of a Monte Carlo result. The combination of MCNP's three keff estimators is shown, theoretically and empirically, by statistical studies and examples, to be the best keff estimator. The method of combining estimators is based on a solid theoretical foundation, namely, the Gauss-Markov theorem in regard to the least squares method. The confidence intervals of the combined estimator are also shown to have correct coverage rates for the examples considered.
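The Gauss-Markov (generalized least squares) combination underlying the three-combined estimator can be sketched as follows. Given estimates x with known covariance Σ, the minimum-variance unbiased linear combination is (1ᵀΣ⁻¹x)/(1ᵀΣ⁻¹1) with variance 1/(1ᵀΣ⁻¹1). The estimates and covariance matrix below are invented for illustration and are not MCNP output:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]; b = b[:]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]; b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in range(i + 1, 3))) / A[i][i]
    return x

estimates = [0.9981, 1.0007, 0.9995]   # collision, absorption, track length (illustrative)
cov = [[4e-6, 2e-6, 1e-6],             # assumed covariance matrix of the three estimators
       [2e-6, 5e-6, 2e-6],
       [1e-6, 2e-6, 3e-6]]

w = solve3(cov, [1.0, 1.0, 1.0])       # unnormalized GLS weights: Sigma^-1 . 1
total = sum(w)
combined = sum(wi * xi for wi, xi in zip(w, estimates)) / total
var = 1.0 / total                      # variance of the combined estimator
print(f"combined keff = {combined:.5f} +/- {1.96 * var ** 0.5:.5f}")
```

By the Gauss-Markov theorem the combined variance is never larger than that of the best individual estimator, which is the theoretical basis cited in the abstract.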

16. Robust Confidence Interval for a Ratio of Standard Deviations

Science.gov (United States)

Bonett, Douglas G.

2006-01-01

Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

17. Comparing confidence intervals for Goodman and Kruskal's gamma coefficient

NARCIS (Netherlands)

van der Ark, L.A.; van Aert, R.C.M.

2015-01-01

This study was motivated by the question which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman-Kruskal CI, the Cliff-consistent CI, the

18. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

Science.gov (United States)

Wagler, Amy E.

2014-01-01

Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

19. Parametric change point estimation, testing and confidence interval ...

African Journals Online (AJOL)

In many applications, such as finance, industry and medicine, it is important to consider that model parameters may undergo changes at an unknown moment in time. This paper deals with estimation, testing and confidence interval construction for a change point of a univariate variable which is assumed to be normally distributed. To detect ...

20. On Bayesian treatment of systematic uncertainties in confidence interval calculation

CERN Document Server

Tegenfeldt, Fredrik

2005-01-01

In high energy physics, a widely used method to treat systematic uncertainties in confidence interval calculations is based on combining a frequentist construction of confidence belts with a Bayesian treatment of systematic uncertainties. In this note we present a study of the coverage of this method for the standard Likelihood Ratio (aka Feldman & Cousins) construction for a Poisson process with known background and Gaussian or log-Normal distributed uncertainties in the background or signal efficiency. For uncertainties in the signal efficiency of up to 40% we find over-coverage on the level of 2 to 4%, depending on the size of the uncertainties and the region in signal space. Uncertainties in the background generally have a smaller effect on the coverage. A considerable smoothing of the coverage curves is observed. A software package is presented which allows fast calculation of the confidence intervals for a variety of assumptions on shape and size of systematic uncertainties for different nuisance paramete...

1. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

Science.gov (United States)

Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

2017-01-01

Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allows temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95% confidence interval of (4331, 12 267) in 2010, while a load of 5543 Mg year-1 had an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models may frequently underestimate sediment yields and their environmental impact.

2. Confidence Intervals from Realizations of Simulated Nuclear Data

Energy Technology Data Exchange (ETDEWEB)

Younes, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ratkiewicz, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ressler, J. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

2017-09-28

Various statistical techniques are discussed that can be used to assign a level of confidence in the prediction of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the keff value, based on the 235U(n, f) and 239Pu (n, f) cross sections.

3. Profile-likelihood Confidence Intervals in Item Response Theory Models.

Science.gov (United States)

Chalmers, R Philip; Pek, Jolynn; Liu, Yang

2017-01-01

Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
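The profile-likelihood construction can be illustrated on a one-parameter model far simpler than IRT: a Poisson rate with an observed count, keeping every rate whose likelihood-ratio statistic stays below the chi-square(1) 95% cutoff of 3.841. The count below is illustrative:

```python
import math

# Observed Poisson count (illustrative), exposure 1; the MLE of the rate is k itself.
k = 12
mle = float(k)

def loglik(lam):
    # Poisson log-likelihood up to an additive constant: k*log(lam) - lam
    return k * math.log(lam) - lam

# Keep every rate whose likelihood-ratio statistic 2*(l(mle) - l(lam)) <= 3.841.
cutoff = loglik(mle) - 3.841 / 2.0
inside = [lam for lam in (i / 100.0 for i in range(1, 4001))
          if loglik(lam) >= cutoff]
print(f"95% profile-likelihood CI for lambda: ({inside[0]:.2f}, {inside[-1]:.2f})")
```

Unlike a Wald interval (mle ± 1.96·sqrt(mle)), the resulting interval is asymmetric about the MLE, which mirrors the distinction the article draws between PL CIs and large-sample Wald-type CIs.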

4. Effect size, confidence intervals and statistical power in psychological research.

Directory of Open Access Journals (Sweden)

Téllez A.

2015-07-01

Full Text Available Quantitative psychological research is focused on detecting the occurrence of certain population phenomena by analyzing data from a sample, and statistics is a particularly helpful mathematical tool that is used by researchers to evaluate hypotheses and make decisions to accept or reject such hypotheses. In this paper, the various statistical tools in psychological research are reviewed. The limitations of null hypothesis significance testing (NHST) and the advantages of using effect size and its respective confidence intervals are explained, as the latter two measurements can provide important information about the results of a study. These measurements also can facilitate data interpretation and easily detect trivial effects, enabling researchers to make decisions in a more clinically relevant fashion. Moreover, it is recommended to establish an appropriate sample size by calculating the optimum statistical power at the moment that the research is designed. Psychological journal editors are encouraged to follow APA recommendations strictly and ask authors of original research studies to report the effect size, its confidence intervals, statistical power and, when required, any measure of clinical significance. Additionally, attention must be paid to the teaching of statistics at the graduate level. At that level, students do not receive sufficient information concerning the importance of using different types of effect sizes and their confidence intervals according to the different types of research designs; instead, most of the information is focused on the various tools of NHST.

5. On a linear method in bootstrap confidence intervals

Directory of Open Access Journals (Sweden)

Andrea Pallini

2007-10-01

Full Text Available A linear method for the construction of asymptotic bootstrap confidence intervals is proposed. We approximate asymptotically pivotal and non-pivotal quantities, which are smooth functions of means of n independent and identically distributed random variables, by using a sum of n independent smooth functions of the same analytical form. Errors are of order Op(n^(-3/2)) and Op(n^(-2)), respectively. The linear method allows a straightforward approximation of bootstrap cumulants, by considering the set of n independent smooth functions as an original random sample to be resampled with replacement.

6. Comparison of Bootstrap Confidence Intervals Using Monte Carlo Simulations

Directory of Open Access Journals (Sweden)

Roberto S. Flowers-Cano

2018-02-01

Full Text Available Design of hydraulic works requires the estimation of design hydrological events by statistical inference from a probability distribution. Using Monte Carlo simulations, we compared coverage of confidence intervals constructed with four bootstrap techniques: percentile bootstrap (BP), bias-corrected bootstrap (BC), accelerated bias-corrected bootstrap (BCA) and a modified version of the standard bootstrap (MSB). Different simulation scenarios were analyzed. In some cases, the mother distribution function was fit to the random samples that were generated. In other cases, a distribution function different to the mother distribution was fit to the samples. When the fitted distribution had three parameters, and was the same as the mother distribution, the intervals constructed with the four techniques had acceptable coverage. However, the bootstrap techniques failed in several of the cases in which the fitted distribution had two parameters.
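The simplest of the four techniques compared here, the percentile bootstrap (BP), can be sketched in a few lines: resample the data with replacement, recompute the statistic, and take empirical quantiles of the resampled statistics. The data below are synthetic, for illustration only:

```python
import random
import statistics

# Synthetic sample from a normal distribution (illustrative).
random.seed(1)
sample = [random.gauss(100, 15) for _ in range(60)]

B = 2000  # number of bootstrap resamples
boot_means = sorted(
    statistics.mean(random.choices(sample, k=len(sample)))  # resample with replacement
    for _ in range(B)
)
lower = boot_means[int(0.025 * B)]   # 2.5th percentile of the bootstrap distribution
upper = boot_means[int(0.975 * B)]   # 97.5th percentile
print(f"95% percentile-bootstrap CI for the mean: ({lower:.1f}, {upper:.1f})")
```

The BC and BCA variants adjust these percentile cutoffs for bias and skewness of the bootstrap distribution, which is why they can behave differently in the scenarios the paper compares.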

7. Confidence interval procedures for Monte Carlo transport simulations

International Nuclear Information System (INIS)

Pederson, S.P.

1997-01-01

The problem of obtaining valid confidence intervals based on estimates from sampled distributions using Monte Carlo particle transport simulation codes such as MCNP is examined. Such intervals can cover the true parameter of interest at a lower than nominal rate if the sampled distribution is extremely right-skewed by large tallies. Modifications to the standard theory of confidence intervals are discussed and compared with some existing heuristics, including batched means normality tests. Two new types of diagnostics are introduced to assess whether the conditions of central limit theorem-type results are satisfied: the relative variance of the variance determines whether the sample size is sufficiently large, and estimators of the slope of the right tail of the distribution are used to indicate the number of moments that exist. A simulation study is conducted to quantify the relationship between various diagnostics and coverage rates and to find sample-based quantities useful in indicating when intervals are expected to be valid. Simulated tally distributions are chosen to emulate behavior seen in difficult particle transport problems. Measures of variation in the sample variance s 2 are found to be much more effective than existing methods in predicting when coverage will be near nominal rates. Batched means tests are found to be overly conservative in this regard. A simple but pathological MCNP problem is presented as an example of false convergence using existing heuristics. The new methods readily detect the false convergence and show that the results of the problem, which are a factor of 4 too small, should not be used. Recommendations are made for applying these techniques in practice, using the statistical output currently produced by MCNP

8. Estimation and interpretation of keff confidence intervals in MCNP

International Nuclear Information System (INIS)

Urbatsch, T.J.

1995-01-01

MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the individual estimator with the smallest variance. The importance of MCNP's batch statistics is demonstrated by an investigation of the effects of individual estimator variance bias on the combination of estimators, both heuristically with the analytical study and empirically with MCNP.

9. The 95% confidence intervals of error rates and discriminant coefficients

Directory of Open Access Journals (Sweden)

Shuichi Shinmura

2015-02-01

Full Text Available Fisher proposed a linear discriminant function (Fisher's LDF). From 1971, we analysed electrocardiogram (ECG) data in order to develop the diagnostic logic between normal and abnormal symptoms using Fisher's LDF and a quadratic discriminant function (QDF). Our four years of research was inferior to the decision-tree logic developed by the medical doctor. After this experience, we discriminated many datasets and found four problems with discriminant analysis. A Revised Optimal LDF by Integer Programming (Revised IP-OLDF) based on the minimum number of misclassifications (minimum NM) criterion resolves three of these problems entirely [13, 18]. In this research, we discuss the fourth problem of discriminant analysis: there are no standard errors (SEs) of the error rate and discriminant coefficient. We propose a k-fold cross-validation method. This method offers a model selection technique and 95% confidence intervals (CIs) of error rates and discriminant coefficients.

10. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

Directory of Open Access Journals (Sweden)

Christopher Ouma Onyango

2010-09-01

Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend to two stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model, in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model assisted local polynomial regression model.

11. Estimation and interpretation of keff confidence intervals in MCNP

International Nuclear Information System (INIS)

Urbatsch, T.J.

1995-01-01

The Monte Carlo code MCNP has three different, but correlated, estimators for calculating keff in nuclear criticality calculations: collision, absorption, and track length estimators. The combination of these three estimators, the three-combined keff estimator, is shown to be the best keff estimator available in MCNP for estimating keff confidence intervals. Theoretically, the Gauss-Markov theorem provides a solid foundation for MCNP's three-combined estimator. Analytically, a statistical study, where the estimates are drawn using a known covariance matrix, shows that the three-combined estimator is superior to the estimator with the smallest variance. Empirically, MCNP examples for several physical systems demonstrate the three-combined estimator's superiority over each of the three individual estimators and its correct coverage rates. Additionally, the importance of MCNP's statistical checks is demonstrated.

12. Secure and Usable Bio-Passwords based on Confidence Interval

Directory of Open Access Journals (Sweden)

Aeyoung Kim

2017-02-01

Full Text Available The most popular user-authentication method is the password. Many authentication systems try to enhance their security by enforcing a strong password policy, and by using the password as the first factor, something you know, with the second factor being something you have. However, a strong password policy and a multi-factor authentication system can make it harder for a user to remember the password and log in. In this paper, a bio-password-based scheme is proposed as a unique authentication method, which uses biometrics and confidence interval sets to enhance the security of the log-in process and make it easier as well. The method offers a user-friendly solution for creating and registering strong passwords without the user having to memorize them. Here we also show the results of our experiments, which demonstrate the efficiency of this method and how it can be used to protect against a variety of malicious attacks.

13. Confidence Intervals for Asbestos Fiber Counts: Approximate Negative Binomial Distribution.

Science.gov (United States)

Bartley, David; Slaven, James; Harper, Martin

2017-03-01

The negative binomial distribution is adopted for analyzing asbestos fiber counts so as to account for both the sampling errors in capturing only a finite number of fibers and the inevitable human variation in identifying and counting sampled fibers. A simple approximation to this distribution is developed for the derivation of quantiles and approximate confidence limits. The success of the approximation depends critically on the use of Stirling's expansion to sufficient order, on exact normalization of the approximating distribution, on reasonable perturbation of quantities from the normal distribution, and on accurately approximating sums by inverse-trapezoidal integration. Accuracy of the approximation developed is checked through simulation and also by comparison to traditional approximate confidence intervals in the specific case that the negative binomial distribution approaches the Poisson distribution. The resulting statistics are shown to relate directly to early research into the accuracy of asbestos sampling and analysis. Uncertainty in estimating mean asbestos fiber concentrations given only a single count is derived. Decision limits (limits of detection) and detection limits are considered for controlling false-positive and false-negative detection assertions and are compared to traditional limits computed assuming normal distributions. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
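As a rough illustration of the mean-variance structure involved (not the paper's Stirling-expansion quantiles), a normal-approximation CI can be built from the negative binomial relationship Var ≈ μ + (cv·μ)², where the cv term stands in for inter-analyst counting variation; the cv value below is an assumed figure.

```python
import math

def approx_count_ci(count, cv_analyst=0.15, z=1.96):
    """Crude normal-approximation CI for the true mean fiber count.

    Variance model: Poisson counting error (variance = mean) plus
    inter-analyst variation with coefficient of variation cv_analyst,
    giving Var ~ mu + (cv_analyst * mu)**2, the negative binomial
    mean-variance relationship. Only a rough stand-in for the paper's
    expansion-based quantiles; cv_analyst = 0.15 is assumed.
    """
    var = count + (cv_analyst * count) ** 2
    half = z * math.sqrt(var)
    return max(0.0, count - half), count + half

lo, hi = approx_count_ci(50)
print(f"observed 50 fibers -> approx 95% CI ({lo:.1f}, {hi:.1f})")
```

Setting cv_analyst to 0 recovers the plain Poisson normal approximation, the limiting case the paper uses as its benchmark.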

14. Number of core samples: Mean concentrations and confidence intervals

International Nuclear Information System (INIS)

Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

1995-01-01

This document provides estimates of how well the mean concentrations of analytes are known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to 'characterize' the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unit-less measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composite samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications.
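The RHW described above is simple to compute: it is the CI half-width divided by the mean. A sketch with a hypothetical analyte (the normal critical value is used for brevity; a t value would widen the small-n intervals slightly):

```python
import math

def relative_half_width(mean, sd, n, z=1.96):
    """RHW of a 95% CI: the CI half-width as a fraction of the mean."""
    return z * sd / math.sqrt(n) / mean

# Hypothetical analyte: mean 10 ug/g, standard deviation 8 ug/g
for n in (2, 3, 10, 60):
    print(n, round(relative_half_width(10, 8, n), 2))
```

For this hypothetical analyte (sd/mean = 0.8), two or three cores give an RHW near 90-110%, while an RHW of 10% would need roughly 250 cores, echoing the conclusions above.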

15. Confidence intervals for experiments with background and small numbers of events

International Nuclear Information System (INIS)

Bruechle, W.

2003-01-01

Methods to find a confidence interval for Poisson distributed variables are illuminated, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count-rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected with background, and their ratios, is given. (orig.)
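A "central" (equal-tailed) Poisson interval of the kind compared above can be computed exactly by inverting the tail probabilities; a background-free sketch (the paper's background subtraction and ratio treatment are not reproduced here):

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

def central_poisson_ci(n, alpha=0.05):
    """Equal-tailed ('central') exact CI for a Poisson mean from n counts.
    Limits are found by bisection on the exact tail probabilities."""
    def bisect(cond, lo, hi):
        # cond(mu) is True below the solution, False above it
        for _ in range(60):
            mid = (lo + hi) / 2
            if cond(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    lower = 0.0
    if n > 0:
        # largest mu with P(X >= n | mu) <= alpha/2
        lower = bisect(lambda mu: 1 - poisson_cdf(n - 1, mu) <= alpha / 2,
                       0.0, float(n))
    # smallest mu with P(X <= n | mu) <= alpha/2
    upper = bisect(lambda mu: poisson_cdf(n, mu) > alpha / 2,
                   float(n), 10.0 * n + 20.0)
    return lower, upper

lo, hi = central_poisson_ci(3)
print(f"n = 3 counts -> central 95% CI ({lo:.3f}, {hi:.3f})")
```

For n = 3 this reproduces the familiar Garwood limits of roughly (0.62, 8.77), illustrating how asymmetric the interval is at low count-rates.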

16. Confidence intervals for experiments with background and small numbers of events

International Nuclear Information System (INIS)

Bruechle, W.

2002-07-01

Methods to find a confidence interval for Poisson distributed variables are illuminated, especially for the case of poor statistics. The application of 'central' and 'highest probability density' confidence intervals is compared for the case of low count-rates. A method to determine realistic estimates of the confidence intervals for Poisson distributed variables affected with background, and their ratios, is given. (orig.)

17. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

Science.gov (United States)

Capraro, Mary Margaret

This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

18. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

Science.gov (United States)

Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

2016-12-01

Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
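The contrast between the two error sources can be sketched numerically: the standard error (uncertainty in the mean) shrinks as 1/√n, while the standard deviation (spread of individuals) does not, so the prediction interval for a new individual stays wide. The data below are hypothetical calcium concentrations, not the authors' measurements:

```python
import math
import random
import statistics

random.seed(1)
# Hypothetical calcium concentrations (mg/g) for n sugar maple leaves
n = 30
conc = [random.gauss(8.0, 1.5) for _ in range(n)]

mean = statistics.mean(conc)
sd = statistics.stdev(conc)          # variation among individuals
se = sd / math.sqrt(n)               # uncertainty in the mean

# 95% intervals (normal critical value, for illustration):
ci = (mean - 1.96 * se, mean + 1.96 * se)            # confidence interval
pi_sd = math.sqrt(sd**2 + se**2)                     # spread of a new individual
pi = (mean - 1.96 * pi_sd, mean + 1.96 * pi_sd)      # prediction interval

print(f"mean {mean:.2f}: CI width {ci[1]-ci[0]:.2f}, PI width {pi[1]-pi[0]:.2f}")
```

At n = 30 the prediction interval is several times wider than the confidence interval, which is the regime the abstract identifies where uncertainty in individuals dominates plot-level totals less than uncertainty in the mean.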

19. A note on Nonparametric Confidence Interval for a Shift Parameter ...

African Journals Online (AJOL)

The method is illustrated using the Cauchy distribution as a location model. The kernel-based method is found to have a shorter interval for the shift parameter between two Cauchy distributions than the one based on the Mann-Whitney test statistic. Keywords: Best Asymptotic Normal; Cauchy distribution; Kernel estimates; ...

20. Bootstrap confidence intervals for three-way methods

NARCIS (Netherlands)

Kiers, Henk A.L.

Results from exploratory three-way analysis techniques such as CANDECOMP/PARAFAC and Tucker3 analysis are usually presented without giving insight into uncertainties due to sampling. Here a bootstrap procedure is proposed that produces percentile intervals for all output parameters. Special

1. Using an R Shiny to Enhance the Learning Experience of Confidence Intervals

Science.gov (United States)

Williams, Immanuel James; Williams, Kelley Kim

2018-01-01

Many students find understanding confidence intervals difficult, especially because of the amalgamation of concepts such as confidence levels, standard error, point estimates and sample sizes. An R Shiny application was created to assist the learning process of confidence intervals using graphics and data from the US National Basketball…

2. Estimating confidence intervals in predicted responses for oscillatory biological models.

Science.gov (United States)

St John, Peter C; Doyle, Francis J

2013-07-29

The dynamics of gene regulation play a crucial role in cellular control, allowing the cell to express the right proteins to meet changing needs. Some needs, such as correctly anticipating the day-night cycle, require complicated oscillatory features. In the analysis of gene regulatory networks, mathematical models are frequently used to understand how a network's structure enables it to respond appropriately to external inputs. These models typically consist of a set of ordinary differential equations, describing a network of biochemical reactions, and unknown kinetic parameters, chosen such that the model best captures experimental data. However, since a model's parameter values are uncertain, and since dynamic responses to inputs are highly parameter-dependent, it is difficult to assess the confidence associated with these in silico predictions. In particular, models with complex dynamics - such as oscillations - must be fit with computationally expensive global optimization routines, and cannot take advantage of existing measures of identifiability. Despite being difficult to model mathematically, limit cycle oscillations play a key role in many biological processes, including cell cycling, metabolism, neuron firing, and circadian rhythms. In this study, we employ an efficient parameter estimation technique to enable a bootstrap uncertainty analysis for limit cycle models. Since the primary role of systems biology models is the insight they provide on responses to rate perturbations, we extend our uncertainty analysis to include first order sensitivity coefficients. Using a literature model of circadian rhythms, we show how predictive precision is degraded with decreasing sample points and increasing relative error. Additionally, we show how this method can be used for model discrimination by comparing the output identifiability of two candidate model structures to published literature data. Our method permits modellers of oscillatory systems to confidently

3. The P Value Problem in Otolaryngology: Shifting to Effect Sizes and Confidence Intervals.

Science.gov (United States)

Vila, Peter M; Townsend, Melanie Elizabeth; Bhatt, Neel K; Kao, W Katherine; Sinha, Parul; Neely, J Gail

2017-06-01

4. Graphing within-subjects confidence intervals using SPSS and S-Plus.

Science.gov (United States)

Wright, Daniel B

2007-02-01

Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.

5. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

Science.gov (United States)

Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

2015-06-01

Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes of more than 500 to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.

6. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

Science.gov (United States)

Terry, Leann; Kelley, Ken

2012-11-01

Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
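The first planning method reduces, in the simplest case, to inverting the expected-width formula. Using a plain mean rather than a reliability coefficient (a deliberate simplification of the AIPE approach described above):

```python
import math

def n_for_ci_width(sd, target_width, z=1.96):
    """Smallest n giving an expected 95% CI full width <= target_width
    for a sample mean. (The paper plans n for reliability coefficients;
    a mean is used here as the simplest AIPE-style illustration.)"""
    return math.ceil((2 * z * sd / target_width) ** 2)

print(n_for_ci_width(sd=10, target_width=2))
```

For sd = 10 and a target full width of 2 units, about 385 observations are needed; halving the target width roughly quadruples n, which is why narrow intervals are expensive. The paper's second method adds an assurance step so that the realized (not just expected) width is narrow with high probability.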

7. Confidence Intervals for True Scores Using the Skew-Normal Distribution

Science.gov (United States)

Garcia-Perez, Miguel A.

2010-01-01

A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

8. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

Science.gov (United States)

Kim, Kyung Yong; Lee, Won-Chan

2018-01-01

Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…

9. Binomial confidence intervals for testing non-inferiority or superiority: a practitioner's dilemma.

Science.gov (United States)

Pradhan, Vivek; Evans, John C; Banerjee, Tathagata

2016-08-01

In testing for non-inferiority or superiority in a single arm study, the confidence interval of a single binomial proportion is frequently used. A number of such intervals are proposed in the literature and implemented in standard software packages. Unfortunately, use of different intervals leads to conflicting conclusions. Practitioners thus face a serious dilemma in deciding which one to depend on. Is there a way to resolve this dilemma? We address this question by investigating the performances of ten commonly used intervals of a single binomial proportion, in the light of two criteria, viz., coverage and expected length of the interval. © The Author(s) 2013.
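Two of the commonly compared intervals, Wald and Wilson, are easy to place side by side; near the boundary the Wald interval can overshoot 1 while Wilson stays inside, which is one source of the conflicting conclusions mentioned above:

```python
import math

def wald_ci(x, n, z=1.96):
    """Simple (Wald) interval: p_hat +/- z * sqrt(p_hat(1-p_hat)/n)."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(x, n, z=1.96):
    """Wilson score interval: inverts the score test, stays inside [0, 1]."""
    p = x / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(p * (1 - p) / n
                                             + z * z / (4 * n * n))
    return centre - half, centre + half

# 19 successes out of 20: the two intervals disagree noticeably
print("Wald  :", tuple(round(v, 3) for v in wald_ci(19, 20)))
print("Wilson:", tuple(round(v, 3) for v in wilson_ci(19, 20)))
```

With a non-inferiority margin near the Wald lower limit but not the Wilson one, the two intervals would lead to opposite conclusions from the same data, which is exactly the practitioner's dilemma the abstract describes.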

10. Closed-form confidence intervals for functions of the normal mean and standard deviation.

Science.gov (United States)

Donner, Allan; Zou, G Y

2012-08-01

Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
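The recover-variance idea can be sketched for one of the listed targets, the upper Bland-Altman limit of agreement mean + 1.96·sd: form separate CIs for the mean (t-based) and for 1.96·sd (chi-square-based), then combine them by squaring and adding the distances to the point estimates. The summary numbers are hypothetical; the t and chi-square quantiles are standard tabulated values for df = 29.

```python
import math

# Closed-form CI for the upper Bland-Altman limit of agreement,
# theta = mean + 1.96*sd, from separate CIs for the mean and the SD.
n, mean, sd = 30, 0.5, 2.0          # hypothetical difference-data summary
t = 2.045                            # t quantile, df = 29, 95% (tables)
chi_lo, chi_hi = 16.047, 45.722      # chi-square 2.5% / 97.5% quantiles, df = 29

# Separate CIs for the two components
l1, u1 = mean - t * sd / math.sqrt(n), mean + t * sd / math.sqrt(n)
l2 = 1.96 * sd * math.sqrt((n - 1) / chi_hi)
u2 = 1.96 * sd * math.sqrt((n - 1) / chi_lo)

# Square-and-add the component distances around the point estimate
theta = mean + 1.96 * sd
L = theta - math.sqrt((mean - l1) ** 2 + (1.96 * sd - l2) ** 2)
U = theta + math.sqrt((u1 - mean) ** 2 + (u2 - 1.96 * sd) ** 2)
print(f"upper limit of agreement {theta:.2f}, 95% CI ({L:.2f}, {U:.2f})")
```

The resulting interval is asymmetric about the point estimate, reflecting the skewness of the SD component, and every step is closed-form, which is the selling point of the approach.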

11. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

Directory of Open Access Journals (Sweden)

Tudor DRUGAN

2003-08-01

Full Text Available The aim of the paper was to present the usefulness of the binomial distribution in studying contingency tables and the problems of approximating the binomial distribution by the normal one (the limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in contingency table units based on their mathematical expressions reduces the discussion of the confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information from the computed confidence interval for a specified method (the confidence interval boundaries, the percentage of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level) was solved through the implementation of original algorithms in the PHP programming language. Expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for two-variable expressions was proposed and implemented. The graphical representation of expressions of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation in graphical representation, so the surface maps were quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to represent the triangular surface plots graphically. All the implementations described above were used in computing the confidence intervals and estimating their performance for binomial distribution sample sizes and variables.

12. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

Science.gov (United States)

Shin, H.; Heo, J.; Kim, T.; Jung, Y.

2007-12-01

The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there are few studies of the confidence intervals, which indicate the prediction accuracy of the distribution, for the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show that there are little differences in the estimated quantiles between ML and PWM, while MOM shows distinct differences.

13. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

DEFF Research Database (Denmark)

Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

2005-01-01

The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

14. The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

Directory of Open Access Journals (Sweden)

Yi Wang

2016-12-01

Full Text Available With the levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers' demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained by this price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (Radial basis function) neural network methods, forecast the intervals of the soybean meal and non-GMO (Genetically Modified Organism) soybean continuous futures closing prices and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical estimation error method can obtain higher accuracy and better interval estimation than the non-hierarchical method in a stable system.

15. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

Directory of Open Access Journals (Sweden)

2004-08-01

Full Text Available Assessment of a controlled clinical trial requires interpreting key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat, when the effect of the treatment is a dichotomous variable. Defined as the difference in the event rate between treatment and control groups, the absolute risk reduction is the parameter from which the number needed to treat is computed. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. The comparison of methods uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.

16. Comparing confidence intervals for Goodman and Kruskal’s gamma coefficient

NARCIS (Netherlands)

van der Ark, L.A.; van Aert, R.C.M.

2015-01-01

This study was motivated by the question which type of confidence interval (CI) one should use to summarize sample variance of Goodman and Kruskal's coefficient gamma. In a Monte-Carlo study, we investigated the coverage and computation time of the Goodman–Kruskal CI, the Cliff-consistent CI, the

17. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

Science.gov (United States)

Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

2014-05-01

18. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

Science.gov (United States)

Doebler, Anna; Doebler, Philipp; Holling, Heinz

2013-01-01

The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

19. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

Science.gov (United States)

Grech, Victor

2018-03-01

The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
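The same SE and CI computation transfers directly from a spreadsheet to any language; a minimal sketch with made-up data:

```python
import math
import statistics

data = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4]     # hypothetical sample
mean = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(len(data))  # standard error of the mean
ci = (mean - 1.96 * se, mean + 1.96 * se)           # normal-approximation 95% CI
print(f"mean {mean:.2f}, SE {se:.3f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

For a sample this small a t critical value (here about 2.36 for df = 7) would be more appropriate than 1.96; the normal value is used only to mirror the CI = estimate ± z·SE formula from the head of this section.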

20. Methods for confidence interval estimation of a ratio parameter with application to location quotients

Directory of Open Access Journals (Sweden)

Beyene Joseph

2005-10-01

Full Text Available Abstract Background The location quotient (LQ) ratio, a measure designed to quantify and benchmark the degree of relative concentration of an activity in the analysis of area localization, has received considerable attention in the geographic and economics literature. This index can also naturally be applied in the context of population health to quantify and compare health outcomes across spatial domains. However, one commonly observed limitation of LQ is its widespread use as only a point estimate without an accompanying confidence interval. Methods In this paper we present statistical methods that can be used to construct confidence intervals for location quotients. The delta and Fieller's methods are generic approaches for a ratio parameter and the generalized linear modelling framework is a useful re-parameterization particularly helpful for generating profile-likelihood based confidence intervals for the location quotient. A simulation experiment is carried out to assess the performance of each of the analytic approaches and a health utilization data set is used for illustration. Results Both the simulation results as well as the findings from the empirical data show that the different analytical methods produce very similar confidence limits for location quotients. When incidence of outcome is not rare and sample sizes are large, the confidence limits are almost indistinguishable. The confidence limits from the generalized linear model approach might be preferable in small sample situations. Conclusion LQ is a useful measure which allows quantification and comparison of health and other outcomes across defined geographical regions. It is a very simple index to compute and has a straightforward interpretation. Reporting this estimate with appropriate confidence limits using methods presented in this paper will make the measure particularly attractive for policy and decision makers.
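Fieller's method, one of the generic ratio approaches named above, gives closed-form limits for a ratio of two independent estimates; a sketch with hypothetical LQ inputs (the paper's own data are not reproduced here):

```python
import math

def fieller_ci(m1, se1, m2, se2, z=1.96):
    """Fieller 95% CI for the ratio m1/m2 of two independent estimates.
    Valid when the denominator is clearly bounded away from zero (g < 1);
    covariance between numerator and denominator is assumed to be zero."""
    g = (z * se2 / m2) ** 2
    if g >= 1:
        raise ValueError("denominator too noisy for a bounded Fieller interval")
    r = m1 / m2
    half = (z / m2) * math.sqrt(se1**2 + r**2 * se2**2 - g * se1**2)
    return (r - half) / (1 - g), (r + half) / (1 - g)

# Hypothetical location quotient: local rate 0.30 (SE 0.03)
# against a reference rate 0.25 (SE 0.01)
lo, hi = fieller_ci(0.30, 0.03, 0.25, 0.01)
print(f"LQ = {0.30/0.25:.2f}, Fieller 95% CI ({lo:.2f}, {hi:.2f})")
```

Because the interval here includes values on both sides of 1, this hypothetical area would not be declared significantly over- or under-concentrated, which is exactly the decision the point estimate alone cannot support.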

1. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

KAUST Repository

2012-12-06

In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate the system performance under realistic Nakagami-m fading and additive white Gaussian noise channel conditions. The accuracy of the obtained results is verified by running the simulation with a 95% confidence interval on the estimates. We see that as the number of simulation runs N increases, the simulated error rate approaches the actual one and the confidence interval narrows. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
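A minimal version of such a verification can be sketched for the AWGN part alone (the paper additionally models Nakagami-m fading and imperfect phase recovery, which are omitted here). The 95% CI is the usual normal approximation for a binomial proportion; the SNR and bit count are illustrative:

```python
import math
import random

def simulate_ber(snr_db, n_bits, seed=1):
    """Monte Carlo bit error rate for BPSK over AWGN, with a 95% CI.

    Sketch only: no fading, perfect phase recovery. The CI uses the
    normal approximation for a binomial proportion.
    """
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    sigma = math.sqrt(1 / (2 * snr))   # noise std dev for unit-energy bits
    errors = 0
    for _ in range(n_bits):
        # transmit +1, add Gaussian noise, decide by sign
        if 1 + rng.gauss(0, sigma) < 0:
            errors += 1
    p = errors / n_bits
    half = 1.96 * math.sqrt(p * (1 - p) / n_bits)
    return p, max(p - half, 0.0), p + half

p, lo, hi = simulate_ber(4.0, 200_000)
# analytic BPSK error rate Q(sqrt(2*SNR)) = 0.5*erfc(sqrt(SNR))
theory = 0.5 * math.erfc(math.sqrt(10 ** 0.4))
```

As the abstract notes, increasing `n_bits` shrinks the half-width at rate 1/sqrt(N), so the simulated rate and the analytic value converge together.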

2. Energy Performance Certificate of building and confidence interval in assessment: An Italian case study

International Nuclear Information System (INIS)

Tronchin, Lamberto; Fabbri, Kristian

2012-01-01

The Directive 2002/91/CE introduced the Energy Performance Certificate (EPC), an energy policy tool. The aim of the EPC is to inform building buyers about the energy performance and energy costs of buildings. The EPCs represent a specific energy policy tool to orient the building sector and real-estate markets toward higher energy efficiency buildings. The effectiveness of the EPC depends on two factors: • The accuracy of the energy performance evaluation made by independent experts. • The capability of the energy classification and of the scale of energy performance to control the energy index fluctuations. In this paper, the results of a case study located in Italy are shown. In this example, 162 independent technicians in energy performance evaluation of buildings have studied the same building. The results reveal which part of the confidence intervals is dependent on software misunderstanding, and that the energy classification ranges are able to tolerate the fluctuation of energy indices. The example was chosen in accordance with the legislation of the Emilia-Romagna Region on Energy Efficiency of Buildings. Following these results, some thermo-economic evaluations related to building and energy labelling are illustrated, since the EPC is an energy policy tool for the real-estate market and building sector to find a way to build or retrofit an energy efficient building. - Highlights: ► Evaluation of the accuracy of energy performance of buildings in relation to the knowledge of independent experts. ► Round robin test based on 162 case studies on the confidence intervals expressed by independent experts. ► Statistical considerations on the confidence intervals expressed by independent experts and energy simulation software. ► Relation between “proper class” in energy classification of buildings and confidence intervals of independent experts.

3. Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion

Directory of Open Access Journals (Sweden)

David Shilane

2013-01-01

Full Text Available The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.
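The "direct removal of zeros" flavor of the growth estimator described above can be sketched as follows. This is only an illustration of the mechanics: the paper derives principled choices for the number of zeros removed and also multiplicative-adjustment variants, neither of which is reproduced here, and the sample below is invented:

```python
import math
import statistics

def growth_estimate_ci(data, c=2, z=1.96):
    """Normal-style CI after removing up to c zeros from the sample.

    Sketch of the direct-removal growth estimator: under extreme
    dispersion zeros are overrepresented, so deleting a small,
    predetermined number of them shifts the interval upward toward
    better coverage. The choice c=2 is illustrative, not the paper's rule.
    """
    trimmed = sorted(data)
    removed = 0
    while removed < c and trimmed and trimmed[0] == 0:
        trimmed.pop(0)
        removed += 1
    m = statistics.mean(trimmed)
    se = statistics.stdev(trimmed) / math.sqrt(len(trimmed))
    return m, (m - z * se, m + z * se)

# highly dispersed count sample with many zeros (hypothetical)
sample = [0, 0, 0, 0, 0, 0, 1, 0, 0, 3, 0, 0, 0, 9, 0, 0, 2, 0, 0, 27]
m, (lo, hi) = growth_estimate_ci(sample)
```

Note that the adjusted estimate is deliberately biased upward relative to the sample mean; per the abstract, that bias is what buys the improved coverage, and it vanishes asymptotically.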

4. Bootstrap resampling: a powerful method of assessing confidence intervals for doses from experimental data

International Nuclear Information System (INIS)

Iwi, G.; Millard, R.K.; Palmer, A.M.; Preece, A.W.; Saunders, M.

1999-01-01

Bootstrap resampling provides a versatile and reliable statistical method for estimating the accuracy of quantities which are calculated from experimental data. It is an empirically based method, in which large numbers of simulated datasets are generated by computer from existing measurements, so that approximate confidence intervals of the derived quantities may be obtained by direct numerical evaluation. A simple introduction to the method is given via a detailed example of estimating 95% confidence intervals for cumulated activity in the thyroid following injection of 99mTc-sodium pertechnetate, using activity-time data from 23 subjects. The application of the approach to estimating confidence limits for the self-dose to the kidney following injection of the 99mTc-DTPA organ imaging agent, based on uptake data from 19 subjects, is also illustrated. Results are then given for estimates of doses to the foetus following administration of 99mTc-sodium pertechnetate for clinical reasons during pregnancy, averaged over 25 subjects. The bootstrap method is well suited for applications in radiation dosimetry, including uncertainty, reliability and sensitivity analysis of dose coefficients in biokinetic models, but it can also be applied in a wide range of other biomedical situations. (author)
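The resampling loop the abstract describes reduces to a percentile bootstrap. A minimal sketch, with hypothetical uptake values standing in for the paper's activity-time data:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=5000, alpha=0.05, seed=7):
    """Percentile bootstrap CI for any statistic of a sample.

    Resamples the data with replacement n_boot times, recomputes the
    statistic each time, and reads the CI off the empirical quantiles.
    """
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(n_boot))
    return reps[int((alpha / 2) * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

# hypothetical uptake measurements (arbitrary units), not the paper's data
uptake = [4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 4.2, 5.3, 3.6, 4.8]
lo, hi = bootstrap_ci(uptake)
```

The appeal noted in the abstract is visible here: nothing about the sampling distribution of the statistic is assumed, so the same function works unchanged for a median, a fitted dose coefficient, or any other derived quantity.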

5. A Note on Confidence Interval for the Power of the One Sample Test

OpenAIRE

A. Wong

2010-01-01

In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for...

6. Rescaled Range Analysis and Detrended Fluctuation Analysis: Finite Sample Properties and Confidence Intervals

Czech Academy of Sciences Publication Activity Database

Vol. 4/2010, No. 3 (2010), pp. 236-250 ISSN 1802-4696 R&D Projects: GA ČR GD402/09/H045; GA ČR GA402/09/0965 Grant - others: GA UK(CZ) 118310 Institutional research plan: CEZ:AV0Z10750506 Keywords: rescaled range analysis * detrended fluctuation analysis * Hurst exponent * long-range dependence Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/kristoufek-rescaled range analysis and detrended fluctuation analysis finite sample properties and confidence intervals.pdf

7. A NEW METHOD FOR CONSTRUCTING CONFIDENCE INTERVAL FOR CPM BASED ON FUZZY DATA

Directory of Open Access Journals (Sweden)

2011-06-01

Full Text Available A measurement control system ensures that measuring equipment and measurement processes are fit for their intended use, which is important in achieving product quality objectives. In most real life applications, the observations are fuzzy. In some cases specification limits (SLs) are not precise numbers and they are expressed in fuzzy terms, so that the classical capability indices cannot be applied. In this paper we obtain a 100(1 − α)% fuzzy confidence interval for the Cpm fuzzy process capability index, where instead of precise quality we have two membership functions for the specification limits.

8. Confidence intervals for the first crossing point of two hazard functions.

Science.gov (United States)

Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

2009-12-01

The phenomenon of crossing hazard rates is common in clinical trials with time to event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, there have been relatively few approaches available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazards modelling with a Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial datasets.

9. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

Science.gov (United States)

Tarone, Aaron M; Foran, David R

2008-07-01

Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

10. Confidence interval of intrinsic optimum temperature estimated using thermodynamic SSI model

Institute of Scientific and Technical Information of China (English)

Takaya Ikemoto; Issei Kurahashi; Pei-Jian Shi

2013-01-01

The intrinsic optimum temperature for the development of ectotherms is one of the most important factors not only for their physiological processes but also for ecological and evolutionary processes. The Sharpe-Schoolfield-Ikemoto (SSI) model succeeded in defining the temperature that can thermodynamically meet the condition that at a particular temperature the probability of an active enzyme reaching its maximum activity is realized. Previously, an algorithm was developed by Ikemoto (Tropical malaria does not mean hot environments. Journal of Medical Entomology, 45, 963-969) to estimate the model parameters, but that program was computationally very time consuming. Now, investigators can use the SSI model more easily because a fully automatic computer program was designed by Shi et al. (A modified program for estimating the parameters of the SSI model. Environmental Entomology, 40, 462-469). However, the statistical significance of the point estimate of the intrinsic optimum temperature for each ectotherm has not yet been determined. Here, we provide a new method for calculating the confidence interval of the estimated intrinsic optimum temperature by modifying the approximate bootstrap confidence intervals method. For this purpose, it was necessary to develop a new program for faster estimation of the parameters in the SSI model, which we have also done.

11. Confidence intervals for modeling anthocyanin retention in grape pomace during nonisothermal heating.

Science.gov (United States)

Mishra, D K; Dolan, K D; Yang, L

2008-01-01

Degradation of nutraceuticals in low- and intermediate-moisture foods heated at high temperature (>100 degrees C) is difficult to model because of the nonisothermal condition. Isothermal experiments above 100 degrees C are difficult to design because they require high pressure and small sample size in sealed containers. Therefore, a nonisothermal method was developed to estimate the thermal degradation kinetic parameters of nutraceuticals and determine the confidence intervals for the parameters and the predicted Y (concentration). Grape pomace at 42% moisture content (wb) was heated in sealed 202 x 214 steel cans in a steam retort at 126.7 degrees C for > 30 min. Can center temperature was measured by thermocouple and predicted using Comsol software. Thermal conductivity (k) and specific heat (Cp) were estimated as quadratic functions of temperature using Comsol and nonlinear regression. The k and Cp functions were then used to predict temperature inside the grape pomace during retorting. Similar heating experiments were run at different time-temperature treatments from 8 to 25 min for kinetic parameter estimation. Anthocyanin concentration in the grape pomace was measured using HPLC. The degradation rate constant (k(110 degrees C)) and activation energy (Ea) were estimated using nonlinear regression. The thermophysical property estimates at 100 degrees C were k = 0.501 W/m degrees C, Cp = 3600 J/kg, and the kinetic parameters were k(110 degrees C) = 0.0607/min and Ea = 65.32 kJ/mol. The 95% confidence intervals for the parameters and the confidence bands and prediction bands for anthocyanin retention were plotted. These methods are useful for thermal processing design for nutraceutical products.

12. Statistical variability and confidence intervals for planar dose QA pass rates

Energy Technology Data Exchange (ETDEWEB)

Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher; Kumaraswamy, Lalith; Podgorsak, Matthew B. [Department of Physics, State University of New York at Buffalo, Buffalo, New York 14260 (United States) and Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Department of Biostatistics, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Department of Molecular and Cellular Biophysics and Biochemistry, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States) and Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, New York 14214 (United States)

2011-11-15

Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm² uniform grid, a 2 detector/cm² uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization

13. The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals.

Science.gov (United States)

Brand, Andrew; Bradley, Michael T

2016-02-01

Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving the precision of effect size estimation.
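The widths surveyed here follow directly from the large-sample standard error of d. A quick sketch (with illustrative numbers, not data from the surveys) shows why two groups of 20 already produce a CI wider than 1:

```python
import math

def cohens_d_ci(d, n1, n2, z=1.96):
    """Approximate normal-theory CI for Cohen's d.

    Uses the common large-sample standard error formula for the
    standardized mean difference of two independent groups.
    """
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# a "medium" effect from two groups of 20 (hypothetical study)
lo, hi = cohens_d_ci(0.5, 20, 20)
width = hi - lo
```

With these numbers the width exceeds 1, matching the median reported in the surveys; halving the width requires roughly quadrupling the per-group sample size.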

14. A Note on Confidence Interval for the Power of the One Sample Test

Directory of Open Access Journals (Sweden)

A. Wong

2010-01-01

Full Text Available In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for the power of the Student's t-test that detects the difference (μ − μ0). The calculations require only the density and the cumulative distribution functions of the standard normal distribution. In addition, the methodology presented can also be applied to determine the required sample size when the effect size and the power of a size α test of the mean are given.

15. Assessing a disaggregated energy input: using confidence intervals around translog elasticity estimates

International Nuclear Information System (INIS)

Hisnanick, J.J.; Kyer, B.L.

1995-01-01

The role of energy in the production of manufacturing output has been debated extensively in the literature, particularly its relationship with capital and labor. In an attempt to provide some clarification in this debate, a two-step methodology was used. First under the assumption of a five-factor production function specification, we distinguished between electric and non-electric energy and assessed each component's relationship with capital and labor. Second, we calculated both the Allen and price elasticities and constructed 95% confidence intervals around these values. Our approach led to the following conclusions: that the disaggregation of the energy input into electric and non-electric energy is justified; that capital and electric energy and capital and non-electric energy are substitutes, while labor and electric energy and labor and non-electric energy are complements in production; and that capital and energy are substitutes, while labor and energy are complements. (author)

16. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

Science.gov (United States)

Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

2015-01-01

The problem for establishing noninferiority is discussed between a new treatment and a standard (control) treatment with ordinal categorical data. A measure of treatment effect is used and a method of specifying noninferiority margin for the measure is provided. Two Z-type test statistics are proposed where the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of the deviation from nominal level and the power.

17. An SPSS Macro to Compute Confidence Intervals for Pearson's Correlation

Directory of Open Access Journals (Sweden)

Bruce Weaver

2014-04-01

Full Text Available In many disciplines, including psychology, medical research, epidemiology and public health, authors are required, or at least encouraged, to report confidence intervals (CIs) along with effect size estimates. Many students and researchers in these areas use IBM SPSS for statistical analysis. Unfortunately, the CORRELATIONS procedure in SPSS does not provide CIs in the output. Various work-around solutions have been suggested for obtaining CIs for rho with SPSS, but most of them have been sub-optimal. Since release 18, it has been possible to compute bootstrap CIs, but only if users have the optional bootstrap module. The !rhoCI macro described in this article is accessible to all SPSS users with release 14 or later. It directs output from the CORRELATIONS procedure to another dataset, restructures that dataset to have one row per correlation, computes a CI for each correlation, and displays the results in a single table. Because the macro uses the CORRELATIONS procedure, it allows users to specify a list of two or more variables to include in the correlation matrix, to choose a confidence level, and to select either listwise or pairwise deletion. Thus, it offers substantial improvements over previous solutions to the problem of how to compute CIs for rho with SPSS.
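The standard analytic computation behind such a macro is the Fisher z-transform interval for a correlation. A Python sketch of the same calculation (the macro itself is SPSS syntax and is not reproduced here; r and n below are illustrative):

```python
import math

def pearson_r_ci(r, n, z=1.96):
    """CI for Pearson's r via the Fisher z transformation.

    Transform r to z-space where the sampling distribution is
    approximately normal with SE 1/sqrt(n-3), build the interval
    there, and back-transform the endpoints with tanh.
    """
    zr = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

# e.g., r = 0.40 from a hypothetical sample of 50 cases
lo, hi = pearson_r_ci(0.40, 50)
```

Because tanh is nonlinear, the resulting interval is asymmetric around r, which is the correct behavior near the boundaries ±1.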

18. Computing confidence and prediction intervals of industrial equipment degradation by bootstrapped support vector regression

International Nuclear Information System (INIS)

Lins, Isis Didier; Droguett, Enrique López; Moura, Márcio das Chagas; Zio, Enrico; Jacinto, Carlos Magno

2015-01-01

Data-driven learning methods for predicting the evolution of the degradation processes affecting equipment are becoming increasingly attractive in reliability and prognostics applications. Among these, we consider here Support Vector Regression (SVR), which has provided promising results in various applications. Nevertheless, the predictions provided by SVR are point estimates whereas in order to take better informed decisions, an uncertainty assessment should be also carried out. For this, we apply bootstrap to SVR so as to obtain confidence and prediction intervals, without having to make any assumption about probability distributions and with good performance even when only a small data set is available. The bootstrapped SVR is first verified on Monte Carlo experiments and then is applied to a real case study concerning the prediction of degradation of a component from the offshore oil industry. The results obtained indicate that the bootstrapped SVR is a promising tool for providing reliable point and interval estimates, which can inform maintenance-related decisions on degrading components. - Highlights: • Bootstrap (pairs/residuals) and SVR are used as an uncertainty analysis framework. • Numerical experiments are performed to assess accuracy and coverage properties. • More bootstrap replications does not significantly improve performance. • Degradation of equipment of offshore oil wells is estimated by bootstrapped SVR. • Estimates about the scale growth rate can support maintenance-related decisions

19. Uncertainty in population growth rates: determining confidence intervals from point estimates of parameters.

Directory of Open Access Journals (Sweden)

Eleanor S Devenish Nelson

Full Text Available BACKGROUND: Demographic models are widely used in conservation and management, and their parameterisation often relies on data collected for other purposes. When underlying data lack clear indications of associated uncertainty, modellers often fail to account for that uncertainty in model outputs, such as estimates of population growth. METHODOLOGY/PRINCIPAL FINDINGS: We applied a likelihood approach to infer uncertainty retrospectively from point estimates of vital rates. Combining this with resampling techniques and projection modelling, we show that confidence intervals for population growth estimates are easy to derive. We used similar techniques to examine the effects of sample size on uncertainty. Our approach is illustrated using data on the red fox, Vulpes vulpes, a predator of ecological and cultural importance, and the most widespread extant terrestrial mammal. We show that uncertainty surrounding estimated population growth rates can be high, even for relatively well-studied populations. Halving that uncertainty typically requires a quadrupling of sampling effort. CONCLUSIONS/SIGNIFICANCE: Our results compel caution when comparing demographic trends between populations without accounting for uncertainty. Our methods will be widely applicable to demographic studies of many species.
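The resample-and-project idea can be sketched with a simple two-stage matrix model: uncertainty in each vital rate is reconstructed from an assumed sampling distribution (beta for survival, normal for fecundity) and propagated to a percentile interval for the population growth rate. All rates, sample sizes, and distributional choices below are illustrative assumptions, not the red fox data or the paper's likelihood machinery:

```python
import math
import random

def dominant_eigenvalue(m, iters=200):
    """Dominant eigenvalue of a 2x2 nonnegative matrix by power iteration."""
    v = [1.0, 1.0]
    lam = 1.0
    for _ in range(iters):
        w = [m[0][0] * v[0] + m[0][1] * v[1],
             m[1][0] * v[0] + m[1][1] * v[1]]
        lam = max(abs(w[0]), abs(w[1]))
        v = [w[0] / lam, w[1] / lam]
    return lam

def lambda_ci(s_juv, s_ad, fec, n_obs, n_sim=2000, seed=3):
    """Percentile interval for growth rate of a hypothetical 2-stage model.

    Survival estimates are resampled from beta distributions implied by
    n_obs binomial trials; fecundity from a normal approximation.
    """
    rng = random.Random(seed)
    lams = []
    for _ in range(n_sim):
        sj = rng.betavariate(s_juv * n_obs + 1, (1 - s_juv) * n_obs + 1)
        sa = rng.betavariate(s_ad * n_obs + 1, (1 - s_ad) * n_obs + 1)
        f = max(rng.gauss(fec, fec / math.sqrt(n_obs)), 0.0)
        # stage-structured projection matrix: [juvenile, adult]
        lams.append(dominant_eigenvalue([[0.0, f], [sj, sa]]))
    lams.sort()
    return lams[int(0.025 * n_sim)], lams[int(0.975 * n_sim) - 1]

lo, hi = lambda_ci(s_juv=0.4, s_ad=0.7, fec=1.2, n_obs=30)
```

Rerunning with `n_obs` quadrupled roughly halves the interval width, mirroring the paper's finding that halving uncertainty typically requires a quadrupling of sampling effort.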

20. Bayesian-statistical decision threshold, detection limit, and confidence interval in nuclear radiation measurement

International Nuclear Information System (INIS)

Weise, K.

1998-01-01

When a contribution of a particular nuclear radiation is to be detected, for instance, a spectral line of interest for some purpose of radiation protection, and quantities and their uncertainties, such as influence quantities, which cannot be determined by repeated measurements or by counting nuclear radiation events, must be taken into account, then conventional statistics of event frequencies is not sufficient for defining the decision threshold, the detection limit, and the limits of a confidence interval. These characteristic limits are therefore redefined on the basis of Bayesian statistics for a wider applicability and in such a way that the usual practice remains as far as possible unaffected. The principle of maximum entropy is applied to establish probability distributions from available information. Quantiles of these distributions are used for defining the characteristic limits. But such a distribution must not be interpreted as a distribution of event frequencies such as the Poisson distribution. It rather expresses the actual state of incomplete knowledge of a physical quantity. The different definitions and interpretations and their quantitative consequences are presented and discussed with two examples. The new approach provides a theoretical basis for the DIN 25482-10 standard presently in preparation for general applications of the characteristic limits. (orig.) [de

1. Confidence interval estimation of the difference between two sensitivities to the early disease stage.

Science.gov (United States)

Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili

2014-03-01

Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities on early disease detection between two markers, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

2. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

Directory of Open Access Journals (Sweden)

Melissa Coulson

2010-07-01

Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs, but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.

3. PCA-based bootstrap confidence interval tests for gene-disease association involving multiple SNPs

Directory of Open Access Journals (Sweden)

Xue Fuzhong

2010-01-01

Full Text Available Abstract Background Genetic association study is currently the primary vehicle for identification and characterization of disease-predisposing variant(s), which usually involves multiple single-nucleotide polymorphisms (SNPs). However, SNP-wise association tests raise concerns over multiple testing. Haplotype-based methods have the advantage of being able to account for correlations between neighbouring SNPs, yet assuming Hardy-Weinberg equilibrium (HWE) and the potentially large number of degrees of freedom can harm their statistical power and robustness. Approaches based on principal component analysis (PCA) are preferable in this regard, but their performance varies with the method of extracting principal components (PCs). Results The PCA-based bootstrap confidence interval test (PCA-BCIT), which directly uses the PC scores to assess gene-disease association, was developed and evaluated for three ways of extracting PCs, i.e., cases only (CAES), controls only (COES), and cases and controls combined (CES). Extraction of PCs with COES is preferred to that with CAES and CES. Performance of the test was examined via simulations as well as analyses on data of rheumatoid arthritis and heroin addiction; it maintained the nominal level under the null hypothesis and showed comparable performance with a permutation test. Conclusions PCA-BCIT is a valid and powerful method for assessing gene-disease association involving multiple SNPs.

4. R package to estimate intracluster correlation coefficient with confidence interval for binary data.

Science.gov (United States)

Chakraborty, Hrishikesh; Hossain, Akhtar

2018-03-01

The Intracluster Correlation Coefficient (ICC) is a major parameter of interest in cluster randomized trials that measures the degree to which responses within the same cluster are correlated. Several types of ICC estimators and their confidence intervals (CIs) have been suggested in the literature for binary data. Studies have compared the relative weaknesses and advantages of ICC estimators and their CIs for binary data and identified situations where one is advantageous in practical research. The commonly used statistical computing systems currently facilitate estimation of only a very few variants of the ICC and its CI. To address the limitations of current statistical packages, we developed an R package, ICCbin, to facilitate estimating the ICC and its CI for binary responses using different methods. The ICCbin package is designed to provide estimates of the ICC in 16 different ways, including analysis of variance methods, moments based estimation, direct probabilistic methods, correlation based estimation, and a resampling method. The CI of the ICC is estimated using 5 different methods. The package also generates clustered binary data using an exchangeable correlation structure. ICCbin provides two functions for users: rcbin() generates clustered binary data and iccbin() estimates the ICC and its CI. Users can choose the appropriate ICC and CI estimates from the wide selection in the output. The R package ICCbin offers flexible and easy-to-use ways to generate clustered binary data and to estimate the ICC and its CI for binary responses using different methods. The package is freely available for use with R from the CRAN repository (https://cran.r-project.org/package=ICCbin). We believe that this package can be a very useful tool for researchers designing cluster randomized trials with binary outcomes. Copyright © 2017 Elsevier B.V. All rights reserved.
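For reference, one of the classical analysis of variance estimators that packages of this kind implement can be sketched as follows. This is a generic ANOVA-type ICC for clustered 0/1 data, not the ICCbin source code, and the function name is ours.

```python
from statistics import mean

def icc_anova(clusters):
    """ANOVA-type ICC estimate for clustered binary data.

    clusters: list of lists of 0/1 responses, one inner list per cluster.
    ICC = (MSB - MSW) / (MSB + (n0 - 1) * MSW), where n0 is the
    adjusted average cluster size; the estimate can be slightly negative.
    """
    k = len(clusters)
    sizes = [len(c) for c in clusters]
    N = sum(sizes)
    grand = sum(sum(c) for c in clusters) / N
    ssb = sum(n * (mean(c) - grand) ** 2 for n, c in zip(sizes, clusters))
    ssw = sum(sum((x - mean(c)) ** 2 for x in c) for c in clusters)
    msb = ssb / (k - 1)                       # between-cluster mean square
    msw = ssw / (N - k)                       # within-cluster mean square
    n0 = (N - sum(n * n for n in sizes) / N) / (k - 1)
    return (msb - msw) / (msb + (n0 - 1) * msw)
```

With perfectly homogeneous clusters the within-cluster mean square vanishes and the estimate is 1; with cluster means all equal it is at its (negative) floor.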

5. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

Science.gov (United States)

Bonett, Douglas G.; Price, Robert M.

2012-01-01

Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
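A hedged sketch of the idea follows. The exact Bonett-Price adjustment is not given in the excerpt above, so the "+0.5 per cell" adjustment below, in the spirit of Agresti and Min, is our assumption rather than the authors' formula.

```python
import math
from statistics import NormalDist

def adj_wald_paired_diff(n11, n12, n21, n22, conf=0.95):
    """Adjusted Wald CI for p1 - p2 with paired binary data.

    n12 and n21 are the discordant cell counts (yes/no and no/yes).
    Cell adjustment of +0.5 is an assumed, generic choice.
    """
    a12, a21 = n12 + 0.5, n21 + 0.5
    n = n11 + n12 + n21 + n22 + 2.0
    p12, p21 = a12 / n, a21 / n
    d = p12 - p21                              # estimate of p1 - p2
    se = math.sqrt((p12 + p21 - d * d) / n)    # Wald SE for a paired difference
    z = NormalDist().inv_cdf((1 + conf) / 2)
    return max(-1.0, d - z * se), min(1.0, d + z * se)
```

Only the discordant cells move the point estimate; the concordant cells still enter through the effective sample size n.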

6. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

Science.gov (United States)

Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

2017-10-01

The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
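The Fisher r → Z transform mentioned above can be sketched in a few lines of Python (a standard construction; the function name is ours):

```python
import math
from statistics import NormalDist

def fisher_r_ci(r, n, conf=0.95):
    """Confidence interval for a correlation via the Fisher r -> Z transform."""
    z = math.atanh(r)                          # Z = 0.5 * ln((1 + r) / (1 - r))
    se = 1.0 / math.sqrt(n - 3)                # approximate standard error of Z
    crit = NormalDist().inv_cdf((1 + conf) / 2)
    # build the interval on the Z scale, then transform back to the r scale
    return math.tanh(z - crit * se), math.tanh(z + crit * se)
```

Because the back-transform is nonlinear, the resulting interval is asymmetric around r, which is exactly what makes it preferable to a naive symmetric interval near |r| = 1.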

7. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

Science.gov (United States)

Odgaard, Eric C.; Fowler, Robert L.

2010-01-01

Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

8. Coverage probability of bootstrap confidence intervals in heavy-tailed frequency models, with application to precipitation data

Czech Academy of Sciences Publication Activity Database

Kyselý, Jan

2010-01-01

Roč. 101, 3-4 (2010), s. 345-361 ISSN 0177-798X R&D Projects: GA AV ČR KJB300420801 Institutional research plan: CEZ:AV0Z30420517 Keywords: bootstrap * extreme value analysis * confidence intervals * heavy-tailed distributions * precipitation amounts Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 1.684, year: 2010

9. A computer program (COSTUM) to calculate confidence intervals for in situ stress measurements. V. 1

International Nuclear Information System (INIS)

Dzik, E.J.; Walker, J.R.; Martin, C.D.

1989-03-01

The state of in situ stress is one of the parameters required both for the design and analysis of underground excavations and for the evaluation of numerical models used to simulate underground conditions. To account for the variability and uncertainty of in situ stress measurements, it is desirable to apply confidence limits to measured stresses. Several measurements of the state of stress along a borehole are often made to estimate the average state of stress at a point. Since stress is a tensor, calculating the mean stress and confidence limits using scalar techniques is inappropriate as well as incorrect. A computer program has been written to calculate and present the mean principal stresses and the confidence limits for the magnitudes and directions of the mean principal stresses. This report describes the computer program, COSTUM.

10. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

Science.gov (United States)

Fung, Tak; Keenan, Kevin

2014-01-01

The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

11. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

Directory of Open Access Journals (Sweden)

Tak Fung

Full Text Available The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of >30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

12. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

Directory of Open Access Journals (Sweden)

Priya Ranganathan

2015-01-01

Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

13. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

Science.gov (United States)

Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

2015-01-01

In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper. PMID:25878958

14. The confidence-accuracy relationship for eyewitness identification decisions: Effects of exposure duration, retention interval, and divided attention.

Science.gov (United States)

Palmer, Matthew A; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

2013-03-01

Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the CA relationship for eyewitness identification decisions is affected by three forensically relevant variables: exposure duration, retention interval, and divided attention at encoding. In Study 1 (N = 986), a field experiment, we examined the effects of exposure duration (5 s vs. 90 s) and retention interval (immediate testing vs. a 1-week delay) on the CA relationship. In Study 2 (N = 502), we examined the effects of attention during encoding on the CA relationship by reanalyzing data from a laboratory experiment in which participants viewed a stimulus video under full or divided attention conditions and then attempted to identify two targets from separate lineups. Across both studies, all three manipulations affected identification accuracy. The central analyses concerned the CA relationship for positive identification decisions. For the manipulations of exposure duration and retention interval, overconfidence was greater in the more difficult conditions (shorter exposure; delayed testing) than in the easier conditions. Only the exposure duration manipulation influenced resolution (which was better for 5 s than 90 s), and only the retention interval manipulation affected calibration (which was better for immediate testing than delayed testing). In all experimental conditions, accuracy and diagnosticity increased with confidence, particularly at the upper end of the confidence scale. Implications for theory and forensic settings are discussed.

15. The best confidence interval of the failure rate and unavailability per demand when few experimental data are available

International Nuclear Information System (INIS)

Goodman, J.

1985-01-01

Using the few available data, likelihood functions for the failure rate and unavailability per demand are constructed. These likelihood functions are used to obtain likelihood density functions for the failure rate and unavailability per demand. The best (or shortest) confidence intervals for these quantities are provided. The failure rate and unavailability per demand are important characteristics needed for reliability and availability analysis. The methods for estimating these characteristics when plenty of observed data are available are well known. However, on many occasions, when we deal with rare failure modes or with new equipment or components for which sufficient experience has not accumulated, we have scarce data in which few or zero failures have occurred. In these cases, a technique which reflects exactly our state of knowledge is required. This technique is based on the likelihood density function or on Bayesian methods, depending on the available prior distribution. To extract the maximum amount of information from the data, the best confidence interval is determined.
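As an illustration of the idea, the sketch below finds the shortest interval of a given coverage from a normalized likelihood, for the common case of k failures observed over an exposure T. The grid search and the Poisson-type likelihood L(λ) ∝ λ^k e^(−λT) are our assumptions, not the paper's method.

```python
import math

def shortest_interval_failure_rate(k, T, conf=0.95, grid=20000, lam_max=None):
    """Shortest (highest-density) interval for a failure rate lam, given
    k failures over exposure T, from the normalized likelihood density."""
    lam_max = lam_max or 5.0 * (k + 3) / T     # crude upper bound for the grid
    step = lam_max / grid
    lam = [(i + 0.5) * step for i in range(grid)]
    w = [l ** k * math.exp(-l * T) for l in lam]
    total = sum(w)
    w = [x / total for x in w]                 # normalized likelihood weights
    cum = [0.0]
    for x in w:
        cum.append(cum[-1] + x)
    # two-pointer scan: narrowest window [i, j] holding >= conf of the mass
    best = (0.0, lam_max)
    j = 0
    for i in range(grid):
        while j < grid and cum[j + 1] - cum[i] < conf:
            j += 1
        if j == grid:
            break
        if lam[j] - lam[i] < best[1] - best[0]:
            best = (lam[i], lam[j])
    return best
```

For a right-skewed likelihood (few failures), the shortest interval sits to the left of the equal-tail interval, which is why it is preferred when data are scarce.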

16. Bootstrap confidence intervals and bias correction in the estimation of HIV incidence from surveillance data with testing for recent infection.

Science.gov (United States)

Carnegie, Nicole Bohme

2011-04-15

The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
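The surveillance model itself is beyond this abstract, but the generic shape of a model-based (parametric) bootstrap with bias correction can be sketched as follows. The toy exponential-rate example and all names below are ours, not the authors' code.

```python
import random
from statistics import mean

def model_based_bootstrap(estimate, simulate, data, n_boot=1000, conf=0.95, seed=0):
    """Model-based bootstrap: re-simulate data from the fitted model,
    correct the point estimate for bias, and form a percentile CI."""
    rng = random.Random(seed)
    theta = estimate(data)
    reps = sorted(estimate(simulate(theta, len(data), rng)) for _ in range(n_boot))
    bias = mean(reps) - theta                  # estimated bias of the estimator
    a = (1 - conf) / 2
    lo = reps[int(a * n_boot)] - bias
    hi = reps[int((1 - a) * n_boot) - 1] - bias
    return theta - bias, lo, hi

# Toy model: exponential event rate, whose MLE n / sum(x) is biased upward.
est = lambda xs: len(xs) / sum(xs)
sim = lambda lam, n, rng: [rng.expovariate(lam) for _ in range(n)]
```

The key design point mirrors the abstract: because the replicates come from the specified model rather than from resampled observations, the same machinery both adjusts the point estimate for bias and yields intervals whose coverage reflects the full infection-and-surveillance process.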

17. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

Science.gov (United States)

Harari, Gil

2014-01-01

Statistical significance, also known as the p-value, and the confidence interval (CI) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about the statistical probability and conclusions regarding the clinical significance of study findings. This article is intended to describe the methodologies, compare the methods, assess their suitability for the different needs of study results analysis, and explain situations in which each method should be used.

18. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

Science.gov (United States)

Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

2012-08-01

This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
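The same Monte Carlo procedure can be sketched outside Excel. The code below is our illustration rather than the paper's workbook: it fits a one-parameter exponential decay, simulates 'virtual' data sets from the fitted curve plus noise matched to the residual spread, and reads percentile confidence limits off the refitted parameters.

```python
import math, random

def sse(k, xs, ys):
    return sum((y - math.exp(-k * x)) ** 2 for x, y in zip(xs, ys))

def fit_k(xs, ys, lo=0.0, hi=10.0):
    """Fit y = exp(-k x) by ternary search on the SSE (assumes unimodal SSE)."""
    for _ in range(80):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if sse(m1, xs, ys) < sse(m2, xs, ys):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def monte_carlo_ci(xs, ys, n_sim=300, conf=0.95, seed=0):
    """Monte Carlo parameter CI: refit simulated 'virtual' data sets."""
    rng = random.Random(seed)
    k_hat = fit_k(xs, ys)
    resid = [y - math.exp(-k_hat * x) for x, y in zip(xs, ys)]
    sd = math.sqrt(sum(r * r for r in resid) / max(1, len(resid) - 1))
    ks = sorted(fit_k(xs, [math.exp(-k_hat * x) + rng.gauss(0, sd) for x in xs])
                for _ in range(n_sim))
    a = (1 - conf) / 2
    return k_hat, ks[int(a * n_sim)], ks[int((1 - a) * n_sim) - 1]
```

With a multi-parameter model the same loop yields joint draws of all parameters, from which the between-parameter correlations mentioned in the abstract can also be computed.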

19. Technical Report: Algorithm and Implementation for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

Energy Technology Data Exchange (ETDEWEB)

McLoughlin, Kevin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

2016-01-11

This report describes the design and implementation of an algorithm for estimating relative microbial abundances, together with confidence limits, using data from metagenomic DNA sequencing. For the background behind this project and a detailed discussion of our modeling approach for metagenomic data, we refer the reader to our earlier technical report, dated March 4, 2014. Briefly, we described a fully Bayesian generative model for paired-end sequence read data, incorporating the effects of the relative abundances, the distribution of sequence fragment lengths, fragment position bias, sequencing errors and variations between the sampled genomes and the nearest reference genomes. A distinctive feature of our modeling approach is the use of a Chinese restaurant process (CRP) to describe the selection of genomes to be sampled, and thus the relative abundances. The CRP component is desirable for fitting abundances to reads that may map ambiguously to multiple targets, because it naturally leads to sparse solutions that select the best representative from each set of nearly equivalent genomes.

20. Tablet potency of Tianeptine in coated tablets by near infrared spectroscopy: model optimisation, calibration transfer and confidence intervals.

Science.gov (United States)

Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel

2011-02-20

A near infrared (NIR) method was developed for determination of tablet potency of active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory and production scale batches. The reference values were obtained by high performance liquid chromatography (HPLC) and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating tablet potency of two external test sets. Root mean square errors of prediction were respectively equal to 2.0% and 2.7%. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4% compared to 4.0% without transferring the spectra. A statistical technique using bootstrap of PLS residuals was used to estimate confidence intervals of tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy to use graphical interface was developed to easily determine if the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.

1. Weighted profile likelihood-based confidence interval for the difference between two proportions with paired binomial data.

Science.gov (United States)

Pradhan, Vivek; Saha, Krishna K; Banerjee, Tathagata; Evans, John C

2014-07-30

Inference on the difference between two binomial proportions in the paired binomial setting is often an important problem in many biomedical investigations. Tang et al. (2010, Statistics in Medicine) discussed six methods to construct confidence intervals (henceforth, we abbreviate it as CI) for the difference between two proportions in paired binomial setting using method of variance estimates recovery. In this article, we propose weighted profile likelihood-based CIs for the difference between proportions of a paired binomial distribution. However, instead of the usual likelihood, we use weighted likelihood that is essentially making adjustments to the cell frequencies of a 2 × 2 table in the spirit of Agresti and Min (2005, Statistics in Medicine). We then conduct numerical studies to compare the performances of the proposed CIs with that of Tang et al. and Agresti and Min in terms of coverage probabilities and expected lengths. Our numerical study clearly indicates that the weighted profile likelihood-based intervals and Jeffreys interval (cf. Tang et al.) are superior in terms of achieving the nominal level, and in terms of expected lengths, they are competitive. Finally, we illustrate the use of the proposed CIs with real-life examples. Copyright © 2014 John Wiley & Sons, Ltd.

2. A comparison of confidence interval methods for the concordance correlation coefficient and intraclass correlation coefficient with small number of raters.

Science.gov (United States)

Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard

2014-01-01

The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in qualification of biomarker platforms. In recent years, there have been several new methods proposed for construction of CIs for the CCC, but their comprehensive comparison has not been attempted. The methods comprised the delta method, jackknifing with and without Fisher's Z-transformation, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, the jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided the coverage probabilities closest to the nominal, in contrast to the others, which yielded overly liberal coverage. Approaches based upon the delta method and the Bayesian method with a conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrated the utility of the CIs for the CCC in an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for treatment of insomnia.
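The JZ method above, jackknifing the CCC on Fisher's Z scale, can be sketched as follows for two raters. Lin's CCC formula plus a standard pseudo-value jackknife are used; the implementation details are our assumptions, not the authors' code.

```python
import math, random
from statistics import mean, NormalDist

def ccc(x, y):
    """Lin's concordance correlation coefficient for two raters."""
    mx, my = mean(x), mean(y)
    n = len(x)
    sxx = sum((a - mx) ** 2 for a in x) / n
    syy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sxx + syy + (mx - my) ** 2)

def ccc_jackknife_ci(x, y, conf=0.95):
    """Delete-one jackknife CI for the CCC on Fisher's Z scale (JZ-style)."""
    n = len(x)
    z_full = math.atanh(ccc(x, y))
    z_del = [math.atanh(ccc(x[:i] + x[i + 1:], y[:i] + y[i + 1:]))
             for i in range(n)]
    ps = [n * z_full - (n - 1) * zi for zi in z_del]   # pseudo-values
    pm = mean(ps)
    se = math.sqrt(sum((p - pm) ** 2 for p in ps) / (n * (n - 1)))
    crit = NormalDist().inv_cdf((1 + conf) / 2)
    # interval on the Z scale, back-transformed to keep the bounds in (-1, 1)
    return math.tanh(pm - crit * se), math.tanh(pm + crit * se)
```

Working on the Z scale is what keeps the back-transformed bounds inside (-1, 1), the property that drives the JZ method's better coverage in the simulations described above.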

3. User guide to the UNC1NLI1 package and three utility programs for computation of nonlinear confidence and prediction intervals using MODFLOW-2000

DEFF Research Database (Denmark)

Christensen, Steen; Cooley, R.L.

a model (for example when using the Parameter-Estimation Process of MODFLOW-2000) it is advantageous to also use regression-based methods to quantify uncertainty. For this reason the UNC Process computes (1) confidence intervals for parameters of the Parameter-Estimation Process and (2) confidence...

4. Determination and Interpretation of Characteristic Limits for Radioactivity Measurements: Decision Threshold, Detection Limit and Limits of the Confidence Interval

International Nuclear Information System (INIS)

2017-01-01

Since 2004, the environment programme of the IAEA has included activities aimed at developing a set of procedures for analytical measurements of radionuclides in food and the environment. Reliable, comparable and fit for purpose results are essential for any analytical measurement. Guidelines and national and international standards for laboratory practices to fulfil quality assurance requirements are extremely important when performing such measurements. The guidelines and standards should be comprehensive, clearly formulated and readily available to both the analyst and the customer. ISO 11929:2010 is the international standard on the determination of the characteristic limits (decision threshold, detection limit and limits of the confidence interval) for measuring ionizing radiation. For nuclear analytical laboratories involved in the measurement of radioactivity in food and the environment, robust determination of the characteristic limits of radioanalytical techniques is essential with regard to national and international regulations on permitted levels of radioactivity. However, characteristic limits defined in ISO 11929:2010 are complex, and the correct application of the standard in laboratories requires a full understanding of various concepts. This publication provides additional information to Member States in the understanding of the terminology, definitions and concepts in ISO 11929:2010, thus facilitating its implementation in Member State laboratories.

5. Confidence intervals for effect sizes: compliance and clinical significance in the Journal of Consulting and Clinical Psychology.

Science.gov (United States)

Odgaard, Eric C; Fowler, Robert L

2010-06-01

In 2005, the Journal of Consulting and Clinical Psychology (JCCP) became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest editorial effort to improve statistical reporting practices in any APA journal in at least a decade, in this article we investigate the efficacy of that change. All intervention studies published in JCCP in 2003, 2004, 2007, and 2008 were reviewed. Each article was coded for method of clinical significance, type of ES, and type of associated CI, broken down by statistical test (F, t, chi-square, r/R(2), and multivariate modeling). By 2008, clinical significance compliance was 75% (up from 31%), with 94% of studies reporting some measure of ES (reporting improved for individual statistical tests ranging from eta(2) = .05 to .17, with reasonable CIs). Reporting of CIs for ESs also improved, although only to 40%. Also, the vast majority of reported CIs used approximations, which become progressively less accurate for smaller sample sizes and larger ESs (cf. Algina & Keselman, 2003). Changes are near asymptote for ESs and clinical significance, but CIs lag behind. As CIs for ESs are required for primary outcomes, we show how to compute CIs for the vast majority of ESs reported in JCCP, with an example of how to use CIs for ESs as a method to assess clinical significance.

6. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

Science.gov (United States)

Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

2016-01-01

Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
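A minimal sketch of the SNRLB idea follows, assuming SNR is defined as the RMS of the trial-averaged waveform in a signal window divided by its RMS in a baseline window; the abstract does not fix this definition, so it is our assumption.

```python
import math, random

def snr(trials, base, sig):
    """SNR of the trial-averaged waveform: RMS(signal window) / RMS(baseline)."""
    n = len(trials[0])
    avg = [sum(t[i] for t in trials) / len(trials) for i in range(n)]
    rms = lambda seg: math.sqrt(sum(v * v for v in seg) / len(seg))
    return rms(avg[sig[0]:sig[1]]) / rms(avg[base[0]:base[1]])

def snr_lower_bound(trials, base, sig, n_boot=500, alpha=0.05, seed=0):
    """Bootstrap lower confidence bound on SNR (SNRLB): resample trials
    with replacement, re-average, and take the alpha quantile of the SNRs."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        sample = [trials[rng.randrange(len(trials))] for _ in trials]
        stats.append(snr(sample, base, sig))
    stats.sort()
    return stats[int(alpha * n_boot)]
```

A subject would then be excluded if this lower bound falls below a pre-registered SNR criterion, replacing visual inspection with a fixed, reproducible rule.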

7. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions

Directory of Open Access Journals (Sweden)

Rik Crutzen

2017-07-01

Full Text Available When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.

8. Using Confidence Interval-Based Estimation of Relevance to Select Social-Cognitive Determinants for Behavior Change Interventions.

Science.gov (United States)

Crutzen, Rik; Peters, Gjalt-Jorn Ygram; Noijen, Judith

2017-01-01

When developing an intervention aimed at behavior change, one of the crucial steps in the development process is to select the most relevant social-cognitive determinants. These determinants can be seen as the buttons one needs to push to establish behavior change. Insight into these determinants is needed to select behavior change methods (i.e., general behavior change techniques that are applied in an intervention) in the development process. Therefore, a study on determinants is often conducted as formative research in the intervention development process. Ideally, all relevant determinants identified in such a study are addressed by an intervention. However, when developing a behavior change intervention, there are limits in terms of, for example, resources available for intervention development and the amount of content that participants of an intervention can be exposed to. Hence, it is important to select those determinants that are most relevant to the target behavior as these determinants should be addressed in an intervention. The aim of the current paper is to introduce a novel approach to select the most relevant social-cognitive determinants and use them in intervention development. This approach is based on visualization of confidence intervals for the means and correlation coefficients for all determinants simultaneously. This visualization facilitates comparison, which is necessary when making selections. By means of a case study on the determinants of using a high dose of 3,4-methylenedioxymethamphetamine (commonly known as ecstasy), we illustrate this approach. We provide a freely available tool to facilitate the analyses needed in this approach.

9. Five-band microwave radiometer system for noninvasive brain temperature measurement in newborn babies: Phantom experiment and confidence interval

Science.gov (United States)

Sugiura, T.; Hirata, H.; Hand, J. W.; van Leeuwen, J. M. J.; Mizushina, S.

2011-10-01

Clinical trials of hypothermic brain treatment for newborn babies are currently hindered by the difficulty in measuring deep brain temperatures. Passive microwave radiometry (MWR) is one possible method for noninvasive and continuous temperature monitoring that is completely passive and inherently safe. We have developed a five-band microwave radiometer system with a single dual-polarized, rectangular waveguide antenna operating within the 1-4 GHz range and a method for retrieving the temperature profile from five radiometric brightness temperatures. This paper addresses (1) the temperature calibration for five microwave receivers, (2) the measurement experiment using a phantom model that mimics the temperature profile in a newborn baby, and (3) the feasibility of noninvasive monitoring of deep brain temperatures. Temperature resolutions were 0.103, 0.129, 0.138, 0.105 and 0.111 K for the 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively. The precision of temperature estimation (2σ confidence interval) was about 0.7°C at a 5-cm depth from the phantom surface. Accuracy, defined as the difference between the temperature estimated by this system and the temperature measured by a thermocouple at a depth of 5 cm, was about 2°C. The current result is not satisfactory for clinical application, where both precision and accuracy must be better than 1°C at a depth of 5 cm. Since a couple of possible causes of this inaccuracy have been identified, we believe that the system can move a step closer to the clinical application of MWR for hypothermic rescue treatment.

10. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

Science.gov (United States)

Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

2016-04-01

Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith

11. A nonparametric statistical method for determination of a confidence interval for the mean of a set of results obtained in a laboratory intercomparison

International Nuclear Information System (INIS)

Veglia, A.

1981-08-01

In cases where sets of data are obviously not normally distributed, the application of a nonparametric method for the estimation of a confidence interval for the mean seems more suitable than other methods because such a method requires few assumptions about the population of data. A two-step statistical method is proposed which can be applied to any set of analytical results: elimination of outliers by a nonparametric method based on Tchebycheff's inequality, and determination of a confidence interval for the mean by a nonparametric method based on the binomial distribution. The method is appropriate only for samples of size n ≥ 10.
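The binomial-based second step can be sketched from order statistics. Classically this distribution-free construction covers the population median; treating it as an interval for the mean follows the paper's adaptation, which is not reproduced here. The function name is illustrative, and, matching the abstract, the interval is only practical for n ≥ 10 or so.

```python
import math

def binomial_order_ci(data, conf=0.95):
    """Distribution-free CI from order statistics and the Binomial(n, 1/2) law.

    Returns (x_(k), x_(n-k+1)) where k is the largest index such that
    P(Bin(n, 0.5) <= k - 1) <= alpha/2, giving coverage of at least `conf`.
    """
    x = sorted(data)
    n = len(x)
    alpha = 1 - conf
    cum, k = 0.0, 0
    for i in range(n + 1):
        cum += math.comb(n, i) * 0.5 ** n   # P(Bin = i)
        if cum > alpha / 2:
            k = i
            break
    if k == 0:
        raise ValueError("sample too small for the requested confidence")
    return x[k - 1], x[n - k]
```

For n = 10 at 95% confidence this yields the familiar (x_(2), x_(9)) interval, with actual coverage somewhat above the nominal level because of the discreteness of the binomial.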

12. The Confidence-Accuracy Relationship for Eyewitness Identification Decisions: Effects of Exposure Duration, Retention Interval, and Divided Attention

Science.gov (United States)

Palmer, Matthew A.; Brewer, Neil; Weber, Nathan; Nagesh, Ambika

2013-01-01

Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the…

13. Perpetrator admissions and earwitness renditions: the effects of retention interval and rehearsal on accuracy of and confidence in memory for criminal accounts

OpenAIRE

Boydell, Carroll

2008-01-01

While much research has explored how well earwitnesses can identify the voice of a perpetrator, little research has examined how well they can recall details from a perpetrator’s confession. This study examines the accuracy-confidence correlation for memory for details from a perpetrator’s verbal account of a crime, as well as the effects of two variables commonly encountered in a criminal investigation (rehearsal and length of retention interval) on that correlation. Results suggest that con...

14. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

Science.gov (United States)

Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

2011-06-01

For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
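The quantity being interval-estimated can be made concrete as follows. This sketch is not the paper's empirical-likelihood method: it computes the sensitivity at the cut-off achieving a given specificity and then uses a percentile bootstrap as a simple stand-in interval; function names and the quantile convention are assumptions.

```python
import random

def sensitivity_at_specificity(neg, pos, spec=0.90):
    """Sensitivity at the cut-off whose empirical specificity is `spec`.

    A subject is called positive when its score exceeds the cut-off, taken
    here as the `spec` empirical quantile of the negative scores.
    """
    cut = sorted(neg)[max(0, int(spec * len(neg)) - 1)]
    return sum(1 for y in pos if y > cut) / len(pos)

def sens_ci_bootstrap(neg, pos, spec=0.90, n_boot=2000, conf=0.95, seed=1):
    """Percentile-bootstrap CI, resampling cases and controls separately."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n_boot):
        bn = [neg[rng.randrange(len(neg))] for _ in neg]
        bp = [pos[rng.randrange(len(pos))] for _ in pos]
        vals.append(sensitivity_at_specificity(bn, bp, spec))
    vals.sort()
    return (vals[int((1 - conf) / 2 * n_boot)],
            vals[int((1 + conf) / 2 * n_boot) - 1])
```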

15. Prediction of the distillation temperatures of crude oils using ¹H NMR and support vector regression with estimated confidence intervals.

Science.gov (United States)

Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J

2015-09-01

This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using ¹H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.

16. Phase II study of tailored S-1 monotherapy with a 1-week interval after a 2-week dosing period in elderly patients with advanced non-small cell lung cancer.

Science.gov (United States)

Goto, Hisatsugu; Okano, Yoshio; Machida, Hisanori; Hatakeyama, Nobuo; Ogushi, Fumitaka; Haku, Takashi; Kanematsu, Takanori; Urata, Tomoyuki; Kakiuchi, Soji; Hanibuchi, Masaki; Sone, Saburo; Nishioka, Yasuhiko

2018-01-01

17. A spreadsheet template compatible with Microsoft Excel and iWork Numbers that returns the simultaneous confidence intervals for all pairwise differences between multiple sample means.

Science.gov (United States)

Brown, Angus M

2010-04-01

The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns the test statistic, F. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
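The pairwise-comparison step can be sketched outside a spreadsheet. The paper's template offers several multiple-comparison methods (e.g. Tukey's HSD); the version below substitutes a Bonferroni correction with a normal critical value, because the studentized-range quantile is not in the Python standard library. It is a large-sample sketch, and the function name is illustrative.

```python
import itertools
import math
import statistics
from statistics import NormalDist

def pairwise_mean_cis(groups, conf=0.95):
    """Bonferroni-adjusted CIs for all pairwise differences between group means.

    `groups` maps a group name to its list of observations. Returns a dict
    mapping each (name_a, name_b) pair to the CI for mean(a) - mean(b).
    """
    names = sorted(groups)
    m = len(names) * (len(names) - 1) // 2            # number of comparisons
    z = NormalDist().inv_cdf(1 - (1 - conf) / (2 * m))  # Bonferroni critical value
    out = {}
    for a, b in itertools.combinations(names, 2):
        xa, xb = groups[a], groups[b]
        diff = statistics.fmean(xa) - statistics.fmean(xb)
        se = math.sqrt(statistics.variance(xa) / len(xa)
                       + statistics.variance(xb) / len(xb))
        out[(a, b)] = (diff - z * se, diff + z * se)
    return out
```

A pair whose interval excludes zero is declared different, mirroring how the spreadsheet's inclusive pairwise comparison is read.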

18. Factorial-based response-surface modeling with confidence intervals for optimizing thermal-optical transmission analysis of atmospheric black carbon

International Nuclear Information System (INIS)

Conny, J.M.; Norris, G.A.; Gould, T.R.

2009-01-01

Thermal-optical transmission (TOT) analysis measures black carbon (BC) in atmospheric aerosol on a fibrous filter. The method pyrolyzes organic carbon (OC) and employs laser light absorption to distinguish BC from the pyrolyzed OC; however, the instrument does not necessarily separate the two physically. In addition, a comprehensive temperature protocol for the analysis based on the Beer-Lambert Law remains elusive. Here, empirical response-surface modeling was used to show how the temperature protocol in TOT analysis can be modified to distinguish pyrolyzed OC from BC based on the Beer-Lambert Law. We determined the apparent specific absorption cross sections for pyrolyzed OC (σ_char) and BC (σ_BC), which accounted for individual absorption enhancement effects within the filter. Response-surface models of these cross sections were derived from a three-factor central-composite factorial experimental design: temperature and duration of the high-temperature step in the helium phase, and the heating increase in the helium-oxygen phase. The response surface for σ_BC, which varied with instrument conditions, revealed a ridge indicating the correct conditions for OC pyrolysis in helium. The intersection of the σ_BC and σ_char surfaces indicated the conditions where the cross sections were equivalent, satisfying an important assumption upon which the method relies. 95% confidence interval surfaces defined a confidence region for a range of pyrolysis conditions. Analyses of wintertime samples from Seattle, WA revealed a temperature between 830°C and 850°C as most suitable for the helium high-temperature step lasting 150 s. However, a temperature as low as 750°C could not be rejected statistically.

19. "Normality of Residuals Is a Continuous Variable, and Does Seem to Influence the Trustworthiness of Confidence Intervals: A Response to, and Appreciation of, Williams, Grajales, and Kurkiewicz (2013)"

Directory of Open Access Journals (Sweden)

Jason W. Osborne

2013-09-01

Osborne and Waters (2002) focused on checking some of the assumptions of multiple linear regression. In a critique of that paper, Williams, Grajales, and Kurkiewicz correctly clarify that regression models estimated using ordinary least squares require the assumption of normally distributed errors, but not the assumption of normally distributed response or predictor variables. They go on to discuss estimate bias and provide a helpful summary of the assumptions of multiple regression when using ordinary least squares. While we were not as precise as we could have been when discussing assumptions of normality, the critical issue of the 2002 paper remains: researchers often do not check on or report on the assumptions of their statistical methods. This response expands on the points made by Williams, advocates a thorough examination of data prior to reporting results, and provides an example of how incremental improvements in meeting the assumption of normality of residuals incrementally improves the accuracy of confidence intervals.

20. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

Science.gov (United States)

Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

2015-05-01

Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
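One way to see the 0D/1D distinction in code is a simultaneous bootstrap band: instead of a pointwise (0D) interval at each time node, the band is widened by the quantile of the maximum deviation across the whole trajectory. This is a sketch in the spirit of the 1D bootstrap discussed above, not the paper's implementation; details there differ.

```python
import random

def simultaneous_bootstrap_band(trajs, n_boot=1000, alpha=0.05, seed=1):
    """Simultaneous (1 - alpha) bootstrap band for a mean 1D trajectory.

    `trajs` is a list of subject trajectories (equal-length lists). Subjects
    are resampled with replacement; the band half-width is the (1 - alpha)
    quantile of the maximum absolute deviation of each bootstrap mean
    trajectory from the sample mean, giving whole-trajectory coverage.
    """
    rng = random.Random(seed)
    n, m = len(trajs), len(trajs[0])
    mean = [sum(t[i] for t in trajs) / n for i in range(m)]
    maxdev = []
    for _ in range(n_boot):
        sample = [trajs[rng.randrange(n)] for _ in range(n)]
        bmean = [sum(t[i] for t in sample) / n for i in range(m)]
        maxdev.append(max(abs(bmean[i] - mean[i]) for i in range(m)))
    maxdev.sort()
    q = maxdev[min(int((1 - alpha) * n_boot), n_boot - 1)]
    return [v - q for v in mean], mean, [v + q for v in mean]
```

A pointwise band built from the per-node quantiles would be narrower but, as the paper argues, biased for inference about the trajectory as a whole.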

1. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

Science.gov (United States)

Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

2017-12-01

The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions to along with general guidance on the CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
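One of the classical approaches in the comparison above, a Wald interval with the Hanley-McNeil (1982) variance, can be sketched directly from the Mann-Whitney form of the AUC. It is not the paper's recommended small-sample method, which is more involved; function names are illustrative.

```python
import math
from statistics import NormalDist

def auc_mann_whitney(neg, pos):
    """AUC as the Mann-Whitney probability P(pos > neg), ties counted 1/2."""
    total = 0.0
    for x in neg:
        for y in pos:
            total += 1.0 if y > x else 0.5 if y == x else 0.0
    return total / (len(neg) * len(pos))

def auc_ci_hanley_mcneil(neg, pos, conf=0.95):
    """Wald CI for the AUC using the Hanley-McNeil variance approximation."""
    a = auc_mann_whitney(neg, pos)
    m, n = len(neg), len(pos)           # controls, cases
    q1 = a / (2 - a)                    # P(two cases outrank one control)
    q2 = 2 * a * a / (1 + a)            # P(one case outranks two controls)
    var = (a * (1 - a) + (n - 1) * (q1 - a * a)
           + (m - 1) * (q2 - a * a)) / (m * n)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * math.sqrt(var)
    return max(0.0, a - half), min(1.0, a + half)
```

For small samples and AUC values near 1 this Wald construction degrades, which is exactly the regime the paper's simulation study probes.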

2. Tailoring Breast Cancer Screening Intervals by Breast Density and Risk for Women Aged 50 Years or Older: Collaborative Modeling of Screening Outcomes.

Science.gov (United States)

Trentham-Dietz, Amy; Kerlikowske, Karla; Stout, Natasha K; Miglioretti, Diana L; Schechter, Clyde B; Ergun, Mehmet Ali; van den Broek, Jeroen J; Alagoz, Oguzhan; Sprague, Brian L; van Ravesteyn, Nicolien T; Near, Aimee M; Gangnon, Ronald E; Hampton, John M; Chandler, Young; de Koning, Harry J; Mandelblatt, Jeanne S; Tosteson, Anna N A

2016-11-15

Biennial screening is generally recommended for average-risk women aged 50 to 74 years, but tailored screening may provide greater benefits. To estimate outcomes for various screening intervals after age 50 years based on breast density and risk for breast cancer. Collaborative simulation modeling using national incidence, breast density, and screening performance data. United States. Women aged 50 years or older with various combinations of breast density and relative risk (RR) of 1.0, 1.3, 2.0, or 4.0. Annual, biennial, or triennial digital mammography screening from ages 50 to 74 years (vs. no screening) and ages 65 to 74 years (vs. biennial digital mammography from ages 50 to 64 years). Lifetime breast cancer deaths, life expectancy and quality-adjusted life-years (QALYs), false-positive mammograms, benign biopsy results, overdiagnosis, cost-effectiveness, and ratio of false-positive results to breast cancer deaths averted. Screening benefits and overdiagnosis increase with breast density and RR. False-positive mammograms and benign results on biopsy decrease with increasing risk. Among women with fatty breasts or scattered fibroglandular density and an RR of 1.0 or 1.3, breast cancer deaths averted were similar for triennial versus biennial screening for both age groups (50 to 74 years, median of 3.4 to 5.1 vs. 4.1 to 6.5 deaths averted; 65 to 74 years, median of 1.5 to 2.1 vs. 1.8 to 2.6 deaths averted). Breast cancer deaths averted increased with annual versus biennial screening for women aged 50 to 74 years at all levels of breast density and an RR of 4.0, and those aged 65 to 74 years with heterogeneously or extremely dense breasts and an RR of 4.0. However, harms were almost 2-fold higher. Triennial screening for the average-risk subgroup and annual screening for the highest-risk subgroup cost less than $100,000 per QALY gained. Models did not consider women younger than 50 years, those with an RR less than 1, or other imaging methods. Average-risk women

3. Estimating the Confidence Interval of Composite Reliability of a Multidimensional Test With the Delta Method

Institute of Scientific and Technical Information of China (English)

叶宝娟; 温忠麟

2012-01-01

Reliability is very important in evaluating the quality of a test. Based on confirmatory factor analysis, composite reliability is a good index to estimate the test reliability for general applications. As is well known, a point estimate contains limited information about a population parameter and cannot indicate how far it may be from the population parameter. The confidence interval of the parameter can provide more information. In evaluating the quality of a test, the confidence interval of composite reliability has received attention in recent years. There are three approaches to estimating the confidence interval of composite reliability of a unidimensional test: the Bootstrap method, the Delta method, and the direct use of the standard error from a software output (e.g., LISREL). The Bootstrap method provides empirical results of the standard error and is the most credible method, but it needs data simulation techniques and its computation process is rather complex. The Delta method computes the standard error of composite reliability by approximate calculation. It is simpler than the Bootstrap method. The LISREL software can directly prompt the standard error, and it is the easiest among the three methods. By simulation study, it had been found that the interval estimates obtained by the Delta method and the Bootstrap method were almost identical, whereas the results obtained by LISREL and by the Bootstrap method were substantially different (Ye & Wen, 2011). The Delta method is recommended when the confidence interval of composite reliability of a unidimensional test is estimated, because the Delta method is simpler than the Bootstrap method. There was little research about how to compute the confidence interval of composite reliability of a multidimensional test. We deduced a formula by using the Delta method for computing the standard error of composite reliability of a multidimensional test. Based on the standard error, the
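The point estimate whose standard error the Delta method targets can be written down compactly for the unidimensional case. The sketch below only computes composite reliability from a fitted factor solution; the paper's Delta-method standard-error formula for the multidimensional case is not reproduced, and the function name is illustrative.

```python
def composite_reliability(loadings, error_vars):
    """Composite reliability (omega) for a unidimensional factor model.

    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), assuming a standardized confirmatory-factor solution.
    """
    s = sum(loadings)
    return s * s / (s * s + sum(error_vars))
```

Given this point estimate, the Delta method approximates its standard error from the asymptotic covariance of the loadings and error variances, and the CI follows as estimate ± z × SE.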

4. AlphaCI: a computer program for computing confidence intervals around Cronbach's alpha coefficient

Directory of Open Access Journals (Sweden)

Rubén Ledesma

2004-06-01

5. Bootstrap confidence intervals for principal response curves

NARCIS (Netherlands)

Timmerman, Marieke E.; Ter Braak, Cajo J. F.

2008-01-01

The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the

6. Bootstrap Confidence Intervals for Principal Response Curves

NARCIS (Netherlands)

Timmerman, M.E.; Braak, ter C.J.F.

2008-01-01

The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the

7. Accurate interpretation of point estimates, confidence intervals, and hypothesis tests in public health

Directory of Open Access Journals (Sweden)

Manuel G Scotto

2003-12-01

This essay reviews some statistical concepts frequently used in public health research that are commonly misinterpreted. These include point estimates, confidence intervals, and hypothesis tests. By comparing them using the classical and the Bayesian perspectives, their interpretation becomes clearer.
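The duality between the two frequentist concepts discussed here can be shown numerically: a (1 − α) confidence interval for a mean excludes a null value μ₀ exactly when the two-sided p-value against μ₀ falls below α (for the same large-sample z procedure). The function name is illustrative.

```python
import math
from statistics import NormalDist

def mean_ci_and_p(x, mu0, conf=0.95):
    """Normal-approximation CI for a mean plus the two-sided p-value vs mu0.

    Illustrates the CI/test duality: the CI excludes mu0 iff p < 1 - conf.
    """
    n = len(x)
    m = sum(x) / n
    s = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))
    se = s / math.sqrt(n)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    ci = (m - z * se, m + z * se)
    p = 2 * (1 - NormalDist().cdf(abs(m - mu0) / se))
    return ci, p
```

Note that the p-value changes with every choice of μ₀ while the CI is computed once, which is one reason the interval is the more informative summary.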

8. The Model Confidence Set

DEFF Research Database (Denmark)

Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The MCS ..., beyond the comparison of models. We apply the MCS procedure to two empirical problems. First, we revisit the inflation forecasting problem posed by Stock and Watson (1999), and compute the MCS for their set of inflation forecasts. Second, we compare a number of Taylor rule regressions and determine the MCS of the best in terms of in-sample likelihood criteria.

9. Confidant Relations in Italy

Directory of Open Access Journals (Sweden)

Jenny Isaacs

2015-02-01

Confidants are often described as the individuals with whom we choose to disclose personal, intimate matters. The presence of a confidant is associated with both mental and physical health benefits. In this study, 135 Italian adults responded to a structured questionnaire that asked if they had a confidant and, if so, to describe various features of the relationship. The vast majority of participants (91%) reported the presence of a confidant and regarded this relationship as personally important, high in mutuality and trust, and involving minimal lying. Confidants were significantly more likely to be of the opposite sex. Participants overall were significantly more likely to choose a spouse or other family member as their confidant, rather than someone outside of the family network. Familial confidants were generally seen as closer, and of greater value, than non-familial confidants. These findings are discussed within the context of Italian culture.

10. Statistical intervals a guide for practitioners

CERN Document Server

Hahn, Gerald J

2011-01-01

Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction with numerous examples and simple math on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on probability of meeting specified threshold value, and prediction intervals to include observation in a future sample. Also has an appendix containing computer subroutines for nonparametric stati

11. Confidence Intervals for Omega Coefficient: Proposal for Calculus.

Science.gov (United States)

Ventura-León, José Luis

2018-01-01

Reliability is understood as a metric property of the scores of a measurement instrument. Recently, the omega coefficient (ω) has come into use for estimating reliability. However, measurement is never exact because of the influence of random error; for that reason it is necessary to compute and report the confidence interval (CI), which locates the true value within a range of the measurement. In this context, the article proposes a way to estimate the CI using the bootstrap method and, to facilitate this procedure, provides R code (free software) so that the calculations can be carried out in a user-friendly way. The article is intended to help researchers in the health field.
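The bootstrap CI proposed here can be sketched in Python rather than R. Because omega proper requires fitting a factor model, the sketch substitutes Cronbach's alpha as a self-contained reliability statistic; this swap is only for illustration, and the percentile-bootstrap machinery is the same in either case.

```python
import random

def cronbach_alpha(items):
    """Cronbach's alpha for `items`, a list of item-score lists (same length)."""
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(col) for col in items)
    totals = [sum(items[j][i] for j in range(k)) for i in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

def alpha_bootstrap_ci(items, n_boot=2000, conf=0.95, seed=1):
    """Percentile-bootstrap CI for the reliability coefficient.

    Resamples respondents with replacement and takes the empirical
    (1 - conf)/2 and (1 + conf)/2 quantiles of the bootstrap statistics.
    """
    rng = random.Random(seed)
    n = len(items[0])
    boots = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        boots.append(cronbach_alpha([[col[i] for i in idx] for col in items]))
    boots.sort()
    return (boots[int((1 - conf) / 2 * n_boot)],
            boots[int((1 + conf) / 2 * n_boot) - 1])
```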

12. Secure and Usable Bio-Passwords based on Confidence Interval

OpenAIRE

Aeyoung Kim; Geunshik Han; Seung-Hyun Seo

2017-01-01

The most popular user-authentication method is the password. Many authentication systems try to enhance their security by enforcing a strong password policy, and by using the password as the first factor, something you know, with the second factor being something you have. However, a strong password policy and a multi-factor authentication system can make it harder for a user to remember the password and log in. In this paper a bio-password-based scheme is proposed as a unique authenticatio...

13. Intervals of confidence: Uncertain accounts of global hunger

NARCIS (Netherlands)

Yates-Doerr, E.

2015-01-01

Global health policy experts tend to organize hunger through scales of ‘the individual’, ‘the community’ and ‘the global’. This organization configures hunger as a discrete, measurable object to be scaled up or down with mathematical certainty. This article offers a counter to this approach, using

14. A quick method to calculate QTL confidence interval

Indian Academy of Sciences (India)

2011-08-19

Aug 19, 2011 ... experimental design and analysis to reveal the real molecular nature of the ... strap sample form the bootstrap distribution of QTL location. The 2.5 and ... ative probability to harbour a true QTL, hence the x-LOD rule is not stable ... Darvasi A. and Soller M. 1997 A simple method to calculate resolving power ...

15. Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients

Science.gov (United States)

Andersson, Björn; Xin, Tao

2018-01-01

In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…

16. An approximate confidence interval for recombination fraction in ...

African Journals Online (AJOL)

user

2011-02-14

Feb 14, 2011 ... whose parents are not in the pedigree) and θ be the recombination fraction. P(x|g) is the penetrance probability, that is, the probability that an individual with genotype g has phenotype x. Let P(g_k | g_kf, g_km) be the transmission probability, that is, the probability that an individual having genotype k.

17. Tailored Porous Materials

Energy Technology Data Exchange (ETDEWEB)

BARTON,THOMAS J.; BULL,LUCY M.; KLEMPERER,WALTER G.; LOY,DOUGLAS A.; MCENANEY,BRIAN; MISONO,MAKOTO; MONSON,PETER A.; PEZ,GUIDO; SCHERER,GEORGE W.; VARTULI,JAMES C.; YAGHI,OMAR M.

1999-11-09

Tailoring of porous materials involves not only chemical synthetic techniques for tailoring microscopic properties such as pore size, pore shape, pore connectivity, and pore surface reactivity, but also materials processing techniques for tailoring the meso- and the macroscopic properties of bulk materials in the form of fibers, thin films and monoliths. These issues are addressed in the context of five specific classes of porous materials: oxide molecular sieves, porous coordination solids, porous carbons, sol-gel derived oxides, and porous heteropolyanion salts. Reviews of these specific areas are preceded by a presentation of background material and review of current theoretical approaches to adsorption phenomena. A concluding section outlines current research needs and opportunities.

18. Raising Confident Kids

Science.gov (United States)


19. nigerian students' self-confidence in responding to statements

African Journals Online (AJOL)

Temechegn

Altogether the test is made up of 40 items covering students' ability to recall definition ... confidence interval within which students have confidence in their choice of the .... is mentioned these equilibrium systems come to memory of the learner.

20. Reclaim your creative confidence.

Science.gov (United States)

Kelley, Tom; Kelley, David

2012-12-01

Most people are born creative. But over time, a lot of us learn to stifle those impulses. We become warier of judgment, more cautious, more analytical. The world seems to divide into "creatives" and "noncreatives," and too many people resign themselves to the latter category. And yet we know that creativity is essential to success in any discipline or industry. The good news, according to authors Tom Kelley and David Kelley of IDEO, is that we all can rediscover our creative confidence. The trick is to overcome the four big fears that hold most of us back: fear of the messy unknown, fear of judgment, fear of the first step, and fear of losing control. The authors use an approach based on the work of psychologist Albert Bandura in helping patients get over their snake phobias: You break challenges down into small steps and then build confidence by succeeding on one after another. Creativity is something you practice, say the authors, not just a talent you are born with.

1. Confidence in Numerical Simulations

International Nuclear Information System (INIS)

Hemez, Francois M.

2015-01-01

This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to "forecast," that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists "think." This thought process parallels the scientific method whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. "Confidence" derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

2. Tailored reflectors for illumination.

Science.gov (United States)

Jenkins, D; Winston, R

1996-04-01

We report on tailored reflector design methods that allow the placement of general illumination patterns onto a target plane. The use of a new integral design method based on the edge-ray principle of nonimaging optics gives much more compact reflector shapes by eliminating the need for a gap between the source and the reflector profile. In addition, the reflectivity of the reflector is incorporated as a design parameter. We show the performance of a design for constant irradiance on a distant plane, and we show how a leading-edge-ray method may be used to achieve general illumination patterns on nearby targets.

3. Confidence bands for inverse regression models

International Nuclear Information System (INIS)

Birke, Melanie; Bissantz, Nicolai; Holzmann, Hajo

2010-01-01

We construct uniform confidence bands for the regression function in inverse, homoscedastic regression models with convolution-type operators. Here, the convolution is between two non-periodic functions on the whole real line rather than between two periodic functions on a compact interval, since the former situation arguably arises more often in applications. First, following Bickel and Rosenblatt (1973 Ann. Stat. 1 1071–95) we construct asymptotic confidence bands which are based on strong approximations and on a limit theorem for the supremum of a stationary Gaussian process. Further, we propose bootstrap confidence bands based on the residual bootstrap and prove consistency of the bootstrap procedure. A simulation study shows that the bootstrap confidence bands perform reasonably well for moderate sample sizes. Finally, we apply our method to data from a gel electrophoresis experiment with genetically engineered neuronal receptor subunits incubated with rat brain extract
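
The residual bootstrap at the heart of these bands is easier to see in a scalar setting. The sketch below is a toy illustration, not the paper's construction: an ordinary least-squares fit on simulated data, with a percentile bootstrap interval for a single coefficient (the slope) instead of a uniform band; all data, the seed, and the number of resamples B are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated homoscedastic regression data (hypothetical example).
n = 200
x = np.linspace(0.0, 1.0, n)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5, n)

# Ordinary least-squares fit.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta_hat
residuals = y - fitted

# Residual bootstrap: resample residuals, rebuild responses, refit.
B = 2000
slopes = np.empty(B)
for b in range(B):
    y_star = fitted + rng.choice(residuals, size=n, replace=True)
    slopes[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]

# Percentile bootstrap 95% confidence interval for the slope.
lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"slope = {beta_hat[1]:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```

The same resampling of residuals, applied to an estimated regression function over a grid of points and combined with a supremum statistic, is what yields a uniform band rather than a pointwise interval.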

4. Globalization of consumer confidence

Directory of Open Access Journals (Sweden)

2017-01-01

The globalization of world economies and the importance of nowcasting analysis have been at the core of the recent literature. Nevertheless, these two strands of research are hardly ever coupled. This study aims to fill this gap by examining the globalization of the consumer confidence index (CCI) using conventional and unconventional econometric methods. The US CCI is used as the benchmark in tests of comovement among the CCIs of several developing and developed countries, with the data sets divided into three sub-periods: global liquidity abundance, the Great Recession, and post-crisis. The existence and/or degree of globalization of the CCIs varies according to the period, whereas globalization in the form of coherence and similar paths is observed only during the Great Recession and is, surprisingly, stronger in developing/emerging countries.

5. Confidence in Numerical Simulations

Energy Technology Data Exchange (ETDEWEB)

Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

2015-02-23

This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to “forecast,” that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists “think.” This thought process parallels the scientific method whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. “Confidence” derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

6. Tailored Random Graph Ensembles

International Nuclear Information System (INIS)

Roberts, E S; Annibale, A; Coolen, A C C

2013-01-01

Tailored graph ensembles are a developing bridge between biological networks and statistical mechanics. The aim is to use this concept to generate a suite of rigorous tools that can be used to quantify and compare the topology of cellular signalling networks, such as protein-protein interaction networks and gene regulation networks. We calculate exact and explicit formulae for the leading orders in the system size of the Shannon entropies of random graph ensembles constrained with degree distribution and degree-degree correlation. We also construct an ergodic detailed balance Markov chain with non-trivial acceptance probabilities which converges to a strictly uniform measure and is based on edge swaps that conserve all degrees. The acceptance probabilities can be generalized to define Markov chains that target any alternative desired measure on the space of directed or undirected graphs, in order to generate graphs with more sophisticated topological features.
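
The degree-conserving edge-swap move that such Markov chains are built on can be sketched as follows. This is a minimal, hypothetical implementation: it simply rejects proposals that would create self-loops or multi-edges, and it omits the non-trivial acceptance probabilities derived in the paper, so it is not guaranteed to target a strictly uniform measure.

```python
import random

def degree_preserving_swap(edges, n_swaps, seed=0):
    """Randomize an undirected simple graph by edge swaps that conserve
    every vertex degree: (a,b),(c,d) -> (a,d),(c,b). A sketch of the
    basic move only; exact uniform sampling needs carefully chosen
    acceptance probabilities, which are omitted here."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    done = 0
    while done < n_swaps:
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue  # swap would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # swap would create a multi-edge
        i, j = edges.index((a, b)), edges.index((c, d))
        edges[i], edges[j] = (a, d), (c, b)
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        done += 1
    return edges

def degrees(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

# A 6-cycle: every vertex has degree 2 before and after swapping.
g = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
g2 = degree_preserving_swap(g, 10)
print(degrees(g) == degrees(g2))  # degrees are conserved
```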

7. Confidence bounds for normal and lognormal distribution coefficients of variation

Science.gov (United States)

Steve Verrill

2003-01-01

This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...

8. The idiosyncratic nature of confidence.

Science.gov (United States)

Navajas, Joaquin; Hindocha, Chandni; Foda, Hebah; Keramati, Mehdi; Latham, Peter E; Bahrami, Bahador

2017-11-01

Confidence is the 'feeling of knowing' that accompanies decision making. Bayesian theory proposes that confidence is a function solely of the perceived probability of being correct. Empirical research has suggested, however, that different individuals may perform different computations to estimate confidence from uncertain evidence. To test this hypothesis, we collected confidence reports in a task where subjects made categorical decisions about the mean of a sequence. We found that for most individuals, confidence did indeed reflect the perceived probability of being correct. However, in approximately half of them, confidence also reflected a different probabilistic quantity: the perceived uncertainty in the estimated variable. We found that the contribution of both quantities was stable over weeks. We also observed that the influence of the perceived probability of being correct was stable across two tasks, one perceptual and one cognitive. Overall, our findings provide a computational interpretation of individual differences in human confidence.

9. Personally tailored activities for improving psychosocial outcomes for people with dementia in long-term care.

Science.gov (United States)

Möhler, Ralph; Renom, Anna; Renom, Helena; Meyer, Gabriele

2018-02-13

years and in seven studies the mean MMSE score was 12 or lower. Seven studies were randomised controlled trials (three individually randomised, parallel group studies, one individually randomised cross-over study and three cluster-randomised trials) and one study was a non-randomised clinical trial. Five studies included a control group receiving usual care, two studies an active control intervention (activities which were not personally tailored) and one study included both an active control and usual care. Personally tailored activities were mainly delivered directly to the participants; in one study the nursing staff were trained to deliver the activities. The selection of activities was based on different theoretical models but the activities did not vary substantially. We found low-quality evidence indicating that personally tailored activities may slightly improve challenging behaviour (standardised mean difference (SMD) -0.21, 95% confidence interval (CI) -0.49 to 0.08; I² = 50%; 6 studies; 439 participants). We also found low-quality evidence from one study that was not included in the meta-analysis, indicating that personally tailored activities may make little or no difference to general restlessness, aggression, uncooperative behaviour, very negative and negative verbal behaviour (180 participants). There was very little evidence related to our other primary outcome of quality of life, which was assessed in only one study. From this study, we found that quality of life rated by proxies was slightly worse in the group receiving personally tailored activities (moderate-quality evidence, mean difference (MD) -1.93, 95% CI -3.63 to -0.23; 139 participants). Self-rated quality of life was only available for a small number of participants, and there was little or no difference between personally tailored activities and usual care on this outcome (low-quality evidence, MD 0.26, 95% CI -3.04 to 3.56; 42 participants). We found low-quality evidence that personally

10. Diverse interpretations of confidence building

International Nuclear Information System (INIS)

Macintosh, J.

1998-01-01

This paper explores the variety of operational understandings associated with the term 'confidence building'. Collectively, these understandings constitute what should be thought of as a 'family' of confidence building approaches. This unacknowledged and generally unappreciated proliferation of operational understandings that function under the rubric of confidence building appears to be an impediment to effective policy. The paper's objective is to analyze these different understandings, stressing the important differences in their underlying assumptions. In the process, the paper underlines the need for the international community to clarify its collective thinking about what it means when it speaks of 'confidence building'. Without enhanced clarity, it will be unnecessarily difficult to employ the confidence building approach effectively due to the lack of consistent objectives and common operating assumptions. Although it is not the intention of this paper to promote a particular account of confidence building, dissecting existing operational understandings should help to identify whether there are fundamental elements that define what might be termed 'authentic' confidence building. Implicit here is the view that some operational understandings of confidence building may diverge too far from consensus models to count as meaningful members of the confidence building family. (author)

11. Age- and sex-tailored serum phosphate thresholds do not improve cardiovascular risk estimation in CKD.

Science.gov (United States)

Ferraro, Pietro Manuel; Bonello, Monica; Gambaro, Alessia; Sturniolo, Antonio; Gambaro, Giovanni

2011-01-01

Disordered metabolism of phosphorus is one of the hallmarks of chronic kidney disease (CKD), resulting in increased cardiovascular morbidity and mortality. Age and sex may affect the metabolism of phosphorus and subsequently its serum level. We evaluated whether age- and sex-specific cutoffs for hyperphosphatemia may define cardiovascular risk better than the current guideline cutoffs. We used data from 16,834 subjects participating in the 1999-2006 National Health and Nutrition Examination Survey (NHANES); the prevalence of self-reported cardiovascular disease (CVD) and mortality rates were analyzed in CKD patients for both the classic definitions (CH; i.e., NKF-KDOQI and K-DIGO) and a tailored definition (TH) of hyperphosphatemia by means of regression models adjusted for age, sex, race/ethnicity, smoking status and body mass index. The cutoffs for TH were represented by the 95th percentile of an age- and sex-matched non-CKD population. Serum phosphorus levels showed an inverse correlation with age (r = -0.12). Although the association between the TH definition and CVD was marginally better compared with the CH definition (odds ratio [OR] = 1.49, 95% confidence interval [95% CI], 1.04-2.13; p = 0.030 vs. OR = 1.55, 95% CI, 0.98-2.44; p = 0.059), the TH model was not superior in predicting CVD or mortality. Our data suggest that a tailored, age- and sex-specific definition of hyperphosphatemia is not superior to conventional definitions in predicting cardiovascular events in patients with CKD.

12. Correct Bayesian and frequentist intervals are similar

International Nuclear Information System (INIS)

Atwood, C.L.

1986-01-01

This paper argues that Bayesians and frequentists will normally reach numerically similar conclusions when dealing with vague or sparse data. It is shown that both statistical methodologies can deal reasonably with vague data. With sparse data, in many important practical cases Bayesian interval estimates and frequentist confidence intervals are approximately equal, although with discrete data the frequentist intervals are somewhat longer. This is not to say that the two methodologies are equally easy to use: the construction of a frequentist confidence interval may require new theoretical development, while Bayesian methods typically require numerical integration, perhaps over many variables. Also, Bayesians can easily fall into the trap of over-optimism about their amount of prior knowledge. But in cases where both intervals are found correctly, the two intervals are usually not very different. (orig.)
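
The claimed numerical similarity is easy to check in the binomial case. The sketch below uses hypothetical data: a frequentist Wald interval for a proportion is compared against a Bayesian credible interval under a Jeffreys Beta(0.5, 0.5) prior, with the posterior quantiles approximated by Monte Carlo draws so that only NumPy is needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed data: x successes out of n trials (hypothetical numbers).
n, x = 50, 20
p_hat = x / n

# Frequentist (Wald) 95% confidence interval for the proportion.
z = 1.959964  # 97.5th percentile of the standard normal
se = np.sqrt(p_hat * (1 - p_hat) / n)
wald = (p_hat - z * se, p_hat + z * se)

# Bayesian 95% credible interval under a Jeffreys Beta(0.5, 0.5) prior,
# approximated by Monte Carlo draws from the Beta posterior.
draws = rng.beta(x + 0.5, n - x + 0.5, size=200_000)
bayes = tuple(np.percentile(draws, [2.5, 97.5]))

print(f"Wald CI:     ({wald[0]:.3f}, {wald[1]:.3f})")
print(f"Jeffreys CI: ({bayes[0]:.3f}, {bayes[1]:.3f})")
```

For n = 50 and x = 20 the two intervals agree to within a few hundredths, illustrating the paper's point; the discrepancy grows as the data become sparser.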

13. Interval selection with machine-dependent intervals

OpenAIRE

Bohmova K.; Disser Y.; Mihalak M.; Widmayer P.

2013-01-01

We study an offline interval scheduling problem where every job has exactly one associated interval on every machine. To schedule a set of jobs, exactly one of the intervals associated with each job must be selected, and the intervals selected on the same machine must not intersect.We show that deciding whether all jobs can be scheduled is NP-complete already in various simple cases. In particular, by showing the NP-completeness for the case when all the intervals associated with the same job...
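
The feasibility check implicit in this problem, that the intervals selected on one machine must not intersect, is simple; the hardness lies entirely in choosing which interval to select for each job. A minimal sketch, assuming a hypothetical representation of a selection as (machine, start, end) triples with half-open intervals:

```python
def feasible(selection):
    """Check that the intervals selected on the same machine are
    pairwise non-intersecting. `selection` is a list of hypothetical
    (machine, start, end) triples, with [start, end) half-open."""
    by_machine = {}
    for m, s, e in selection:
        by_machine.setdefault(m, []).append((s, e))
    for intervals in by_machine.values():
        intervals.sort()
        for (s1, e1), (s2, e2) in zip(intervals, intervals[1:]):
            if s2 < e1:  # next interval starts before previous ends
                return False
    return True

# Back-to-back intervals on machine 0 are fine; overlap is not.
print(feasible([(0, 0, 5), (0, 5, 8), (1, 0, 10)]))  # True
print(feasible([(0, 0, 5), (0, 4, 8)]))              # False
```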

14. Five-Year Risk of Interval-Invasive Second Breast Cancer

Science.gov (United States)

Buist, Diana S. M.; Houssami, Nehmat; Dowling, Emily C.; Halpern, Elkan F.; Gazelle, G. Scott; Lehman, Constance D.; Henderson, Louise M.; Hubbard, Rebecca A.

2015-01-01

Background: Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. Methods: We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. Results: The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. Conclusions: PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. PMID:25904721
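
Odds ratios with 95% confidence intervals like those reported above are conventionally computed on the log-odds scale. The sketch below uses hypothetical 2×2 counts and the Woolf standard-error formula; it is not the study's discrete-time survival model, just the textbook calculation behind an OR and its CI.

```python
import math

# Hypothetical 2x2 table: exposure vs outcome counts.
a, b = 30, 70   # exposed: with / without outcome
c, d = 15, 85   # unexposed: with / without outcome

or_hat = (a * d) / (b * c)

# Woolf standard error of log(OR), then back-transform the bounds.
se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
z = 1.959964  # 97.5th percentile of the standard normal
lo = math.exp(math.log(or_hat) - z * se_log)
hi = math.exp(math.log(or_hat) + z * se_log)

print(f"OR = {or_hat:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

A CI for the OR that excludes 1 corresponds to a statistically significant association at the matching level, which is how the predictors above are read.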

15. An individually-tailored smoking cessation intervention for rural Veterans: a pilot randomized trial

Directory of Open Access Journals (Sweden)

Mark W. Vander Weg

2016-08-01

Background Tobacco use remains prevalent among Veterans of military service and those residing in rural areas. Smokers frequently experience tobacco-related issues including risky alcohol use, post-cessation weight gain, and depressive symptoms that may adversely impact their likelihood of quitting and maintaining abstinence. Telephone-based interventions that simultaneously address these issues may help to increase treatment access and improve outcomes. Methods This study was a two-group randomized controlled pilot trial. Participants were randomly assigned either to an individually-tailored telephone tobacco intervention combining counseling for tobacco use and related issues including depressive symptoms, risky alcohol use, and weight concerns, or to treatment provided through their state tobacco quitline. Selection of pharmacotherapy was based on medical history and a shared decision interview in both groups. Participants included 63 rural Veteran smokers (mean age = 56.8 years; 87 % male; mean number of cigarettes/day = 24.7). The primary outcome was self-reported 7-day point prevalence abstinence at 12 weeks and 6 months. Results Twelve-week quit rates based on an intention-to-treat analysis did not differ significantly by group (Tailored = 39 %; Quitline Referral = 25 %; odds ratio [OR] = 1.90; 95 % confidence interval [CI] = 0.56, 5.57). Six-month quit rates for the Tailored and Quitline Referral conditions were 29 and 28 %, respectively (OR = 1.05; 95 % CI = 0.35, 3.12). Satisfaction with the Tailored tobacco intervention was high. Conclusions Telephone-based treatment that concomitantly addresses other health-related factors that may adversely affect quitting appears to be a promising strategy. Larger studies are needed to determine whether this approach improves cessation outcomes. Trial registration ClinicalTrials.gov identifier number NCT01592695, registered 11 April 2012.

16. Nuclear power: restoring public confidence

International Nuclear Information System (INIS)

Arnold, L.

1986-01-01

The paper concerns a one-day conference on nuclear power organised by the Centre for Science Studies and Science Policy, Lancaster, in April 1986. Following the Chernobyl reactor accident, the conference concentrated on public confidence in nuclear power. Causes of lack of public confidence, public perceptions of risk, and the effect of Chernobyl in the United Kingdom were all discussed. A Select Committee on the Environment examined the problems of radioactive waste disposal. (U.K.)

17. Neonates need tailored drug formulations.

Science.gov (United States)

Allegaert, Karel

2013-02-08

Drugs are very strong tools used to improve outcome in neonates. Despite this fact and in contrast to tailored perfusion equipment, incubators or ventilators for neonates, we still commonly use drug formulations initially developed for adults. We would like to make the point that drug formulations given to neonates need to be tailored for this age group. Besides the obvious need to search for active compounds that take the pathophysiology of the newborn into account, this includes the dosage and formulation. The dosage or concentration should facilitate the administration of low amounts and be flexible since clearance is lower in neonates with additional extensive between-individual variability. Formulations need to be tailored for dosage variability in the low ranges and also to the clinical characteristics of neonates. A specific focus of interest during neonatal drug development therefore is a need to quantify and limit excipient exposure based on the available knowledge of their safety or toxicity. Until such tailored vials and formulations become available, compounding practices for drug formulations in neonates should be evaluated to guarantee the correct dosing, product stability and safety.

18. Confidence in critical care nursing.

Science.gov (United States)

Evans, Jeanne; Bell, Jennifer L; Sweeney, Annemarie E; Morgan, Jennifer I; Kelly, Helen M

2010-10-01

The purpose of the study was to gain an understanding of the nursing phenomenon, confidence, from the experience of nurses in the nursing subculture of critical care. Leininger's theory of cultural care diversity and universality guided this qualitative descriptive study. Questions derived from the sunrise model were used to elicit nurses' perspectives about cultural and social structures that exist within the critical care nursing subculture and the influence that these factors have on confidence. Twenty-eight critical care nurses from a large Canadian healthcare organization participated in semistructured interviews about confidence. Five themes arose from the descriptions provided by the participants. The three themes, tenuously navigating initiation rituals, deliberately developing holistic supportive relationships, and assimilating clinical decision-making rules were identified as social and cultural factors related to confidence. The remaining two themes, preserving a sense of security despite barriers and accommodating to diverse challenges, were identified as environmental factors related to confidence. Practice and research implications within the culture of critical care nursing are discussed in relation to each of the themes.

19. Professional confidence: a concept analysis.

Science.gov (United States)

Holland, Kathlyn; Middleton, Lyn; Uys, Leana

2012-03-01

Professional confidence is a concept that is frequently used and or implied in occupational therapy literature, but often without specifying its meaning. Rodgers's Model of Concept Analysis was used to analyse the term "professional confidence". Published research obtained from a federated search in four health sciences databases was used to inform the concept analysis. The definitions, attributes, antecedents, and consequences of professional confidence as evidenced in the literature are discussed. Surrogate terms and related concepts are identified, and a model case of the concept provided. Based on the analysis, professional confidence can be described as a dynamic, maturing personal belief held by a professional or student. This includes an understanding of and a belief in the role, scope of practice, and significance of the profession, and is based on their capacity to competently fulfil these expectations, fostered through a process of affirming experiences. Developing and fostering professional confidence should be nurtured and valued to the same extent as professional competence, as the former underpins the latter, and both are linked to professional identity.

20. Targeting Low Career Confidence Using the Career Planning Confidence Scale

Science.gov (United States)

McAuliffe, Garrett; Jurgens, Jill C.; Pickering, Worth; Calliotte, James; Macera, Anthony; Zerwas, Steven

2006-01-01

The authors describe the development and validation of a test of career planning confidence that makes possible the targeting of specific problem issues in employment counseling. The scale, developed using a rational process and the authors' experience with clients, was tested for criterion-related validity against 2 other measures. The scale…

1. Convex Interval Games

NARCIS (Netherlands)

Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

2008-01-01

In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games and their relations with the interval core for

2. Self-confidence of anglers in identification of freshwater sport fish

Science.gov (United States)

Chizinski, C.J.; Martin, D. R.; Pope, Kevin L.

2014-01-01

Although several studies have focused on how well anglers identify species using replicas and pictures, there has been no study assessing the confidence that can be placed in anglers' ability to identify recreationally important fish. Understanding factors associated with low self-confidence will be useful in tailoring education programmes to improve self-confidence in identifying common species. The purposes of this assessment were to quantify the confidence of recreational anglers to identify 13 commonly encountered warm-water fish species and to relate self-confidence to species availability and angler experience. Significant variation was observed in anglers' self-confidence among species and levels of self-declared skill, with greater confidence associated with greater skill and with greater exposure. This study of angler self-confidence strongly highlights the need for educational programmes that target lower-skilled anglers and the importance of teaching all anglers about less common species, regardless of skill level.

3. Tailored Web-Based Interventions for Pain: Systematic Review and Meta-Analysis.

Science.gov (United States)

Martorella, Geraldine; Boitor, Madalina; Berube, Melanie; Fredericks, Suzanne; Le May, Sylvie; Gélinas, Céline

2017-11-10

Efforts have multiplied in the past decade to underline the importance of pain management. For both acute and chronic pain management, various barriers generate considerable treatment accessibility issues, thereby providing an opportunity for alternative intervention formats to be implemented. Several systematic reviews on Web-based interventions with a large emphasis on chronic pain and cognitive behavioral therapy have been recently conducted to explore the influence of these interventions on pain management. However, to our knowledge, the specific contribution of tailored Web-based interventions for pain management has not been described and their effect on pain has not been evaluated. The primary aim of this systematic review was to answer the following research question: What is the effect of tailored Web-based pain management interventions for adults on pain intensity compared with usual care, face-to-face interventions, and standardized Web-based interventions? A secondary aim was to examine the effects of these interventions on physical and psychological functions. We conducted a systematic review of articles published from January 2000 to December 2015. We used the DerSimonian-Laird random effects models with 95% confidence intervals to calculate effect estimates for all analyses. We calculated standardized mean differences from extracted means and standard deviations, as outcome variables were measured on different continuous scales. We evaluated 5 different outcomes: pain intensity (primary outcome), pain-related disability, anxiety, depression, and pain catastrophizing. We assessed effects according to 3 time intervals: short, medium, and long term. Tailored Web-based interventions showed benefits immediately after the intervention, with small effect sizes, but did not prove to be more efficacious than standardized Web-based interventions in terms of pain intensity, pain-related disability, anxiety, and depression. An interesting finding was that some efficacy was shown on pain

4. Tailoring Earned Value Management. General Guidelines

National Research Council Canada - National Science Library

2002-01-01

Partial Contents: General Principles, A Spectrum of Implementation, OMB Guidance, A Special Note about DOD, Risk Factors to Consider, How can EVMS be tailored, Tailor EVMS to Inherent Risk, Application Thresholds-DoD...

5. Normal probability plots with confidence.

Science.gov (United States)

Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

2015-01-01

Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and of the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

6. Methodology for building confidence measures

Science.gov (United States)

Bramson, Aaron L.

2004-04-01

This paper presents a generalized methodology for propagating known or estimated levels of individual source document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements for determining the probability of truth for the whole. The result is a measure of confidence in system output based on establishing links among the truth values of inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.

7. Alan Greenspan, the confidence strategy

Directory of Open Access Journals (Sweden)

Edwin Le Heron

2006-12-01

Full Text Available To evaluate the Greenspan era, we nevertheless need to address three questions: Is his success due to talent or just luck? Does he have a system of monetary policy or is he himself the system? What will be his legacy? Greenspan was certainly lucky, but he was also clairvoyant. Above all, he has developed a profoundly original monetary policy. His confidence strategy is clearly opposed to the credibility strategy developed in central banks and the academic milieu after 1980, but also inflation targeting, which today constitutes the mainstream monetary policy regime. The question of his legacy seems more nuanced. However, Greenspan will remain 'for a considerable period of time' a highly heterodox and original central banker. His political vision, his perception of an uncertain world, his pragmatism and his openness form the structure of a powerful alternative system, the confidence strategy, which will leave its mark on the history of monetary policy.

8. Graphical interpretation of confidence curves in rankit plots

DEFF Research Database (Denmark)

Hyltoft Petersen, Per; Blaabjerg, Ole; Andersen, Marianne

2004-01-01

A well-known transformation from the bell-shaped Gaussian (normal) curve to a straight line in the rankit plot is investigated, and a tool for evaluation of the distribution of reference groups is presented. It is based on the confidence intervals for percentiles of the calculated Gaussian distribution...

9. Deep drawing simulation of Tailored Blanks

NARCIS (Netherlands)

van den Berg, Albert; Meinders, Vincent T.; Stokman, B.

1998-01-01

Tailored blanks are increasingly used in the automotive industry. A tailored blank consists of different metal parts, which are joined by a welding process. These metal parts usually have different material properties. Hence, the main advantage of using a tailored blank is to provide the right

10. Leadership by Confidence in Teams

OpenAIRE

Kobayashi, Hajime; Suehiro, Hideo

2008-01-01

We study endogenous signaling by analyzing a team production problem with endogenous timing. Each agent of the team is privately endowed with some level of confidence about team productivity. Each of them must then commit a level of effort in one of two periods. At the end of each period, each agent observes his partner's move in this period. Both agents are rewarded by a team output determined by team productivity and total invested effort. Each agent must personally incur the cost of effort...

11. Towards confidence in transport safety

International Nuclear Information System (INIS)

Robison, R.W.

1992-01-01

The U.S. Department of Energy (US DOE) plans to demonstrate to the public that high-level waste can be transported safely to the proposed repository. The author argues US DOE should begin now to demonstrate its commitment to safety by developing an extraordinary safety program for nuclear cargo it is now shipping. The program for current shipments should be developed with State, Tribal, and local officials. Social scientists should be involved in evaluating the effect of the safety program on public confidence. The safety program developed in cooperation with western states for shipments to the Waste Isolation Pilot plant is a good basis for designing that extraordinary safety program

12. Time series with tailored nonlinearities

Science.gov (United States)

Räth, C.; Laut, I.

2015-10-01

It is demonstrated how to generate time series with tailored nonlinearities by inducing well-defined constraints on the Fourier phases. Correlations between the phase information of adjacent phases and (static and dynamic) measures of nonlinearities are established and their origin is explained. By applying a set of simple constraints on the phases of an originally linear and uncorrelated Gaussian time series, the observed scaling behavior of the intensity distribution of empirical time series can be reproduced. The power law character of the intensity distributions being typical for, e.g., turbulence and financial data can thus be explained in terms of phase correlations.
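The baseline against which such tailored phase constraints are imposed is the classical amplitude-preserving surrogate: keep the Fourier amplitudes of the series and randomize the phases. The sketch below (function name assumed, NumPy assumed) generates that linear baseline; the paper's method would then replace the uniform phase draw with its well-defined constraints on adjacent phases.

```python
import numpy as np

def phase_surrogate(x, seed=None):
    """Amplitude-preserving surrogate of a real time series: keep the
    Fourier amplitudes, draw the phases uniformly at random.  This is
    the linear, uncorrelated-Gaussian baseline on which tailored phase
    constraints would be imposed."""
    rng = np.random.default_rng(seed)
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    new_spec = np.abs(spec) * np.exp(1j * phases)
    new_spec[0] = spec[0]            # keep the mean untouched
    if n % 2 == 0:
        new_spec[-1] = spec[-1]      # the Nyquist bin must stay real
    return np.fft.irfft(new_spec, n=n)
```

By construction the surrogate has exactly the same power spectrum (hence the same linear autocorrelations) as the original, so any measured difference between the two isolates the contribution of the phase information.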

13. Workshop on confidence limits. Proceedings

International Nuclear Information System (INIS)

James, F.; Lyons, L.; Perrin, Y.

2000-01-01

The First Workshop on Confidence Limits was held at CERN on 17-18 January 2000. It was devoted to the problem of setting confidence limits in difficult cases: the number of observed events is small or zero, background is larger than signal, background is not well known, and measurements lie near a physical boundary. Among the many examples in high-energy physics are searches for the Higgs, searches for neutrino oscillations, Bs mixing, SUSY, compositeness, neutrino masses, and dark matter. Several different methods are on the market: the CLs method used by the LEP Higgs searches; Bayesian methods; Feldman-Cousins and modifications thereof; empirical and combined methods. The Workshop generated considerable interest, and attendance was finally limited by the seating capacity of the CERN Council Chamber where all the sessions took place. These proceedings contain all the papers presented, as well as the full text of the discussions after each paper and, of course, of the last session, which was a discussion session. The list of participants and the 'required reading', which was expected to be part of the prior knowledge of all participants, are also included. (orig.)

14. The Great Recession and confidence in homeownership

OpenAIRE

Anat Bracha; Julian Jamison

2013-01-01

Confidence in homeownership shifts for those who personally experienced real estate loss during the Great Recession. Older Americans are confident in the value of homeownership. Younger Americans are less confident.

15. Tailoring PKI for the battlespace

Science.gov (United States)

Covey, Carlin R.

2003-07-01

A Public Key Infrastructure (PKI) can provide useful communication protections for friendly forces in the battlespace. The PKI would be used in conjunction with communication facilities that are accorded physical and Type-1 cryptographic protections. The latter protections would safeguard the confidentiality and (optionally) the integrity of communications between enclaves of users, whereas the PKI protections would furnish identification, authentication, authorization and privacy services for individual users. However, Commercial-Off-the-Shelf (COTS) and most Government-Off-the-Shelf (GOTS) PKI solutions are not ideally tailored for the battlespace environment. Most PKI solutions assume a relatively static, high-bandwidth communication network, whereas communication links in the battlespace will be dynamically reconfigured and bandwidth-limited. Most enterprise-wide PKI systems assume that users will enroll and disenroll at an orderly pace, whereas the battlespace PKI "enterprise" will grow and shrink abruptly as units are deployed or withdrawn from the battlespace. COTS and GOTS PKIs are seldom required to incorporate temporary "enterprise mergers", whereas the battlespace "enterprise" will need to incorporate temporary coalitions of forces drawn from various nations. This paper addresses both well-known and novel techniques for tailoring PKI for the battlespace environment. These techniques include the design of the security architecture, the selection of appropriate options within PKI standards, and some new PKI protocols that offer significant advantages in the battlespace.

16. Programming with Intervals

Science.gov (United States)

Matsakis, Nicholas D.; Gross, Thomas R.

Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

17. Public confidence and nuclear energy

International Nuclear Information System (INIS)

1990-01-01

Today in France there are 54 nuclear power units in operation at 18 sites. They supply 75% of all electricity produced, 12% of which is exported to neighbouring countries, and play an important role in the French economy. For the French, nuclear power is a fact of life, and most accept it. However, the Chernobyl accident has made public opinion more sensitive, and public relations work has had to be reconsidered carefully with a view to increasing the confidence of the French public in nuclear power, anticipating media crises and being equipped to deal with such crises. The three main approaches are the following: keeping the public better informed, providing clear information at times of crisis, and international activities

18. Knowledge, Self Confidence and Courage

DEFF Research Database (Denmark)

Selberg, Hanne; Steenberg Holtzmann, Jette; Hovedskov, Jette

Knowledge, self confidence and courage – long lasting learning outcomes through simulation in a clinical context. The study focuses on simulation alongside, and linked to, clinical practice. Results: the students identified their major learning outcomes as transfer of operational skills, experiencing self-efficacy and enhanced understanding of the patients' perspective. Involving simulated patients in the training of technical skills contributed to the development of the students' communication...

19. Tailor-welded blanks and their production

Science.gov (United States)

Yan, Qi

2005-01-01

Tailor welded blanks have been widely used in the automobile industry. A tailor welded blank consists of several flat sheets that are laser welded together before stamping. A combination of different materials, thicknesses, and coatings can be welded together to form a blank for stamping car body panels. This technology is one of the development trends in the automobile industry because of its weight reduction, safety improvement and economical use of materials. In this paper, the characteristics and production of tailor welded blanks in the market are discussed in detail. There are two major methods to produce tailor welded blanks; laser welding will replace mash seam welding for the production of tailor welded blanks in the future. The requirements on the edge preparation of unwelded blanks for tailor welded blanks are higher than for other steel processing technologies. In order to produce the laser welded blank, other processes precede the laser welding in the factory. Worldwide, there are three patterns for the large volume production of tailor welded blanks. In China, steel factories play an important role in promoting the application of tailor welded blanks. The competition to supply tailor welded blanks to the automobile industry will become fierce in the near future. As a result, quality control in the production of tailor welded blanks will be the first priority concern for the factory.

20. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making.

Science.gov (United States)

Aitchison, Laurence; Bang, Dan; Bahrami, Bahador; Latham, Peter E

2015-10-01

Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.

1. Confidence limits for parameters of Poisson and binomial distributions

International Nuclear Information System (INIS)

Arnett, L.M.

1976-04-01

The confidence limits for the frequency in a Poisson process and for the proportion of successes in a binomial process were calculated and tabulated for the situations in which the observed value of the frequency or proportion and an a priori distribution of these parameters are available. Methods are used that produce limits with exactly the stated confidence levels. The confidence interval [a, b] is calculated so that Pr[a ≤ λ ≤ b | c, μ] equals the stated confidence level, where c is the observed value of the parameter and μ is the a priori hypothesis of the distribution of this parameter. A Bayesian type of analysis is used. The intervals calculated are narrower than, and appreciably different from, results, known to be conservative, that are often used in problems of this type. Pearson and Hartley recognized the characteristics of their methods and contemplated that exact methods could someday be used. The calculation of the exact intervals requires involved numerical analyses readily implemented only on digital computers, which were not available to Pearson and Hartley. A Monte Carlo experiment was conducted to verify a selected interval from those calculated. This numerical experiment confirmed the results of the analytical methods and the prediction of Pearson and Hartley that their published tables give conservative results.
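For the Poisson case, the Bayesian interval the record describes can be sketched with a conjugate gamma prior, where the posterior is available in closed form and the equal-tailed interval can be read off from Monte Carlo posterior draws (echoing the record's own Monte Carlo verification). The function name, the Jeffreys-style default prior parameters, and the draw count are assumptions, not the report's choices.

```python
import random

def poisson_bayes_interval(c, exposure=1.0, a0=0.5, b0=0.0,
                           conf=0.95, draws=100_000, seed=1):
    """Equal-tailed Bayesian interval for a Poisson rate lambda given an
    observed count c and a conjugate Gamma(a0, b0) prior, so that the
    posterior is Gamma(a0 + c, b0 + exposure).  The interval is taken
    from sorted Monte Carlo posterior draws; random.gammavariate is
    shape/scale parametrized, hence the 1/rate scale argument."""
    rng = random.Random(seed)
    shape, rate = a0 + c, b0 + exposure
    sims = sorted(rng.gammavariate(shape, 1.0 / rate) for _ in range(draws))
    lo = sims[int((1 - conf) / 2 * draws)]
    hi = sims[int((1 + conf) / 2 * draws) - 1]
    return lo, hi
```

With enough draws this converges to the exact gamma-quantile interval; the "involved numerical analyses" of the report reduce, in the conjugate case, to evaluating gamma quantiles.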

2. Taming Parasites by Tailoring Them

Directory of Open Access Journals (Sweden)

Bingjian Ren

2017-07-01

Full Text Available The next-generation gene editing based on CRISPR (clustered regularly interspaced short palindromic repeats) has been successfully implemented in a wide range of organisms, including some protozoan parasites. However, the application of such a versatile, game-changing technology in molecular parasitology remains fairly underexplored. Here, we briefly introduce the state of the art in human and mouse research and usher in new directions to drive parasitology research in the years to come. Specifically, we outline contemporary ways to embolden existing apicomplexan and kinetoplastid parasite models by commissioning front-line gene-tailoring methods, and illustrate how we can break the enduring gridlock of gene manipulation in non-model parasitic protists to tackle intriguing questions that otherwise remain long unresolved. We show how a judicious solicitation of the CRISPR technology can eventually balance out the two facets of pathogen-host interplay.

3. Confidence building in safety assessments

International Nuclear Information System (INIS)

Grundfelt, Bertil

1999-01-01

Future generations should be adequately protected from damage caused by the present disposal of radioactive waste. This presentation discusses the core of safety and performance assessment: The demonstration and building of confidence that the disposal system meets the safety requirements stipulated by society. The major difficulty is to deal with risks in the very long time perspective of the thousands of years during which the waste is hazardous. Concern about these problems has stimulated the development of the safety assessment discipline. The presentation concentrates on two of the elements of safety assessment: (1) Uncertainty and sensitivity analysis, and (2) validation and review. Uncertainty is associated both with respect to what is the proper conceptual model and with respect to parameter values for a given model. A special kind of uncertainty derives from the variation of a property in space. Geostatistics is one approach to handling spatial variability. The simplest way of doing a sensitivity analysis is to offset the model parameters one by one and observe how the model output changes. The validity of the models and data used to make predictions is central to the credibility of safety assessments for radioactive waste repositories. There are several definitions of model validation. The presentation discusses it as a process and highlights some aspects of validation methodologies

4. Effectiveness of individually tailored smoking cessation advice letters as an adjunct to telephone counselling and generic self-help materials: randomized controlled trial.

Science.gov (United States)

Sutton, Stephen; Gilbert, Hazel

2007-06-01

To evaluate the effectiveness of individually tailored smoking cessation advice letters as an adjunct to telephone counselling and generic self-help materials. Randomized controlled trial. The UK Quitline. A total of 1508 current smokers and recent ex-smokers. The control group received usual care (telephone counselling and an information pack sent through the post). The intervention group received in addition a computer-generated individually tailored advice letter. All outcomes were assessed at 6-month follow-up. The primary outcome measure was self-reported prolonged abstinence for at least 3 months. Secondary outcomes were self-reported prolonged abstinence for at least 1 month and 7-day and 24-hour point-prevalence abstinence. For the sample as a whole, quit rates did not differ significantly between the two conditions. However, among the majority (n = 1164) who were smokers at baseline, quit rates were consistently higher in the intervention group: prolonged abstinence for 3 months, 12.2% versus 9.0% (odds ratio (OR) = 1.40, 95% confidence interval (CI) = 0.96-2.04, P = 0.080); prolonged abstinence for 1 month, 16.4% versus 11.3% (OR = 1.53, 95% CI = 1.09-2.15, P = 0.013); 7-day point-prevalence abstinence, 18.9% versus 12.7% (OR = 1.59, 95% CI = 1.15-2.19, P = 0.004); 24-hour point-prevalence abstinence, 20.9% versus 15.4% (OR = 1.45, 95% CI = 1.07-1.96, P = 0.015). The results for the smokers are encouraging in showing a small but useful effect of the tailored letter on quit rate. Versions of the tailoring program could be used on the web and in general practices, pharmacies and primary care trusts.
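The odds ratios with 95% CIs quoted in this record are the standard Wald construction on the log-odds scale, which can be sketched from a 2x2 table in a few lines. The function name is illustrative and the cell counts in the usage example are hypothetical, not the trial's actual data (the record reports only percentages).

```python
import math
from statistics import NormalDist

def odds_ratio_ci(a, b, c, d, conf=0.95):
    """Odds ratio from a 2x2 table with a Wald interval computed on the
    log scale: a/b = events/non-events in one group, c/d = the same in
    the other.  SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    z = NormalDist().inv_cdf((1 + conf) / 2)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

For instance, a hypothetical table of 20/80 quitters/non-quitters against 10/90 gives OR = 2.25; an interval whose lower bound crosses 1 (as in the 3-month abstinence result above) is exactly the situation where the CI adds information the P value alone does not.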

5. Confidence bounds for nonlinear dose-response relationships

DEFF Research Database (Denmark)

Baayen, C; Hougaard, P

2015-01-01

An important aim of drug trials is to characterize the dose-response relationship of a new compound. Such a relationship can often be described by a parametric (nonlinear) function that is monotone in dose. If such a model is fitted, it is useful to know the uncertainty of the fitted curve...... intervals for the dose-response curve. These confidence bounds have better coverage than Wald intervals and are more precise and generally faster than bootstrap methods. Moreover, if monotonicity is assumed, the profile likelihood approach takes this automatically into account. The approach is illustrated...

6. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

KAUST Repository

Smadi, Mahmoud A.; Ghaeb, Jasim A.; Jazzar, Saleh; Saraereh, Omar A.

2012-01-01

In this paper, we derived an efficient simulation method to evaluate the error rate of wireless communication system. Coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate

7. Optimizing lengths of confidence intervals: fourth-order efficiency in location models

NARCIS (Netherlands)

Klaassen, C.; Venetiaan, S.

2010-01-01

Under regularity conditions the maximum likelihood estimator of the location parameter in a location model is asymptotically efficient among translation equivariant estimators. Additional regularity conditions warrant third- and even fourth-order efficiency, in the sense that no translation

8. Confidence Intervals for System Reliability and Availability of Maintained Systems Using Monte Carlo Techniques

Science.gov (United States)

1981-12-01

United States Air Force, Air University, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio. Thesis presented in partial fulfillment of the requirements for the degree of Master of Operations Research, December 1981. Approved for public release.

9. Confidence intervals and hypothesis testing for the Permutation Entropy with an application to epilepsy

Science.gov (United States)

Traversaro, Francisco; O. Redelico, Francisco

2018-04-01

In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy. But there is still no known method to determine the accuracy of this measure. There has been little research on the statistical properties of this quantity that characterize time series. The literature describes some resampling methods of quantities used in nonlinear dynamics - as the largest Lyapunov exponent - but these seems to fail. In this contribution, we propose a parametric bootstrap methodology using a symbolic representation of the time series to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes: the 1/fα noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been extensively used in epilepsy research for detection in dynamical changes in electroencephalogram (EEG) signal with no consideration of the variability of this complexity measure. An application of the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.
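The Permutation Entropy estimator and a bootstrap interval for it can be sketched as below. The resampling here draws ordinal patterns i.i.d. from their fitted multinomial distribution, which ignores the overlap between consecutive patterns; this is a deliberate simplification of the symbolic parametric bootstrap the paper proposes, and all names are assumptions.

```python
import math
import random

def perm_entropy(x, m=3):
    """Normalized Permutation Entropy of order m (0 = fully regular,
    1 = all m! ordinal patterns equally likely)."""
    counts = {}
    for i in range(len(x) - m + 1):
        pattern = tuple(sorted(range(m), key=lambda k: x[i + k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    n = sum(counts.values())
    h = -sum(c / n * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(m))

def pe_bootstrap_ci(x, m=3, reps=500, conf=0.95, seed=0):
    """Bootstrap interval for the PE estimator: resample ordinal
    patterns from their empirical (multinomial) distribution and take
    equal-tailed quantiles of the recomputed entropies."""
    rng = random.Random(seed)
    pats = [tuple(sorted(range(m), key=lambda k: x[i + k]))
            for i in range(len(x) - m + 1)]
    norm = math.log(math.factorial(m))
    n = len(pats)
    stats = []
    for _ in range(reps):
        counts = {}
        for _ in range(n):
            p = rng.choice(pats)
            counts[p] = counts.get(p, 0) + 1
        h = -sum(c / n * math.log(c / n) for c in counts.values())
        stats.append(h / norm)
    stats.sort()
    return stats[int((1 - conf) / 2 * reps)], stats[int((1 + conf) / 2 * reps) - 1]
```

For white noise the normalized PE sits near 1, and the interval quantifies how much of an observed drop (e.g. between normal and pre-ictal EEG segments) exceeds estimator variability.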

10. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

Science.gov (United States)

Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

2011-01-01

Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

11. Technical Report: Benchmarking for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

Energy Technology Data Exchange (ETDEWEB)

McLoughlin, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

2016-01-22

The software application “MetaQuant” was developed by our group at Lawrence Livermore National Laboratory (LLNL). It is designed to profile microbial populations in a sample using data from whole-genome shotgun (WGS) metagenomic DNA sequencing. Several other metagenomic profiling applications have been described in the literature. We ran a series of benchmark tests to compare the performance of MetaQuant against that of a few existing profiling tools, using real and simulated sequence datasets. This report describes our benchmarking procedure and results.

12. Bayesian Methods and Confidence Intervals for Automatic Target Recognition of SAR Canonical Shapes

Science.gov (United States)

2014-03-27

The CUDA platform was developed by the NVIDIA Corporation to allow programmers access to the computational capabilities of the GPU. These capabilities were used for the intense repetitive computations. Developing CUDA software requires writing code for specialized compilers provided by NVIDIA.

13. Statistical Significance, Effect Size Reporting, and Confidence Intervals: Best Reporting Strategies

Science.gov (United States)

Capraro, Robert M.

2004-01-01

With great interest the author read the May 2002 editorial in the "Journal for Research in Mathematics Education (JRME)" (King, 2002) regarding changes to the 5th edition of the "Publication Manual of the American Psychological Association" (APA, 2001). Of special note to him, and of great import to the field of mathematics education research, are…

14. Technical Report on Modeling for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

Energy Technology Data Exchange (ETDEWEB)

McLoughlin, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

2016-01-11

The overall aim of this project is to develop a software package, called MetaQuant, that can determine the constituents of a complex microbial sample and estimate their relative abundances by analysis of metagenomic sequencing data. The goal for Task 1 is to create a generative model describing the stochastic process underlying the creation of sequence read pairs in the data set. The stages in this generative process include the selection of a source genome sequence for each read pair, with probability dependent on its abundance in the sample. The other stages describe the evolution of the source genome from its nearest common ancestor with a reference genome, breakage of the source DNA into short fragments, and the errors in sequencing the ends of the fragments to produce read pairs.

15. Five-year risk of interval-invasive second breast cancer.

Science.gov (United States)

Lee, Janie M; Buist, Diana S M; Houssami, Nehmat; Dowling, Emily C; Halpern, Elkan F; Gazelle, G Scott; Lehman, Constance D; Henderson, Louise M; Hubbard, Rebecca A

2015-07-01

Earlier detection of second breast cancers after primary breast cancer (PBC) treatment improves survival, yet mammography is less accurate in women with prior breast cancer. The purpose of this study was to examine women presenting clinically with second breast cancers after negative surveillance mammography (interval cancers), and to estimate the five-year risk of interval-invasive second cancers for women with varying risk profiles. We evaluated a prospective cohort of 15 114 women with 47 717 surveillance mammograms diagnosed with stage 0-II unilateral PBC from 1996 through 2008 at facilities in the Breast Cancer Surveillance Consortium. We used discrete time survival models to estimate the association between odds of an interval-invasive second breast cancer and candidate predictors, including demographic, PBC, and imaging characteristics. All statistical tests were two-sided. The cumulative incidence of second breast cancers after five years was 54.4 per 1000 women, with 325 surveillance-detected and 138 interval-invasive second breast cancers. The five-year risk of interval-invasive second cancer for women with referent category characteristics was 0.60%. For women with the most and least favorable profiles, the five-year risk ranged from 0.07% to 6.11%. Multivariable modeling identified grade II PBC (odds ratio [OR] = 1.95, 95% confidence interval [CI] = 1.15 to 3.31), treatment with lumpectomy without radiation (OR = 3.27, 95% CI = 1.91 to 5.62), interval PBC presentation (OR = 2.01, 95% CI = 1.28 to 3.16), and heterogeneously dense breasts on mammography (OR = 1.54, 95% CI = 1.01 to 2.36) as independent predictors of interval-invasive second breast cancers. PBC diagnosis and treatment characteristics contribute to variation in subsequent-interval second breast cancer risk. Consideration of these factors may be useful in developing tailored post-treatment imaging surveillance plans. © The Author 2015. Published by Oxford University Press. All rights reserved.
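The referent five-year risk of 0.60% and the per-predictor odds ratios reported above can be combined into a profile-specific risk under the usual logistic-model assumption that odds ratios act multiplicatively on the baseline odds. The helper below is an illustrative sketch of that arithmetic, not the authors' published risk model, and it inherits the model's assumption that the predictors' effects combine independently.

```python
def apply_odds_ratio(baseline_risk, *odds_ratios):
    """Convert a baseline risk to odds, scale by one or more odds
    ratios (assumed to combine multiplicatively, as in a multivariable
    logistic model), and convert back to a risk."""
    odds = baseline_risk / (1.0 - baseline_risk)
    for o in odds_ratios:
        odds *= o
    return odds / (1.0 + odds)
```

Applying, say, the grade II PBC odds ratio of 1.95 to the 0.60% referent risk raises the five-year risk to roughly 1.2%, illustrating how the reported ORs translate into the tailored risk profiles the record describes.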

16. Molecular tailoring of solid surfaces

Energy Technology Data Exchange (ETDEWEB)

Evenson, Simon Alan

1997-07-01

The overall performance of a material can be dramatically improved by tailoring its surface at the molecular level. The aim of this project was to develop a universal technique for attaching dendrimers (well-defined, nanoscale, functional polymers) and Jeffamines (high molecular weight polymer chains) to the surface of any shaped solid substrate. This desire for controlled functionalization is ultimately driven by the need to improve material compatibility in various biomedical applications. Atomic force microscopy (AFM) was used initially to study the packing and structure of Langmuir-Blodgett films on surfaces, and subsequently resulted in the first visualization of individual, spherically shaped, nanoscopic polyamidoamine dendrimers. The next goal was to develop a methodology for attaching such macromolecules to inert surfaces. Thin copolymer films were deposited onto solid substrates to produce materials with a fixed concentration of surface anhydride groups. Vapor-phase functionalization reactions were then carried out with trifluorinated amines to confirm the viability of this technique to bond molecules to surfaces. Finally, pulsed plasma polymerization of maleic anhydride took this approach one stage further, by forming well-adhered polymer films containing a predetermined concentration of reactive anhydride groups. Subsequent functionalization reactions led to the secure attachment of dendrimers and Jeffamines at any desired packing density. An alternative route to biocompatibilization used 1,2-ethanedithiol to yield thiolated surfaces containing very high polymeric sulfur : carbon ratios. (author)

17. Molecular tailoring of solid surfaces

International Nuclear Information System (INIS)

Evenson, Simon Alan

1997-01-01

The overall performance of a material can be dramatically improved by tailoring its surface at the molecular level. The aim of this project was to develop a universal technique for attaching dendrimers (well-defined, nanoscale, functional polymers) and Jeffamines (high molecular weight polymer chains) to the surface of any shaped solid substrate. This desire for controlled functionalization is ultimately driven by the need to improve material compatibility in various biomedical applications. Atomic force microscopy (AFM) was used initially to study the packing and structure of Langmuir-Blodgett films on surfaces, and subsequently resulted in the first visualization of individual, spherically shaped, nanoscopic polyamidoamine dendrimers. The next goal was to develop a methodology for attaching such macromolecules to inert surfaces. Thin copolymer films were deposited onto solid substrates to produce materials with a fixed concentration of surface anhydride groups. Vapor-phase functionalization reactions were then carried out with trifluorinated amines to confirm the viability of this technique to bond molecules to surfaces. Finally, pulsed plasma polymerization of maleic anhydride took this approach one stage further, by forming well-adhered polymer films containing a predetermined concentration of reactive anhydride groups. Subsequent functionalization reactions led to the secure attachment of dendrimers and Jeffamines at any desired packing density. An alternative route to biocompatibilization used 1,2-ethanedithiol to yield thiolated surfaces containing very high polymeric sulfur : carbon ratios. (author)

18. Formability of stainless steel tailored blanks

DEFF Research Database (Denmark)

Bagger, Claus; Gong, Hui; Olsen, Flemming Ove

2004-01-01

In a number of systematic tests, the formability of tailored blanks consisting of even and different combinations of AISI304 and AISI316 in thickness of 0.8 mm and 1.5 mm have been investigated. In order to analyse the formability of tailored blanks with different sheet thickness, a method based ...

19. Trial Protocol: Using genotype to tailor prescribing of nicotine replacement therapy: a randomised controlled trial assessing impact of communication upon adherence

Directory of Open Access Journals (Sweden)

Prevost A Toby

2010-11-01

% confidence interval for observed between-arm difference in mean NRT consumption (Hypothesis I). Motivation to make another quit attempt will be compared between arms in those failing to quit by six months (Hypothesis II). Discussion This is the first clinical trial evaluating the behavioural impact on adherence of prescribing medication using genetic rather than phenotypic information. Specific issues regarding the choice of design for trials of interventions of this kind are discussed. Trial details Funder: Medical Research Council (MRC). Grant number: G0500274. ISRCTN: 14352545. Date trial started: June 2007. Expected end date: December 2009. Expected reporting date: December 2010

20. High Confidence Software and Systems Research Needs

Data.gov (United States)

Networking and Information Technology Research and Development, Executive Office of the President — This White Paper presents a survey of high confidence software and systems research needs. It has been prepared by the High Confidence Software and Systems...

1. Confidence Building Strategies in the Public Schools.

Science.gov (United States)

Achilles, C. M.; And Others

1985-01-01

Data from the Phi Delta Kappa Commission on Public Confidence in Education indicate that "high-confidence" schools make greater use of marketing and public relations strategies. Teacher attitudes were ranked first and administrator attitudes second by 409 respondents for both gain and loss of confidence in schools. (MLF)

2. Overconfidence in Interval Estimates

Science.gov (United States)

Soll, Jack B.; Klayman, Joshua

2004-01-01

Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…

3. Regional Competition for Confidence: Features of Formation

Directory of Open Access Journals (Sweden)

Irina Svyatoslavovna Vazhenina

2016-09-01

Full Text Available The increase in economic independence of the regions inevitably leads to an increase in the quality requirements of the regional economic policy. The key to successful regional policy, both during its development and implementation, is the understanding of the necessity of gaining confidence (at all levels, and the inevitable participation in the competition for confidence. The importance of confidence in the region is determined by its value as a competitive advantage in the struggle for partners, resources and tourists, and attracting investments. In today’s environment the focus of governments, regions and companies on long-term cooperation is clearly expressed, which is impossible without a high level of confidence between partners. Therefore, the most important competitive advantages of territories are intangible assets such as an attractive image and a good reputation, which builds up confidence of the population and partners. The higher the confidence in the region is, the broader is the range of potential partners, the larger is the planning horizon of long-term concerted action, the better are the chances of acquiring investment, the higher is the level of competitive immunity of the territories. The article defines competition for confidence as purposeful behavior of a market participant in economic environment, aimed at acquiring specific intangible competitive advantage – the confidence of the largest possible number of other market actors. The article also highlights the specifics of confidence as a competitive goal, presents factors contributing to the destruction of confidence, proposes a strategy to fight for confidence as a program of four steps, considers the factors which integrate regional confidence and offers several recommendations for the establishment of effective regional competition for confidence

4. Applications of interval computations

CERN Document Server

1996-01-01

Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal Of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: The result of a single arithmetic operation is the set of all possible results as the o...

5. Continuous tailoring activities in software engineering

OpenAIRE

Ribaud , Vincent; Saliou , Philippe

2004-01-01

International audience; Software activities belong to different processes. Tailoring software processes aims to relate the operational software processes of an organization to the effective project. With the information technology industry moving ever faster, established positions are undergoing constant evolutionary change. The failure of a complex tailoring process of a management information system is reported. There is a need to adopt software processes that can operate under constant cha...

6. The integrated model of sport confidence: a canonical correlation and mediational analysis.

Science.gov (United States)

Koehn, Stefan; Pearce, Alan J; Morris, Tony

2013-12-01

The main purpose of the study was to examine crucial parts of Vealey's (2001) integrated framework hypothesizing that sport confidence is a mediating variable between sources of sport confidence (including achievement, self-regulation, and social climate) and athletes' affect in competition. The sample consisted of 386 athletes, who completed the Sources of Sport Confidence Questionnaire, Trait Sport Confidence Inventory, and Dispositional Flow Scale-2. Canonical correlation analysis revealed a confidence-achievement dimension underlying flow. Bias-corrected bootstrap confidence intervals in AMOS 20.0 were used in examining mediation effects between source domains and dispositional flow. Results showed that sport confidence partially mediated the relationship between achievement and self-regulation domains and flow, whereas no significant mediation was found for social climate. On a subscale level, full mediation models emerged for achievement and flow dimensions of challenge-skills balance, clear goals, and concentration on the task at hand.

7. Concurrent predictors of dysfunctional parenting and maternal confidence: implications for parenting interventions.

Science.gov (United States)

Morawska, A; Sanders, M R

2007-11-01

The often intense nature of the conflict between parents and their toddlers requires better understanding of what happens during this stage of development and how difficulties can be prevented from escalating in the future. Clarification of the nature of family and parenting factors related to toddler behaviour allows better capacity for intervention development and tailoring to individual families. A total of 126 mothers of toddlers completed a self-report assessment battery, examining child behaviour, parenting style and confidence, as well as broader family adjustment measures. The study found that maternal confidence and dysfunctional parenting were interrelated and were also predicted best by parenting variables, in contrast to socio-demographic and child variables. Maternal confidence also mediated the relationships between family income and toddler behaviour. Parenting style and confidence are important modifiable factors to target in parenting interventions. The implications for the development, implementation and delivery of parenting interventions are discussed.

8. Tailoring of mobility advices to consumers. Executive summary; Tailoring van mobiliteitsadviezen aan consumenten. Managementsamenvatting

Energy Technology Data Exchange (ETDEWEB)

De Weerdt, I.; Jonkers, R. [ResCon, Haarlem (Netherlands)

2003-09-01

An outline is given of the options to apply so-called computer tailoring in the field of mobility. A feasibility study has been carried out for the realization of a computerized tailored mobility programme. Tailoring is a method, based on social-scientific theories on behavioural change, by means of which information is tailored to individual circumstances, preferences and motivation. [Dutch original, translated] The possibilities of computer tailoring (a method based on social-scientific theories of behavioural change, in which the information offered is matched to individual circumstances, preferences and motivations) in the field of mobility are explored. A feasibility study has been carried out in preparation for the realization of a computer-tailored mobility programme. The study investigated: whether consumers are interested in tailored information about mobility; where consumers themselves see the most scope for changing their mobility patterns (and thus adopting more sustainable mobility options); how consumer mobility behaviour can be influenced in a targeted way by means of a tailoring system; whether organizations can be found that would be willing to operate such a mobility tailoring system; and whether the development of such a system can be cost-effective.

9. Tailoring of mobility advices to consumers. A determinants survey; Tailoring van mobiliteitsadviezen aan consumenten. Een determinantenonderzoek

Energy Technology Data Exchange (ETDEWEB)

De Weerdt, I.; Jonkers, R. [ResCon, Haarlem (Netherlands)

2003-08-01

An outline is given of the options to apply so-called computer tailoring in the field of mobility. A feasibility study has been carried out for the realization of a computerized tailored mobility programme. Tailoring is a method, based on social-scientific theories on behavioural change, by means of which information is tailored to individual circumstances, preferences and motivation. [Dutch original, translated] The possibilities of computer tailoring (a method based on social-scientific theories of behavioural change, in which the information offered is matched to individual circumstances, preferences and motivations) in the field of mobility are explored. A feasibility study has been carried out in preparation for the realization of a computer-tailored mobility programme. The study investigated: whether consumers are interested in tailored information about mobility; where consumers themselves see the most scope for changing their mobility patterns (and thus adopting more sustainable mobility options); how consumer mobility behaviour can be influenced in a targeted way by means of a tailoring system; whether organizations can be found that would be willing to operate such a mobility tailoring system; and whether the development of such a system can be cost-effective.

10. Surveillance test interval optimization

International Nuclear Information System (INIS)

Cepin, M.; Mavko, B.

1995-01-01

Technical specifications have been developed on the basis of deterministic analyses, engineering judgment, and expert opinion. This paper introduces our risk-based approach to surveillance test interval (STI) optimization. This approach consists of three main levels. The first level is the component level, which serves as a rough estimate of the optimal STI and can be calculated analytically by differentiating the equation for mean unavailability. The second and third levels give more representative results. They take into account the results of probabilistic risk assessment (PRA) calculated by a personal computer (PC) based code and are based on system unavailability at the system level and on core damage frequency at the plant level
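The component-level step can be sketched with a standard textbook unavailability model (assumed here for illustration, not necessarily the paper's exact formulation): mean unavailability U(T) = lam*T/2 + t_test/T, where lam is the failure rate, T the test interval, and t_test the downtime per test; setting dU/dT = 0 gives the analytic optimum.

```python
import math

def mean_unavailability(T, lam, t_test):
    # U(T) = lam*T/2 (undetected-failure term) + t_test/T (test-downtime term)
    return lam * T / 2 + t_test / T

def optimal_sti(lam, t_test):
    # dU/dT = lam/2 - t_test/T**2 = 0  ->  T* = sqrt(2*t_test/lam)
    return math.sqrt(2 * t_test / lam)

lam, t_test = 1e-5, 2.0  # assumed example values: failures/hour, hours per test
T_star = optimal_sti(lam, t_test)
print(f"optimal STI ~ {T_star:.0f} h, U(T*) = {mean_unavailability(T_star, lam, t_test):.2e}")
```

At the optimum the two terms are equal, so U(T*) = sqrt(2*lam*t_test); the system- and plant-level refinements described above then adjust this rough estimate using PRA results.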

11. The prognostic value of the QT interval and QT interval dispersion in all-cause and cardiac mortality and morbidity in a population of Danish citizens.

Science.gov (United States)

Elming, H; Holm, E; Jun, L; Torp-Pedersen, C; Køber, L; Kircshoff, M; Malik, M; Camm, J

1998-09-01

To evaluate the prognostic value of the QT interval and QT interval dispersion in total and in cardiovascular mortality, as well as in cardiac morbidity, in a general population. The QT interval was measured in all leads from a standard 12-lead ECG in a random sample of 1658 women and 1797 men aged 30-60 years. QT interval dispersion was calculated from the maximal difference between QT intervals in any two leads. All cause mortality over 13 years, and cardiovascular mortality as well as cardiac morbidity over 11 years, were the main outcome parameters. Subjects with a prolonged QT interval (430 ms or more) or prolonged QT interval dispersion (80 ms or more) were at higher risk of cardiovascular death and cardiac morbidity than subjects whose QT interval was less than 360 ms, or whose QT interval dispersion was less than 30 ms. Cardiovascular death relative risk ratios, adjusted for age, gender, myocardial infarct, angina pectoris, diabetes mellitus, arterial hypertension, smoking habits, serum cholesterol level, and heart rate were 2.9 for the QT interval (95% confidence interval 1.1-7.8) and 4.4 for QT interval dispersion (95% confidence interval 1.0-19.1). Fatal and non-fatal cardiac morbidity relative risk ratios were similar, at 2.7 (95% confidence interval 1.4-5.5) for the QT interval and 2.2 (95% confidence interval 1.1-4.0) for QT interval dispersion. Prolongation of the QT interval and QT interval dispersion independently affected the prognosis of cardiovascular mortality and cardiac fatal and non-fatal morbidity in a general population over 11 years.
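The dispersion measure defined above (the maximal difference between QT intervals in any two leads) and the study's cut-offs can be sketched directly; the per-lead values below are invented for illustration:

```python
# QT dispersion = max(QT) - min(QT) across the measured ECG leads.
def qt_dispersion(qt_by_lead_ms):
    values = list(qt_by_lead_ms)
    return max(values) - min(values)

# Invented per-lead QT intervals in milliseconds (not study data)
qt_ms = {"I": 392, "II": 398, "V1": 370, "V2": 385, "V5": 402, "V6": 388}
disp = qt_dispersion(qt_ms.values())
prolonged_qt = max(qt_ms.values()) >= 430   # study's prolonged-QT cut-off
prolonged_dispersion = disp >= 80           # study's prolonged-dispersion cut-off
print(disp, prolonged_qt, prolonged_dispersion)
```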

12. Chaos on the interval

CERN Document Server

Ruette, Sylvie

2017-01-01

The aim of this book is to survey the relations between the various kinds of chaos and related notions for continuous interval maps from a topological point of view. The papers on this topic are numerous and widely scattered in the literature; some of them are little known, difficult to find, or originally published in Russian, Ukrainian, or Chinese. Dynamical systems given by the iteration of a continuous map on an interval have been broadly studied because they are simple but nevertheless exhibit complex behaviors. They also allow numerical simulations, which enabled the discovery of some chaotic phenomena. Moreover, the "most interesting" part of some higher-dimensional systems can be of lower dimension, which allows, in some cases, boiling it down to systems in dimension one. Some of the more recent developments such as distributional chaos, the relation between entropy and Li-Yorke chaos, sequence entropy, and maps with infinitely many branches are presented in book form for the first time. The author gi...

13. Self-Confidence in the Hospitality Industry

Directory of Open Access Journals (Sweden)

Michael Oshins

2014-02-01

Full Text Available Few industries rely on self-confidence to the extent that the hospitality industry does because guests must feel welcome and that they are in capable hands. This article examines the results of hundreds of student interviews with industry professionals at all levels to determine where the majority of the hospitality industry gets their self-confidence.

14. Consumer confidence or the business cycle

DEFF Research Database (Denmark)

Møller, Stig Vinther; Nørholm, Henrik; Rangvid, Jesper

2014-01-01

Answer: The business cycle. We show that consumer confidence and the output gap both predict excess returns on stocks in many European countries: When the output gap is positive (the economy is doing well), expected returns are low, and when consumer confidence is high, expected returns are also low...

15. Financial Literacy, Confidence and Financial Advice Seeking

NARCIS (Netherlands)

Kramer, Marc M.

2016-01-01

We find that people with higher confidence in their own financial literacy are less likely to seek financial advice, but no relation between objective measures of literacy and advice seeking. The negative association between confidence and advice seeking is more pronounced among wealthy households.

16. Aging and Confidence Judgments in Item Recognition

Science.gov (United States)

Voskuilen, Chelsea; Ratcliff, Roger; McKoon, Gail

2018-01-01

We examined the effects of aging on performance in an item-recognition experiment with confidence judgments. A model for confidence judgments and response times (RTs; Ratcliff & Starns, 2013) was used to fit a large amount of data from a new sample of older adults and a previously reported sample of younger adults. This model of confidence…

17. Organic labelling systems and consumer confidence

OpenAIRE

Sønderskov, Kim Mannemar; Daugbjerg, Carsten

2009-01-01

A research analysis suggests that a state certification and labelling system creates confidence in organic labelling systems and consequently green consumerism. Danish consumers have higher levels of confidence in the labelling system than consumers in countries where the state plays a minor role in labelling and certification.

18. Self-confidence and metacognitive processes

Directory of Open Access Journals (Sweden)

Kleitman Sabina

2005-01-01

Full Text Available This paper examines the status of the Self-confidence trait. Two studies strongly suggest that Self-confidence is a component of metacognition. In the first study, participants (N=132) were administered measures of Self-concept, a newly devised Memory and Reasoning Competence Inventory (MARCI), and a Verbal Reasoning Test (VRT). The results indicate a significant relationship between confidence ratings on the VRT and the Reasoning component of the MARCI. The second study (N=296) employed an extensive battery of cognitive tests and several metacognitive measures. Results indicate the presence of robust Self-confidence and Metacognitive Awareness factors, and a significant correlation between them. Self-confidence taps not only processes linked to performance on items that have correct answers, but also beliefs about events that may never occur.

19. Tailored nutrition education: is it really effective?

Science.gov (United States)

Eyles, Helen; Ni Mhurchu, Cliona

2012-03-01

There has been a growing interest in tailored nutrition education over the previous decade, with a number of literature reviews suggesting this intervention strategy holds considerable potential. Nevertheless, the majority of intervention trials undertaken to date have employed subjective self-report outcome measures (such as dietary recalls). The aim of the present review is to further consider the likely true effect of tailored nutrition education by assessing the findings of tailored nutrition education intervention trials where objective outcome measures (such as sales data) have been employed. Four trials of tailored nutrition education employing objective outcome measures were identified: one was undertaken in eight low-cost supermarkets in New Zealand (2010; n 1104); one was an online intervention trial in Australia (2006; n 497); and two were undertaken in US supermarkets (1997 and 2001; n 105 and 296, respectively). Findings from the high-quality New Zealand trial were negative. Findings from the US trials were also generally negative, although reporting was poor making it difficult to assess quality. Findings from the high-quality online trial were positive, although have limited generalisability for public health. Trials employing objective outcome measures strongly suggest tailored nutrition education is not effective as a stand-alone strategy. However, further large, high-quality trials employing objective outcome measures are needed to determine the true effectiveness of this popular nutrition intervention strategy. Regardless, education plays an important role in generating social understanding and acceptance of broader interventions to improve nutrition.

20. Nanoparticles and their tailoring with laser light

International Nuclear Information System (INIS)

Hubenthal, Frank

2009-01-01

Monodisperse noble metal nanoparticles are of tremendous interest for numerous applications, such as surface-enhanced Raman spectroscopy, catalysis or biosensing. However, preparation of monodisperse metal nanoparticles is still a challenging task, because typical preparation methods yield nanoparticle ensembles with broad shape and/or size distributions. To overcome this drawback, tailoring of metal nanoparticles with laser light has been developed, which is based on the pronounced shape- and size-dependent optical properties of metal nanoparticles. I will demonstrate that nanoparticle tailoring with ns-pulsed laser light is a suitable method to prepare nanoparticle ensembles with a narrow shape and/or size distribution. While irradiation with ns-pulsed laser light during nanoparticle growth permits a precise shape tailoring, post-grown irradiation allows a size tailoring. For example, the initial broad Gaussian size distribution of silver nanoparticles on quartz substrates with a standard deviation of σ= 30% is significantly reduced to as little as σ= 10% after tailoring. This paper addresses teachers of undergraduate and advanced school level as well as students. It assumes some fundamental knowledge in solid-state physics, thermodynamics and resonance vibration.

1. Interval methods: An introduction

DEFF Research Database (Denmark)

Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj

2006-01-01

This chapter contains selected papers presented at the Minisymposium on Interval Methods of the PARA'04 Workshop ''State-of-the-Art in Scientific Computing''. The emphasis of the workshop was on high-performance computing (HPC). The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. A main concern of HPC is the development of software that optimizes the performance of a given computer. An important characteristic of computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different...
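The guaranteed-enclosure idea behind interval methods can be illustrated with a minimal sketch (a toy class, not any particular library's API): each operation on intervals returns an interval certain to contain every possible exact result.

```python
# Minimal interval-arithmetic sketch: values known only to lie in [lo, hi].
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of two sets: endpoints add directly.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: the extremes are among the four endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)   # a quantity known only to lie in [1, 2]
y = Interval(-1.0, 3.0)
print(x + y)   # [0.0, 5.0]
print(x * y)   # [-2.0, 6.0]
```

A full verified-computation library would additionally round endpoints outward to keep the enclosure rigorous under floating-point arithmetic; that detail is omitted here.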

2. Multichannel interval timer

International Nuclear Information System (INIS)

Turko, B.T.

1983-10-01

A CAMAC based modular multichannel interval timer is described. The timer comprises twelve high resolution time digitizers with a common start enabling twelve independent stop inputs. Ten time ranges from 2.5 μs to 1.3 ms can be preset. Time can be read out in twelve 24-bit words either via the CAMAC Crate Controller or an external FIFO register. LSB time calibration is 78.125 ps. An additional word reads out the operational status of the twelve stop channels. The system consists of two modules. The analog module contains a reference clock and 13 analog time stretchers. The digital module contains counters, logic and interface circuits. The timer has excellent differential linearity, thermal stability and crosstalk-free performance
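As a back-of-the-envelope consistency check on the readout figures quoted above (a sketch only, not the instrument's actual readout code): a 24-bit word at a 78.125 ps least-significant bit spans roughly 1.31 ms.

```python
# Convert a raw 24-bit digitizer count to seconds, given the quoted LSB.
LSB_S = 78.125e-12          # LSB time calibration, seconds (78.125 ps)
FULL_SCALE_COUNTS = 2**24   # one 24-bit readout word

def counts_to_seconds(counts):
    return counts * LSB_S

full_scale = counts_to_seconds(FULL_SCALE_COUNTS)
print(f"full-scale span ~ {full_scale * 1e3:.2f} ms")
```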

3. Experimenting with musical intervals

Science.gov (United States)

Lo Presto, Michael C.

2003-07-01

When two tuning forks of different frequency are sounded simultaneously the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
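The calculation described above can be sketched directly: for fork frequencies in an integer ratio, the repetition frequency of the complex wave is their greatest common divisor, i.e. the fundamental of the harmonic series to which both frequencies belong.

```python
import math

def interval_fundamental(f1_hz, f2_hz):
    # Repetition frequency of two superposed tones with integer frequencies:
    # the fundamental whose harmonic series contains both.
    return math.gcd(f1_hz, f2_hz)

# A perfect fourth: 440 Hz and 330 Hz are the 4th and 3rd harmonics of 110 Hz
print(interval_fundamental(440, 330))  # 110
```

Comparing this predicted fundamental with the period of the captured waveform is exactly the comparison the experiment asks students to make.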

4. Interpregnancy interval and risk of autistic disorder.

Science.gov (United States)

Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla

2013-11-01

A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined interpregnancy interval as the time from birth of the first-born child to conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with interpregnancy intervals autistic disorder, compared with 0.13% in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with a previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.

5. The Economy of Persistence: Mario the Tailor

Directory of Open Access Journals (Sweden)

Prudence Black

2016-03-01

Full Text Available Mario Conte has had a tailor shop in King Street, Newtown since the mid 1960s. Taking an interview with Mario as its point of departure, this article describes the persistence of a skilled worker whose practices and techniques remain the same in a world that has long changed. While inattentive to what rules might be used to decorate a shop window, Mario continues to make and sew in the way that he learnt in post-war Italy. Mario’s persistence could be described as all the skills and other elements that need to be in place to keep him working, in particular the tradition of tailoring techniques he has remained true to over the last fifty years. The hand stitching of his tailoring is like a metronome of that persistence.

6. We will be champions: Leaders' confidence in 'us' inspires team members' team confidence and performance.

Science.gov (United States)

Fransen, K; Steffens, N K; Haslam, S A; Vanbeselaere, N; Vande Broek, G; Boen, F

2016-12-01

7. Communication confidence in persons with aphasia.

Science.gov (United States)

Babbitt, Edna M; Cherney, Leora R

2010-01-01

Communication confidence is a construct that has not been explored in the aphasia literature. Recently, national and international organizations have endorsed broader assessment methods that address quality of life and include participation, activity, and impairment domains as well as psychosocial areas. Individuals with aphasia encounter difficulties in all these areas on a daily basis in living with a communication disorder. Improvements are often reflected in narratives that are not typically included in standard assessments. This article illustrates how a new instrument measuring communication confidence might fit into a broad assessment framework and discusses the interaction of communication confidence, autonomy, and self-determination for individuals living with aphasia.

8. Confidence building in implementation of geological disposal

International Nuclear Information System (INIS)

Umeki, Hiroyuki

2004-01-01

Long-term safety of the disposal system should be demonstrated to the satisfaction of the stakeholders. Convincing arguments are therefore required that instil in the stakeholders confidence in the safety of a particular concept for the siting and design of a geological disposal, given the uncertainties that inevitably exist in its a priori description and in its evolution. The step-wise approach associated with making safety case at each stage is a key to building confidence in the repository development programme. This paper discusses aspects and issues on confidence building in the implementation of HLW disposal in Japan. (author)

9. Confidence rating of marine eutrophication assessments

DEFF Research Database (Denmark)

Murray, Ciarán; Andersen, Jesper Harbo; Kaartokallio, Hermanni

2011-01-01

This report presents the development of a methodology for assessing confidence in eutrophication status classifications. The method can be considered a secondary assessment, supporting the primary assessment of eutrophication status. The confidence assessment is based on a transparent scoring … of the 'value' of the indicators on which the primary assessment is made. Such a secondary assessment of confidence represents a first step towards linking status classifications with information regarding their accuracy and precision, and ultimately a tool for improving or targeting actions to improve the health …

10. Tailoring group velocity by topology optimization

DEFF Research Database (Denmark)

Stainko, Roman; Sigmund, Ole

2007-01-01

The paper describes a systematic method for tailoring the dispersion properties of slab-based photonic crystal waveguides. The method is based on topology optimization, which consists of repeated finite element frequency domain analyses. The goal of the optimization process is to come up with slow-light, zero group velocity dispersion photonic waveguides, or photonic waveguides with dispersion properties tailored for dispersion compensation purposes. An example concerning the design of a wide-bandwidth, constant low group velocity waveguide demonstrates the efficiency of the method.

11. Planning an Availability Demonstration Test with Consideration of Confidence Level

Directory of Open Access Journals (Sweden)

Frank Müller

2017-08-01

Full Text Available The full service life of a technical product or system is usually not completed after an initial failure. With appropriate measures, the system can be returned to a functional state. Availability is an important parameter for evaluating such repairable systems: Failure and repair behaviors are required to determine this availability. These data are usually given as mean value distributions with a certain confidence level. Consequently, the availability value also needs to be expressed with a confidence level. This paper first highlights the bootstrap Monte Carlo simulation (BMCS for availability demonstration and inference with confidence intervals based on limited failure and repair data. The BMCS enables point-, steady-state and average availability to be determined with a confidence level based on the pure samples or mean value distributions in combination with the corresponding sample size of failure and repair behavior. Furthermore, the method enables individual sample sizes to be used. A sample calculation of a system with Weibull-distributed failure behavior and a sample of repair times is presented. Based on the BMCS, an extended, new procedure is introduced: the “inverse bootstrap Monte Carlo simulation” (IBMCS to be used for availability demonstration tests with consideration of confidence levels. The IBMCS provides a test plan comprising the required number of failures and repair actions that must be observed to demonstrate a certain availability value. The concept can be applied to each type of availability and can also be applied to the pure samples or distribution functions of failure and repair behavior. It does not require special types of distribution. In other words, for example, a Weibull, a lognormal or an exponential distribution can all be considered as distribution functions of failure and repair behavior. After presenting the IBMCS, a sample calculation will be carried out and the potential of the BMCS and the IBMCS
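The percentile-bootstrap idea underlying the BMCS can be sketched in a few lines: resample the observed failure and repair times with replacement, compute the steady-state availability MTTF/(MTTF + MTTR) for each resample, and read the confidence limits off the sorted estimates. This is a minimal illustration of the general technique, not the authors' implementation; the sample data are hypothetical.

```python
import random
import statistics

def bootstrap_availability_ci(failure_times, repair_times,
                              n_boot=5000, conf=0.90, seed=1):
    """Percentile-bootstrap CI for steady-state availability
    A = MTTF / (MTTF + MTTR), from raw samples of times to
    failure and times to repair."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        # Resample both behaviors with replacement
        f = [rng.choice(failure_times) for _ in failure_times]
        r = [rng.choice(repair_times) for _ in repair_times]
        mttf, mttr = statistics.mean(f), statistics.mean(r)
        estimates.append(mttf / (mttf + mttr))
    estimates.sort()
    lo = estimates[int((1 - conf) / 2 * n_boot)]
    hi = estimates[int((1 + conf) / 2 * n_boot)]
    return lo, hi

# Hypothetical samples: times to failure (h) and repair durations (h)
ttf = [120, 340, 95, 410, 250, 180, 300, 220]
ttr = [4, 9, 6, 3, 12, 5, 7, 8]
low, high = bootstrap_availability_ci(ttf, ttr)
```

The same resampling loop works whether the inputs are pure samples or values drawn from fitted distributions (e.g. Weibull failure times), which is the flexibility the paper emphasizes.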

12. An Exact Confidence Region in Multivariate Calibration

OpenAIRE

Mathew, Thomas; Kasala, Subramanyam

1994-01-01

In the multivariate calibration problem using a multivariate linear model, an exact confidence region is constructed. It is shown that the region is always nonempty and is invariant under nonsingular transformations.

13. Weighting Mean and Variability during Confidence Judgments

Science.gov (United States)

de Gardelle, Vincent; Mamassian, Pascal

2015-01-01

Humans can not only perform some visual tasks with great precision, they can also judge how good they are in these tasks. However, it remains unclear how observers produce such metacognitive evaluations, and how these evaluations might be dissociated from the performance in the visual task. Here, we hypothesized that some stimulus variables could affect confidence judgments above and beyond their impact on performance. In a motion categorization task on moving dots, we manipulated the mean and the variance of the motion directions, to obtain a low-mean low-variance condition and a high-mean high-variance condition with matched performances. Critically, in terms of confidence, observers were not indifferent between these two conditions. Observers exhibited marked preferences, which were heterogeneous across individuals, but stable within each observer when assessed one week later. Thus, confidence and performance are dissociable and observers’ confidence judgments put different weights on the stimulus variables that limit performance. PMID:25793275

14. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence

OpenAIRE

Chua, Elizabeth F.; Hannula, Deborah E.; Ranganath, Charan

2012-01-01

It is generally believed that accuracy and confidence in one’s memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors which contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movem...

15. How do regulators measure public confidence?

International Nuclear Information System (INIS)

Schmitt, A.; Besenyei, E.

2006-01-01

The conclusions and recommendations of this session can be summarized as follows. - There are some important elements of confidence: visibility, satisfaction, credibility and reputation. The latter can consist of trust, positive image and knowledge of the role the organisation plays. A good reputation is hard to achieve but easy to lose. - There is a need to define what public confidence is and what to measure. The difficulty is that confidence is a matter of perception by the public, so what we try to measure is that perception. - How to take the results of confidence measurement into account is controversial because of the influence of context. It is not an exact science; results should be examined cautiously and surveys should be conducted frequently, at least every two years. - Different experiences were described: - Quantitative surveys - among the general public or more specific groups such as the media; - Qualitative research - with test groups and small panels; - Semi-quantitative studies - among stakeholders who have regular contacts with the regulatory body. It is not clear whether the results should be shared with the public or only with other authorities and governmental organisations. - Efforts are needed to increase visibility, which is a prerequisite for confidence. - A practical example was given of organizing an emergency exercise and an information campaign without taking into account the real concerns of the people, showing how public confidence can be decreased. - We learned about a new method - the so-called socio-drama - which addresses another issue also connected to confidence: the notion of understanding between stakeholders around a nuclear site. It is another way of looking at confidence in a more restricted group. (authors)

16. Confidence in leadership among the newly qualified.

Science.gov (United States)

Bayliss-Pratt, Lisa; Morley, Mary; Bagley, Liz; Alderson, Steven

2013-10-23

The Francis report highlighted the importance of strong leadership from health professionals but it is unclear how prepared those who are newly qualified feel to take on a leadership role. We aimed to assess the confidence of newly qualified health professionals working in the West Midlands in the different competencies of the NHS Leadership Framework. Most respondents felt confident in their abilities to demonstrate personal qualities and work with others, but less so at managing or improving services or setting direction.

17. [Sources of leader's confidence in organizations].

Science.gov (United States)

Ikeda, Hiroshi; Furukawa, Hisataka

2006-04-01

18. Tailoring endocrine treatment for early breast cancer

NARCIS (Netherlands)

Fontein, Duveken Berthe Yvonne

2014-01-01

This thesis describes several important aspects of adjuvant endocrine therapy for postmenopausal women with endocrine-sensitive, early-stage breast cancer. In our ongoing efforts to tailor treatment so as to provide the best possible care to each of our patients, we studied the influence of various

19. Tinker Tailor Robot Pi -- The Project

Science.gov (United States)

Bianchi, Lynne

2017-01-01

Tinker Tailor Robot Pi (TTRP) is an innovative curriculum development project, which started in September 2014. It involves in-service primary and secondary teachers, university academic engineers, business partners and pupils at Key Stages 1, 2 and 3 (ages 5-14). The focus of the work has been to explore how a pedagogy for primary engineering…

20. LIFE-STYLE SEGMENTATION WITH TAILORED INTERVIEWING

NARCIS (Netherlands)

KAMAKURA, WA; WEDEL, M

The authors present a tailored interviewing procedure for life-style segmentation. The procedure assumes that a life-style measurement instrument has been designed. A classification of a sample of consumers into life-style segments is obtained using a latent-class model. With these segments, the

1. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

Directory of Open Access Journals (Sweden)

ILEANA BRUDIU

2009-05-01

Full Text Available Parameters estimated with confidence intervals, together with tests of statistical hypotheses, are used in statistical analysis to draw conclusions about a population from an extracted sample. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained when using confidence intervals and hypothesis tests. Whereas statistical hypothesis tests give only a "yes" or "no" answer to questions of statistical estimation, confidence intervals provide more information than a test statistic: they show the high degree of uncertainty arising from small samples and from findings that are "marginally significant" or "almost significant" (p very close to 0.05).
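The point of the case study, that a confidence interval conveys more than a bare yes/no test verdict and that small samples produce wide intervals, can be sketched as follows (illustrative data; 95% two-sided t critical values taken from standard tables):

```python
import math
import statistics

def mean_ci(sample, t_crit):
    """t-based confidence interval for a mean:
    CI = point estimate +/- t * standard error."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - t_crit * se, m + t_crit * se

small = [5.1, 4.8, 5.6, 4.9, 5.4]          # n = 5
large = small * 6                           # n = 30, same mean and spread
lo_s, hi_s = mean_ci(small, t_crit=2.776)   # df = 4
lo_l, hi_l = mean_ci(large, t_crit=2.045)   # df = 29
# The small-sample interval is much wider, even though both samples
# have the same mean: the interval makes the uncertainty visible
# in a way a lone p-value does not.
```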

2. Increasing Product Confidence-Shifting Paradigms.

Science.gov (United States)

Phillips, Marla; Kashyap, Vishal; Cheung, Mee-Shew

2015-01-01

Leaders in the pharmaceutical, medical device, and food industries expressed a unilateral concern over product confidence throughout the total product lifecycle, an unsettling fact for these leaders to manage given that their products affect the lives of millions of people each year. Fueled by the heparin incident of intentional adulteration in 2008, initial efforts for increasing product confidence were focused on improving the confidence of incoming materials, with a belief that supplier performance must be the root cause. As in the heparin case, concern over supplier performance extended deep into the supply chain to include suppliers of the suppliers-which is often a blind spot for pharmaceutical, device, and food manufacturers. Resolved to address the perceived lack of supplier performance, these U.S. Food and Drug Administration (FDA)-regulated industries began to adopt the supplier relationship management strategy, developed by the automotive industry, that emphasizes "management" of suppliers for the betterment of the manufacturers. Current product and supplier management strategies, however, have not led to a significant improvement in product confidence. As a result of the enduring concern by industry leaders over the lack of product confidence, Xavier University launched the Integrity of Supply Initiative in 2012 with a team of industry leaders and FDA officials. Through a methodical research approach, data generated by the pharmaceutical, medical device, and food manufacturers surprisingly pointed to themselves as a source of the lack of product confidence, and revealed that manufacturers either unknowingly increase the potential for error or can control/prevent many aspects of product confidence failure. It is only through this paradigm shift that manufacturers can work collaboratively with their suppliers as equal partners, instead of viewing their suppliers as "lesser" entities needing to be controlled. The basis of this shift provides manufacturers

Energy Technology Data Exchange (ETDEWEB)

Cleminson, F.R. [Dept. of Foreign Affairs and International Trade, Verification, Non-Proliferation, Arms Control and Disarmament Div (IDA), Ottawa, Ontario (Canada)

1998-07-01

Confidence-building has come into its own as a 'tool of choice' in facilitating the non-proliferation, arms control and disarmament (NACD) agenda, whether regional or global. From the Middle East Peace Process (MEPP) to the ASEAN Intersessional Group on Confidence-Building (ARF ISG on CBMS), confidence-building has assumed a central profile in regional terms. In the Four Power Talks begun in Geneva on December 9, 1997, the United States identified confidence-building as one of two subject areas for initial discussion as part of a structured peace process between North and South Korea. Thus, with CBMs assuming such a high profile internationally, it seems prudent for Canadians to pause and take stock of the significant role which Canada has already played in the conceptual development of the process over the last two decades. Since the Helsinki accords of 1975, Canada has developed a significant expertise in this area through an unbroken series of original, basic research projects. These have contributed to defining the process internationally from concept to implementation. Today, these studies represent a solid and unique Departmental investment in basic research from which to draw in meeting Canada's current commitments to multilateral initiatives in the area of confidence-building and to provide a 'step up' in terms of future-oriented leadership. (author)

4. Confidence Leak in Perceptual Decision Making.

Science.gov (United States)

Rahnev, Dobromir; Koizumi, Ai; McCurdy, Li Yan; D'Esposito, Mark; Lau, Hakwan

2015-11-01

People live in a continuous environment in which the visual scene changes on a slow timescale. It has been shown that to exploit such environmental stability, the brain creates a continuity field in which objects seen seconds ago influence the perception of current objects. What is unknown is whether a similar mechanism exists at the level of metacognitive representations. In three experiments, we demonstrated a robust intertask confidence leak-that is, confidence in one's response on a given task or trial influencing confidence on the following task or trial. This confidence leak could not be explained by response priming or attentional fluctuations. Better ability to modulate confidence leak predicted higher capacity for metacognition as well as greater gray matter volume in the prefrontal cortex. A model based on normative principles from Bayesian inference explained the results by postulating that observers subjectively estimate the perceptual signal strength in a stable environment. These results point to the existence of a novel metacognitive mechanism mediated by regions in the prefrontal cortex. © The Author(s) 2015.

5. ADAM SMITH: THE INVISIBLE HAND OR CONFIDENCE

Directory of Open Access Journals (Sweden)

Fernando Luis, Gache

2010-01-01

Full Text Available In 1776 Adam Smith argued that an invisible hand moved the markets to achieve efficiency. In the present paper, however, we raise the hypothesis that this invisible hand is in fact the confidence that each person feels when going to do business; that it is unique, because it is different from the confidence of others; and that it is a nonlinear variable essentially tied to the respective personal histories. We take as our basis the paper by Leopoldo Abadía (2009) on the financial crisis of 2007-2008, to show the way in which confidence operates. The contribution we hope to make with this paper is therefore to emphasize that the level of confidence of the different actors is what really moves the markets (and therefore the economy), and that the subprime mortgage crisis is a confidence crisis at the worldwide level.

7. Determination of confidence limits for experiments with low numbers of counts

International Nuclear Information System (INIS)

Kraft, R.P.; Burrows, D.N.; Nousek, J.A.

1991-01-01

Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference. 12 refs
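The Bayesian calculation the paper tabulates can be sketched briefly: with a flat prior on the source rate s and a known mean background b, the posterior after observing N counts is p(s | N) proportional to exp(-(s+b)) (s+b)^N, and an upper confidence limit follows from integrating this posterior. A simple grid integration (illustrative only; the paper's tables are the authoritative values):

```python
import math

def bayesian_poisson_upper_limit(n_obs, b, cl=0.90, s_max=30.0, steps=30000):
    """Bayesian upper limit on a Poisson source rate s with known mean
    background b and a flat prior on s >= 0 (in the spirit of Kraft,
    Burrows & Nousek 1991). Posterior: p(s|N) ~ exp(-(s+b)) (s+b)^N.
    Returns the smallest s_up with P(s <= s_up | N) >= cl, by a crude
    grid integration."""
    ds = s_max / steps
    grid = [i * ds for i in range(steps + 1)]
    post = [math.exp(-(s + b) + n_obs * math.log(s + b)) for s in grid]
    total = sum(post)
    acc = 0.0
    for s, p in zip(grid, post):
        acc += p
        if acc / total >= cl:
            return s
    return s_max

# Example: 3 counts observed, expected background of 1.2 counts
limit = bayesian_poisson_upper_limit(3, 1.2)
```

With no background (b = 0) and N = 3 the 90% limit is about 6.7; a nonzero background pulls the limit down, which is exactly the regime the paper addresses.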

8. High confidence in falsely recognizing prototypical faces.

Science.gov (United States)

Sampaio, Cristina; Reinke, Victoria; Mathews, Jeffrey; Swart, Alexandra; Wallinger, Stephen

2018-06-01

We applied a metacognitive approach to investigate confidence in recognition of prototypical faces. Participants were presented with sets of faces constructed digitally as deviations from prototype/base faces. Participants were then tested with a simple recognition task (Experiment 1) or a multiple-choice task (Experiment 2) for old and new items plus new prototypes, and they showed a high rate of confident false alarms to the prototypes. Confidence and accuracy relationship in this face recognition paradigm was found to be positive for standard items but negative for the prototypes; thus, it was contingent on the nature of the items used. The data have implications for lineups that employ match-to-suspect strategies.

9. A systematic review of maternal confidence for physiologic birth: characteristics of prenatal care and confidence measurement.

Science.gov (United States)

Avery, Melissa D; Saftner, Melissa A; Larson, Bridget; Weinfurter, Elizabeth V

2014-01-01

Because a focus on physiologic labor and birth has reemerged in recent years, care providers have the opportunity in the prenatal period to help women increase confidence in their ability to give birth without unnecessary interventions. However, most research has only examined support for women during labor. The purpose of this systematic review was to examine the research literature for information about prenatal care approaches that increase women's confidence for physiologic labor and birth and tools to measure that confidence. Studies were reviewed that explored any element of a pregnant woman's interaction with her prenatal care provider that helped build confidence in her ability to labor and give birth. Timing of interaction with pregnant women included during pregnancy, labor and birth, and the postpartum period. In addition, we looked for studies that developed a measure of women's confidence related to labor and birth. Outcome measures included confidence or similar concepts, descriptions of components of prenatal care contributing to maternal confidence for birth, and reliability and validity of tools measuring confidence. The search of MEDLINE, CINAHL, PsycINFO, and Scopus databases provided a total of 893 citations. After removing duplicates and articles that did not meet inclusion criteria, 6 articles were included in the review. Three relate to women's confidence for labor during the prenatal period, and 3 describe tools to measure women's confidence for birth. Research about enhancing women's confidence for labor and birth was limited to qualitative studies. Results suggest that women desire information during pregnancy and want to use that information to participate in care decisions in a relationship with a trusted provider. Further research is needed to develop interventions to help midwives and physicians enhance women's confidence in their ability to give birth and to develop a tool to measure confidence for use during prenatal care.

10. Predicting fecal coliform using the interval-to-interval approach and SWAT in the Miyun watershed, China.

Science.gov (United States)

Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu; Qiu, Jiali; Li, Yangyang

2017-06-01

Pathogens in manure can cause waterborne-disease outbreaks, serious illness, and even death in humans. Therefore, information about the transformation and transport of bacteria is crucial for determining their source. In this study, the Soil and Water Assessment Tool (SWAT) was applied to simulate fecal coliform bacteria load in the Miyun Reservoir watershed, China. The data for the fecal coliform were obtained at three sampling sites, Chenying (CY), Gubeikou (GBK), and Xiahui (XH). The calibration processes of the fecal coliform were conducted using the CY and GBK sites, and validation was conducted at the XH site. An interval-to-interval approach was designed and incorporated into the processes of fecal coliform calibration and validation. The 95% confidence interval of the predicted values and the 95% confidence interval of measured values were considered during calibration and validation in the interval-to-interval approach. Compared with the traditional point-to-point comparison, this method can improve simulation accuracy. The results indicated that the simulation of fecal coliform using the interval-to-interval approach was reasonable for the watershed. This method could provide a new research direction for future model calibration and validation studies.
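The interval-to-interval idea, comparing the 95% confidence interval of predicted values with the 95% confidence interval of measured values rather than comparing points, can be sketched with a normal-approximation interval (hypothetical data; the study's own intervals come from the SWAT uncertainty analysis):

```python
import math
import statistics

def ci95(values):
    """Normal-approximation 95% CI for the mean of `values`."""
    m = statistics.mean(values)
    half = 1.96 * statistics.stdev(values) / math.sqrt(len(values))
    return m - half, m + half

def intervals_overlap(a, b):
    """True if intervals a = (lo, hi) and b = (lo, hi) intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical replicated predictions and measurements for one
# sampling interval (e.g. log10 fecal coliform concentrations).
predicted = [3.1, 3.4, 2.9, 3.3, 3.2]
measured = [3.5, 3.3, 3.6, 3.4, 3.2]
ok = intervals_overlap(ci95(predicted), ci95(measured))
```

Scoring overlap of intervals, rather than the distance between two point values, is what gives the approach its tolerance to sampling noise in both the model output and the observations.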

11. Confidence building - is science the only approach

International Nuclear Information System (INIS)

Bragg, K.

1990-01-01

The Atomic Energy Control Board (AECB) has begun to develop some simplified methods to determine if it is possible to provide confidence that dose, risk and environmental criteria can be respected without undue reliance on detailed scientific models. The progress to date will be outlined and the merits of this new approach will be compared to the more complex, traditional approach. Stress will be given to generating confidence in both technical and non-technical communities as well as the need to enhance communication between them. 3 refs., 1 tab

12. Self Confidence Spillovers and Motivated Beliefs

DEFF Research Database (Denmark)

Banerjee, Ritwik; Gupta, Nabanita Datta; Villeval, Marie Claire

Is success in a task used strategically by individuals to motivate their beliefs prior to taking action in a subsequent, unrelated, task? Also, is the distortion of beliefs reinforced for individuals who have lower status in society? Conducting an artefactual field experiment in India, we show that success when competing in a task increases the performers' self-confidence and competitiveness in the subsequent task. We also find that such spillovers affect the self-confidence of low-status individuals more than that of high-status individuals. Receiving good news under Affirmative Action, however …

13. Aeroelastic tailoring of composite aircraft wings

Science.gov (United States)

Mihaila-Andres, Mihai; Larco, Ciprian; Rosu, Paul-Virgil; Rotaru, Constantin

2017-07-01

The need for continuously increasing size and performance of aerospace structures has established composite materials as the preferred materials in aircraft structures. Apart from their clear capacity to reduce structural weight, and with it manufacturing cost and fuel consumption, while preserving proper airworthiness, the prospect of tailoring a structure using the unique directional stiffness properties of composite materials allows an aerospace engineer to optimize aircraft structures to achieve particular design objectives. This paper presents a brief review of what is known as aeroelastic tailoring of airframes, with the intent of understanding the evolution of this research topic and at the same time providing useful references for further studies.

14. Confident Communication: Speaking Tips for Educators.

Science.gov (United States)

Parker, Douglas A.

This resource book seeks to provide the building blocks needed for public speaking while eliminating the fear factor. The book explains how educators can perfect their oratorical capabilities as well as enjoy the security, confidence, and support needed to create and deliver dynamic speeches. Following an Introduction: A Message for Teachers,…

15. Principles of psychological confidence of NPP operators

International Nuclear Information System (INIS)

Alpeev, A.S.

1994-01-01

The problems of operator interaction with the subsystems supporting operator activity are discussed from the point of view of forming the operator's psychological confidence on the basis of the capabilities of intelligent automation. The functions of the subsystems supporting operator activity are derived; their implementation would greatly decrease the proportion of NPP accidents connected with erroneous operator actions. 6 refs

16. Growing confidence, building skills | IDRC - International ...

International Development Research Centre (IDRC) Digital Library (Canada)

In 2012 Rashid explored the influence of think tanks on policy in Bangladesh, as well as their relationships with international donors and media. In 2014, he explored two-way student exchanges between Canadian and ... his IDRC experience “gave me the confidence to conduct high quality research in social sciences.”.

17. Detecting Disease in Radiographs with Intuitive Confidence

Directory of Open Access Journals (Sweden)

Stefan Jaeger

2015-01-01

Full Text Available This paper argues in favor of a specific type of confidence for use in computer-aided diagnosis and disease classification, namely, sine/cosine values of angles represented by points on the unit circle. The paper shows how this confidence is motivated by Chinese medicine and how sine/cosine values are directly related with the two forces Yin and Yang. The angle for which sine and cosine are equal (45°) represents the state of equilibrium between Yin and Yang, which is a state of nonduality that indicates neither normality nor abnormality in terms of disease classification. The paper claims that the proposed confidence is intuitive and can be readily understood by physicians. The paper underpins this thesis with theoretical results in neural signal processing, stating that a sine/cosine relationship between the actual input signal and the perceived (learned) input is key to neural learning processes. As a practical example, the paper shows how to use the proposed confidence values to highlight manifestations of tuberculosis in frontal chest X-rays.
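The proposed confidence can be written down directly: an angle theta on the unit circle maps to the pair (sin theta, cos theta), with 45° as the Yin/Yang equilibrium point where neither class is favored. A tiny sketch (hypothetical helper, not the paper's implementation):

```python
import math

def yin_yang_confidence(theta_deg):
    """Return a (sin, cos) confidence pair for an angle on the unit
    circle, following the paper's Yin/Yang reading: at 45 degrees the
    two values are equal, indicating equilibrium between the classes.
    (Hypothetical helper for illustration.)"""
    t = math.radians(theta_deg)
    return math.sin(t), math.cos(t)

# At 45 degrees, both confidence components are equal (equilibrium)
abnormal, normal = yin_yang_confidence(45.0)
```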

18. Current Developments in Measuring Academic Behavioural Confidence

Science.gov (United States)

Sander, Paul

2009-01-01

Using published findings and by further analyses of existing data, the structure, validity and utility of the Academic Behavioural Confidence scale (ABC) is critically considered. Validity is primarily assessed through the scale's relationship with other existing scales as well as by looking for predicted differences. The utility of the ABC scale…

19. Evaluating Measures of Optimism and Sport Confidence

Science.gov (United States)

Fogarty, Gerard J.; Perera, Harsha N.; Furst, Andrea J.; Thomas, Patrick R.

2016-01-01

The psychometric properties of the Life Orientation Test-Revised (LOT-R), the Sport Confidence Inventory (SCI), and the Carolina SCI (CSCI) were examined in a study involving 260 athletes. The study aimed to test the dimensional structure, convergent and divergent validity, and invariance over competition level of scores generated by these…

20. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

Science.gov (United States)

Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

2012-01-01

It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure the effects of fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

1. Tailoring electronic structure of polyazomethines thin films

OpenAIRE

J. Weszka; B. Hajduk; M. Domański; M. Chwastek; J. Jurusik; B. Jarząbek; H. Bednarski; P. Jarka

2010-01-01

Purpose: The aim of this work is to show how the electronic properties of polyazomethine thin films deposited by the chemical vapor deposition (CVD) method can be tailored by manipulating the technological parameters of pristine film preparation, as well as by modifying the as-prepared films through exposure to an iodine atmosphere. Design/methodology/approach: The recent achievements in the field of designing and preparation methods to be used while preparing polymer photovoltaic solar cells or optoelectronic ...

2. Building Public Confidence in Nuclear Activities

International Nuclear Information System (INIS)

Isaacs, T

2002-01-01

Achieving public acceptance has become a central issue in discussions regarding the future of nuclear power and associated nuclear activities. Effective public communication and public participation are often put forward as the key building blocks in garnering public acceptance. A recent international workshop in Finland provided insights into other features that might also be important to building and sustaining public confidence in nuclear activities. The workshop was held in Finland in close cooperation with Finnish stakeholders. This was most appropriate because of the recent successes in achieving positive decisions at the municipal, governmental, and Parliamentary levels, allowing the Finnish high-level radioactive waste repository program to proceed, including the identification and approval of a proposed candidate repository site. Much of the workshop discussion appropriately focused on the roles of public participation and public communications in building public confidence. It was clear that well constructed and implemented programs of public involvement and communication and a sense of fairness were essential in building the extent of public confidence needed to allow the repository program in Finland to proceed. It was also clear that there were a number of other elements beyond public involvement that contributed substantially to the success in Finland to date. And, in fact, it appeared that these other factors were also necessary to achieving the Finnish public acceptance. In other words, successful public participation and communication were necessary but not sufficient. What else was important? Culture, politics, and history vary from country to country, providing differing contexts for establishing and maintaining public confidence. What works in one country will not necessarily be effective in another. Nonetheless, there appear to be certain elements that might be common to programs that are successful in sustaining public confidence and some of

4. Tailored information about cancer risk and screening: a systematic review.

NARCIS (Netherlands)

Albada, A.; Ausems, M.G.E.M.; Bensing, J.M.; Dulmen, S. van

2009-01-01

OBJECTIVE: To study interventions that provide people with information about cancer risk and about screening that is tailored to their personal characteristics. We assess the tailoring characteristics, theory base and effects on risk perception, knowledge and screening behavior of these

5. Tailored Trustworthy Spaces: Solutions for the Smart Grid

Data.gov (United States)

Networking and Information Technology Research and Development, Executive Office of the President — The NITRD workshop on Tailored Trustworthy Spaces: Solutions for the Smart Grid was conceived by the Federal government to probe deeper into how Tailored Trustworthy...

6. Lay Health Influencers: How They Tailor Brief Tobacco Cessation Interventions

Science.gov (United States)

Yuan, Nicole P.; Castaneda, Heide; Nichter, Mark; Nichter, Mimi; Wind, Steven; Carruth, Lauren; Muramoto, Myra

2012-01-01

Interventions tailored to individual smoker characteristics have increasingly received attention in the tobacco control literature. The majority of tailored interventions are generated by computers and administered with printed materials or web-based programs. The purpose of this study was to examine the tailoring activities of community lay…

7. Challenge for reconstruction of public confidence

International Nuclear Information System (INIS)

Matsuura, S.

2001-01-01

Past incidents and scandals that have had a large influence in damaging public confidence in nuclear energy safety are presented. The radiation leak on the nuclear-powered ship 'Mutsu' (1974), the T.M.I. incident in 1979, the Chernobyl accident (1986), the sodium leak at the Monju reactor (1995), the fire and explosion at a low-level waste asphalt solidification facility (1997), and the J.C.O. incident (Tokai-mura, 1999) are all examples that have created feelings of distrust and anxiety in society. In order to restore public confidence there is no other course but to be prepared for difficulty and to work honestly to our fullest ability, with all steps taken openly and accountably. (N.C.)

8. Tables of Confidence Limits for Proportions

Science.gov (United States)

1990-09-01

[Numeric table of lower and upper confidence limits for proportions, tabulated by sample size, number of successes and confidence level; the table itself is not reproduced here.]
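Tabulated confidence limits for proportions can be approximated in closed form. As a hedged sketch (published tables of this kind typically use exact binomial limits, so values will differ slightly), the Wilson score interval for a proportion is:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.

    Closed-form approximation; tables of exact (Clopper-Pearson)
    limits give slightly different values.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# example: 355 successes out of 500 trials, 95% confidence
lo, hi = wilson_interval(355, 500)
```

Unlike the simple Wald interval p ± z·sqrt(p(1-p)/n), the Wilson interval stays within [0, 1] and remains sensible for proportions near 0 or 1.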

9. Social media sentiment and consumer confidence

OpenAIRE

Daas, Piet J.H.; Puts, Marco J.H.

2014-01-01

Changes in the sentiment of Dutch public social media messages were compared with changes in monthly consumer confidence over a period of three-and-a-half years, revealing that both were highly correlated (up to r = 0.9) and that both series cointegrated. This phenomenon is predominantly affected by changes in the sentiment of all Dutch public Facebook messages. The inclusion of various selections of public Twitter messages improved this association and the response to changes in sentiment. G...

10. Confidence, Visual Research, and the Aesthetic Function

Directory of Open Access Journals (Sweden)

Stan Ruecker

2007-05-01

Full Text Available The goal of this article is to identify and describe one of the primary purposes of aesthetic quality in the design of computer interfaces and visualization tools. We suggest that humanists can derive advantages in visual research by acknowledging, in their efforts to advance aesthetic quality, that a significant function of aesthetics in this context is to inspire the user’s confidence. This confidence typically serves to create a sense of trust in the provider of the interface or tool. In turn, this increased trust may result in an increased willingness to engage with the object, on the basis that it demonstrates an attention to detail that promises to reward increased engagement. In addition to confidence, the aesthetic may also contribute to a heightened degree of satisfaction with having spent time using or investigating the object. In the realm of interface design and visualization research, we propose that these aesthetic functions have implications not only for the quality of interactions, but also for the results of the standard measures of performance and preference.

11. Confidence-Based Learning in Investment Analysis

Science.gov (United States)

Serradell-Lopez, Enric; Lara-Navarra, Pablo; Castillo-Merino, David; González-González, Inés

12. Predictor sort sampling and one-sided confidence bounds on quantiles

Science.gov (United States)

Steve Verrill; Victoria L. Herian; David W. Green

2002-01-01

Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable that we are investigating. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non-random...

13. Exact nonparametric confidence bands for the survivor function.

Science.gov (United States)

Matthews, David

2013-10-12

A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the van Wijngaarden-Dekker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.
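The bands described here are built on the Kaplan-Meier estimator of the survivor function; the exact band computation (Noe recursions plus root finding) is beyond a short sketch, but the estimator itself is simple. A minimal illustration with invented data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survivor function S(t).

    times  : observed times (event or censoring)
    events : 1 if the event occurred, 0 if right-censored
    Returns a list of (time, S(t)) steps at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        removed = 0
        # group tied observation times
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            s *= (n_at_risk - deaths) / n_at_risk
            steps.append((t, s))
        n_at_risk -= removed
    return steps

# invented sample: one observation (time 3) is right-censored
km = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 1])
```

Censored observations reduce the risk set without stepping the curve down, which is exactly the feature the exact bands must account for under right censoring.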

14. Advances in Precision Medicine: Tailoring Individualized Therapies.

Science.gov (United States)

Matchett, Kyle B; Lynam-Lennon, Niamh; Watson, R William; Brown, James A L

2017-10-25

The traditional bench-to-bedside pipeline involves using model systems and patient samples to provide insights into pathways deregulated in cancer. This discovery reveals new biomarkers and therapeutic targets, ultimately stratifying patients and informing cohort-based treatment options. Precision medicine (molecular profiling of individual tumors combined with established clinical-pathological parameters) reveals, in real time, an individual patient's diagnostic and prognostic risk profile, informing tailored and tumor-specific treatment plans. Here we discuss advances in precision medicine presented at the Irish Association for Cancer Research Annual Meeting, highlighting examples where personalized medicine approaches have led to precision discovery in individual tumors, informing customized treatment programs.

15. Functions of myosin motors tailored for parasitism

DEFF Research Database (Denmark)

Mueller, Christina; Graindorge, Arnault; Soldati-Favre, Dominique

2017-01-01

Myosin motors are one of the largest protein families in eukaryotes and exhibit divergent cellular functions. Their roles in protozoans, a diverse group of anciently diverged, single-celled organisms with many prominent members known to be parasitic and to cause diseases in humans and livestock, are largely unknown. In recent years many different approaches, among them whole-genome sequencing, phylogenetic analyses and functional studies, have increased our understanding of the distribution, protein architecture and function of unconventional myosin motors in protozoan parasites. In Apicomplexa, myosins turn out to be highly specialized and to exhibit unique functions tailored to accommodate the lifestyle of these parasites.

16. Effectiveness of a Web-based multiple tailored smoking cessation program: a randomized controlled trial among Dutch adult smokers.

Science.gov (United States)

Smit, Eline Suzanne; de Vries, Hein; Hoving, Ciska

2012-06-11

Distributing a multiple computer-tailored smoking cessation intervention through the Internet has several advantages for both provider and receiver. Most important, a large audience of smokers can be reached while a highly individualized and personal form of feedback can be maintained. However, such a smoking cessation program has yet to be developed and implemented in The Netherlands. To investigate the effects of a Web-based multiple computer-tailored smoking cessation program on smoking cessation outcomes in a sample of Dutch adult smokers. Smokers were recruited from December 2009 to June 2010 by advertising our study in the mass media and on the Internet. Those interested and motivated to quit smoking within 6 months (N = 1123) were randomly assigned to either the experimental (n = 552) or control group (n = 571). Respondents in the experimental group received the fully automated Web-based smoking cessation program, while respondents in the control group received no intervention. After 6 weeks and after 6 months, we assessed the effect of the intervention on self-reported 24-hour point prevalence abstinence, 7-day point prevalence abstinence, and prolonged abstinence using logistic regression analyses. Of the 1123 respondents, 449 (40.0%) completed the 6-week follow-up questionnaire and 291 (25.9%) completed the 6-month follow-up questionnaire. We used a negative scenario to replace missing values. That is, we considered respondents lost to follow-up to still be smoking. The computer-tailored program appeared to have significantly increased 24-hour point prevalence abstinence (odds ratio [OR] 1.85, 95% confidence interval [CI] 1.30-2.65), 7-day point prevalence abstinence (OR 2.17, 95% CI 1.44-3.27), and prolonged abstinence (OR 1.99, 95% CI 1.28-3.09) rates reported after 6 weeks. After 6 months, however, no intervention effects could be identified. Results from complete-case analyses were similar. The results presented suggest that the Web-based computer-tailored
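The odds ratios reported above follow the CI pattern from the head note (point estimate ± margin of error), computed on the log-odds scale. A hedged sketch with invented 2×2 counts (the abstract does not report the raw cell counts, so the numbers below are purely illustrative):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald confidence interval from a 2x2 table:
        a = quit (intervention), b = still smoking (intervention)
        c = quit (control),      d = still smoking (control)
    CI on the log scale: log(OR) +/- z * SE, SE = sqrt(1/a+1/b+1/c+1/d).
    """
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se)
    upper = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lower, upper

# hypothetical counts chosen only to illustrate the calculation
or_est, or_lo, or_hi = odds_ratio_ci(60, 492, 35, 536)
```

An interval whose lower bound exceeds 1 corresponds to a statistically significant increase in the odds of abstinence, which is how the abstract's 6-week results should be read.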

17. Interval stability for complex systems

Science.gov (United States)

Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.

2018-04-01

Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable to disrupt the stable regime for a given interval of time. The suggested measures provide important information about the system susceptibility to external perturbations which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various nature, such as power grids and neural networks.
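The interval basin stability (IBS) measure described here lends itself to straightforward Monte Carlo estimation: draw random perturbations, integrate for a finite time, and count returns. A minimal sketch for a one-dimensional bistable toy system (not the power-grid or neural-network models of the paper):

```python
import random

def interval_basin_stability(f, x_star, t_end, dt=0.01,
                             pert=2.0, tol=0.05, trials=500, seed=1):
    """Monte Carlo estimate of interval basin stability for dx/dt = f(x):
    the fraction of uniformly drawn perturbations of the attractor x_star
    from which the state is back within `tol` of x_star after `t_end`
    time units (forward Euler integration).
    """
    rng = random.Random(seed)
    returned = 0
    for _ in range(trials):
        x = x_star + rng.uniform(-pert, pert)
        t = 0.0
        while t < t_end:
            x += dt * f(x)
            t += dt
        if abs(x - x_star) < tol:
            returned += 1
    return returned / trials

# bistable toy system dx/dt = x - x^3, attractors at x = +/-1;
# perturbations landing at x < 0 converge to the other attractor
ibs = interval_basin_stability(lambda x: x - x**3, x_star=1.0, t_end=10.0)
```

For this system the basin of x = 1 is x > 0, so with perturbations uniform on [-1, 3] the estimate should be near 0.75; shrinking `t_end` lowers it, which is the "finite time interval" aspect that distinguishes IBS from asymptotic basin stability.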

18. A tailored implementation strategy to reduce the duration of intravenous antibiotic treatment in community-acquired pneumonia: a controlled before-and-after study.

Science.gov (United States)

Engel, M F; Bruns, A H W; Hulscher, M E J L; Gaillard, C A J M; Sankatsing, S U C; Teding van Berkhout, F; Emmelot-Vonk, M H; Kuck, E M; Steeghs, M H M; den Breeijen, J H; Stellato, R K; Hoepelman, A I M; Oosterheert, J J

2014-11-01

We previously showed that 40 % of clinically stable patients hospitalised for community-acquired pneumonia (CAP) are not switched to oral therapy in a timely fashion because of physicians' barriers. We aimed to decrease this proportion by implementing a novel protocol. In a multi-centre controlled before-and-after study, we evaluated the effect of an implementation strategy tailored to previously identified barriers to an early switch. In three Dutch hospitals, a protocol dictating a timely switch strategy was implemented using educational sessions, pocket reminders and active involvement of nursing staff. Primary outcomes were the proportion of patients switched in a timely fashion and the duration of intravenous antibiotic therapy. Length of hospital stay (LOS), patient outcome, education effects 6 months after implementation and implementation costs were secondary outcomes. Statistical analysis was performed using mixed-effects models. Prior to implementation, 146 patients were included and, after implementation, 213 patients were included. The case mix was comparable. The implementation did not change the proportion of patients switched on time (66 %). The median duration of intravenous antibiotic administration decreased from 4 days [interquartile range (IQR) 2-5] to 3 days (IQR 2-4), a decrease of 21 % [95 % confidence interval (CI) 11 %; 30 %] in the multi-variable analysis. LOS and patient outcome were comparable before and after implementation. Forty-three percent (56/129) of physicians attended the educational sessions. After 6 months, 24 % (10/42) of the interviewed attendees remembered the protocol's main message. Cumulative implementation costs were 5,798 (20 per reduced intravenous treatment day). An implementation strategy tailored to previously identified barriers reduced the duration of intravenous antibiotic administration in hospitalised CAP patients by 1 day, at minimal cost.

19. Transparency as an element of public confidence

International Nuclear Information System (INIS)

Kim, H.K.

2007-01-01

In modern society, there are increasing demands for greater transparency. Transparency has been discussed with respect to corruption and ethics issues in social science. The need for greater openness and transparency in nuclear regulation is widely recognised as public expectations of the regulator grow. It is also related to the digital and information technology that enables disclosure of every activity and piece of information of individuals and organisations, characterised by numerous 'small brothers'. Transparency has become a key word in this ubiquitous era. Transparency in regulatory activities needs to be understood in the following contexts. First, transparency is one of the elements that build public confidence in the regulator and, eventually, achieve the regulatory goal of providing the public with satisfaction with nuclear safety. Transparency about the competence, independence, ethics and integrity of the regulatory body's working processes would enhance public confidence. Second, activities transmitting information on nuclear safety, and preparedness to be accessed, are different types of transparency. Communication is an active method of transparency. With increasing use of websites, 'digital transparency' is also discussed as a passive one. Transparency in the regulatory process may be more important than transparency of its contents. Simply providing more information is of little value, and specific information may need to be protected for security reasons. Third, transparency should be discussed from international, national and organisational perspectives. It has been demanded through international instruments. For each country, transparency is demanded by residents, the public, NGOs, the media and other stakeholders. Employees also demand more transparency in operating and regulatory organisations. Whistle-blowers may appear unless they are satisfied. Fourth, pursuing transparency may cause undue social cost or adverse effects. Over-transparency may decrease public confidence and the process for transparency may also hinder

20. Asymptotically Honest Confidence Regions for High Dimensional

DEFF Research Database (Denmark)

Caner, Mehmet; Kock, Anders Bredahl

While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However ... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality, which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow ...

1. National Debate and Public Confidence in Sweden

International Nuclear Information System (INIS)

Lindquist, Ted

2014-01-01

Ted Lindquist, coordinator of the Association of Swedish Municipalities with Nuclear Facilities (KSO), closed the first day of conferences. He described the nuclear landscape in Sweden, showing in particular that over time there has been rather good support from the population. He explained that the reason could be the public's confidence in the national debate. On a more local scale, Ted Lindquist showed how overwhelmingly strong the support was in towns where the industry would like to operate long-term storage facilities.

2. Diagnosing Anomalous Network Performance with Confidence

Energy Technology Data Exchange (ETDEWEB)

Settlemyer, Bradley W [ORNL; Hodson, Stephen W [ORNL; Kuehn, Jeffery A [ORNL; Poole, Stephen W [ORNL

2011-04-01

Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly parallel applications running on parallel machines typically require consistently high levels of performance to adequately leverage the massive amounts of available computing power. Performance analysts have usually quantified network performance using traditional summary statistics that assume the observational data is sampled from a normal distribution. In our examinations of network performance, we have found this method of analysis often provides too little data to understand anomalous network performance. Our tool, Confidence, instead uses an empirically derived probability distribution to characterize network performance. In this paper we describe several instances where the Confidence toolkit allowed us to understand and diagnose network performance anomalies that we could not adequately explore with the simple summary statistics provided by traditional measurement tools. In particular, we examine a multi-modal performance scenario encountered with an InfiniBand interconnection network and we explore the performance repeatability on the custom Cray SeaStar2 interconnection network after a set of software and driver updates.
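The abstract's point, that normal-theory summary statistics can hide multi-modal behavior which an empirical distribution reveals, is easy to illustrate. A hedged sketch with synthetic (not measured) latencies:

```python
import random
import statistics

# synthetic bimodal "latency" sample: a fast path and a slow path
rng = random.Random(42)
latencies = ([rng.gauss(10.0, 0.5) for _ in range(500)] +   # fast path
             [rng.gauss(30.0, 1.0) for _ in range(500)])    # slow path

# normal-theory summary: mean near 20, where almost no samples fall
mean = statistics.mean(latencies)
stdev = statistics.stdev(latencies)

# empirical view: percentiles expose the two modes
qs = statistics.quantiles(latencies, n=20)   # 5%, 10%, ..., 95%
p5, p50, p95 = qs[0], qs[9], qs[18]
```

Here the mean sits in a gap containing essentially no observations, so "mean ± stdev" misrepresents the behavior entirely; the 5th and 95th percentiles land in the fast and slow modes respectively, which is the kind of empirically derived characterization the Confidence tool relies on.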

3. Tinnitus therapy using tailor-made notched music delivered via a smartphone application and Ginko combined treatment: A pilot study.

Science.gov (United States)

Kim, So Young; Chang, Mun Young; Hong, Min; Yoo, Sun-Gil; Oh, Dongik; Park, Moo Kyun

2017-10-01

Notched music therapy has been suggested to be effective for relieving tinnitus. We have developed a smartphone application using tailor-made notched music for tinnitus patients. This study aimed to evaluate the effect of this smartphone application on reducing tinnitus. In addition, we investigated the predictive factors for tinnitus treatment outcome using this smartphone application. A total of 26 patients who were chronically distressed by tinnitus, with a Tinnitus Handicap Inventory (THI) score ≥18, were recruited from March 2013 to March 2015 (National Clinical Trial (NCT) Identifier Number 01663467). Patients were instructed to listen to tailor-made notched music through our smartphone application for 30-60 min per day and were prescribed Ginkgo biloba for 3 months. Treatment outcome was evaluated using the THI and a visual analogue scale that measures the effects of tinnitus in terms of loudness, noticeable time, annoyance, and disruption of daily life. Demographic data, including age, sex, and duration of tinnitus, and pre-treatment scores on questionnaires such as the Beck Depression Inventory (BDI), State Trait Anxiety Inventory (TAI), and Pittsburgh Sleep Quality Index (PSQI) were compared between the effective and non-effective groups according to the differences between their pre- and post-treatment THI scores. Smartphone application-delivered notched music therapy and Ginko combined treatment improved the THI score from 33.9±18.9 to 23.1±15.2; the effect was particularly marked for the emotional score of the THI. Improvement in the THI score was positively correlated with the initial THI score (P=0.001, adjusted estimated value=0.49, 95% confidence interval=0.25-0.73). Chronic tinnitus patients who underwent smartphone application-delivered notched music therapy and Ginko combined treatment showed improved THI scores, particularly the emotional score of the THI. A smartphone application-delivered therapy and Ginko combined treatment may be more

4. The relationship between confidence in charitable organizations and volunteering revisited

NARCIS (Netherlands)

Bekkers, René H.F.P.; Bowman, Woods

2009-01-01

Confidence in charitable organizations (charitable confidence) would seem to be an important prerequisite for philanthropic behavior. Previous research relying on cross-sectional data has suggested that volunteering promotes charitable confidence and vice versa. This research note, using new

5. Experimental uncertainty estimation and statistics for data having interval uncertainty.

Energy Technology Data Exchange (ETDEWEB)

Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

2007-05-01

This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
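Some of the statistics discussed in this report are straightforward to bound for interval data: the mean and median are monotone in every observation, so exact bounds follow directly from the interval endpoints, while others (notably the variance) are known to be computationally hard to bound in general. A minimal sketch of the easy cases:

```python
import statistics

def interval_mean_bounds(intervals):
    """Exact bounds on the sample mean when each observation is known
    only to lie in an interval [lo, hi]: the mean is monotone in each
    observation, so the extremes occur at the endpoints."""
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

def interval_median_bounds(intervals):
    """Bounds on the sample median, by the same monotonicity argument:
    the median of the lower endpoints and of the upper endpoints."""
    return (statistics.median(lo for lo, _ in intervals),
            statistics.median(hi for _, hi in intervals))

# four interval-valued measurements (invented for illustration)
data = [(1, 2), (2, 4), (3, 3), (0, 5)]
mean_lo, mean_hi = interval_mean_bounds(data)
med_lo, med_hi = interval_median_bounds(data)
```

The width of these bounds grows with measurement imprecision, which is the precision-versus-sample-size tradeoff the report explores; variance bounds, by contrast, depend strongly on the degree of interval overlap.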

6. Deep Drawing of High-Strength Tailored Blanks by Using Tailored Tools

Directory of Open Access Journals (Sweden)

Thomas Mennecart

2016-01-01

Full Text Available In most forming processes based on tailored blanks, the tool material remains the same as that of sheet metal blanks without tailored properties. A novel concept of lightweight construction for deep drawing tools is presented in this work to improve the forming behavior of tailored blanks. The investigations presented here deal with the forming of tailored blanks of dissimilar strengths using tailored dies made of two different materials. In the area of the steel blank with higher strength, typical tool steel is used. In the area of the low-strength steel, a hybrid tool made out of a polymer and a fiber-reinforced surface replaces the steel half. Cylindrical cups of DP600/HX300LAD are formed and analyzed regarding their formability. The use of two different halves of tool materials shows improved blank thickness distribution, weld-line movement and pressure distribution compared to the use of two steel halves. An improvement in strain distribution is also observed by the inclusion of springs in the polymer side of tools, which is implemented to control the material flow in the die. Furthermore, a reduction in tool weight of approximately 75% can be achieved by using this technique. An accurate finite element modeling strategy is developed to analyze the problem numerically and is verified experimentally for the cylindrical cup. This strategy is then applied to investigate the thickness distribution and weld-line movement for a complex geometry, and its transferability is validated. The inclusion of springs in the hybrid tool leads to better material flow, which results in reduction of weld-line movement by around 60%, leading to more uniform thickness distribution.

7. Comparing interval estimates for small sample ordinal CFA models.

Science.gov (United States)

Natesan, Prathiba

2015-01-01

Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
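Undercoverage of the kind reported here is easy to demonstrate by simulation. A hedged sketch checking a generic 95% z-interval for a mean under small, skewed samples (not the ordinal CFA setting of the study):

```python
import random
import statistics

def coverage_of_z_interval(true_mean, draw, n=10, trials=2000, seed=7):
    """Monte Carlo check of how often a nominal 95% z-interval for the
    mean, mean +/- 1.96 * s / sqrt(n), actually covers the true mean.
    Undercoverage is expected for small samples from skewed data."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [draw(rng) for _ in range(n)]
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if m - 1.96 * se <= true_mean <= m + 1.96 * se:
            hits += 1
    return hits / trials

# exponential(1) has mean 1 and is strongly right-skewed
cov = coverage_of_z_interval(1.0, lambda r: r.expovariate(1.0))
```

The observed coverage falls noticeably below the nominal 95%, which is the pattern the study reports for non-Bayesian interval estimates: a plausible-looking standard error is no guarantee that the interval covers the true value at the stated rate.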

8. Confidence crisis of results in biomechanics research.

Science.gov (United States)

Knudson, Duane

2017-11-01

Many biomechanics studies have small sample sizes and incorrect statistical analyses, so the reporting of inaccurate inferences and inflated magnitudes of effects is common in the field. This review examines these issues in biomechanics research and summarises potential solutions from research in other fields to increase confidence in the experimental effects reported in biomechanics. Authors, reviewers and editors of biomechanics research reports are encouraged to improve sample sizes and the resulting statistical power, improve reporting transparency, improve the rigour of the statistical analyses used, and increase the acceptance of replication studies, all to improve the validity of inferences from data in biomechanics research. The application of sports biomechanics research results would also improve if a larger percentage of unbiased effects and their uncertainty were reported in the literature.

9. Technology in a crisis of confidence

Energy Technology Data Exchange (ETDEWEB)

Damodaran, G R

1979-04-01

The power that technological progress has given to engineers is examined to see if there has been a corresponding growth in human happiness. A credit/debit approach is discussed, whereby technological advancement is measured against the criteria of social good. The credit side includes medicine, agriculture, and energy use, while the debit side lists pollution, unequal distribution of technology and welfare, modern weaponry, resource depletion, and a possible decline in the quality of life. The present anti-technologists claim the debit side is now predominant, but the author challenges this position by examining the role of technology and the engineer in the society. He sees a need for renewed self-confidence and a sense of direction among engineers, but is generally optimistic that technology and civilization will continue to be intertwined. (DCK)

10. Considering public confidence in developing regulatory programs

International Nuclear Information System (INIS)

Collins, S.J.

2001-01-01

In the area of public trust, as in any investment, planning and strategy are important. While it is accepted in the United States that an essential part of our mission is to leverage our resources toward improving Public Confidence, this performance goal must be planned for, managed and measured. As with our premier performance goal of Maintaining Safety, a strategy must be developed and integrated not only with our external stakeholders but with internal regulatory staff as well. To do that, business must be conducted in an open environment, the basis for regulatory decisions must be available through public documents and public meetings, and communication must be done in clear and consistent terms. (N.C.)

11. Tailoring superelasticity of soft magnetic materials

Science.gov (United States)

Cremer, Peet; Löwen, Hartmut; Menzel, Andreas M.

2015-10-01

Embedding magnetic colloidal particles in an elastic polymer matrix leads to smart soft materials that can reversibly be addressed from outside by external magnetic fields. We discover a pronounced nonlinear superelastic stress-strain behavior of such materials using numerical simulations. This behavior results from a combination of two stress-induced mechanisms: a detachment mechanism of embedded particle aggregates and a reorientation mechanism of magnetic moments. The superelastic regime can be reversibly tuned or even be switched on and off by external magnetic fields and thus be tailored during operation. Similarities to the superelastic behavior of shape-memory alloys suggest analogous applications, with the additional benefit of reversible switchability and a higher biocompatibility of soft materials.

12. Engineering tailored nanoparticles with microbes: quo vadis?

Science.gov (United States)

Prasad, Ram; Pandey, Rishikesh; Barman, Ishan

2016-01-01

In the quest for less toxic and cleaner methods of nanomaterials production, recent developments in the biosynthesis of nanoparticles have underscored the important role of microorganisms. Their intrinsic ability to withstand variable extremes of temperature, pressure, and pH coupled with the minimal downstream processing requirements provide an attractive route for diverse applications. Yet, controlling the dispersity and facile tuning of the morphology of the nanoparticles of desired chemical compositions remains an ongoing challenge. In this Focus Review, we critically review the advances in nanoparticle synthesis using microbes, ranging from bacteria and fungi to viruses, and discuss new insights into the cellular mechanisms of such formation that may, in the near future, allow complete control over particle morphology and functionalization. In addition to serving as paradigms for cost-effective, biocompatible, and eco-friendly synthesis, microbes hold the promise for a unique template for synthesis of tailored nanoparticles targeted at therapeutic and diagnostic platform technologies. © 2015 Wiley Periodicals, Inc.

13. Tailored vacuum chambers for ac magnets

International Nuclear Information System (INIS)

Harvey, A.

1985-01-01

The proposed LAMPF-II accelerator has a 60-Hz booster synchrotron and a 3-Hz main ring. To provide a vacuum enclosure inside the magnets with low eddy-current losses and minimal field distortion, yet capable of carrying rf image currents and providing beam stabilization, we propose an innovative combination pipe. Structurally, the enclosure is high-purity alumina ceramic, which is strong, radiation resistant, and has good vacuum properties. Applied to the chamber are thin, spaced, silver conductors using adapted thick-film technology. The conductor design can be tailored to the stabilization requirements, for example, longitudinal conductors for image currents, circumferential for transverse stabilization. The inside of the chamber has a thin, resistive coating to avoid charge build-up. The overall 60-Hz power loss is less than 100 W/m

14. Computer-Tailored Intervention for Juvenile Offenders

Science.gov (United States)

LEVESQUE, DEBORAH A.; JOHNSON, JANET L.; WELCH, CAROL A.; PROCHASKA, JANICE M.; FERNANDEZ, ANNE C.

2012-01-01

Studies assessing the efficacy of juvenile justice interventions show small effects on recidivism and other outcomes. This paper describes the development of a prototype of a multimedia computer-tailored intervention (“Rise Above Your Situation”or RAYS) that relies on an evidence-based model of behavior change, the Transtheoretical Model, and expert system technology to deliver assessments, feedback, printed reports, and counselor reports with intervention ideas. In a feasibility test involving 60 system-involved youths and their counselors, evaluations of the program were favorable: 91.7% of youths agreed that the program could help them make positive changes, and 86.7% agreed that the program could give their counselor helpful information about them. PMID:23264754

15. Optimizing Tailored Health Promotion for Older Adults

Science.gov (United States)

Marcus-Varwijk, Anne Esther; Koopmans, Marg; Visscher, Tommy L. S.; Seidell, Jacob C.; Slaets, Joris P. J.; Smits, Carolien H. M.

2016-01-01

Objective: This study explores older adults’ perspectives on healthy living, and their interactions with professionals regarding healthy living. This perspective is necessary for health professionals when they engage in tailored health promotion in their daily work routines. Method: In a qualitative study, 18 semi-structured interviews were carried out with older adults (aged 55-98) living in the Netherlands. The framework analysis method was used to analyze the transcripts. Results: Three themes emerged from the data—(a) healthy living: daily routines and staying active, (b) enacting healthy living: accepting and adapting, (c) interaction with health professionals with regard to healthy living: autonomy and reciprocity. Discussion: Older adults experience healthy living in a holistic way in which they prefer to live active and independent lives. Health professionals should focus on building an equal relationship of trust and focus on positive health outcomes, such as autonomy and self-sufficiency when communicating about healthy living. PMID:28138485

16. Effects of maternal confidence and competence on maternal parenting stress in newborn care.

Science.gov (United States)

Liu, Chien-Chi; Chen, Yueh-Chih; Yeh, Yen-Po; Hsieh, Yeu-Sheng

2012-04-01

This paper is a report of a correlational study of the relations of maternal confidence and maternal competence to maternal parenting stress during newborn care. Maternal role development is a cognitive and social process influenced by cultural and family contexts and mother and child characteristics. Most knowledge about maternal role development comes from western society. However, perceptions of the maternal role in contemporary Taiwanese society may be affected by contextual and environmental factors. A prospective correlational design was used to recruit 372 postpartum Taiwanese women and their infants from well-child clinics at 16 health centres in central Taiwan. Inclusion criteria for mothers were gestational age >37 weeks, age ≥18 years, and good health with a healthy infant. Data were collected on maternal confidence, maternal competence and self-perceived maternal parenting stress. After controlling for maternal parity and infant temperament, high maternal confidence and competence were associated with low maternal parenting stress. Maternal confidence influenced maternal parenting stress both directly and indirectly via maternal competence. To assist postpartum women in infant care programmes achieve positive outcomes, nurses should evaluate and bolster mothers' belief in their own abilities. Likewise, nurses should not only consider mothers' infant care skills, but also mothers' parity and infant temperament. Finally, it is crucial for nurses and researchers to recognize that infant care programmes should be tailored to mothers' specific maternal characteristics. © 2011 The Authors. Journal of Advanced Nursing © 2011 Blackwell Publishing Ltd.

17. Tailoring CSR Strategy to Company Size?

Directory of Open Access Journals (Sweden)

Alexandra ZBUCHEA

2017-09-01

18. Synroc tailored waste forms for actinide immobilization

Energy Technology Data Exchange (ETDEWEB)

Gregg, Daniel J.; Vance, Eric R. [Australian Nuclear Science and Technology Organisation, Kirrawee (Australia). ANSTOsynroc, Inst. of Materials Engineering

2017-07-01

Since the end of the 1970s, Synroc at the Australian Nuclear Science and Technology Organisation (ANSTO) has evolved from a focus on titanate ceramics directed at PUREX waste to a platform waste treatment technology to fabricate tailored glass-ceramic and ceramic waste forms for different types of actinide, high- and intermediate level wastes. The particular emphasis for Synroc is on wastes which are problematic for glass matrices or existing vitrification process technologies. In particular, nuclear wastes containing actinides, notably plutonium, pose a unique set of requirements for a waste form, which Synroc ceramic and glass-ceramic waste forms can be tailored to meet. Key aspects to waste form design include maximising the waste loading, producing a chemically durable product, maintaining flexibility to accommodate waste variations, a proliferation resistance to prevent theft and diversion, and appropriate process technology to produce waste forms that meet requirements for actinide waste streams. Synroc waste forms incorporate the actinides within mineral phases, producing products which are much more durable in water than baseline borosilicate glasses. Further, Synroc waste forms can incorporate neutron absorbers and {sup 238}U which provide criticality control both during processing and whilst within the repository. Synroc waste forms offer proliferation resistance advantages over baseline borosilicate glasses as it is much more difficult to retrieve the actinide and they can reduce the radiation dose to workers compared to borosilicate glasses. Major research and development into Synroc at ANSTO over the past 40 years has included the development of waste forms for excess weapons plutonium immobilization in collaboration with the US and for impure plutonium residues in collaboration with the UK, as examples. With a waste loading of 40-50 wt.%, Synroc would also be considered a strong candidate as an engineered waste form for used nuclear fuel and highly

19. Chinese Management Research Needs Self-Confidence but not Over-confidence

DEFF Research Database (Denmark)

Li, Xin; Ma, Li

2018-01-01

Chinese management research aims to contribute to global management knowledge by offering rigorous and innovative theories and practical recommendations both for managing in China and outside. However, two seemingly opposite directions that researchers are taking could prove detrimental to the healthy development of Chinese management research. We argue that the two directions share a common ground that lies in the mindset regarding the confidence in the work on and from China. One direction, simply following the American mainstream on academic rigor, demonstrates a lack of self-confidence, limiting theoretical innovation and practical relevance. Yet going in the other direction, of overly indigenous research, reflects over-confidence, often isolating Chinese management research from the mainstream academia and, at times, even becoming anti-science. A more integrated approach of conducting...

20. Haemostatic reference intervals in pregnancy

DEFF Research Database (Denmark)

Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna

2010-01-01

Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period using only the uncomplicated pregnancies were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S... Several measures remained largely unchanged during pregnancy, delivery, and postpartum and were within non-pregnant reference intervals. However, levels of fibrinogen, D-dimer, and coagulation factors VII, VIII, and IX increased markedly. Protein S activity decreased substantially, while free protein S decreased slightly and total...

1. Molecular tailoring approach for exploring structures, energetics and ...

Indian Academy of Sciences (India)

Keywords: molecular clusters; linear scaling methods; molecular tailoring approach (MTA); Hartree–...; energy decomposition analysis; molecular dynamics simulation.

2. Inverse Interval Matrix: A Survey

Czech Academy of Sciences Publication Activity Database

Rohn, Jiří; Farhadsefat, R.

2011-01-01

Roč. 22, - (2011), s. 704-719 E-ISSN 1081-3810 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords: interval matrix * inverse interval matrix * NP-hardness * enclosure * unit midpoint * inverse sign stability * nonnegative invertibility * absolute value equation * algorithm Subject RIV: BA - General Mathematics Impact factor: 0.808, year: 2010 http://www.math.technion.ac.il/iic/ela/ela-articles/articles/vol22_pp704-719.pdf
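For very small matrices, the inverse interval matrix this record surveys can be bounded by brute force. Each entry of an inverse is a ratio of functions that are linear in any single coefficient, so over a regular interval matrix (one containing no singular matrix, which this sketch assumes rather than verifies) its entrywise extrema are attained at vertex matrices. The function name below is mine:

```python
import itertools
import numpy as np

def inverse_interval_hull(A_lo, A_hi):
    """Entrywise hull of {inv(A) : A_lo <= A <= A_hi}, by enumerating all
    vertex matrices (every entry fixed at its lower or upper bound).
    Assumes the interval matrix is regular.  Cost is exponential in n*n,
    so this is a sketch for tiny matrices only."""
    A_lo, A_hi = np.asarray(A_lo, float), np.asarray(A_hi, float)
    n = A_lo.shape[0]
    lo = np.full((n, n), np.inf)
    hi = np.full((n, n), -np.inf)
    for choice in itertools.product([0, 1], repeat=n * n):
        V = np.where(np.reshape(choice, (n, n)), A_hi, A_lo)
        inv = np.linalg.inv(V)
        lo, hi = np.minimum(lo, inv), np.maximum(hi, inv)
    return lo, hi

lo, hi = inverse_interval_hull([[1.9, -0.1], [-0.1, 1.9]],
                               [[2.1,  0.1], [ 0.1, 2.1]])
# The inverse of the midpoint matrix diag(2, 2) lies inside the hull.
```

Practical enclosure methods in the surveyed literature avoid this exponential enumeration; the brute-force version is useful only as a ground-truth check.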

3. Tailored model abstraction in performance assessments

International Nuclear Information System (INIS)

Kessler, J.H.

1995-01-01

Total System Performance Assessments (TSPAs) are likely to be one of the most significant parts of making safety cases for the continued development and licensing of geologic repositories for the disposal of spent fuel and HLW. Thus, it is critical that the TSPA model capture the 'essence' of the physical processes relevant to demonstrating that the appropriate regulation is met. But how much detail about the physical processes must be modeled and understood before there is enough confidence that the appropriate essence has been captured? In this summary the required level of model abstraction is discussed. Approaches for subsystem and total system performance analyses are outlined, and the role of best-estimate models is examined. It is concluded that a conservative approach to repository performance, based on a limited amount of field and laboratory data, can provide sufficient confidence for a regulatory decision.

4. A new model for cork weight estimation in Northern Portugal with methodology for construction of confidence intervals

Science.gov (United States)

Teresa J.F. Fonseca; Bernard R. Parresol

2001-01-01

Cork, a unique biological material, is a highly valued non-timber forest product. Portugal is the leading producer of cork with 52 percent of the world production. Tree cork weight models have been developed for Southern Portugal, but there are no representative published models for Northern Portugal. Because cork trees may have a different form between Northern and...

5. A Validation Study of the Rank-Preserving Structural Failure Time Model: Confidence Intervals and Unique, Multiple, and Erroneous Solutions.

Science.gov (United States)

Ouwens, Mario; Hauch, Ole; Franzén, Stefan

2018-05-01

The rank-preserving structural failure time model (RPSFTM) is used for health technology assessment submissions to adjust for switching patients from reference to investigational treatment in cancer trials. It uses counterfactual survival (survival when only reference treatment would have been used) and assumes that, at randomization, the counterfactual survival distribution for the investigational and reference arms is identical. Previous validation reports have assumed that patients in the investigational treatment arm stay on therapy throughout the study period. To evaluate the validity of the RPSFTM at various levels of crossover in situations in which patients are taken off the investigational drug in the investigational arm. The RPSFTM was applied to simulated datasets differing in percentage of patients switching, time of switching, underlying acceleration factor, and number of patients, using exponential distributions for the time on investigational and reference treatment. There were multiple scenarios in which two solutions were found: one corresponding to identical counterfactual distributions, and the other to two different crossing counterfactual distributions. The same was found for the hazard ratio (HR). Unique solutions were observed only when switching patients were on investigational treatment for <40% of the time that patients in the investigational arm were on treatment. Distributions other than exponential could have been used for time on treatment. An HR equal to 1 is a necessary but not always sufficient condition to indicate acceleration factors associated with equal counterfactual survival. Further assessment to distinguish crossing counterfactual curves from equal counterfactual curves is especially needed when the time that switchers stay on investigational treatment is relatively long compared to the time direct starters stay on investigational treatment.
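The g-estimation idea behind the RPSFTM can be sketched in a few lines: rescale time spent on the investigational treatment by exp(psi) and search for the psi that makes the counterfactual distributions of the two arms indistinguishable. This is a toy version under stated simplifications (no censoring, fully treated vs. never treated arms, a plain rank-sum balance test), not the authors' implementation, and sign conventions for psi vary between references:

```python
import numpy as np

rng = np.random.default_rng(1)

def counterfactual_times(t_off, t_on, psi):
    """RPSFTM-style counterfactual time: time off the investigational
    treatment counts at rate 1; time on treatment is rescaled by exp(psi)."""
    return t_off + np.exp(psi) * t_on

def rank_sum_z(x, y):
    """Normal-approximation Wilcoxon rank-sum z statistic (ties ignored)."""
    n, m = len(x), len(y)
    ranks = np.concatenate([x, y]).argsort().argsort() + 1
    w = ranks[:n].sum()
    mu = n * (n + m + 1) / 2.0
    sigma = np.sqrt(n * m * (n + m + 1) / 12.0)
    return (w - mu) / sigma

# Toy data: both arms share the same counterfactual (untreated) survival;
# the investigational arm's observed times are stretched by exp(-psi_true).
n, psi_true = 600, -0.4
u_ref = rng.exponential(1.0, n)                      # reference arm
t_inv = rng.exponential(1.0, n) * np.exp(-psi_true)  # fully treated arm

# g-estimation: pick the psi whose counterfactuals balance the two arms.
grid = np.linspace(-1.0, 0.5, 151)
zs = [abs(rank_sum_z(u_ref, counterfactual_times(0.0, t_inv, p))) for p in grid]
psi_hat = grid[int(np.argmin(zs))]
print(round(psi_hat, 2))
```

At the true psi the counterfactual distributions coincide, so the rank statistic is near zero and the grid search should land close to -0.4. The multiple-solution problem the abstract describes corresponds to the test statistic crossing zero at more than one psi.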

6. Robust Coefficients Alpha and Omega and Confidence Intervals with Outlying Observations and Missing Data: Methods and Software

Science.gov (United States)

Zhang, Zhiyong; Yuan, Ke-Hai

2016-01-01

Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation…

8. Meta-analysis to refine map position and reduce confidence intervals for delayed canopy wilting QTLs in soybean

Science.gov (United States)

Slow canopy wilting in soybean has been identified as a potentially beneficial trait for ameliorating drought effects on yield. Previous research identified QTLs for slow wilting from two different bi-parental populations and this information was combined with data from three other populations to id...

9. Noise annoyance from stationary sources: Relationships with exposure metric day-evening-night level (DENL) and their confidence intervals

NARCIS (Netherlands)

Miedema, H.M.E.; Vos, H.

2004-01-01

Relationships between exposure to noise [metric: day-evening-night levels (DENL)] from stationary sources (shunting yards, a seasonal industry, and other industries) and annoyance are presented. Curves are presented for expected annoyance score, the percentage "highly annoyed" (%HA, cutoff at 72 on

10. Derivation of confidence intervals of service measures in a base-stock inventory control system with low-frequent demand

DEFF Research Database (Denmark)

Larsen, Christian

We explore a base-stock system with backlogging where the demand process is a compound renewal process and the compound element is a delayed geometric distribution. For this setting it is proven in [4] that the long-run average service measures order fill rate (OFR) and volume fill rate (VFR) are...

11. Derivation of confidence intervals of service measures in a base-stock inventory control system with low-frequent demand

DEFF Research Database (Denmark)

Larsen, Christian

2011-01-01

We explore a base-stock system with backlogging where the demand process is a compound renewal process and the compound element is a delayed geometric distribution. For this setting it holds that the long-run average service measures order fill rate (OFR) and volume fill rate (VFR) are equal in v...
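The two fill-rate measures can be probed numerically. The sketch below simulates a periodic-review base-stock system with backlogging, Bernoulli customer arrivals, and geometric order sizes, a stand-in for the paper's compound renewal demand with a delayed geometric compound element; the lead time, base-stock level, and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_fill_rates(s=8, lead=3, q=0.5, p=0.4, horizon=100_000):
    """Order fill rate (fraction of customer orders met in full from
    on-hand stock) and volume fill rate (fraction of demanded units met
    from on-hand stock) in a base-stock system with backlogging."""
    sizes = np.where(rng.random(horizon) < q,
                     rng.geometric(p, horizon), 0)     # demand per period
    orders = units = orders_filled = units_filled = 0
    for t in range(lead, horizon):
        on_hand = max(s - sizes[t - lead:t].sum(), 0)  # stock before demand
        d = int(sizes[t])
        if d > 0:
            orders += 1
            units += d
            orders_filled += (d <= on_hand)
            units_filled += min(d, on_hand)
    return orders_filled / orders, units_filled / units

ofr, vfr = simulate_fill_rates()
print(round(ofr, 3), round(vfr, 3))
```

Because geometric order sizes are memoryless, the two measures come out essentially equal in the long run, which gives a numerical way to probe the kind of equality result this record refers to.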

12. The theory of confidence-building measures

International Nuclear Information System (INIS)

Darilek, R.E.

1992-01-01

This paper discusses the theory of Confidence-Building Measures (CBMs) in two ways. First, it employs a top-down, deductively oriented approach to explain CBM theory in terms of the arms control goals and objectives to be achieved, the types of measures to be employed, and the problems or limitations likely to be encountered when applying CBMs to conventional or nuclear forces. The chapter as a whole asks how various types of CBMs might function during a political-military escalation from peacetime to a crisis and beyond (i.e. including conflict), as well as how they might operate in a de-escalatory environment. In pursuit of these overarching issues, the second section of the chapter raises a fundamental but complicating question: how might the next all-out war actually come about, by unpremeditated escalation resulting from misunderstanding or miscalculation, or by premeditation resulting in a surprise attack? The second section of the paper addresses this question, explores its various implications for CBMs, and suggests the potential contribution of different types of CBMs toward successful resolution of the issues involved.

13. Trust versus confidence: Microprocessors and personnel monitoring

International Nuclear Information System (INIS)

Chiaro, P.J. Jr.

1993-01-01

Due to recent technological advances, substantial improvements have been made in personnel contamination monitoring. In all likelihood, these advances will close out the days of manually frisking personnel for radioactive contamination. Unfortunately, as microprocessor-based monitors become more widely used, not only at commercial power reactors but also at government facilities, questions concerning their trustworthiness arise. Algorithms make decisions that were previously made by technicians. Trust is placed not in technicians but in machines. In doing this it is assumed that the machine never misses. Inevitably, this trust drops, due largely to "false alarms". This is especially true when monitoring for alpha contamination. What is a "false alarm"? Do these machines and their algorithms that we put our trust in make mistakes? An analysis was performed on half-body and hand-and-foot monitors at Oak Ridge National Laboratory (ORNL) in order to justify the suggested confidence level used for alarm point determination. Sources used in this analysis had activities approximating ORNL's contamination limits.
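Alarm-point determination of the kind described can be sketched from Poisson counting statistics: choose the smallest count threshold whose background-only exceedance probability stays below the chosen false-alarm level. The function and numbers below are illustrative assumptions, not ORNL's actual procedure:

```python
import math

def alarm_setpoint(mean_background, false_alarm_prob=0.05):
    """Smallest count k such that a background-only measurement
    (Poisson with the given mean) exceeds k with probability at most
    false_alarm_prob; the monitor alarms on counts above k."""
    k, term = 0, math.exp(-mean_background)
    cdf = term
    while 1.0 - cdf > false_alarm_prob:
        k += 1
        term *= mean_background / k  # next Poisson pmf term
        cdf += term
    return k

# Example: 10 expected background counts and a 5% false-alarm budget.
print(alarm_setpoint(10, 0.05))  # 15: alarm on counts of 16 or more
```

Raising the confidence level (shrinking the false-alarm probability) raises the setpoint and trades sensitivity to real contamination against nuisance alarms, which is exactly the trust trade-off the record discusses.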

15. Trust versus confidence: Microprocessors and personnel monitoring

International Nuclear Information System (INIS)

Chiaro, P.J. Jr.

1994-01-01

Due to recent technological advances, substantial improvements have been made in personnel contamination monitoring. In all likelihood, these advances will close out the days of manually frisking personnel for radioactive contamination. Unfortunately, as microprocessor-based monitors become more widely used, not only at commercial power reactors but also at government facilities, questions concerning their trustworthiness arise. Algorithms make decisions that were previously made by technicians. Trust is placed not in technicians but in machines. In doing this it is assumed that the machine never misses. Inevitably, this trust drops, due largely to ''false alarms''. This is especially true when monitoring for alpha contamination. What is a ''false alarm''? Do these machines and the algorithms that we put our trust in make mistakes? An analysis was performed on half-body and hand-and-foot monitors at Oak Ridge National Laboratory (ORNL) in order to justify the suggested confidence level used for alarm point determination. Sources used in this analysis had activities approximating ORNL's contamination limits.

16. Dynamic Properties of QT Intervals

Czech Academy of Sciences Publication Activity Database

Halámek, Josef; Jurák, Pavel; Vondra, Vlastimil; Lipoldová, J.; Leinveber, Pavel; Plachý, M.; Fráňa, P.; Kára, T.

2009-01-01

Roč. 36, - (2009), s. 517-520 ISSN 0276-6574 R&D Projects: GA ČR GA102/08/1129; GA MŠk ME09050 Institutional research plan: CEZ:AV0Z20650511 Keywords : QT Intervals * arrhythmia diagnosis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering http://cinc.mit.edu/archives/2009/pdf/0517.pdf

17. Haemostatic reference intervals in pregnancy

DEFF Research Database (Denmark)

Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna

2010-01-01

Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Eight hundred one women with expected normal pregnancies were included in the study. Of these women, 391 had no complications during pregnancy, vaginal delivery, or postpartum period. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period using only the uncomplicated pregnancies were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S...
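Reference intervals of this kind are commonly taken as the central 95% of a healthy reference sample, i.e. its 2.5th to 97.5th percentiles. A minimal nonparametric sketch; the simulated values are purely illustrative and are not data from this study:

```python
import numpy as np

def reference_interval(values, coverage=0.95):
    """Nonparametric reference interval: the central `coverage` fraction
    of the reference sample (2.5th-97.5th percentiles for 95%)."""
    values = np.asarray(values, dtype=float)
    tail = (1.0 - coverage) / 2.0
    return tuple(np.quantile(values, [tail, 1.0 - tail]))

# Illustrative only: simulated fibrinogen-like values (g/L) for one
# hypothetical gestational window, sized like the study's 391 women.
rng = np.random.default_rng(4)
sample = rng.normal(4.0, 0.6, 391)
lo, hi = reference_interval(sample)
print(round(lo, 2), round(hi, 2))
```

Computing the interval separately per gestational window, as the study does, is just a matter of applying the same function to each window's subsample.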

18. Interval matrices: Regularity generates singularity

Czech Academy of Sciences Publication Activity Database

Rohn, Jiří; Shary, S.P.

2018-01-01

Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords: interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016

19. Chaotic dynamics from interspike intervals

DEFF Research Database (Denmark)

Pavlov, A N; Sosnovtseva, Olga; Mosekilde, Erik

2001-01-01

Considering two different mathematical models describing chaotic spiking phenomena, namely, an integrate-and-fire and a threshold-crossing model, we discuss the problem of extracting dynamics from interspike intervals (ISIs) and show that the possibilities of computing the largest Lyapunov expone...

20. Promoting a Culture of Tailoring for Systems Engineering Policy Expectations

Science.gov (United States)

Blankenship, Van A.

2016-01-01

NASA's Marshall Space Flight Center (MSFC) has developed an integrated systems engineering approach to promote a culture of tailoring for program and project policy requirements. MSFC's culture encourages and supports tailoring, with an emphasis on risk-based decision making, for enhanced affordability and efficiency. MSFC's policy structure integrates the various Agency requirements into a single, streamlined implementation approach which serves as a "one-stop-shop" for our programs and projects to follow. The engineers gain an enhanced understanding of policy and technical expectations, as well as lessons learned from MSFC's history of spaceflight and science missions, to enable them to make appropriate, risk-based tailoring recommendations. The tailoring approach utilizes a standard methodology to classify projects into predefined levels using selected mission and programmatic scaling factors related to risk tolerance. Policy requirements are then selectively applied and tailored, with appropriate rationale, and approved by the governing authorities, to support risk-informed decisions to achieve the desired cost and schedule efficiencies. The policy is further augmented by implementation tools and lifecycle planning aids which help promote and support the cultural shift toward more tailoring. The MSFC Customization Tool is an integrated spreadsheet that ties together everything that projects need to understand, navigate, and tailor the policy. It helps them classify their project, understand the intent of the requirements, determine their tailoring approach, and document the necessary governance approvals. It also helps them plan for and conduct technical reviews throughout the lifecycle. Policy tailoring is thus established as a normal part of project execution, with the tools provided to facilitate and enable the tailoring process. MSFC's approach to changing the culture emphasizes risk-based tailoring of policy to achieve increased flexibility, efficiency

1. Method for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations

CSIR Research Space (South Africa)

Kirton, A

2010-08-01

Full Text Available A method is presented for calculating the variance and prediction intervals for biomass estimates obtained from allometric equations (A. Kirton, B. Scholes, S. Archibald, CSIR Ecosystem Processes and Dynamics, Natural Resources and the Environment, P.O. Box 395, Pretoria, 0001, South...). It shows how prediction intervals (confidence intervals for predicted values) for allometric estimates can be obtained, using an example of estimating tree biomass from stem diameter. It explains how to deal with relationships which are in the power function form - a common form...
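As an illustration of the kind of calculation involved, here is a hedged sketch of a prediction interval for a power-function allometric model B = a*D^b, fitted on the log scale. The data, coefficients, and the use of a normal critical value are illustrative assumptions, not the paper's dataset or exact method.

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.uniform(5, 50, 40)                          # stem diameters (cm), synthetic
B = 0.1 * D**2.4 * np.exp(rng.normal(0, 0.2, 40))   # biomass with lognormal noise

# Power function B = a*D^b becomes linear on the log scale:
# ln(B) = ln(a) + b*ln(D)
x, y = np.log(D), np.log(B)
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # OLS fit

resid = y - X @ beta
n, p = X.shape
s2 = resid @ resid / (n - p)                        # residual variance

# Prediction interval at a new diameter D0 (z = 1.96 for simplicity;
# a t-quantile is more exact for small samples).
D0 = 25.0
x0 = np.array([1.0, np.log(D0)])
var_pred = s2 * (1 + x0 @ np.linalg.inv(X.T @ X) @ x0)
yhat = x0 @ beta
lo = yhat - 1.96 * np.sqrt(var_pred)
hi = yhat + 1.96 * np.sqrt(var_pred)

# Back-transform the endpoints to the biomass scale.
print(f"predicted biomass at D={D0}: {np.exp(yhat):.1f}, "
      f"95% PI ({np.exp(lo):.1f}, {np.exp(hi):.1f})")
```

Because the model is fitted on logs, exponentiating the interval endpoints gives an asymmetric interval on the original biomass scale, which is typical for power-function allometry.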

2. Examining Belief and Confidence in Schizophrenia

Science.gov (United States)

Joyce, Dan W.; Averbeck, Bruno B.; Frith, Chris D.; Shergill, Sukhwinder S.

2018-01-01

Background: People with psychoses often report fixed, delusional beliefs that are sustained even in the presence of unequivocal contrary evidence. Such delusional beliefs are the result of integrating new and old evidence inappropriately in forming a cognitive model. We propose and test a cognitive model of belief formation using experimental data from an interactive “Rock Paper Scissors” game. Methods: Participants (33 controls and 27 people with schizophrenia) played a competitive, time-pressured interactive two-player game (Rock, Paper, Scissors). Participants’ behavior was modeled by a generative computational model using leaky-integrator and temporal difference methods. This model describes how new and old evidence is integrated to form both a playing strategy to beat the opponent and a mechanism for reporting confidence in one’s playing strategy to win against the opponent. Results: People with schizophrenia fail to appropriately model their opponent’s play despite consistent (rather than random) patterns that can be exploited in the simulated opponent’s play. This is manifest as a failure to weigh existing evidence appropriately against new evidence. Further, participants with schizophrenia show a ‘jumping to conclusions’ bias, reporting successful discovery of a winning strategy with insufficient evidence. Conclusions: The model presented suggests two tentative mechanisms in delusional belief formation: i) one for modeling patterns in others’ behavior, where people with schizophrenia fail to use old evidence appropriately, and ii) a meta-cognitive mechanism for ‘confidence’ in such beliefs, where people with schizophrenia overweight recent reward history in deciding on the value of beliefs about the opponent. PMID:23521846

3. Human Motion Capture Data Tailored Transform Coding.

Science.gov (United States)

Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

2015-07-01

Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as the discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to the frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
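As a rough illustration of the data-dependent orthogonal basis idea, the sketch below derives a basis from a synthetic low-rank "clip" via SVD, keeps only the strongest basis vectors, and quantizes the coefficients. The SVD choice, the synthetic data, and the crude quantizer are assumptions for illustration; the paper's exact basis construction and entropy coder are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
frames, channels = 120, 30
t = np.linspace(0, 1, frames)

# Synthetic low-rank "mocap clip": a few shared motion signals
# mixed across 30 joint-coordinate channels (frames x channels).
signals = np.stack([np.sin(2 * np.pi * t), np.cos(4 * np.pi * t), t**2], axis=1)
clip = signals @ rng.standard_normal((3, channels))
clip += 0.001 * rng.standard_normal((frames, channels))

# Data-dependent orthogonal basis computed from the clip itself (via SVD here).
U, s, Vt = np.linalg.svd(clip, full_matrices=False)
k = 4                                   # keep the k strongest basis vectors
coeffs = U[:, :k].T @ clip              # (k x channels) transform coefficients
q = np.round(coeffs, 2)                 # crude uniform quantization
recon = U[:, :k] @ q                    # decode: basis times quantized coefficients

err = np.linalg.norm(clip - recon) / np.linalg.norm(clip)
print(f"relative error keeping {k} of {min(frames, channels)} coefficient rows: {err:.4f}")
```

Because the basis is computed from the data itself, a handful of coefficients capture nearly all of the clip's energy, which is the property that makes the subsequent quantization and entropy coding effective.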

4. Tailorable software architectures in the accelerator control system environment

International Nuclear Information System (INIS)

Mejuev, Igor; Kumagai, Akira; Kadokura, Eiichi

2001-01-01

Tailoring is the further evolution of an application after deployment in order to adapt it to requirements that were not accounted for in the original design. End-user tailorability has been extensively researched in applied computer science from HCI and software engineering perspectives. Tailorability allows coping with flexibility requirements, decreasing maintenance and development costs of software products. In general, dynamic or diverse software requirements constitute the need for implementing end-user tailorability in computer systems. In accelerator physics research the factor of dynamic requirements is especially important, due to frequent software and hardware modifications resulting in correspondingly high upgrade and maintenance costs. In this work we introduce the results of a feasibility study on implementing end-user tailorability in the software for an accelerator control system, considering the design and implementation of a distributed monitoring application for the 12 GeV KEK Proton Synchrotron as an example. The software prototypes used in this work are based on a generic tailoring platform (VEDICI), which allows decoupling of tailoring interfaces and runtime components. While representing a reusable application-independent framework, VEDICI can be potentially applied for tailoring of arbitrary compositional Web-based applications

5. Deep drawing simulations of tailored blanks and experimental verification

NARCIS (Netherlands)

Meinders, Vincent T.; van den Berg, Albert; Huetink, Han

2000-01-01

Tailored Blanks are increasingly used in the automotive industry. A combination of different materials, thickness, and coatings can be welded together to form a blank for stamping car body panels. The main advantage of using Tailored Blanks is to have specific characteristics at particular parts of

6. Tailoring self-assembled monolayers at the electrochemical interface

Indian Academy of Sciences (India)

(SAMs) for functionalisation with different receptors, catalytic materials, biomolecules, enzymes, antigen-antibody complexes, etc. for various applications. ... and tailoring of SAMs by incorporation of suitable recognition elements. ... compatible with most organic functional groups and ... the interfacial architecture can be tailored using.

7. Tailored cognitive-behavioural therapy and exercise training improves the physical fitness of patients with fibromyalgia.

Science.gov (United States)

van Koulil, S; van Lankveld, W; Kraaimaat, F W; van Helmond, T; Vedder, A; van Hoorn, H; Donders, A R T; Wirken, L; Cats, H; van Riel, P L C M; Evers, A W M

2011-12-01

Patients with fibromyalgia have diminished levels of physical fitness, which may lead to functional disability and exacerbation of complaints. Multidisciplinary treatment comprising cognitive-behavioural therapy (CBT) and exercise training has been shown to be effective in improving physical fitness. However, due to the high drop-out rates and large variability in patients' functioning, it was proposed that a tailored treatment approach might yield more promising treatment outcomes. High-risk fibromyalgia patients were randomly assigned to a waiting list control group (WLC) or a treatment condition (TC), with the treatment consisting of 16 twice-weekly sessions of CBT and exercise training tailored to the patient's cognitive-behavioural pattern. Physical fitness was assessed with two physical tests before and 3 months after treatment and at corresponding intervals in the WLC. Treatment effects were evaluated using linear mixed models. The level of physical fitness had improved significantly in the TC compared with the WLC. Attrition rates were low, effect sizes large and reliable change indices indicated a clinically relevant improvement among the TC. A tailored multidisciplinary treatment approach for fibromyalgia consisting of CBT and exercise training is well tolerated, yields clinically relevant changes, and appears a promising approach to improve patients' physical fitness. ClinicalTrials.gov ID NCT00268606.

8. Does tailoring really make a difference? : the development and evaluation of tailored interventions aimed at benzodiazepine cessation

NARCIS (Netherlands)

Wolde, Geeske Brecht ten

2008-01-01

Because of the problems associated with chronic benzodiazepine use, there is impetus to prevent and reduce chronic benzodiazepine use. The overall aim was to develop a 'tailor-made' intervention in order to reduce chronic use. Before developing tailored patient education, it is first of all

9. Sources of sport confidence, imagery type and performance among competitive athletes: the mediating role of sports confidence.

Science.gov (United States)

Levy, A R; Perry, J; Nicholls, A R; Larkin, D; Davies, J

2015-01-01

This study explored the mediating role of sport confidence upon (1) the sources of sport confidence-performance relationship and (2) the imagery-performance relationship. Participants were 157 competitive athletes who completed state measures of confidence level/sources, imagery type and performance within one hour after competition. Among the current sample, confirmatory factor analysis revealed appropriate support for the nine-factor SSCQ and the five-factor SIQ. Mediational analysis revealed that sport confidence had a mediating influence upon the achievement source of confidence-performance relationship. In addition, both cognitive and motivational imagery types were found to be important sources of confidence, as sport confidence mediated the imagery type-performance relationship. Findings indicated that athletes who construe confidence from their own achievements and report multiple images on a more frequent basis are likely to benefit from enhanced levels of state sport confidence and subsequent performance.

10. Ceramic laminates with tailored residual stresses

Directory of Open Access Journals (Sweden)

Baudín, C.

2009-12-01

Full Text Available Severe environments imposed by new technologies demand new materials with better properties and ensured reliability. The intrinsic brittleness of ceramics has forced scientists to look for new materials and processing routes to improve the mechanical behaviour of ceramics in order to allow their use under severe thermomechanical conditions. The laminate approach has allowed the fabrication of a new family of composite materials with strength and reliability superior to those of monolithic ceramics with microstructures similar to those of the constituent layers. The different ceramic laminates developed since the middle 1970´s can be divided in two large groups depending on whether the development of residual stresses between layers is the main design tool. This paper reviews the developments in the control and tailoring of residual stresses in ceramic laminates. The tailoring of the thickness and location of layers in compression can lead to extremely performing structures in terms of strength values and reliability. External layers in compression lead to the strengthening of the structure. When relatively thin and highly compressed layers are located inside the material, threshold strength, crack bifurcation and crack arrest during fracture occur.

The severe working conditions imposed by new technological applications demand the use of materials with better properties and high reliability. The potential use of brittle materials, such as ceramics, in these applications requires the development of new materials and processing methods that improve their mechanical behaviour. The laminated-material concept has allowed the fabrication of a new family of materials with fracture strengths and reliability superior to those of monolithic materials with microstructures similar to those of the layers that make up the laminate. The various laminated materials developed since the mid-1970s can be

11. Thin tailored composite wing for civil tiltrotor

Science.gov (United States)

Rais-Rohani, Masoud

1994-01-01

The tiltrotor aircraft is a flight vehicle which combines the efficient low speed (i.e., take-off, landing, and hover) characteristics of a helicopter with the efficient cruise speed of a turboprop airplane. A well-known example of such a vehicle is the Bell-Boeing V-22 Osprey. The high cruise speed and range constraints placed on the civil tiltrotor require a relatively thin wing to increase the drag-divergence Mach number, which translates into lower compressibility drag. It is required to reduce the wing maximum thickness-to-chord ratio t/c from 23% (i.e., V-22 wing) to 18%. While a reduction in wing thickness results in improved aerodynamic efficiency, it has an adverse effect on the wing structure and it tends to reduce structural stiffness. If ignored, the reduction in wing stiffness leads to susceptibility to aeroelastic and dynamic instabilities which may consequently cause a catastrophic failure. By taking advantage of the directional stiffness characteristics of composite materials the wing structure may be tailored to have the necessary stiffness, at a lower thickness, while keeping the weight low. The goal of this study is to design a wing structure for minimum weight subject to structural, dynamic and aeroelastic constraints. The structural constraints are in terms of strength and buckling allowables. The dynamic constraints are in terms of wing natural frequencies in vertical and horizontal bending and torsion. The aeroelastic constraints are in terms of frequency placement of the wing structure relative to those of the rotor system. The wing-rotor-pylon aeroelastic and dynamic interactions are limited in this design study by holding the cruise speed, rotor-pylon system, and wing geometric attributes fixed. To assure that the wing-rotor stability margins are maintained, a more rigorous analysis based on a detailed model of the rotor system will need to ensue following the design study. The skin-stringer-rib type architecture is used for the wing

12. Vaccination Confidence and Parental Refusal/Delay of Early Childhood Vaccines.

Directory of Open Access Journals (Sweden)

Melissa B Gilkey

Full Text Available To support efforts to address parental hesitancy towards early childhood vaccination, we sought to validate the Vaccination Confidence Scale using data from a large, population-based sample of U.S. parents. We used weighted data from 9,354 parents who completed the 2011 National Immunization Survey. Parents reported on the immunization history of a 19- to 35-month-old child in their households. Healthcare providers then verified children's vaccination status for vaccines including measles, mumps, and rubella (MMR), varicella, and seasonal flu. We used separate multivariable logistic regression models to assess associations between parents' mean scores on the 8-item Vaccination Confidence Scale and vaccine refusal, vaccine delay, and vaccination status. A substantial minority of parents reported a history of vaccine refusal (15%) or delay (27%). Vaccination confidence was negatively associated with refusal of any vaccine (odds ratio [OR] = 0.58; 95% confidence interval [CI], 0.54-0.63) as well as refusal of MMR, varicella, and flu vaccines specifically. Negative associations between vaccination confidence and measures of vaccine delay were more moderate, including delay of any vaccine (OR = 0.81; 95% CI, 0.76-0.86). Vaccination confidence was positively associated with having received vaccines, including MMR (OR = 1.53; 95% CI, 1.40-1.68), varicella (OR = 1.54; 95% CI, 1.42-1.66), and flu vaccines (OR = 1.32; 95% CI, 1.23-1.42). Vaccination confidence was consistently associated with early childhood vaccination behavior across multiple vaccine types. Our findings support expanding the application of the Vaccination Confidence Scale to measure vaccination beliefs among parents of young children.
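Odds ratios like these come from exponentiating a logistic-regression coefficient, with the 95% CI formed on the log-odds scale as beta ± 1.96·SE and then exponentiated. A minimal sketch, where the coefficient and standard error are hypothetical values chosen to reproduce the any-vaccine refusal estimate reported above, not the study's actual fitted model:

```python
import math

# Hypothetical log-odds coefficient and standard error for the
# confidence-scale predictor (illustrative, back-derived from the
# reported OR of 0.58 with 95% CI 0.54-0.63).
beta = -0.545
se = 0.040

z = 1.96                         # critical value for a 95% confidence level
or_point = math.exp(beta)        # odds ratio is exp(coefficient)
ci_low = math.exp(beta - z * se)
ci_high = math.exp(beta + z * se)

print(f"OR = {or_point:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
# -> OR = 0.58, 95% CI (0.54, 0.63)
```

Note that the interval is symmetric on the log-odds scale but asymmetric around the OR itself, which is why published CIs for odds ratios are not centred on the point estimate.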

13. Tailoring magnetism by light-ion irradiation

International Nuclear Information System (INIS)

Fassbender, J; Ravelosona, D; Samson, Y

2004-01-01

Owing to their reduced dimensions, the magnetic properties of ultrathin magnetic films and multilayers, e.g. magnetic anisotropies and exchange coupling, often depend strongly on the surface and interface structure. In addition, chemical composition, crystallinity, grain sizes and their distribution govern the magnetic behaviour. All these structural properties can be modified by light-ion irradiation in an energy range of 5-150 keV due to the energy loss of the ions in the solid along their trajectory. Consequently the magnetic properties can be tailored by ion irradiation. Similar effects can also be observed using Ga+ ion irradiation, which is the common ion source in focused ion beam lithography. Examples of ion-induced modifications of magnetic anisotropies and exchange coupling are presented. This review is limited to radiation-induced structural changes giving rise to a modification of magnetic parameters. Ion implantation is discussed only in special cases. Due to the local nature of the interaction, magnetic patterning without affecting the surface topography becomes feasible, which may be of interest in applications. The main patterning technique is homogeneous ion irradiation through masks. Focused ion beam and ion projection lithography are usually only relevant for larger ion masses. The creation of magnetic feature sizes below 50 nm is shown. In contrast to topographic nanostructures the surrounding area of these nanostructures can be left ferromagnetic, leading to new phenomena at their mutual interface. Most of the material systems discussed here are important for technological applications. The main areas are magnetic data storage applications, such as hard magnetic media with a large perpendicular magnetic anisotropy or patterned media with an improved signal to noise ratio and magnetic sensor elements. It will be shown that light-ion irradiation has many advantages in the design of new material properties and in the fabrication technology of

14. Alternative confidence measure for local matching stereo algorithms

CSIR Research Space (South Africa)

Ndhlovu, T

2009-11-01

Full Text Available The authors present a confidence measure applied to individual disparity estimates in local matching stereo correspondence algorithms. It aims at identifying textureless areas, where most local matching algorithms fail. The confidence measure works...

15. Simultaneous confidence bands for the integrated hazard function

OpenAIRE

Dudek, Anna; Gocwin, Maciej; Leskow, Jacek

2006-01-01

The construction of simultaneous confidence bands for the integrated hazard function is considered. The Nelson-Aalen estimator is used. Simultaneous confidence bands based on bootstrap methods are presented. Two methods of construction of such confidence bands are proposed. The weird bootstrap method is used for resampling. Simulations are made to compare the actual coverage probability of the bootstrap and the asymptotic simultaneous confidence bands. It is shown that the equal-tai...
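The Nelson-Aalen estimator at the core of these bands is simple to compute: at each event time it adds the number of events divided by the number of subjects still at risk, H(t) = sum over t_i <= t of d_i/n_i. A minimal sketch on made-up data (the bootstrap band construction itself is not shown):

```python
import numpy as np

# Synthetic right-censored survival data (illustrative, not from the paper).
times = np.array([2.0, 3.0, 3.0, 5.0, 7.0, 8.0])   # event or censoring times
event = np.array([1,   1,   0,   1,   1,   0])     # 1 = event, 0 = censored

order = np.argsort(times)
times, event = times[order], event[order]

H = 0.0
cum_hazard = []                      # (time, H(t)) pairs
n_at_risk = len(times)
for t in np.unique(times):
    at_t = times == t
    d = int(event[at_t].sum())       # events observed at this time
    if d > 0:
        H += d / n_at_risk           # Nelson-Aalen increment d_i / n_i
    cum_hazard.append((t, H))
    n_at_risk -= int(at_t.sum())     # events and censorings both leave the risk set

for t, h in cum_hazard:
    print(f"t = {t:.0f}: H(t) = {h:.4f}")
```

Censored observations contribute to the risk set denominators but never to the event counts, which is why the estimator is well defined under right censoring.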

16. Automatically producing tailored web materials for public administration

Science.gov (United States)

Colineau, Nathalie; Paris, Cécile; Vander Linden, Keith

2013-06-01

Public administration organizations commonly produce citizen-focused, informational materials describing public programs and the conditions under which citizens or citizen groups are eligible for these programs. The organizations write these materials for generic audiences because of the excessive human resource costs that would be required to produce personalized materials for everyone. Unfortunately, generic materials tend to be longer and harder to understand than materials tailored for particular citizens. Our work explores the feasibility and effectiveness of automatically producing tailored materials. We have developed an adaptive hypermedia application system that automatically produces tailored informational materials and have evaluated it in a series of studies. The studies demonstrate that: (1) subjects prefer tailored materials over generic materials, even if the tailoring requires answering a set of demographic questions first; (2) tailored materials are more effective at supporting subjects in their task of learning about public programs; and (3) the time required to specify the demographic information on which the tailoring is based does not significantly slow down the subjects in their information seeking task.

17. 49 CFR 1103.23 - Confidences of a client.

Science.gov (United States)

2010-10-01

Title 49 (Transportation), Responsibilities Toward a Client, § 1103.23 Confidences of a client: (a) The practitioner's duty to preserve his client's confidence outlasts the practitioner's employment by the client, and this duty extends to the...

18. Contrasting Academic Behavioural Confidence in Mexican and European Psychology Students

Science.gov (United States)

Ochoa, Alma Rosa Aguila; Sander, Paul

2012-01-01

Introduction: Research with the Academic Behavioural Confidence scale using European students has shown that students have high levels of confidence in their academic abilities. It is generally accepted that people in more collectivist cultures have more realistic confidence levels in contrast to the overconfidence seen in individualistic European…

19. Dijets at large rapidity intervals

CERN Document Server

Pope, B G

2001-01-01

Inclusive dijet production at large pseudorapidity intervals (Δη) between the two jets has been suggested as a regime for observing BFKL dynamics. We have measured the dijet cross section for large Δη in p̄p collisions at √s = 1800 and 630 GeV using the DØ detector. The partonic cross section increases strongly with the size of Δη. The observed growth is even stronger than expected on the basis of BFKL resummation in the leading logarithmic approximation. The growth of the partonic cross section can be accommodated with an effective BFKL intercept of α_BFKL(20 GeV) = 1.65 ± 0.07.

20. Variational collocation on finite intervals

International Nuclear Information System (INIS)

Amore, Paolo; Cervantes, Mayra; Fernandez, Francisco M

2007-01-01

In this paper, we study a set of functions, defined on an interval of finite width, which are orthogonal and which reduce to the sinc functions when the appropriate limit is taken. We show that these functions can be used within a variational approach to obtain accurate results for a variety of problems. We have applied them to the interpolation of functions on finite domains and to the solution of the Schroedinger equation, and we have compared the performance of the present approach with others
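The sinc functions these basis functions reduce to form the classical cardinal interpolation basis on a uniform grid: f(x) ≈ Σ f(x_k)·sinc((x - x_k)/h). A minimal sketch of plain sinc interpolation, assuming a test function that decays near the interval ends (where the plain scheme works well); the paper's variational refinement is not shown:

```python
import numpy as np

def sinc_interp(xk, fk, x):
    """Whittaker cardinal (sinc) interpolation from uniform nodes xk."""
    h = xk[1] - xk[0]                    # uniform grid spacing
    # np.sinc(t) = sin(pi*t)/(pi*t), so sinc((x - x_k)/h) is the cardinal basis.
    basis = np.sinc((x[:, None] - xk[None, :]) / h)
    return basis @ fk

xk = np.linspace(-5.0, 5.0, 41)          # collocation grid on a finite interval
f = lambda x: np.exp(-x**2)              # decays to ~0 at the interval ends
x = np.linspace(-4.0, 4.0, 201)          # evaluation points

approx = sinc_interp(xk, f(xk), x)
max_err = np.max(np.abs(approx - f(x)))
print(f"max interpolation error: {max_err:.2e}")
```

For functions that do not decay at the endpoints, plain sinc interpolation degrades near the boundaries, which is precisely the situation the paper's modified basis functions are designed to handle on a finite interval.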

1. Semiconductor quantum optics with tailored photonic nanostructures

International Nuclear Information System (INIS)

Laucht, Arne

2011-01-01

the understanding of coupling phenomena between excitons in self-assembled quantum dots and optical modes of tailored photonic nanostructures realized on the basis of two-dimensional photonic crystals. While we highlight the potential for advanced applications in the direction of quantum optics and quantum computation, we also identify some of the challenges which will need to be overcome on the way. (orig.)

2. [Tailored cranioplasty using CAD-CAM technology].

Science.gov (United States)

Vitanovics, Dusán; Major, Ottó; Lovas, László; Banczerowski, Péter

2014-11-30

The majority of cranial defects are the result of surgical intervention. The defect must be covered within a reasonable period of time, usually within 4-6 weeks, given the fact that the replacement of bone improves brain circulation. A number of surgical techniques and materials are available to perform cranioplasty. Due to its favorable properties we chose ultra-high-molecular-weight polyethylene as the material. In this paper the authors present a procedure which allows tailored artificial bone replacement using state-of-the-art medical and engineering techniques. Between 2004 and 2012, 19 patients were operated on for cranial bone defects and a total of 22 3D custom-designed implants were implanted. The average age of the patients was 35.4 years. In 12 patients we performed primary cranioplasty, while seven patients had undergone replacement at least once before; in those cases the implants later had to be removed due to infection or other causes (bone necrosis, fracture). All patients had native and bone-windowed 1 mm resolution CT. The 3D design was made using the original CT images and a design program. A computer-controlled lathe was used to prepare a precise-fitting model. During surgery, the defect was exposed and the implant was fixed to normal bone using mini titanium plates and screws. All of our patients had control CT at 3, 6 and 12 months after surgery, together with neurological examination. Twenty-one polyethylene and one titanium implants were inserted. The average follow-up of the patients was 21.5 months, ranging from two to 96 months. We followed 12 patients (63.15%) for more than one year. No intraoperative implant modifications had to be made. Each of the 22 implants exactly matched the bone defect, as proved by CT scans. None of our patients reported aesthetic problems and we did not notice any kind of aesthetic complication. We had short-term complications due to cranioplasty in three cases: subdural haemorrhage, epidural haemorrhage and skin defect. Polyethylene is in all respects suitable for primary and secondary

3. Semiconductor quantum optics with tailored photonic nanostructures

Energy Technology Data Exchange (ETDEWEB)

Laucht, Arne

2011-06-15

single photon emission into the waveguide. The results obtained during the course of this thesis contribute significantly to the understanding of coupling phenomena between excitons in self-assembled quantum dots and optical modes of tailored photonic nanostructures realized on the basis of two-dimensional photonic crystals. While we highlight the potential for advanced applications in the direction of quantum optics and quantum computation, we also identify some of the challenges which will need to be overcome on the way. (orig.)

4. Tailored Materials for High Efficiency CIDI Engines

Energy Technology Data Exchange (ETDEWEB)

Grant, G.J.; Jana, S.

2012-03-30

The overall goal of the project, Tailored Materials for High Efficiency Compression Ignition Direct Injection (CIDI) Engines, is to enable the implementation of new combustion strategies, such as homogeneous charge compression ignition (HCCI), that have the potential to significantly increase the energy efficiency of current diesel engines and decrease fuel consumption and environmental emissions. These strategies, however, are increasing the demands on conventional engine materials, either from increases in peak cylinder pressure (PCP) or from increases in the temperature of operation. The specific objective of this project is to investigate the application of a new material processing technology, friction stir processing (FSP), to improve the thermal and mechanical properties of engine components. The concept is to modify the surfaces of conventional, low-cost engine materials. The project focused primarily on FSP in aluminum materials that are compositional analogs to the typical piston and head alloys seen in small- to mid-sized CIDI engines. Investigations have been primarily of two types over the duration of this project: (1) FSP of a cast hypoeutectic Al-Si-Mg (A356/357) alloy with no introduction of any new components, and (2) FSP of Al-Cu-Ni alloys (Alloy 339) by physically stirring-in various quantities of carbon nanotubes/nanofibers or carbon fibers. Experimental work to date on aluminum systems has shown significant increases in fatigue lifetime and stress-level performance in aluminum-silicon alloys using friction processing alone, but work to demonstrate the addition of carbon nanotubes and fibers into aluminum substrates has shown mixed results due primarily to the difficulty in achieving porosity-free, homogeneous distributions of the particulate. A limited effort to understand the effects of FSP on steel materials was also undertaken during the course of this project. Processed regions were created in high-strength, low-alloyed steels up to 0.5 in

5. Comprehensive Plan for Public Confidence in Nuclear Regulator

International Nuclear Information System (INIS)

Choi, Kwang Sik; Choi, Young Sung; Kim, Ho ki

2008-01-01

Public confidence in nuclear regulators has been discussed internationally. Public trust or confidence is needed to achieve the regulatory goal of assuring nuclear safety to a level that is acceptable to the public, or of providing public ease about nuclear safety. In Korea, public ease or public confidence has been suggested as a major policy goal in the annually announced 'Nuclear regulatory policy direction'. This paper reviews the theory of trust and its definitions, and defines the elements of public trust or confidence in nuclear safety regulation, developed on the basis of the studies conducted so far. The public ease model developed and 10 measures for ensuring public confidence are also presented, and future study directions are suggested

6. Parents' obesity-related behavior and confidence to support behavioral change in their obese child: data from the STAR study.

Science.gov (United States)

Arsenault, Lisa N; Xu, Kathleen; Taveras, Elsie M; Hacker, Karen A

2014-01-01

Successful childhood obesity interventions frequently focus on behavioral modification and involve parents or family members. Parental confidence in supporting behavior change may be an element of successful family-based prevention efforts. We aimed to determine whether parents' own obesity-related behaviors were related to their confidence in supporting their child's achievement of obesity-related behavioral goals. Cross-sectional analyses of data collected at baseline of a randomized control trial testing a treatment intervention for obese children (n = 787) in primary care settings (n = 14). Five obesity-related behaviors (physical activity, screen time, sugar-sweetened beverage, sleep duration, fast food) were self-reported by parents for themselves and their child. Behaviors were dichotomized on the basis of achievement of behavioral goals. Five confidence questions asked how confident the parent was in helping their child achieve each goal. Logistic regression modeling high confidence was conducted with goal achievement and demographics as independent variables. Parents achieving physical activity or sleep duration goals were significantly more likely to be highly confident in supporting their child's achievement of those goals (physical activity, odds ratio 1.76; 95% confidence interval 1.19-2.60; sleep, odds ratio 1.74; 95% confidence interval 1.09-2.79) independent of sociodemographic variables and child's current behavior. Parental achievements of TV watching and fast food goals were also associated with confidence, but significance was attenuated after child's behavior was included in models. Parents' own obesity-related behaviors are factors that may affect their confidence to support their child's behavior change. Providers seeking to prevent childhood obesity should address parent/family behaviors as part of their obesity prevention strategies. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

7. Effects of postidentification feedback on eyewitness identification and nonidentification confidence.

Science.gov (United States)

Semmler, Carolyn; Brewer, Neil; Wells, Gary L

2004-04-01

Two experiments investigated new dimensions of the effect of confirming feedback on eyewitness identification confidence using target-absent and target-present lineups and (previously unused) unbiased witness instructions (i.e., "offender not present" option highlighted). In Experiment 1, participants viewed a crime video and were later asked to try to identify the thief from an 8-person target-absent photo array. Feedback inflated witness confidence for both mistaken identifications and correct lineup rejections. With target-present lineups in Experiment 2, feedback inflated confidence for correct and mistaken identifications and lineup rejections. Although feedback had no influence on the confidence-accuracy correlation, it produced clear overconfidence. Confidence inflation varied with the confidence measure reference point (i.e., retrospective vs. current confidence) and identification response latency.

8. Effects of confidence and anxiety on flow state in competition.

Science.gov (United States)

Koehn, Stefan

2013-01-01

Confidence and anxiety are important variables that underlie the experience of flow in sport. Specifically, research has indicated that confidence displays a positive relationship and anxiety a negative relationship with flow. The aim of this study was to assess potential direct and indirect effects of confidence and anxiety dimensions on flow state in tennis competition. A sample of 59 junior tennis players completed measures of Competitive State Anxiety Inventory-2d and Flow State Scale-2. Following predictive analysis, results showed significant positive correlations between confidence (intensity and direction) and anxiety symptoms (only directional perceptions) with flow state. Standard multiple regression analysis indicated confidence as the only significant predictor of flow. The results confirmed a protective function of confidence against debilitating anxiety interpretations, but there were no significant interaction effects between confidence and anxiety on flow state.

9. Some Characterizations of Convex Interval Games

NARCIS (Netherlands)

Brânzei, R.; Tijs, S.H.; Alparslan-Gok, S.Z.

2008-01-01

This paper focuses on new characterizations of convex interval games using the notions of exactness and superadditivity. We also relate big boss interval games with concave interval games and obtain characterizations of big boss interval games in terms of exactness and subadditivity.

10. Can Deterrence Be Tailored? Strategic Forum, Number 225, January 2007

National Research Council Canada - National Science Library

Bunn, M. E

2007-01-01

.... The Bush administration has outlined a concept for tailored deterrence to address the distinctive challenges posed by advanced military competitors, regional powers armed with weapons of mass destruction (WMD...

11. Effects of tailoring ingredients in auditory persuasive health messages on fruit and vegetable intake

NARCIS (Netherlands)

Elbert, Sarah P.; Dijkstra, Arie; Rozema, Andrea

2017-01-01

Objective: Health messages can be tailored by applying different tailoring ingredients, including personalisation, feedback and adaptation. This experiment investigated the separate effects of these tailoring ingredients on behaviour in auditory health persuasion. Furthermore, the moderating

12. Family Partner Intervention Influences Self-Care Confidence and Treatment Self-Regulation in Patients with Heart Failure

Science.gov (United States)

Stamp, Kelly D.; Dunbar, Sandra B.; Clark, Patricia C.; Reilly, Carolyn M.; Gary, Rebecca A.; Higgins, Melinda; Ryan, Richard M

2015-01-01

Background Heart failure self-care requires confidence in one’s ability and motivation to perform a recommended behavior. Most self-care occurs within a family context, yet little is known about the influence of family on heart failure self-care or motivating factors. Aims To examine the association of family functioning and the self-care antecedents of confidence and motivation among heart failure participants, and to determine whether a family partnership intervention would promote higher levels of perceived confidence and treatment self-regulation (motivation) at four and eight months compared with patient-family education or usual care groups. Methods Heart failure patients (N = 117) and a family member were randomized to family partnership intervention, patient-family education, or usual care groups. Measures of patients' perceived family functioning, confidence, and motivation for medications and for following a low-sodium diet were analyzed. Data were collected at baseline, four and eight months. Results Family functioning was related to self-care confidence for diet (p=.02) and to autonomous motivation for adhering to medications (p=.05) and diet (p=.20). The family partnership intervention group significantly improved confidence (p=.05) and motivation (medications p=.004; diet p=.012) at four months, whereas the patient-family education and usual care groups did not change. Conclusion Perceived confidence and motivation for self-care were enhanced by the family partnership intervention, regardless of family functioning. Poor family functioning at baseline contributed to lower confidence. Family functioning should be assessed to guide tailored family-patient interventions for better outcomes. PMID:25673525

13. Tailoring hospital marketing efforts to physicians' needs.

Science.gov (United States)

Mackay, J M; Lamb, C W

1988-12-01

Marketing has become widely recognized as an important component of hospital management (Kotler and Clarke 1987; Ludke, Curry, and Saywell 1983). Physicians are becoming recognized as an important target market that warrants more marketing attention than it has received in the past (Super 1987; Wotruba, Haas, and Hartman 1982). Some experts predict that hospitals will begin focusing more marketing attention on physicians and less on consumers (Super 1986). Much of this attention is likely to take the form of practice management assistance, such as computer-based information system support or consulting services. The survey results reported here are illustrative only of how one hospital addressed the problem of physician need assessment. Other potential target markets include physicians who admit patients only to competitor hospitals and physicians who admit to multiple hospitals. The market might be segmented by individual versus group practice, area of specialization, or possibly even physician practice life cycle stage (Wotruba, Haas, and Hartman 1982). The questions included on the survey and the survey format are likely to be situation-specific. The key is the process, not the procedure. It is important for hospital marketers to recognize that practice management assistance needs will vary among markets (Jensen 1987). Therefore, hospitals must carefully identify their target physician market(s) and survey them about their specific needs before developing and implementing new physician marketing programs. Only then can they be reasonably confident that their marketing programs match their customers' needs.

14. A Methodology for Evaluation of Inservice Test Intervals for Pumps and Motor Operated Valves

International Nuclear Information System (INIS)

McElhaney, K.L.

1999-01-01

The nuclear industry has begun efforts to reevaluate inservice tests (ISTs) for key components such as pumps and valves. At issue are two important questions--What kinds of tests provide the most meaningful information about component health, and what periodic test intervals are appropriate? In the past, requirements for component testing were prescribed by the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code. The tests and test intervals specified in the Code were generic in nature and test intervals were relatively short. Operating experience has shown, however, that performance and safety improvements and cost savings could be realized by tailoring IST programs to similar components with comparable safety importance and service conditions. In many cases, test intervals may be lengthened, resulting in cost savings for utilities and their customers

15. Tailoring of EIA-649-1: Definition of Major (Class I) Engineering Change Proposal

Science.gov (United States)

2015-05-15

MISSILE SYSTEMS CENTER TAILORING. Tailoring of EIA-649-1: Definition of Major (Class I) Engineering Change Proposal. Approved for public release; distribution is unlimited. 1. Intent of this Tailoring Document. This tailoring document remedies a requirements gap in the industry consensus standard, EIA-649-1: 2015. Specifically, this tailoring provides a

16. Beyond hypercorrection: remembering corrective feedback for low-confidence errors.

Science.gov (United States)

Griffiths, Lauren; Higham, Philip A

2018-02-01

Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.

17. Factors affecting midwives' confidence in intrapartum care: a phenomenological study.

Science.gov (United States)

Bedwell, Carol; McGowan, Linda; Lavender, Tina

2015-01-01

Midwives are frequently the lead providers of care for women throughout labour and birth. In order to perform their role effectively and provide women with the choices they require, midwives need to be confident in their practice. This study explores factors which may affect midwives' confidence in their practice. Hermeneutic phenomenology formed the theoretical basis for the study. Prospective longitudinal data collection was completed using diaries and semi-structured interviews. Twelve midwives providing intrapartum care in a variety of settings were recruited to ensure a variety of experiences in different contexts were captured. The principal factor affecting workplace confidence, both positively and negatively, was the influence of colleagues. Perceived autonomy and a sense of familiarity could also enhance confidence. However, conflict in the workplace was a critical factor in reducing midwives' confidence. Confidence was an important, but fragile, phenomenon to midwives, and they used a variety of coping strategies, emotional intelligence and presentation management to maintain it. This is the first study to highlight both the factors influencing midwives' workplace confidence and the strategies midwives employed to maintain their confidence. Confidence is important in maintaining well-being, and workplace culture may play a role in explaining the current low morale within the midwifery workforce. This may have implications for women's choices and care. Support, effective leadership and education may help midwives develop and sustain a positive sense of confidence. Copyright © 2014 Elsevier Ltd. All rights reserved.

18. Can confidence indicators forecast the probability of expansion in Croatia?

Directory of Open Access Journals (Sweden)

Mirjana Čižmešija

2016-04-01

The aim of this paper is to investigate how reliable confidence indicators are in forecasting the probability of expansion. We consider three Croatian Business Survey indicators: the Industrial Confidence Indicator (ICI), the Construction Confidence Indicator (BCI) and the Retail Trade Confidence Indicator (RTCI). The quarterly data used in the research cover the period from 1999/Q1 to 2014/Q1. The empirical analysis consists of two parts. The non-parametric Bry-Boschan algorithm is used to distinguish periods of expansion from periods of recession in the Croatian economy. Then, various nonlinear probit models were estimated. The models differ with respect to the regressors (confidence indicators) and the time lags. The positive signs of the estimated parameters suggest that the probability of expansion increases with an increase in the confidence indicators. Based on the obtained results, the conclusion is that the ICI is the most powerful predictor of the probability of expansion in Croatia.
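
A probit model of this kind maps a linear index of a confidence indicator into an expansion probability through the standard normal CDF. A minimal sketch with purely hypothetical coefficients (the paper's estimates are not reproduced here):

```python
from statistics import NormalDist

def expansion_probability(indicator, beta0, beta1):
    """Probit model: P(expansion) = Phi(beta0 + beta1 * indicator).
    A positive beta1 means higher confidence raises the probability,
    matching the positive signs reported in the paper."""
    return NormalDist().cdf(beta0 + beta1 * indicator)

# Hypothetical coefficients, for illustration only.
p_low = expansion_probability(-10.0, beta0=0.2, beta1=0.05)   # weak confidence
p_high = expansion_probability(10.0, beta0=0.2, beta1=0.05)   # strong confidence
print(p_low < p_high)  # True: probability rises with confidence
```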

19. Confidence mediates the sex difference in mental rotation performance.

Science.gov (United States)

Estes, Zachary; Felker, Sydney

2012-06-01

On tasks that require the mental rotation of 3-dimensional figures, males typically exhibit higher accuracy than females. Using the most common measure of mental rotation (i.e., the Mental Rotations Test), we investigated whether individual variability in confidence mediates this sex difference in mental rotation performance. In each of four experiments, the sex difference was reliably elicited and eliminated by controlling or manipulating participants' confidence. Specifically, confidence predicted performance within and between sexes (Experiment 1), rendering confidence irrelevant to the task reliably eliminated the sex difference in performance (Experiments 2 and 3), and manipulating confidence significantly affected performance (Experiment 4). Thus, confidence mediates the sex difference in mental rotation performance and hence the sex difference appears to be a difference of performance rather than ability. Results are discussed in relation to other potential mediators and mechanisms, such as gender roles, sex stereotypes, spatial experience, rotation strategies, working memory, and spatial attention.

20. Coping skills: role of trait sport confidence and trait anxiety.

Science.gov (United States)

Cresswell, Scott; Hodge, Ken

2004-04-01

The current research assesses relationships among coping skills, trait sport confidence, and trait anxiety. Two samples (n=47 and n=77) of international competitors from surf life saving (M=23.7 yr.) and touch rugby (M=26.2 yr.) completed the Athletic Coping Skills Inventory, Trait Sport Confidence Inventory, and Sport Anxiety Scale. Analysis yielded significant correlations amongst trait anxiety, sport confidence, and coping. Specifically, confidence scores were positively associated with coping-with-adversity scores, and anxiety scores were negatively associated. These findings support the inclusion of the personality characteristics of confidence and anxiety within the coping model presented by Hardy, Jones, and Gould. Researchers should be aware that confidence and anxiety may influence the coping processes of athletes.

1. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

Science.gov (United States)

Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

2010-08-06

Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution of the product method, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended. In contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
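
The bootstrapped percentile confidence interval evaluated in this simulation can be sketched for a single-mediator model. This is a simplified illustration, not the article's implementation: a and b are estimated by separate simple regressions (no covariate adjustment in the outcome equation, which is harmless here because the simulated mediation is complete), and all data are simulated:

```python
import random
import statistics

def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

def indirect_percentile_ci(x, m, y, reps=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the indirect effect a*b, where
    a = slope of m on x and b = slope of y on m."""
    rng = random.Random(seed)
    n = len(x)
    est = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        xs, ms, ys = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
        est.append(slope(xs, ms) * slope(ms, ys))
    est.sort()
    return est[int(reps * alpha / 2)], est[int(reps * (1 - alpha / 2)) - 1]

# Simulated complete mediation x -> m -> y; the true indirect effect is 1.
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(200)]
m = [xi + rng.gauss(0, 1) for xi in x]
y = [mi + rng.gauss(0, 1) for mi in m]
lo, hi = indirect_percentile_ci(x, m, y)
```

The interval endpoints are simply empirical quantiles of the resampled a*b estimates, which is what "percentile" refers to; bias-corrected variants adjust which quantiles are read off.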

2. Employee Perceptions of Workplace Health Promotion Programs: Comparison of a Tailored, Semi-Tailored, and Standardized Approach.

Science.gov (United States)

Street, Tamara D; Lacey, Sarah J

2018-04-28

In the design of workplace health promotion programs (WHPPs), employee perceptions represent an integral variable which is predicted to translate into rate of user engagement (i.e., participation) and program loyalty. This study evaluated employee perceptions of three workplace health programs promoting nutritional consumption and physical activity. Programs included: (1) an individually tailored consultation with an exercise physiologist and dietitian; (2) a semi-tailored 12-week SMS health message program; and (3) a standardized group workshop delivered by an expert. Participating employees from a transport company completed program evaluation surveys rating the overall program, affect, and utility of: consultations ( n = 19); SMS program ( n = 234); and workshops ( n = 86). Overall, participants’ affect and utility evaluations were positive for all programs, with the greatest satisfaction being reported in the tailored individual consultation and standardized group workshop conditions. Furthermore, mode of delivery and the physical presence of an expert health practitioner was more influential than the degree to which the information was tailored to the individual. Thus, the synergy in ratings between individually tailored consultations and standardized group workshops indicates that low-cost delivery health programs may be as appealing to employees as tailored, and comparatively high-cost, program options.

3. Is consumer confidence an indicator of JSE performance?

OpenAIRE

Kamini Solanki; Yudhvir Seetharam

2014-01-01

While most studies examine the impact of business confidence on market performance, we instead focus on the consumer because consumer spending habits are a natural extension of trading activity on the equity market. This particular study examines investor sentiment as measured by the Consumer Confidence Index in South Africa and its effect on the Johannesburg Stock Exchange (JSE). We employ Granger causality tests to investigate the relationship across time between the Consumer Confidence Ind...

4. Preservice teachers' perceived confidence in teaching school violence prevention.

Science.gov (United States)

Kandakai, Tina L; King, Keith A

2002-01-01

To examine preservice teachers' perceived confidence in teaching violence prevention and the potential effect of violence-prevention training on preservice teachers' confidence in teaching violence prevention. Six Ohio universities participated in the study. More than 800 undergraduate and graduate students completed surveys. Violence-prevention training, area of certification, and location of student-teaching placement significantly influenced preservice teachers' perceived confidence in teaching violence prevention. Violence-prevention training positively influences preservice teachers' confidence in teaching violence prevention. The results suggest that such training should be considered as a requirement for teacher preparation programs.

5. The antecedents and belief-polarized effects of thought confidence.

Science.gov (United States)

Chou, Hsuan-Yi; Lien, Nai-Hwa; Liang, Kuan-Yu

2011-01-01

This article investigates 2 possible antecedents of thought confidence and explores the effects of confidence induced before or during ad exposure. The results of the experiments indicate that both consumers' dispositional optimism and spokesperson attractiveness have significant effects on consumers' confidence in thoughts that are generated after viewing the advertisement. Higher levels of thought confidence will influence the quality of the thoughts that people generate, lead to either positively or negatively polarized message processing, and therefore induce better or worse advertising effectiveness, depending on the valence of thoughts. The authors posit the belief-polarization hypothesis to explain these findings.

6. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

Science.gov (United States)

Raykov, Tenko; Marcoulides, George A.

2015-01-01

A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

7. Interpregnancy intervals: impact of postpartum contraceptive effectiveness and coverage.

Science.gov (United States)

Thiel de Bocanegra, Heike; Chang, Richard; Howell, Mike; Darney, Philip

2014-04-01

8. Direct Interval Forecasting of Wind Power

DEFF Research Database (Denmark)

Wan, Can; Xu, Zhao; Pinson, Pierre

2013-01-01

This letter proposes a novel approach to directly formulate the prediction intervals of wind power generation based on extreme learning machine and particle swarm optimization, where prediction intervals are generated through direct optimization of both the coverage probability and sharpness...

9. A note on birth interval distributions

International Nuclear Information System (INIS)

Shrestha, G.

1989-08-01

A considerable amount of work has been done regarding the birth interval analysis in mathematical demography. This paper is prepared with the intention of reviewing some probability models related to interlive birth intervals proposed by different researchers. (author). 14 refs

10. Identifying the bad guy in a lineup using confidence judgments under deadline pressure.

Science.gov (United States)

Brewer, Neil; Weber, Nathan; Wootton, David; Lindsay, D Stephen

2012-10-01

11. Confidence limits for regional cerebral blood flow values obtained with circular positron system, using krypton-77

International Nuclear Information System (INIS)

Meyer, E.; Yamamoto, Y.L.; Thompson, C.J.

1978-01-01

The 90% confidence limits have been determined for regional cerebral blood flow (rCBF) values obtained in each cm² of a cross section of the human head after inhalation of radioactive krypton-77, using the MNI circular positron emission tomography system (Positome). CBF values for small brain tissue elements are calculated by linear regression analysis on the semi-logarithmically transformed clearance curve. A computer program displays CBF values and their estimated error in numeric and gray scale forms. The following typical results have been obtained on a control subject: mean CBF in the entire cross section of the head: 54.6 ± 5 ml/min/100 g tissue; rCBF for a small area of frontal gray matter: 75.8 ± 9 ml/min/100 g tissue. Confidence intervals for individual rCBF values varied between ±13% and ±55%, except for areas pertaining to the ventricular system, where particularly poor statistics were obtained. Knowledge of confidence limits for rCBF values improves their diagnostic significance, particularly with respect to the assessment of reduced rCBF in stroke patients. A nomogram for convenient determination of 90% confidence limits for slope values obtained in linear regression analysis has been designed, with the number of fitted points (n) and the correlation coefficient (r) as parameters. (author)
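
The nomogram described in this abstract rests on the standard result that a regression slope's standard error can be written in terms of n and r alone, se(b) = |b| * sqrt((1/r^2 - 1)/(n - 2)), with 90% limits b ± t(0.95, n-2) * se(b). A minimal sketch with hypothetical clearance-curve numbers; the t critical values are hard-coded for a few degrees of freedom, and a statistics library would supply the rest:

```python
import math

# 0.95 t-quantiles (two-sided 90% limits) for a few degrees of freedom.
T_95 = {10: 1.812, 18: 1.734, 30: 1.697}

def slope_ci_90(b, r, n):
    """90% confidence limits for a regression slope b, given the
    correlation coefficient r and the number of fitted points n."""
    se = abs(b) * math.sqrt((1 / r**2 - 1) / (n - 2))
    t = T_95[n - 2]  # lookup is partial; raises KeyError for other n
    return b - t * se, b + t * se

# Hypothetical fit: slope -0.50, r = -0.95, 20 fitted points.
lo, hi = slope_ci_90(-0.50, -0.95, 20)
print(round(lo, 3), round(hi, 3))  # roughly -0.567 and -0.433
```

With n and r as the only inputs beyond the slope itself, the limits can indeed be read off a two-parameter nomogram, as the abstract describes.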

12. Shape-Tailored Features and their Application to Texture Segmentation

KAUST Repository

Khan, Naeemullah

2014-04-01

Texture segmentation is one of the most challenging areas of computer vision. One reason for this difficulty is the huge variety and variability of textures occurring in the real world, which makes it very difficult to study textures quantitatively. One of the key tools used for texture segmentation is local invariant descriptors. Texture consists of textons, the basic building blocks of textures, which may vary through small nuisances like illumination variation, deformations, and noise. Local invariant descriptors are robust to these nuisances, making them beneficial for texture segmentation. However, grouping dense descriptors directly for segmentation presents a problem: existing descriptors aggregate data from neighborhoods that may contain different textured regions, making descriptors from these neighborhoods difficult to group and leading to significant errors in segmentation. This work addresses this issue by proposing dense local descriptors, called Shape-Tailored Features, which are tailored to an arbitrarily shaped region, aggregating data only within the region of interest. Since the segmentation, i.e., the regions, is not known a priori, we propose a joint problem for Shape-Tailored Features and the regions. We present a framework based on variational methods. Extensive experiments on a new large texture dataset, which we introduce, show that the joint approach with Shape-Tailored Features leads to better segmentations than the non-joint, non-Shape-Tailored approach, and the method outperforms the existing state-of-the-art.

13. Optimal Data Interval for Estimating Advertising Response

OpenAIRE

Gerard J. Tellis; Philip Hans Franses

2006-01-01

The abundance of highly disaggregate data (e.g., at five-second intervals) raises the question of the optimal data interval to estimate advertising carryover. The literature assumes that (1) the optimal data interval is the interpurchase time, (2) too disaggregate data causes a disaggregation bias, and (3) recovery of true parameters requires assumption of the underlying advertising process. In contrast, we show that (1) the optimal data interval is what we call , (2) too disaggregate data do...

14. Dynamic visual noise reduces confidence in short-term memory for visual information.

Science.gov (United States)

Kemps, Eva; Andrade, Jackie

2012-05-01

Previous research has shown effects of the visual interference technique, dynamic visual noise (DVN), on visual imagery, but not on visual short-term memory, unless retention of precise visual detail is required. This study tested the prediction that DVN does also affect retention of gross visual information, specifically by reducing confidence. Participants performed a matrix pattern memory task with three retention interval interference conditions (DVN, static visual noise and no interference control) that varied from trial to trial. At recall, participants indicated whether or not they were sure of their responses. As in previous research, DVN did not impair recall accuracy or latency on the task, but it did reduce recall confidence relative to static visual noise and no interference. We conclude that DVN does distort visual representations in short-term memory, but standard coarse-grained recall measures are insensitive to these distortions.

15. Understanding public confidence in government to prevent terrorist attacks.

Energy Technology Data Exchange (ETDEWEB)

Baldwin, T. E.; Ramaprasad, A.; Samsa, M. E.; Decision and Information Sciences; Univ. of Illinois at Chicago

2008-04-02

A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode its confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the principal metrics used to assess the consequences of terrorist attacks. Hence, a detailed understanding of the variations in public confidence among individuals, terrorist event types, and as a function of time is critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data was collected from three groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery explosion attack, and cyber intrusions on financial institutions, resulting in identity theft. Our findings are: (a) although the aggregate confidence level is low, there are optimists and pessimists; (b) the subjects are discriminating in interpreting the nature of a terrorist attack, the time horizon, and its impact; (c) confidence recovery after a terrorist event has an incubation period; and (d) the patterns of recovery of confidence of the optimists and the pessimists are different. These findings can affect the strategy and policies to manage public confidence after a terrorist event.

16. Animal Spirits and Extreme Confidence: No Guts, No Glory?

NARCIS (Netherlands)

M.G. Douwens-Zonneveld (Mariska)

2012-01-01

textabstractThis study investigates to what extent extreme confidence of either management or security analysts may impact financial or operating performance. We construct a multidimensional degree of company confidence measure from a wide range of corporate decisions. We empirically test this

17. Trust, confidence, and the 2008 global financial crisis.

Science.gov (United States)

Earle, Timothy C

2009-06-01

The 2008 global financial crisis has been compared to a "once-in-a-century credit tsunami," a disaster in which the loss of trust and confidence played key precipitating roles and the recovery from which will require the restoration of these crucial factors. Drawing on the analogy between the financial crisis and environmental and technological hazards, recent research on the role of trust and confidence in the latter is used to provide a perspective on the former. Whereas "trust" and "confidence" are used interchangeably and without explicit definition in most discussions of the financial crisis, this perspective uses the TCC model of cooperation to clearly distinguish between the two and to demonstrate how this distinction can lead to an improved understanding of the crisis. The roles of trust and confidence, both in precipitation and in possible recovery, are discussed for each of the three major sets of actors in the crisis: the regulators, the banks, and the public. The roles of trust and confidence in the larger context of risk management are also examined, trust being associated with political approaches, confidence with technical ones. Finally, the various stances that government can take with regard to trust, such as supportive or skeptical, are considered. Overall, it is argued that a clear understanding of trust and confidence, and a close examination of the specific, concrete circumstances of a crisis (revealing when either trust or confidence is appropriate), can lead to useful insights for both recovery and prevention of future occurrences.

18. True and False Memories, Parietal Cortex, and Confidence Judgments

Science.gov (United States)

Urgolites, Zhisen J.; Smith, Christine N.; Squire, Larry R.

2015-01-01

Recent studies have asked whether activity in the medial temporal lobe (MTL) and the neocortex can distinguish true memory from false memory. A frequent complication has been that the confidence associated with correct memory judgments (true memory) is typically higher than the confidence associated with incorrect memory judgments (false memory).…

19. The Metamemory Approach to Confidence: A Test Using Semantic Memory

Science.gov (United States)

Brewer, William F.; Sampaio, Cristina

2012-01-01

The metamemory approach to memory confidence was extended and elaborated to deal with semantic memory tasks. The metamemory approach assumes that memory confidence is based on the products and processes of a completed memory task, as well as metamemory beliefs that individuals have about how their memory products and processes relate to memory…

20. Confidence Sharing in the Vocational Counselling Interview: Emergence and Repercussions

Science.gov (United States)

Olry-Louis, Isabelle; Bremond, Capucine; Pouliot, Manon

2012-01-01

Confidence sharing is an asymmetrical dialogic episode to which both parties consent, in which one reveals something personal to the other who participates in the emergence and unfolding of the confidence. We describe how this is achieved at a discursive level within vocational counselling interviews. Based on a corpus of 64 interviews, we analyse…

1. A scale for consumer confidence in the safety of food

NARCIS (Netherlands)

Jonge, de J.; Trijp, van J.C.M.; Lans, van der I.A.; Renes, R.J.; Frewer, L.J.

2008-01-01

The aim of this study was to develop and validate a scale to measure general consumer confidence in the safety of food. Results from exploratory and confirmatory analyses indicate that general consumer confidence in the safety of food consists of two distinct dimensions, optimism and pessimism,

2. Confidence Scoring of Speaking Performance: How Does Fuzziness become Exact?

Science.gov (United States)

Jin, Tan; Mak, Barley; Zhou, Pei

2012-01-01

The fuzziness of assessing second language speaking performance raises two difficulties in scoring speaking performance: "indistinction between adjacent levels" and "overlap between scales". To address these two problems, this article proposes a new approach, "confidence scoring", to deal with such fuzziness, leading to "confidence" scores between…

3. Monitoring consumer confidence in food safety: an exploratory study

NARCIS (Netherlands)

Jonge, de J.; Frewer, L.J.; Trijp, van J.C.M.; Renes, R.J.; Wit, de W.; Timmers, J.C.M.

2004-01-01

In response to the potential for negative economic and societal effects resulting from a low level of consumer confidence in food safety, it is important to know how confidence is potentially influenced by external events. The aim of this article is to describe the development of a monitor

4. Modeling Confidence and Response Time in Recognition Memory

Science.gov (United States)

Ratcliff, Roger; Starns, Jeffrey J.

2009-01-01

A new model for confidence judgments in recognition memory is presented. In the model, the match between a single test item and memory produces a distribution of evidence, with better matches corresponding to distributions with higher means. On this match dimension, confidence criteria are placed, and the areas between the criteria under the…

5. Music educators : their artistry and self-confidence

NARCIS (Netherlands)

Lion-Slovak, Brigitte; Stöger, Christine; Smilde, Rineke; Malmberg, Isolde; de Vugt, Adri

2013-01-01

How does artistic identity influence the self-confidence of music educators? What is the interconnection between the artistic and the teacher identity? What is actually meant by artistic identity in music education? What is a fruitful environment for the development of artistic self-confidence of

6. To protect and serve: Restoring public confidence in the SAPS ...

African Journals Online (AJOL)

Persistent incidents of brutality, criminal behaviour and abuse of authority by members of South Africa's police agencies have serious implications for public trust and confidence in the police. A decline in trust and confidence in the police is inevitably harmful to the ability of the government to reduce crime and improve public ...

7. Improved realism of confidence for an episodic memory event

Directory of Open Access Journals (Sweden)

Sandra Buratti

2012-09-01

8. Variance misperception explains illusions of confidence in simple perceptual decisions

NARCIS (Netherlands)

Zylberberg, Ariel; Roelfsema, Pieter R.; Sigman, Mariano

2014-01-01

Confidence in a perceptual decision is a judgment about the quality of the sensory evidence. The quality of the evidence depends not only on its strength ('signal') but critically on its reliability ('noise'), but the separate contribution of these quantities to the formation of confidence judgments

9. On-line confidence monitoring during decision making.

Science.gov (United States)

Dotan, Dror; Meyniel, Florent; Dehaene, Stanislas

2018-02-01

Humans can readily assess their degree of confidence in their decisions. Two models of confidence computation have been proposed: post hoc computation using post-decision variables and heuristics, versus online computation using continuous assessment of evidence throughout the decision-making process. Here, we arbitrate between these theories by continuously monitoring finger movements during a manual sequential decision-making task. Analysis of finger kinematics indicated that subjects kept separate online records of evidence and confidence: finger deviation continuously reflected the ongoing accumulation of evidence, whereas finger speed continuously reflected the momentary degree of confidence. Furthermore, end-of-trial finger speed predicted the post-decisional subjective confidence rating. These data indicate that confidence is computed on-line, throughout the decision process. Speed-confidence correlations were previously interpreted as a post-decision heuristic, whereby slow decisions decrease subjective confidence, but our results suggest an adaptive mechanism that involves the opposite causality: by slowing down when unconfident, participants gain time to improve their decisions. Copyright © 2017 Elsevier B.V. All rights reserved.

10. A simultaneous confidence band for sparse longitudinal regression

KAUST Repository

Ma, Shujie; Yang, Lijian; Carroll, Raymond J.

2012-01-01

Functional data analysis has received considerable recent attention and a number of successful applications have been reported. In this paper, asymptotically simultaneous confidence bands are obtained for the mean function of the functional regression model, using piecewise constant spline estimation. Simulation experiments corroborate the asymptotic theory. The confidence band procedure is illustrated by analyzing CD4 cell counts of HIV infected patients.

11. What are effective techniques for improving public confidence or restoring lost confidence in a regulator?

International Nuclear Information System (INIS)

Harbitz, O.; Isaksson, R.

2006-01-01

The conclusions and recommendations of this session can be summarized this way. The following list contains thoughts related to restoring lost confidence: - hard, long-lasting event; - strategy: maximum transparency; - to listen, be open, give phone numbers etc. - ways to rebuild trust: frequent communication, being there, open and transparent; - don't be too defensive; if things could be done better, say it; - technical staff and public affairs staff together from the beginning - answer all questions; - classifications, actions, instructions that differ much from the earlier ones must be well explained and motivated - and still cause a lot of problems; - things may turn out to be political; - communicative work in an early stage saves work later; - communication experts must be working shoulder to shoulder with other staff. On handling emergencies in general, some recipes proposed are: - better to overreact than to underreact; - do not avoid extreme actions: hit hard, hit fast; - base your decisions on strict principles; - first principle: public safety first; - when you are realizing plan A, you must have a plan B in your pocket; - be transparent - from the beginning; - crisis communication: early, frequent etc. - people need to see political leaders, someone who is making decisions - technical experts are needed but are not enough. On how to involve stakeholders and the public in decision making, recommendations are: - new kind of thinking - demanding for an organisation; - go to local level, meet local people, speak language people understand; you have to start from the very beginning - introducing yourself, tell who you are and why you are there. (authors)

12. An Adequate First Order Logic of Intervals

DEFF Research Database (Denmark)

Chaochen, Zhou; Hansen, Michael Reichhardt

1998-01-01

This paper introduces left and right neighbourhoods as primitive interval modalities to define other unary and binary modalities of intervals in a first order logic with interval length. A complete first order logic for the neighbourhood modalities is presented. It is demonstrated how the logic can ... support formal specification and verification of liveness and fairness, and also of various notions of real analysis ...

13. Consistency and refinement for Interval Markov Chains

DEFF Research Database (Denmark)

Delahaye, Benoit; Larsen, Kim Guldstrand; Legay, Axel

2012-01-01

Interval Markov Chains (IMC), or Markov Chains with probability intervals in the transition matrix, are the base of a classic specification theory for probabilistic systems [18]. The standard semantics of IMCs assigns to a specification the set of all Markov Chains that satisfy its interval...

14. Multivariate interval-censored survival data

DEFF Research Database (Denmark)

Hougaard, Philip

2014-01-01

Interval censoring means that an event time is only known to lie in an interval (L,R], with L the last examination time before the event, and R the first after. In the univariate case, parametric models are easily fitted, whereas for non-parametric models, the mass is placed on some intervals, de...

15. Family Health Histories and Their Impact on Retirement Confidence.

Science.gov (United States)

Zick, Cathleen D; Mayer, Robert N; Smith, Ken R

2015-08-01

Retirement confidence is a key social barometer. In this article, we examine how personal and parental health histories relate to working-age adults' feelings of optimism or pessimism about their overall retirement prospects. This study links survey data on retirement planning with information on respondents' own health histories and those of their parents. The multivariate models control for the respondents' socio-demographic and economic characteristics along with past retirement planning activities when estimating the relationships between family health histories and retirement confidence. Retirement confidence is inversely related to parental history of cancer and cardiovascular disease but not to personal health history. In contrast, retirement confidence is positively associated with both parents being deceased. As members of the public become increasingly aware of how genetics and other family factors affect intergenerational transmission of chronic diseases, it is likely that the link between family health histories and retirement confidence will intensify. © The Author(s) 2015.

16. Multivoxel neurofeedback selectively modulates confidence without changing perceptual performance

Science.gov (United States)

Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Kawato, Mitsuo; Lau, Hakwan

2016-01-01

A central controversy in metacognition studies concerns whether subjective confidence directly reflects the reliability of perceptual or cognitive processes, as suggested by normative models based on the assumption that neural computations are generally optimal. This view enjoys popularity in the computational and animal literatures, but it has also been suggested that confidence may depend on a late-stage estimation dissociable from perceptual processes. Yet, at least in humans, experimental tools have lacked the power to resolve these issues convincingly. Here, we overcome this difficulty by using the recently developed method of decoded neurofeedback (DecNef) to systematically manipulate multivoxel correlates of confidence in a frontoparietal network. Here we report that bi-directional changes in confidence do not affect perceptual accuracy. Further psychophysical analyses rule out accounts based on simple shifts in reporting strategy. Our results provide clear neuroscientific evidence for the systematic dissociation between confidence and perceptual performance, and thereby challenge current theoretical thinking. PMID:27976739

17. Large multi-centre pilot randomized controlled trial testing a low-cost, tailored, self-help smoking cessation text message intervention for pregnant smokers (MiQuit).

Science.gov (United States)

Naughton, Felix; Cooper, Sue; Foster, Katharine; Emery, Joanne; Leonardi-Bee, Jo; Sutton, Stephen; Jones, Matthew; Ussher, Michael; Whitemore, Rachel; Leighton, Matthew; Montgomery, Alan; Parrott, Steve; Coleman, Tim

2017-07-01

To estimate the effectiveness of pregnancy smoking cessation support delivered by short message service (SMS) text message and key parameters needed to plan a definitive trial. Multi-centre, parallel-group, single-blinded, individual randomized controlled trial. Sixteen antenatal clinics in England. Four hundred and seven participants were randomized to the intervention (n = 203) or usual care (n = 204). Eligible women were 5 pre-pregnancy), were able to receive and understand English SMS texts and were not already using text-based cessation support. All participants received a smoking cessation leaflet; intervention participants also received a 12-week programme of individually tailored, automated, interactive, self-help smoking cessation text messages (MiQuit). Seven smoking outcomes, including validated continuous abstinence from 4 weeks post-randomization until 36 weeks gestation, design parameters for a future trial and cost-per-quitter. Using the validated, continuous abstinence outcome, 5.4% (11 of 203) of MiQuit participants were abstinent versus 2.0% (four of 204) of usual care participants [odds ratio (OR) = 2.7, 95% confidence interval (CI) = 0.93-9.35]. The Bayes factor for this outcome was 2.23. Completeness of follow-up at 36 weeks gestation was similar in both groups; provision of self-report smoking data was 64% (MiQuit) and 65% (usual care) and abstinence validation rates were 56% (MiQuit) and 61% (usual care). The incremental cost-per-quitter was £133.53 (95% CI = -£395.78 to 843.62). There was some evidence, although not conclusive, that a text-messaging programme may increase cessation rates in pregnant smokers when provided alongside routine NHS cessation care. © 2017 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
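For illustration, the odds ratio and its confidence interval can be recomputed from the quit counts reported above (11 of 203 intervention vs. 4 of 204 usual care). This is a generic unadjusted Wald sketch, not the trial's own (adjusted) analysis, so its output differs slightly from the reported OR = 2.7 (95% CI = 0.93-9.35):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald (log-scale) confidence interval.
    a/b = events/non-events in one group, c/d in the comparison group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# MiQuit counts: 11 quitters of 203 vs. 4 quitters of 204
or_, lo, hi = odds_ratio_ci(11, 203 - 11, 4, 204 - 4)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")  # OR = 2.86, 95% CI = 0.90-9.15
```

The wide interval spanning 1 is what motivates the abstract's cautious "some evidence, although not conclusive" conclusion.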

18. Maternal Confidence for Physiologic Childbirth: A Concept Analysis.

Science.gov (United States)

Neerland, Carrie E

2018-06-06

Confidence is a term often used in research literature and consumer media in relation to birth, but maternal confidence has not been clearly defined, especially as it relates to physiologic labor and birth. The aim of this concept analysis was to define maternal confidence in the context of physiologic labor and childbirth. Rodgers' evolutionary method was used to identify attributes, antecedents, and consequences of maternal confidence for physiologic birth. Databases searched included Ovid MEDLINE, CINAHL, PsycINFO, and Sociological Abstracts from the years 1995 to 2015. A total of 505 articles were retrieved, using the search terms pregnancy, obstetric care, prenatal care, and self-efficacy and the keyword confidence. Articles were identified for in-depth review and inclusion based on whether the term confidence was used or assessed in relationship to labor and/or birth. In addition, a hand search of the reference lists of the selected articles was performed. Twenty-four articles were reviewed in this concept analysis. We define maternal confidence for physiologic birth as a woman's belief that physiologic birth can be achieved, based on her view of birth as a normal process and her belief in her body's innate ability to birth, which is supported by social support, knowledge, and information founded on a trusted relationship with a maternity care provider in an environment where the woman feels safe. This concept analysis advances the concept of maternal confidence for physiologic birth and provides new insight into how women's confidence for physiologic birth might be enhanced during the prenatal period. Further investigation of confidence for physiologic birth across different cultures is needed to identify cultural differences in constructions of the concept. © 2018 by the American College of Nurse-Midwives.

19. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

Science.gov (United States)

2014-05-01

To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial
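The power estimates discussed above can be illustrated with a textbook normal-approximation power calculation for a two-sided two-sample comparison at Cohen's benchmark effect sizes (small = 0.2, medium = 0.5, large = 0.8); this is a standard sketch, not the review's own method:

```python
import math
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample comparison for
    standardized effect size d (Cohen's d), via the normal approximation
    (the contribution of the opposite rejection tail is neglected)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)        # e.g. 1.96 for alpha = .05
    ncp = d * math.sqrt(n_per_group / 2.0)    # approximate noncentrality
    return nd.cdf(ncp - z_crit)

# At n = 64 per group, a medium effect reaches the conventional .80 power
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(label, round(two_sample_power(d, 64), 2))
```

The pattern mirrors the review's finding: with typical sample sizes, power for large effects saturates near 1.00 while power for small effects stays far below the conventional .80 threshold.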

20. Building Tailorable Hypermedia Systems: The embedded-interpreter approach

DEFF Research Database (Denmark)

Grønbæk, Kaj; Malhotra, Jawahar

1994-01-01

This paper discusses an approach for developing dynamically tailorable hypermedia systems in an object-oriented environment. The approach is aimed at making applications developed in compiled languages like Beta and C++ tailorable at run-time. The approach is based on use of: 1) a hypermedia ... application framework (DEVISE Hypermedia), and 2) an embeddable interpreter for the framework language. A specific hypermedia system is instantiated from the framework with the interpreter embedded in the executable. The specific hypermedia system has a number of “open points” which can be filled via ...-type. The paper describes the framework and illustrates how the interpreter is integrated. It describes steps involved in tailoring a specific hypermedia system with a new drawing media-type, where graphical objects can be endpoints for links. Since the hypermedia framework uses a persistent object ...

1. Disconnections Between Teacher Expectations and Student Confidence in Bioethics

Science.gov (United States)

Hanegan, Nikki L.; Price, Laura; Peterson, Jeremy

2008-09-01

This study examines how student practice of scientific argumentation using socioscientific bioethics issues affects both teacher expectations of students’ general performance and student confidence in their own work. When teachers use bioethical issues in the classroom, students can gain not only biology content knowledge but also important decision-making skills. Learning bioethics through scientific argumentation gives students opportunities to express their ideas, formulate educated opinions and value others’ viewpoints. Research has shown that science teachers’ expectations of student success and knowledge directly influence student achievement and confidence levels. Our study analyzes pre-course and post-course surveys completed by students enrolled in a university-level bioethics course (n = 111) and by faculty in the College of Biology and Agriculture (n = 34) based on their perceptions of student confidence. Additionally, student data were collected from classroom observations and interviews. Data analysis showed a disconnect between faculty and student perceptions of confidence for both knowledge and the use of science argumentation. Student reports of their confidence levels regarding various bioethical issues were higher than faculty reports. A further disconnect showed up between students’ preferred learning styles and the general faculty’s common teaching methods; students learned more by practicing scientific argumentation than listening to traditional lectures. Students who completed a bioethics course that included practice in scientific argumentation significantly increased their confidence levels. This study suggests that professors’ expectations and teaching styles influence student confidence levels in both knowledge and scientific argumentation.

2. Sex differences in confidence influence patterns of conformity.

Science.gov (United States)

Cross, Catharine P; Brown, Gillian R; Morgan, Thomas J H; Laland, Kevin N

2017-11-01

Lack of confidence in one's own ability can increase the likelihood of relying on social information. Sex differences in confidence have been extensively investigated in cognitive tasks, but implications for conformity have not been directly tested. Here, we tested the hypothesis that, in a task that shows sex differences in confidence, an indirect effect of sex on social information use will also be evident. Participants (N = 168) were administered a mental rotation (MR) task or a letter transformation (LT) task. After providing an answer, participants reported their confidence before seeing the responses of demonstrators and being allowed to change their initial answer. In the MR, but not the LT, task, women showed lower levels of confidence than men, and confidence mediated an indirect effect of sex on the likelihood of switching answers. These results provide novel, experimental evidence that confidence is a general explanatory mechanism underpinning susceptibility to social influences. Our results have implications for the interpretation of the wider literature on sex differences in conformity. © 2016 The British Psychological Society.

3. Confidence in Alternative Dispute Resolution: Experience from Switzerland

Directory of Open Access Journals (Sweden)

Christof Schwenkel

2014-06-01

Alternative Dispute Resolution plays a crucial role in the justice system of Switzerland. With the unified Swiss Code of Civil Procedure, it is required that each litigation session shall be preceded by an attempt at conciliation before a conciliation authority. However, there has been little research on conciliation authorities and the public's perception of the authorities. This paper looks at public confidence in conciliation authorities and provides results of a survey conducted with more than 3,400 participants. This study found that public confidence in Swiss conciliation authorities is generally high, exceeds the ratings for confidence in cantonal governments and parliaments, but is lower than confidence in courts. Since the institutional models of the conciliation authorities (meaning the organization of the authorities and the selection of the conciliators) differ widely between the 26 Swiss cantons, the influence of the institutional models on public confidence is analyzed. Contrary to assumptions based on New Institutionalism approaches, this study reports that the institutional models do not impact public confidence. Also, the relationship between participation in an election of justices of the peace or conciliators and public confidence in these authorities is found to be at most very limited (and negative). Similar to common findings on courts, the results show that general contacts with conciliation authorities decrease public confidence in these institutions, whereas a positive experience with a conciliation authority leads to more confidence. The study was completed as part of the research project 'Basic Research into Court Management in Switzerland', supported by the Swiss National Science Foundation (SNSF). Christof Schwenkel is a PhD student at the University of Lucerne and a research associate and project manager at Interface Policy Studies. A first version of this article was presented at the 2013 European Group for Public

4. Food skills confidence and household gatekeepers' dietary practices.

Science.gov (United States)

Burton, Melissa; Reid, Mike; Worsley, Anthony; Mavondo, Felix

2017-01-01

Household food gatekeepers have the potential to influence the food attitudes and behaviours of family members, as they are mainly responsible for food-related tasks in the home. The aim of this study was to determine the role of gatekeepers' confidence in food-related skills and nutrition knowledge on food practices in the home. An online survey was completed by 1059 Australian dietary gatekeepers selected from the Global Market Insite (GMI) research database. Participants responded to questions about food acquisition and preparation behaviours, the home eating environment, perceptions and attitudes towards food, and demographics. Two-step cluster analysis was used to identify groups based on confidence regarding food skills and nutrition knowledge. Chi-square tests and one-way ANOVAs were used to compare the groups on the dependent variables. Three groups were identified: low confidence, moderate confidence and high confidence. Gatekeepers in the highest confidence group were significantly more likely to report lower body mass index (BMI), and indicate higher importance of fresh food products, vegetable prominence in meals, product information use, meal planning, perceived behavioural control and overall diet satisfaction. Gatekeepers in the lowest confidence group were significantly more likely to indicate more perceived barriers to healthy eating, report more time constraints and more impulse purchasing practices, and higher convenience ingredient use. Other smaller associations were also found. Household food gatekeepers with high food skills confidence were more likely to engage in several healthy food practices, while those with low food skills confidence were more likely to engage in unhealthy food practices. Food education strategies aimed at building food-skills and nutrition knowledge will enable current and future gatekeepers to make healthier food decisions for themselves and for their families. Copyright © 2016 Elsevier Ltd. All rights reserved.

5. Determining the confidence levels of sensor outputs using neural networks

Energy Technology Data Exchange (ETDEWEB)

Broten, G S; Wood, H C [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Electrical Engineering

1996-12-31

This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with an output of a given sensor. A three layer back-propagation neural network was trained on a subset of the sensor confidence level space, and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in

6. A Poisson process approximation for generalized K-S confidence regions

Science.gov (United States)

Arsham, H.; Miller, D. R.

1982-01-01

One-sided confidence regions for continuous cumulative distribution functions are constructed using empirical cumulative distribution functions and the generalized Kolmogorov-Smirnov distance. The band width of such regions becomes narrower in the right or left tail of the distribution. To avoid tedious computation of confidence levels and critical values, an approximation based on the Poisson process is introduced. This approximation provides a conservative confidence region; moreover, the approximation error decreases monotonically to 0 as sample size increases. Critical values necessary for implementation are given. Applications are made to the areas of risk analysis, investment modeling, reliability assessment, and analysis of fault tolerant systems.
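The paper's generalized, tail-narrowing band is not reproduced here, but the classical one-sided band from the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality sketches the basic construction it refines (a uniform-width band around the empirical CDF):

```python
import math

def dkw_epsilon(n, alpha=0.05):
    """Half-width of the classical one-sided DKW band:
    P(sup_x (F(x) - Fhat(x)) > eps) <= alpha, with eps = sqrt(ln(1/alpha)/(2n)).
    Uniform in x, unlike the generalized K-S band, which narrows in the tails."""
    return math.sqrt(math.log(1.0 / alpha) / (2.0 * n))

def lower_band(sample, alpha=0.05):
    """One-sided lower confidence band for F, evaluated at the order statistics."""
    xs = sorted(sample)
    n = len(xs)
    eps = dkw_epsilon(n, alpha)
    # Fhat = (i+1)/n at the i-th order statistic; clip the band at 0
    return [(x, max((i + 1) / n - eps, 0.0)) for i, x in enumerate(xs)]

print(round(dkw_epsilon(100), 4))  # 0.1224 for n = 100, alpha = 0.05
```

The `1/sqrt(n)` decay of `eps` is the same monotone improvement with sample size that the abstract notes for its Poisson-process approximation error.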

7. Clinical value of MRI liver-specific contrast agents: a tailored examination for a confident non-invasive diagnosis of focal liver lesions

International Nuclear Information System (INIS)

Ba-Ssalamah, Ahmed; Uffmann, Martin; Bastati, Nina; Herold, Christian; Schima, Wolfgang; Saini, Sanjai

2009-01-01

Screening of the liver for hepatic lesion detection and characterization is usually performed with either ultrasound or CT. However, both techniques are suboptimal for liver lesion characterization and magnetic resonance (MR) imaging has emerged as the preferred radiological investigation. In addition to unenhanced MR imaging techniques, contrast-enhanced MR imaging can demonstrate tissue-specific physiological information, thereby facilitating liver lesion characterization. Currently, the classes of contrast agents available for MR imaging of the liver include non-tissue-specific extracellular gadolinium chelates and tissue-specific hepatobiliary or reticuloendothelial agents. In this review, we describe the MR features of the more common focal hepatic lesions, as well as appropriate imaging protocols. A special emphasis is placed on the clinical use of non-specific and liver-specific contrast agents for differentiation of focal liver lesions. This may aid in the accurate diagnostic workup of patients in order to avoid invasive procedures, such as biopsy, for lesion characterization. A diagnostic strategy that considers the clinical situation is also presented. (orig.)

8. Incidence of interval cancers in faecal immunochemical test colorectal screening programmes in Italy.

Science.gov (United States)

Giorgi Rossi, Paolo; Carretta, Elisa; Mangone, Lucia; Baracco, Susanna; Serraino, Diego; Zorzi, Manuel

2018-03-01

Objective In Italy, colorectal screening programmes using the faecal immunochemical test from ages 50 to 69 every two years have been in place since 2005. We aimed to measure the incidence of interval cancers in the two years after a negative faecal immunochemical test, and compare this with the pre-screening incidence of colorectal cancer. Methods Using data on colorectal cancers diagnosed in Italy from 2000 to 2008 collected by cancer registries in areas with active screening programmes, we identified cases that occurred within 24 months of negative screening tests. We used the number of tests with a negative result as a denominator, grouped by age and sex. Proportional incidence was calculated for the first and second year after screening. Results Among 579,176 and 226,738 persons with negative test results followed up at 12 and 24 months, respectively, we identified 100 interval cancers in the first year and 70 in the second year. The proportional incidence was 13% (95% confidence interval 10-15) and 23% (95% confidence interval 18-25), respectively. The estimate for the two-year incidence is 18%, which was slightly higher in females (22%; 95% confidence interval 17-26), and for proximal colon (22%; 95% confidence interval 16-28). Conclusion The incidence of interval cancers in the two years after a negative faecal immunochemical test in routine population-based colorectal cancer screening was less than one-fifth of the expected incidence. This is direct evidence that the faecal immunochemical test-based screening programme protocol has high sensitivity for cancers that will become symptomatic.
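The proportional-incidence figure (observed interval cancers divided by the cancers expected without screening) can be sketched as follows; the denominator of 770 expected cancers and the Poisson-based confidence interval are illustrative assumptions, not the study's exact standardization:

```python
import math

def proportional_incidence(observed, expected, z=1.96):
    """Interval-cancer proportional incidence: observed interval cancers
    divided by expected cancers, with an approximate 95% CI treating the
    observed count as Poisson (a sketch; the study's age/sex
    standardization is omitted)."""
    ratio = observed / expected
    se = math.sqrt(observed) / expected  # Poisson: var(observed) = observed
    return ratio, ratio - z * se, ratio + z * se

# e.g. 100 first-year interval cancers against a hypothetical 770 expected
r, lo, hi = proportional_incidence(100, 770)
```

A ratio well below 1 is what supports the record's conclusion that the programme misses fewer than one-fifth of cancers that would otherwise surface.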

9. Decoded fMRI neurofeedback can induce bidirectional confidence changes within single participants.

Science.gov (United States)

Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Lau, Hakwan; Kawato, Mitsuo

2017-04-01

Neurofeedback studies using real-time functional magnetic resonance imaging (rt-fMRI) have recently incorporated the multi-voxel pattern decoding approach, allowing for fMRI to serve as a tool to manipulate fine-grained neural activity embedded in voxel patterns. Because of its tremendous potential for clinical applications, certain questions regarding decoded neurofeedback (DecNef) must be addressed. Specifically, can the same participants learn to induce neural patterns in opposite directions in different sessions? If so, how does previous learning affect subsequent induction effectiveness? These questions are critical because neurofeedback effects can last for months, but the short- to mid-term dynamics of such effects are unknown. Here we employed a within-subjects design, where participants underwent two DecNef training sessions to induce behavioural changes of opposing directionality (up or down regulation of perceptual confidence in a visual discrimination task), with the order of training counterbalanced across participants. Behavioral results indicated that the manipulation was strongly influenced by the order and the directionality of neurofeedback training. We applied nonlinear mathematical modeling to parametrize four main consequences of DecNef: main effect of change in confidence, strength of down-regulation of confidence relative to up-regulation, maintenance of learning effects, and anterograde learning interference. Modeling results revealed that DecNef successfully induced bidirectional confidence changes in different sessions within single participants. Furthermore, the effect of up- compared to down-regulation was more prominent, and confidence changes (regardless of the direction) were largely preserved even after a week-long interval. Lastly, the effect of the second session was markedly diminished as compared to the effect of the first session, indicating strong anterograde learning interference. These results are interpreted in the framework

10. Confidence and the stock market: an agent-based approach.

Science.gov (United States)

Bertella, Mario A; Pires, Felipe R; Feng, Ling; Stanley, Harry Eugene

2014-01-01

Using a behavioral finance approach, we study the impact of behavioral bias. We construct an artificial market consisting of fundamentalists and chartists to model the decision-making process of various agents. The agents differ in their strategies for evaluating stock prices, and exhibit differing memory lengths and confidence levels. When we increase the heterogeneity of the strategies used by the agents, in particular the memory lengths, we observe excess volatility and kurtosis, in agreement with real market fluctuations, indicating that agents in real-world financial markets exhibit widely differing memory lengths. We incorporate the behavioral traits of adaptive confidence and observe a positive correlation between average confidence and return rate, indicating that market sentiment is an important driver in price fluctuations. The introduction of market confidence increases price volatility, reflecting the negative effect of irrationality in market behavior.

11. CERN confident of LHC start-up in 2007

CERN Document Server

2007-01-01

"Delegates attending the 140th meeting of CERN Council heard a confident report from the Laboratory about the scheduled start-up of the world's highest energy particle accelerator, the Large Hadron Collier (LHC), in 2007." (1 page)

12. Confidence Measurement in the Light of Signal Detection Theory

Directory of Open Access Journals (Sweden)

Sébastien Massoni

2014-12-01

We compare three alternative methods for eliciting retrospective confidence in the context of a simple perceptual task: the Simple Confidence Rating (a direct report on a numerical scale), the Quadratic Scoring Rule (a post-wagering procedure) and the Matching Probability (a generalization of the no-loss gambling method). We systematically compare the results obtained with these three rules to the theoretical confidence levels that can be inferred from performance in the perceptual task using Signal Detection Theory. We find that the Matching Probability provides better results in that respect. We conclude that Matching Probability is particularly well suited for studies of confidence that use Signal Detection Theory as a theoretical framework.
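The Signal Detection Theory benchmark that elicited confidence is compared against can be sketched for the equal-variance model; `dprime` and `expected_accuracy` are our own illustrative names, and the unbiased-criterion assumption is ours, not necessarily the study's:

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Sensitivity d' from hit and false-alarm rates under the
    equal-variance Gaussian SDT model."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def expected_accuracy(d):
    """Predicted proportion correct for a given d' in a yes/no task with
    an unbiased criterion: Phi(d'/2). This gives one theoretical level
    that reported confidence can be matched against."""
    return NormalDist().cdf(d / 2.0)
```

For example, hit rate 0.8 with false-alarm rate 0.2 gives d' ≈ 1.68 and a predicted accuracy of 0.8, so a well-calibrated observer's mean confidence should sit near 80%.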

13. Confidence-building measures in the Asia-Pacific region

International Nuclear Information System (INIS)

Qin Huasun

1991-01-01

The regional confidence-building, security and disarmament issues in the Asia-Pacific region, and in particular, support to non-proliferation regime and establishing nuclear-weapon-free zones are reviewed

14. Building Supervisory Confidence--A Key to Transfer of Training

Science.gov (United States)

Byham, William C.; Robinson, James

1977-01-01

A training concept is described which suggests that efforts toward maintaining and/or building the confidence of the participants in supervisory training programs can increase their likelihood of using the skills on the job. (TA)

15. Confidence assessment. Site-descriptive modelling SDM-Site Laxemar

International Nuclear Information System (INIS)

2009-06-01

The objective of this report is to assess the confidence that can be placed in the Laxemar site descriptive model, based on the information available at the conclusion of the surface-based investigations (SDM-Site Laxemar). In this exploration, an overriding question is whether remaining uncertainties are significant for repository engineering design or long-term safety assessment and could successfully be further reduced by more surface-based investigations or more usefully by explorations underground made during construction of the repository. Procedures for this assessment have been progressively refined during the course of the site descriptive modelling, and applied to all previous versions of the Forsmark and Laxemar site descriptive models. They include assessment of whether all relevant data have been considered and understood, identification of the main uncertainties and their causes, possible alternative models and their handling, and consistency between disciplines. The assessment then forms the basis for an overall confidence statement. The confidence in the Laxemar site descriptive model, based on the data available at the conclusion of the surface based site investigations, has been assessed by exploring: - Confidence in the site characterization data base, - remaining issues and their handling, - handling of alternatives, - consistency between disciplines and - main reasons for confidence and lack of confidence in the model. Generally, the site investigation database is of high quality, as assured by the quality procedures applied. It is judged that the Laxemar site descriptive model has an overall high level of confidence. Because of the relatively robust geological model that describes the site, the overall confidence in the Laxemar Site Descriptive model is judged to be high, even though details of the spatial variability remain unknown. The overall reason for this confidence is the wide spatial distribution of the data and the consistency between

16. Confidence assessment. Site-descriptive modelling SDM-Site Laxemar

Energy Technology Data Exchange (ETDEWEB)

2008-12-15

The objective of this report is to assess the confidence that can be placed in the Laxemar site descriptive model, based on the information available at the conclusion of the surface-based investigations (SDM-Site Laxemar). In this exploration, an overriding question is whether remaining uncertainties are significant for repository engineering design or long-term safety assessment and could successfully be further reduced by more surface-based investigations or more usefully by explorations underground made during construction of the repository. Procedures for this assessment have been progressively refined during the course of the site descriptive modelling, and applied to all previous versions of the Forsmark and Laxemar site descriptive models. They include assessment of whether all relevant data have been considered and understood, identification of the main uncertainties and their causes, possible alternative models and their handling, and consistency between disciplines. The assessment then forms the basis for an overall confidence statement. The confidence in the Laxemar site descriptive model, based on the data available at the conclusion of the surface based site investigations, has been assessed by exploring: - Confidence in the site characterization data base, - remaining issues and their handling, - handling of alternatives, - consistency between disciplines and - main reasons for confidence and lack of confidence in the model. Generally, the site investigation database is of high quality, as assured by the quality procedures applied. It is judged that the Laxemar site descriptive model has an overall high level of confidence. Because of the relatively robust geological model that describes the site, the overall confidence in the Laxemar Site Descriptive model is judged to be high, even though details of the spatial variability remain unknown. The overall reason for this confidence is the wide spatial distribution of the data and the consistency between

17. Microstructural gradients in thin hard coatings -- tailor-made

DEFF Research Database (Denmark)

Pantleon, Karen; Oettel, Heinrich

1998-01-01

) alternating sputtering with and without substrate voltage and (c) pulsed bias voltage. On the basis of X-ray diffraction measurements, it is demonstrated that residual stress gradients and texture gradients can be designed tailor-made. Furthermore, results of microhardness measurements and scratch tests...

18. Tailorable Trimethyl chitosans as adjuvant for intranasal immunization

NARCIS (Netherlands)

Verheul, R.J.

2010-01-01

Tailorable Trimethyl Chitosans as Adjuvant for Intranasal Immunization Active vaccination has proven to be the most (cost) effective tool in the fight against infectious diseases. Nowadays, most vaccines are administered via parenteral injection. However, the risk of contaminated needles and need

19. Enabling Tailored Music Programs in Elementary Schools: An Australian Exemplar

Science.gov (United States)

McFerran, Katrina Skewes; Crooke, Alexander Hew Dale

2014-01-01

Participation in meaningful school music programs is the right of all children. Although music education is widely supported by policy, significant gaps exist in practice in most developed Western countries. These gaps mean the extrinsic and intrinsic benefits associated with participation in tailored programs are not equally available to all…

20. Tailored Cloze: Improved with Classical Item Analysis Techniques.

Science.gov (United States)

Brown, James Dean

1988-01-01

The reliability and validity of a cloze procedure used as an English-as-a-second-language (ESL) test in China were improved by applying traditional item analysis and selection techniques. The 'best' test items were chosen on the basis of item facility and discrimination indices, and were administered as a 'tailored cloze.' 29 references listed.…

1. Comparing tailored and untailored text messages for smoking cessation

DEFF Research Database (Denmark)

Skov-Ettrup, L S; Ringgaard, L W; Dalum, P

2014-01-01

The aim was to compare the effectiveness of untailored text messages for smoking cessation to tailored text messages delivered at a higher frequency. From February 2007 to August 2009, 2030 users of an internet-based smoking cessation program with optional text message support aged 15-25 years were...... of text messages increases quit rates among young smokers....

2. Tailoring Dispersion properties of photonic crystal waveguides by topology optimization

DEFF Research Database (Denmark)

Stainko, Roman; Sigmund, Ole

2007-01-01

based design updates. The goal of the optimization process is to come up with slow light, zero group velocity dispersion photonic waveguides or photonic waveguides with tailored dispersion properties for dispersion compensation purposes. Two examples concerning reproduction of a specific dispersion...

3. Excellent bonding behaviour of novel surface-tailored fibre ...

Indian Academy of Sciences (India)

tured completely before pull-out, leading to full utilization of its tensile strength, and ... Composite rods; surface tailoring; cementitious matrix; pull-out test; bonding characteristics. 1. ... machine (Lloyd LR50K) at a speed of 0∙5 mm/min with a.

4. Promotion of Active Aging using a tailored recommendation system

NARCIS (Netherlands)

Cabrita, M.; Vollenbroek-Hutten, Miriam Marie Rosé

Active Aging deals with the support and integration of the elderly population in a society focusing on improving physical and mental well-being. Persuasive technology provides solutions for tailored interventions aiming at maintaining an active lifestyle. The present paper introduces the initial

5. Understanding the impact of graphene sheet tailoring on the ...

Indian Academy of Sciences (India)

Tailoring the channel decreases mobility and transmission probability to a great ... AGNR was used throughout the text so N denotes the number of dimer lines. Figure 2. .... ber of dimer lines and z is a positive integer) and through local density ...

6. Application-Tailored I/O with Streamline

NARCIS (Netherlands)

de Bruijn, W.J.; Bos, H.J.; Bal, H.E.

2011-01-01

Streamline is a stream-based OS communication subsystem that spans from peripheral hardware to userspace processes. It improves performance of I/O-bound applications (such as webservers and streaming media applications) by constructing tailor-made I/O paths through the operating system for each

7. Analysis, design and elastic tailoring of composite rotor blades

Science.gov (United States)

Rehfield, Lawrence W.; Atilgan, Ali R.

1987-01-01

The development of structural models for composite rotor blades is summarized. The models are intended for use in design analysis for the purpose of exploring the potential of elastic tailoring. The research was performed at the Center for Rotary Wing Aircraft Technology.

8. Tailor-made blanks for the aircraft industry

NARCIS (Netherlands)

2010-01-01

Tailor-Made Blanks (TMBs) are hybrid assemblies made of sheet metals with different materials and/or thicknesses that are joined together prior to forming. Alternatively, a monolithic sheet can be machined to create required thickness variations (machined TMBs). The possibility of having several

9. Tailoring Small IT Projects in the Project Planning Phase

Science.gov (United States)

Mulhearn, Michael F.

2011-01-01

Project management (PM) and systems engineering (SE) are essential skills in information technology (IT). There is an abundance of information available detailing the comprehensive bodies of knowledge, standards, and best practices. Despite the volume of information, there is surprisingly little information about how to tailor PM and SE tasks for…

10. Aerodynamic tailoring of the Learjet Model 60 wing

Science.gov (United States)

Chandrasekharan, Reuben M.; Hawke, Veronica M.; Hinson, Michael L.; Kennelly, Robert A., Jr.; Madson, Michael D.

1993-01-01

The wing of the Learjet Model 60 was tailored for improved aerodynamic characteristics using the TRANAIR transonic full-potential computational fluid dynamics (CFD) code. A root leading edge glove and wing tip fairing were shaped to reduce shock strength, improve cruise drag and extend the buffet limit. The aerodynamic design was validated by wind tunnel test and flight test data.

11. LPWA using supersonic gas jet with tailored density profile

Science.gov (United States)

Kononenko, O.; Bohlen, S.; Dale, J.; D'Arcy, R.; Dinter, M.; Erbe, J. H.; Indorf, G.; di Lucchio, L.; Goldberg, L.; Gruse, J. N.; Karstensen, S.; Libov, V.; Ludwig, K.; Martinez de La Ossa, A.; Marutzky, F.; Niroula, A.; Osterhoff, J.; Quast, M.; Schaper, L.; Schwinkendorf, J.-P.; Streeter, M.; Tauscher, G.; Weichert, S.; Palmer, C.; Horbatiuk, Taras

2016-10-01

Laser driven plasma wakefield accelerators have been explored as a potential compact, reproducible source of relativistic electron bunches, utilising an electric field of many GV/m. Control over injection of electrons into the wakefield is of crucial importance in producing stable, mono-energetic electron bunches. Density tailoring of the target, to control the acceleration process, can also be used to improve the quality of the bunch. By using gas jets to provide tailored targets it is possible to provide good access for plasma diagnostics while also producing sharp density gradients for density down-ramp injection. OpenFOAM hydrodynamic simulations were used to investigate the possibility of producing tailored density targets in a supersonic gas jet. Particle-in-cell simulations of the resulting density profiles modelled the effect of the tailored density on the properties of the accelerated electron bunch. Here, we present the simulation results together with preliminary experimental measurements of electron and x-ray properties from LPWA experiments using gas jet targets and a 25 TW, 25 fs Ti:Sa laser system at DESY.

12. The Promise of Tailoring Incentives for Healthy Behaviors.

Science.gov (United States)

Kullgren, Jeffrey T; Williams, Geoffrey C; Resnicow, Kenneth; An, Lawrence C; Rothberg, Amy; Volpp, Kevin G; Heisler, Michele

2016-01-01

To describe how tailoring financial incentives for healthy behaviors to employees' goals, values, and aspirations might improve the efficacy of incentives. We integrate insights from self-determination theory (SDT) with principles from behavioral economics in the design of financial incentives by linking how incentives could help meet an employee's life goals, values, or aspirations. Tailored financial incentives could be more effective than standard incentives in promoting autonomous motivation necessary to initiate healthy behaviors and sustain them after incentives are removed. Previous efforts to improve the design of financial incentives have tested different incentive designs that vary the size, schedule, timing, and target of incentives. Our strategy for tailoring incentives builds on strong evidence that difficult behavior changes are more successful when integrated with important life goals and values. We outline necessary research to examine the effectiveness of this approach among at-risk employees. Instead of offering simple financial rewards for engaging in healthy behaviors, existing programs could leverage incentives to promote employees' autonomous motivation for sustained health improvements. Effective application of these concepts could lead to programs more effective at improving health, potentially at lower cost. Our approach for the first time integrates key insights from SDT, behavioral economics, and tailoring to turn an extrinsic reward for behavior change into an internalized, self-sustaining motivator for long-term engagement in risk-reducing behaviors.

13. Sequential Interval Estimation of a Location Parameter with Fixed Width in the Nonregular Case

OpenAIRE

Koike, Ken-ichi

2007-01-01

For a location-scale parameter family of distributions with a finite support, a sequential confidence interval with a fixed width is obtained for the location parameter, and its asymptotic consistency and efficiency are shown. Some comparisons with the Chow-Robbins procedure are also done.

14. The Sense of Confidence during Probabilistic Learning: A Normative Account.

Directory of Open Access Journals (Sweden)

Florent Meyniel

2015-06-01

Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable "feeling of knowing" or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics, it seems
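A minimal version of such a normative confidence signal is the posterior spread of a Beta-Bernoulli learner estimating a transition probability; this sketch deliberately omits the changing-environment machinery the record describes, so it covers only the stable-period behavior:

```python
def beta_posterior(successes, failures, a0=1.0, b0=1.0):
    """Posterior over a transition probability after observing outcome
    counts, with a Beta(a0, b0) prior. Returns (mean estimate,
    posterior standard deviation); a smaller sd corresponds to higher
    normative confidence in the estimate."""
    a = a0 + successes
    b = b0 + failures
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5
```

As in the record, confidence (inverse posterior spread) grows with the number of observations accumulated within a stable period.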

15. Confidence limits for small numbers of events in astrophysical data

Science.gov (United States)

Gehrels, N.

1986-01-01

The calculation of limits for small numbers of astronomical counts is based on standard equations derived from Poisson and binomial statistics; although the equations are straightforward, their direct use is cumbersome and involves both table-interpolations and several mathematical operations. Convenient tables and approximate formulae are here presented for confidence limits which are based on such Poisson and binomial statistics. The limits in the tables are given for all confidence levels commonly used in astrophysics.
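Exact Poisson upper limits of the kind tabulated in this record can be reproduced numerically; the bisection approach below is our own illustrative sketch, not the paper's tables or approximate formulae:

```python
import math

def poisson_cdf(n, lam):
    """P(X <= n) for X ~ Poisson(lam), summed term by term."""
    term = total = math.exp(-lam)
    for k in range(1, n + 1):
        term *= lam / k
        total += term
    return total

def poisson_upper_limit(n, cl=0.95):
    """Upper confidence limit for a Poisson mean given n observed counts:
    the lam solving P(X <= n; lam) = 1 - cl, found by bisection."""
    lo, hi = 0.0, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n, mid) > 1.0 - cl:
            lo = mid  # CDF still too large: lam must grow
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For zero observed counts at 95% confidence this yields ln(20) ≈ 3.00, the familiar "three counts" upper limit quoted in astrophysics.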

16. Non-Asymptotic Confidence Sets for Circular Means

Directory of Open Access Journals (Sweden)

Thomas Hotz

2016-10-01

The mean of data on the unit circle is defined as the minimizer of the average squared Euclidean distance to the data. Based on Hoeffding’s mass concentration inequalities, non-asymptotic confidence sets for circular means are constructed which are universal in the sense that they require no distributional assumptions. These are then compared with asymptotic confidence sets in simulations and for a real data set.
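The circular mean defined in the abstract (minimizer of the average squared Euclidean distance on the unit circle) reduces to the direction of the resultant vector; a minimal sketch of that definition, without the Hoeffding-based confidence sets themselves:

```python
import math

def circular_mean(angles):
    """Mean direction of angles in radians: the direction of the sum of
    the corresponding unit vectors, which minimizes the average squared
    Euclidean distance to the data points on the circle."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c)
```

Note that angles near ±pi average to pi, not to zero, which is why ordinary arithmetic means fail on circular data and dedicated confidence sets are needed.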

17. Learning style and confidence: an empirical investigation of Japanese employees

OpenAIRE

Yoshitaka Yamazaki

2012-01-01

This study aims to examine how learning styles relate to employees' confidence through a view of Kolb's experiential learning theory. For this aim, an empirical investigation was conducted using the sample of 201 Japanese employees who work for a Japanese multinational corporation. Results illustrated that the learning style group of acting orientation described a significantly higher level of job confidence than that of reflecting orientation, whereas the two groups of feeling and thinking o...

18. Probability Distribution for Flowing Interval Spacing

International Nuclear Information System (INIS)

Kuzio, S.

2001-01-01

The purpose of this analysis is to develop a probability distribution for flowing interval spacing. A flowing interval is defined as a fractured zone that transmits flow in the Saturated Zone (SZ), as identified through borehole flow meter surveys (Figure 1). This analysis uses the term "flowing interval spacing" as opposed to fracture spacing, which is typically used in the literature. The term fracture spacing was not used in this analysis because the data used identify a zone (or a flowing interval) that contains fluid-conducting fractures but do not distinguish how many or which fractures comprise the flowing interval. The flowing interval spacing is measured between the midpoints of each flowing interval. Fracture spacing within the SZ is defined as the spacing between fractures, with no regard to which fractures are carrying flow. The Development Plan associated with this analysis is entitled "Probability Distribution for Flowing Interval Spacing" (CRWMS M and O 2000a). The parameter from this analysis may be used in the TSPA SR/LA Saturated Zone Flow and Transport Work Direction and Planning Documents: (1) "Abstraction of Matrix Diffusion for SZ Flow and Transport Analyses" (CRWMS M and O 1999a) and (2) "Incorporation of Heterogeneity in SZ Flow and Transport Analyses" (CRWMS M and O 1999b). A limitation of this analysis is that the probability distribution of flowing interval spacing may underestimate the effect of incorporating matrix diffusion processes in the SZ transport model because of the possible overestimation of the flowing interval spacing. Larger flowing interval spacing results in a decrease in the matrix diffusion processes. This analysis may overestimate the flowing interval spacing because the number of fractures that contribute to a flowing interval cannot be determined from the data. Because each flowing interval probably has more than one fracture contributing to it, the true flowing interval spacing could be

19. The Development of Confidence Limits for Fatigue Strength Data

International Nuclear Information System (INIS)

SUTHERLAND, HERBERT J.; VEERS, PAUL S.

1999-01-01

Over the past several years, extensive databases have been developed for the S-N behavior of various materials used in wind turbine blades, primarily fiberglass composites. These data are typically presented both in their raw form and curve fit to define their average properties. For design, confidence limits must be placed on these descriptions. In particular, most designs call for the 95/95 design values; namely, with a 95% level of confidence, the designer is assured that 95% of the material will meet or exceed the design value. For such material properties as the ultimate strength, the procedure for estimating its value at a particular confidence level is well defined if the measured values follow a normal or a log-normal distribution. Namely, based upon the number of sample points and their standard deviation, a commonly found table may be used to determine the survival percentage at a particular confidence level with respect to its mean value. The same is true for fatigue data at a constant stress level (the number of cycles to failure, N, at a stress level S_1). However, when the stress level is allowed to vary, as with a typical S-N fatigue curve, the procedures for determining confidence limits are not as well defined. This paper outlines techniques for determining confidence limits of fatigue data. Different approaches to estimating the 95/95 level are compared. Data from the MSU/DOE and the FACT fatigue databases are used to illustrate typical results.
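For a normal sample at a single stress level, the 95/95 one-sided design value is the sample mean minus k times the sample standard deviation; the k-factor below uses a classical normal approximation to the noncentral-t factor (an assumption on our part, standing in for the tables the record mentions; exact values differ slightly for small n):

```python
from statistics import NormalDist
import math

def k_factor(n, coverage=0.95, confidence=0.95):
    """Approximate one-sided normal tolerance-limit factor k, so that
    (mean - k * sd) is exceeded by `coverage` of the population with
    probability `confidence`, given n samples. Uses the classical
    normal approximation rather than the exact noncentral-t value."""
    z = NormalDist().inv_cdf
    zp, zg = z(coverage), z(confidence)
    a = 1.0 - zg * zg / (2.0 * (n - 1))
    b = zp * zp - zg * zg / n
    return (zp + math.sqrt(zp * zp - a * b)) / a

# illustrative use: 95/95 design strength from 20 hypothetical coupon tests
# design_value = sample_mean - k_factor(20) * sample_sd
```

For n = 20 this gives k ≈ 2.38, close to the exact tabulated 95/95 value of about 2.40, and it shows why the factor grows well beyond 1.645 when sample sizes are small.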

20. Learning to make collective decisions: the impact of confidence escalation.

Science.gov (United States)

2013-01-01

Little is known about how people learn to take into account others' opinions in joint decisions. To address this question, we combined computational and empirical approaches. Human dyads made individual and joint visual perceptual decisions and rated their confidence in those decisions (data previously published). We trained a reinforcement (temporal difference) learning agent to take the participants' confidence levels as input and learn to arrive at a dyadic decision by finding the policy that either maximized the accuracy of the model decisions or maximally conformed to the empirical dyadic decisions. When confidences were shared visually without verbal interaction, RL agents successfully captured social learning. When participants exchanged confidences visually and interacted verbally, no collective benefit was achieved and the model failed to predict the dyadic behaviour. Behaviourally, dyad members' confidence increased progressively and verbal interaction accelerated this escalation. The success of the model in drawing collective benefit from dyad members was inversely related to the confidence escalation rate. The findings show that an automated learning agent can, in principle, combine individual opinions and achieve collective benefit, but the same agent cannot discount the escalation, suggesting that one cognitive component of collective decision making in humans may involve discounting of overconfidence arising from interactions.

1. Restricted Interval Valued Neutrosophic Sets and Restricted Interval Valued Neutrosophic Topological Spaces

Directory of Open Access Journals (Sweden)

Anjan Mukherjee

2016-08-01

In this paper we introduce the concept of restricted interval valued neutrosophic sets (RIVNS for short). Some basic operations and properties of RIVNS are discussed. The concept of restricted interval valued neutrosophic topology is also introduced, together with restricted interval valued neutrosophic finer and restricted interval valued neutrosophic coarser topology. We also define the restricted interval valued neutrosophic interior and closure of a restricted interval valued neutrosophic set. Some theorems and examples are cited. Restricted interval valued neutrosophic subspace topology is also studied.

2. Development of free statistical software enabling researchers to calculate confidence levels, clinical significance curves and risk-benefit contours

International Nuclear Information System (INIS)

Shakespeare, T.P.; Mukherjee, R.K.; Gebski, V.J.

2003-01-01

Confidence levels, clinical significance curves, and risk-benefit contours are tools that improve the analysis of clinical studies and minimize misinterpretation of published results; however, no software had been available for their calculation. The objective was to develop software to help clinicians utilize these tools. Excel 2000 spreadsheets were designed using only built-in functions, without macros. The workbook was protected and encrypted so that users can modify only input cells. The workbook has 4 spreadsheets for use in studies comparing two patient groups. Sheet 1 comprises instructions and graphic examples for use. Sheet 2 allows the user to input the main study results (e.g. survival rates) into a 2-by-2 table. Confidence intervals (95%), the p-value and the confidence level for Treatment A being better than Treatment B are automatically generated. An additional input cell allows the user to determine the confidence associated with a specified level of benefit. For example, if the user wishes to know the confidence that Treatment A is at least 10% better than B, 10% is entered. Sheet 2 automatically displays clinical significance curves, graphically illustrating confidence levels for all possible benefits of one treatment over the other. Sheet 3 allows input of toxicity data, and calculates the confidence that one treatment is more toxic than the other. It also determines the confidence that the relative toxicity of the most effective arm does not exceed a user-defined tolerability. Sheet 4 automatically calculates risk-benefit contours, displaying the confidence associated with a specified scenario of minimum benefit and maximum risk of one treatment arm over the other. The spreadsheet is freely downloadable at www.ontumor.com/professional/statistics.htm. A simple, self-explanatory, freely available spreadsheet calculator was developed using Excel 2000. The incorporated decision-making tools can be used for data analysis and improve the reporting of results of any
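Sheet 2's core calculations (a 95% CI for the difference between two rates, plus the confidence that Treatment A exceeds B by at least a chosen margin) follow the usual normal approximation and can be sketched as below; the input figures are invented and the spreadsheet's exact formulas may differ.

```python
from math import sqrt, erf

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def benefit_confidence(x_a, n_a, x_b, n_b, margin=0.0):
    """95% CI for the difference of two rates (normal approximation) and the
    one-sided confidence that rate A exceeds rate B by at least `margin`."""
    pa, pb = x_a / n_a, x_b / n_b
    diff = pa - pb
    se = sqrt(pa * (1 - pa) / n_a + pb * (1 - pb) / n_b)
    ci = (diff - 1.96 * se, diff + 1.96 * se)
    conf = norm_cdf((diff - margin) / se)   # P(true benefit > margin)
    return diff, ci, conf

# e.g. 60/100 survive on Treatment A vs 45/100 on Treatment B
diff, ci, conf = benefit_confidence(60, 100, 45, 100)
print(round(diff, 2), [round(v, 2) for v in ci], round(conf, 3))
```

Setting `margin=0.10` would answer the sheet's "at least 10% better" question.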

3. Extended score interval in the assessment of basic surgical skills.

Science.gov (United States)

Acosta, Stefan; Sevonius, Dan; Beckman, Anders

2015-01-01

The Basic Surgical Skills course uses an assessment score interval of 0-3. An extended score interval, 1-6, was proposed by the Swedish steering committee of the course. The aim of this study was to analyze the trainee scores in the current 0-3 scored version compared to a proposed 1-6 scored version. Sixteen participants, seven females and nine males, were evaluated in the current and proposed assessment forms by instructors, observers, and the learners themselves during the first and second day. In each assessment form, 17 tasks were assessed. The inter-rater reliability between the current and the proposed score sheets was evaluated with intraclass correlation (ICC) with 95% confidence intervals (CI). The distribution of scores for 'knot tying' at the last time point and 'bowel anastomosis side to side' given by the instructors in the current assessment form showed that the highest score was given in 31% and 62% of cases, respectively. No ceiling effects were found in the proposed assessment form. The overall ICC between the current and proposed score sheets after assessment by the instructors increased from 0.38 (95% CI 0.77-0.78) on Day 1 to 0.83 (95% CI 0.51-0.94) on Day 2. A clear ceiling effect of scores was demonstrated in the current assessment form, questioning its validity. The proposed score sheet provides more accurate scores and seems to be a better feedback instrument for learning technical surgical skills in the Basic Surgical Skills course.

4. Haematological and biochemical reference intervals for free-ranging brown bears (Ursus arctos) in Sweden

DEFF Research Database (Denmark)

Græsli, Anne Randi; Fahlman, Åsa; Evans, Alina L.

2014-01-01

Background: Establishment of haematological and biochemical reference intervals is important to assess the health of animals at the individual and population level. Reference intervals for 13 haematological and 34 biochemical variables were established based on 88 apparently healthy free-ranging brown bears … and marking for ecological studies. For each of the variables, the reference interval was described based on the 95% confidence interval, and differences due to the host characteristics sex and age were included if detected. To our knowledge, this is the first report of reference intervals for free-ranging brown … and the differences due to the host factors age and gender can be useful for evaluation of health status in free-ranging European brown bears …

5. CIMP status of interval colon cancers: another piece to the puzzle.

Science.gov (United States)

Arain, Mustafa A; Sawhney, Mandeep; Sheikh, Shehla; Anway, Ruth; Thyagarajan, Bharat; Bond, John H; Shaukat, Aasma

2010-05-01

Colon cancers diagnosed in the interval after a complete colonoscopy may occur due to limitations of colonoscopy or due to the development of new tumors, possibly reflecting molecular and environmental differences in tumorigenesis resulting in rapid tumor growth. In a previous study from our group, interval cancers (colon cancers diagnosed within 5 years of a complete colonoscopy) were almost four times more likely to demonstrate microsatellite instability (MSI) than non-interval cancers. In this study we extended our molecular analysis to compare the CpG island methylator phenotype (CIMP) status of interval and non-interval colorectal cancers and investigate the relationship between the CIMP and MSI pathways in the pathogenesis of interval cancers. We searched our institution's cancer registry for interval cancers, defined as colon cancers that developed within 5 years of a complete colonoscopy. These were frequency matched in a 1:2 ratio by age and sex to patients with non-interval cancers (defined as colon cancers diagnosed on a patient's first recorded colonoscopy). Archived cancer specimens for all subjects were retrieved and tested for CIMP gene markers. The MSI status of subjects identified between 1989 and 2004 was known from our previous study. Tissue specimens of newly identified cases and controls (between 2005 and 2006) were tested for MSI. There were 1,323 cases of colon cancer diagnosed over the 17-year study period, of which 63 were identified as having interval cancer and matched to 131 subjects with non-interval cancer. Study subjects were almost all Caucasian men. CIMP was present in 57% of interval cancers compared to 33% of non-interval cancers (P=0.004). As shown previously, interval cancers were more likely than non-interval cancers to occur in the proximal colon (63% vs. 39%; P=0.002) and to have MSI (29% vs. 11%; P=0.004). In a multivariable logistic regression model, proximal location (odds ratio (OR) 1.85; 95% confidence interval (CI) 1

6. Prolonged corrected QT interval is predictive of future stroke events even in subjects without ECG-diagnosed left ventricular hypertrophy.

Science.gov (United States)

Ishikawa, Joji; Ishikawa, Shizukiyo; Kario, Kazuomi

2015-03-01

We attempted to evaluate whether subjects who exhibit prolonged corrected QT (QTc) interval (≥440 ms in men and ≥460 ms in women) on ECG, with and without ECG-diagnosed left ventricular hypertrophy (ECG-LVH; Cornell product, ≥244 mV×ms), are at increased risk of stroke. Among the 10 643 subjects, there were a total of 375 stroke events during the follow-up period (128.7±28.1 months; 114 142 person-years). The subjects with prolonged QTc interval (hazard ratio, 2.13; 95% confidence interval, 1.22-3.73) had an increased risk of stroke even after adjustment for ECG-LVH (hazard ratio, 1.71; 95% confidence interval, 1.22-2.40). When we stratified the subjects into those with neither a prolonged QTc interval nor ECG-LVH, those with a prolonged QTc interval but without ECG-LVH, and those with ECG-LVH, multivariate-adjusted Cox proportional hazards analysis demonstrated that the subjects with prolonged QTc intervals but not ECG-LVH (1.2% of all subjects; incidence, 10.7%; hazard ratio, 2.70; 95% confidence interval, 1.48-4.94) and those with ECG-LVH (incidence, 7.9%; hazard ratio, 1.83; 95% confidence interval, 1.31-2.57) had an increased risk of stroke events, compared with those with neither a prolonged QTc interval nor ECG-LVH. In conclusion, prolonged QTc interval was associated with stroke risk even among patients without ECG-LVH in the general population. © 2014 American Heart Association, Inc.
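The study's sex-specific classification rule is simple enough to state as code; a minimal sketch using the cut-offs quoted in the abstract:

```python
def prolonged_qtc(qtc_ms, sex):
    """Flag a prolonged corrected QT interval per the study's cut-offs:
    >= 440 ms in men ("M"), >= 460 ms in women."""
    return qtc_ms >= (440 if sex == "M" else 460)

# The same QTc value can be prolonged for a man but not for a woman.
print(prolonged_qtc(445, "M"), prolonged_qtc(445, "F"))
```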

7. Conditional prediction intervals of wind power generation

DEFF Research Database (Denmark)

Pinson, Pierre; Kariniotakis, Georges

2010-01-01

A generic method for providing prediction intervals of wind power generation is described. Prediction intervals complement the more common wind power point forecasts by giving a range of potential outcomes for a given probability, their so-called nominal coverage rate. Ideally they inform … on the characteristics of prediction errors for providing conditional interval forecasts. By simultaneously generating prediction intervals with various nominal coverage rates, one obtains full predictive distributions of wind generation. Adapted resampling is applied here to the case of an onshore Danish wind farm … to the case of a large number of wind farms in Europe and Australia, among others, is finally discussed …
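The basic idea of building a prediction interval with a chosen nominal coverage from past forecast errors can be sketched as follows. This is a crude stand-in for the paper's adapted-resampling method: the error history is synthetic, and the interval is simply the point forecast shifted by empirical error quantiles.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical history of forecast errors (measured minus predicted power),
# drawn from a skewed distribution purely for illustration.
errors = rng.gamma(2.0, 1.0, 1000) - 2.0

def prediction_interval(point_forecast, errors, coverage=0.90):
    """Empirical interval: point forecast plus error quantiles chosen so the
    interval has the requested nominal coverage rate."""
    tail = (1 - coverage) / 2
    lo_e, hi_e = np.quantile(errors, [tail, 1 - tail])
    return point_forecast + lo_e, point_forecast + hi_e

lo, hi = prediction_interval(50.0, errors, coverage=0.90)
print(round(float(lo), 1), round(float(hi), 1))
```

Repeating this for several coverage rates (e.g. 50%, 80%, 90%) yields the nested intervals that approximate a full predictive distribution, as the abstract notes.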

8. Stability in the metamemory realism of eyewitness confidence judgments.

Science.gov (United States)

Buratti, Sandra; Allwood, Carl Martin; Johansson, Marcus

2014-02-01

9. Intuitive Feelings of Warmth and Confidence in Insight and Noninsight Problem Solving of Magic Tricks

Science.gov (United States)

Hedne, Mikael R.; Norman, Elisabeth; Metcalfe, Janet

2016-01-01

The focus of the current study is on intuitive feelings of insight during problem solving and the extent to which such feelings are predictive of successful problem solving. We report the results from an experiment (N = 51) that applied a procedure where the to-be-solved problems were 32 short (15 s) video recordings of magic tricks. The procedure included metacognitive ratings similar to the “warmth ratings” previously used by Metcalfe and colleagues, as well as confidence ratings. At regular intervals during problem solving, participants indicated the perceived closeness to the correct solution. Participants also indicated directly whether each problem was solved by insight or not. Problems that people claimed were solved by insight were characterized by higher accuracy and higher confidence than noninsight solutions. There was no difference between the two types of solution in warmth ratings, however. Confidence ratings were more strongly associated with solution accuracy for noninsight than insight trials. Moreover, for insight trials the participants were more likely to repeat their incorrect solutions on a subsequent recognition test. The results have implications for understanding people's metacognitive awareness of the cognitive processes involved in problem solving. They also have general implications for our understanding of how intuition and insight are related. PMID:27630598

10. Registered nurse leadership style and confidence in delegation.

Science.gov (United States)

Saccomano, Scott J; Pinto-Zipp, Genevieve

2011-05-01

11. Determining the confidence levels of sensor outputs using neural networks

International Nuclear Information System (INIS)

Broten, G.S.; Wood, H.C.

1995-01-01

This paper describes an approach for determining the confidence level of a sensor output using multi-sensor arrays, sensor fusion and artificial neural networks. The authors have shown in previous work that sensor fusion and artificial neural networks can be used to learn the relationships between the outputs of an array of simulated partially selective sensors and the individual analyte concentrations in a mixture of analytes. Other researchers have shown that an array of partially selective sensors can be used to determine the individual gas concentrations in a gaseous mixture. The research reported in this paper shows that it is possible to extract confidence level information from an array of partially selective sensors using artificial neural networks. The confidence level of a sensor output is defined as a numeric value, ranging from 0% to 100%, that indicates the confidence associated with the output of a given sensor. A three-layer back-propagation neural network was trained on a subset of the sensor confidence level space, and was tested for its ability to generalize, where the confidence level space is defined as all possible deviations from the correct sensor output. A learning rate of 0.1 was used and no momentum terms were used in the neural network. This research has shown that an artificial neural network can accurately estimate the confidence level of individual sensors in an array of partially selective sensors. This research has also shown that the neural network's ability to determine the confidence level is influenced by the complexity of the sensor's response and that the neural network is able to estimate the confidence levels even if more than one sensor is in error. The fundamentals behind this research could be applied to other configurations besides arrays of partially selective sensors, such as an array of sensors separated spatially. An example of such a configuration could be an array of temperature sensors in a tank that is not in
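The setup can be illustrated with a toy that is not the paper's model: three made-up partially selective sensors whose correct outputs are fixed mixtures of two analyte concentrations, a deviation injected into one sensor per example, and a three-layer back-propagation network (learning rate 0.1, no momentum, matching the abstract) trained to report per-sensor confidence as 1 minus the deviation magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sensor mixing matrix: sensor output = mixture of 2 concentrations.
W_mix = np.array([[0.8, 0.2], [0.5, 0.5], [0.3, 0.7]])

def make_batch(n):
    conc = rng.uniform(0, 1, (n, 2))
    clean = conc @ W_mix.T                     # correct sensor outputs
    dev = rng.uniform(-1, 1, n)                # injected error
    idx = rng.integers(0, 3, n)                # which sensor deviates
    x = clean.copy()
    x[np.arange(n), idx] += dev
    y = np.ones((n, 3))                        # confidence targets in [0, 1]
    y[np.arange(n), idx] = 1 - np.abs(dev)
    return x, y

# Three-layer network, squared-error back-propagation, lr 0.1, no momentum.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 3)); b2 = np.zeros(3)
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(3000):
    x, y = make_batch(64)
    h = sig(x @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)        # output-layer delta
    d_h = d_out @ W2.T * h * (1 - h)           # hidden-layer delta
    W2 -= 0.1 * h.T @ d_out / 64; b2 -= 0.1 * d_out.mean(0)
    W1 -= 0.1 * x.T @ d_h / 64;  b1 -= 0.1 * d_h.mean(0)

x, y = make_batch(500)
pred = sig(sig(x @ W1 + b1) @ W2 + b2)
print("mean abs error:", round(float(np.abs(pred - y).mean()), 3))
```

The deviation is recoverable because clean outputs lie on a 2-D subspace of the 3-D sensor space, which is the flavour of redundancy the paper exploits with multi-sensor arrays.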

12. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

Science.gov (United States)

Kottas, Martina; Kuss, Oliver; Zapf, Antonia

2014-02-19

The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability, so we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation, and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to that of the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data are not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
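The "pocket calculator" character of the proposal can be sketched with a plain Wald interval that treats the AUC as a single proportion with the total sample size; the paper's exact modification may differ, so the details below are an assumption, not the authors' formula.

```python
from math import sqrt

def wald_auc_ci(auc, n, continuity=False):
    """Wald-type 95% interval treating the AUC as a proportion, optionally
    with a 1/(2n) continuity correction, clipped to [0, 1]."""
    z = 1.96                      # 97.5th percentile of the standard normal
    se = sqrt(auc * (1 - auc) / n)
    cc = 1 / (2 * n) if continuity else 0.0
    lo = max(0.0, auc - z * se - cc)
    hi = min(1.0, auc + z * se + cc)
    return lo, hi

# e.g. estimated AUC 0.85 from a study with 60 subjects in total
print([round(v, 3) for v in wald_auc_ci(0.85, 60)])
```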

13. Market Confidence Predicts Stock Price: Beyond Supply and Demand.

Science.gov (United States)

Sun, Xiao-Qian; Shen, Hua-Wei; Cheng, Xue-Qi; Zhang, Yuqing

2016-01-01

Stock price prediction is an important and challenging problem in stock market analysis. Existing prediction methods either exploit autocorrelation of stock price and its correlation with the supply and demand of stock, or explore predictive indictors exogenous to stock market. In this paper, using transaction record of stocks with identifier of traders, we introduce an index to characterize market confidence, i.e., the ratio of the number of traders who is active in two successive trading days to the number of active traders in a certain trading day. Strong Granger causality is found between the index of market confidence and stock price. We further predict stock price by incorporating the index of market confidence into a neural network based on time series of stock price. Experimental results on 50 stocks in two Chinese Stock Exchanges demonstrate that the accuracy of stock price prediction is significantly improved by the inclusion of the market confidence index. This study sheds light on using cross-day trading behavior to characterize market confidence and to predict stock price.
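The index as described in the abstract is straightforward to compute from a trader-level activity log. A minimal sketch with made-up data; which of the two days anchors the denominator is an assumption here, as the abstract leaves it ambiguous.

```python
# Hypothetical transaction log: trading day -> set of active trader ids.
activity = {
    1: {"a", "b", "c", "d"},
    2: {"a", "b", "e"},
    3: {"b", "e", "f", "g"},
}

def market_confidence(activity, day):
    """Ratio of traders active on both day-1 and day to the traders active
    on day-1 (denominator choice assumed)."""
    prev, cur = activity[day - 1], activity[day]
    return len(prev & cur) / len(prev)

print(market_confidence(activity, 2), market_confidence(activity, 3))
```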

14. Conquering Credibility for Monetary Policy Under Sticky Confidence

Directory of Open Access Journals (Sweden)

Jaylson Jair da Silveira

2015-06-01

Full Text Available We derive a best-reply monetary policy when the confidence of price setters in the monetary authority's commitment to price level targeting may be both incomplete and sticky. We find that complete confidence (or full credibility) is not a necessary condition for the achievement of a price level target even when heterogeneity in firms' price level expectations is endogenously time-varying and may emerge as a long-run equilibrium outcome. In fact, in the absence of exogenous perturbations to the dynamics of confidence building, it is the achievement of a price level target for long enough that, due to stickiness in the state of confidence, ensures the conquering of full credibility. This result has relevant implications for the conduct of monetary policy in pursuit of price stability. One implication is that setting a price level target matters more as a means to provide monetary policy with a sharper focus on price stability than as a device to conquer credibility. As regards the conquering of credibility for monetary policy, it turns out that actions speak louder than words, as the continuing achievement of price stability is what ultimately performs better as a confidence-building device.

16. Kangaroo Care Education Effects on Nurses' Knowledge and Skills Confidence.

Science.gov (United States)

Almutairi, Wedad Matar; Ludington-Hoe, Susan M

2016-11-01

Less than 20% of the 996 NICUs in the United States routinely practice kangaroo care, due in part to the inadequate knowledge and skills confidence of nurses. Continuing education improves knowledge and skills acquisition, but the effects of a kangaroo care certification course on nurses' knowledge and skills confidence are unknown. A pretest-posttest quasi-experiment was conducted. The Kangaroo Care Knowledge and Skills Confidence Tool was administered to 68 RNs at a 2.5-day course about kangaroo care evidence and skills. Measures of central tendency, dispersion, and paired t tests were conducted on 57 questionnaires. The nurses' characteristics were varied. The mean posttest Knowledge score (M = 88.54, SD = 6.13) was significantly higher than the pretest score (M = 78.7, SD = 8.30; t[54] = -9.1, p < .001), as was the posttest Skills Confidence score (pretest M = 32.06, SD = 3.49; posttest M = 26.80, SD = 5.22; t[53] = -8.459, p < .001). The nurses' knowledge and skills confidence of kangaroo care improved following continuing education, suggesting a need for continuing education in this area. J Contin Educ Nurs. 2016;47(11):518-524. Copyright 2016, SLACK Incorporated.
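The paired t test used here compares each nurse's pretest and posttest scores on the same instrument. A minimal stdlib sketch with invented scores (not the study's data):

```python
from math import sqrt

def paired_t(pre, post):
    """Paired t statistic on difference scores (post - pre); returns
    the t value and the degrees of freedom (n - 1)."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / sqrt(var / n), n - 1

# Illustrative (made-up) knowledge scores for 6 nurses.
pre  = [78, 80, 75, 82, 77, 79]
post = [88, 87, 85, 90, 86, 89]
t, df = paired_t(pre, post)
print(round(t, 2), df)
```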

17. Institutional Confidence in the United States: Attitudes of Secular Americans

Directory of Open Access Journals (Sweden)

Isabella Kasselstrand

2017-04-01

Full Text Available The First Amendment to the United States' Constitution addresses freedom of religion and the separation of church and state. However, the historical influence of religion on laws, policies, and political representation has left secular individuals feeling excluded. At the same time, levels of confidence in social and political institutions in the United States are at an all-time low. This raises the question: Is there a relationship between secularity and confidence in various social and political institutions (e.g. the armed forces, churches, major companies, government, police, and political parties)? This question is examined using data on the United States from the World Values Survey from 1995–2011. While controlling for a range of key demographics, the findings show a negative relationship between secularity and institutional confidence. More specifically, atheists and nonreligious individuals are less likely than those who are religious to have confidence in all six institutions. Based on previous literature and the empirical evidence presented in this study, we argue that the overall lower level of institutional confidence among secular Americans is an outcome of the exclusion of such individuals from American social life. Thus, it highlights the importance of addressing the stereotypes and prejudice that this minority group faces.

18. Nurse leader certification preparation: how are confidence levels impacted?

Science.gov (United States)

Junger, Stacey; Trinkle, Nicole; Hall, Norma

2016-09-01

19. Maximum-confidence discrimination among symmetric qudit states

International Nuclear Information System (INIS)

Jimenez, O.; Solis-Prosser, M. A.; Delgado, A.; Neves, L.

2011-01-01

We study the maximum-confidence (MC) measurement strategy for discriminating among nonorthogonal symmetric qudit states. Restricting to linearly dependent and equally likely pure states, we find the optimal positive operator valued measure (POVM) that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results. The physical realization of this POVM is completely determined and it is shown that after an inconclusive outcome, the input states may be mapped into a new set of equiprobable symmetric states, restricted, however, to a subspace of the original qudit Hilbert space. By applying the MC measurement again onto this new set, we can still gain some information about the input states, although with less confidence than before. This leads us to introduce the concept of sequential maximum-confidence (SMC) measurements, where the optimized MC strategy is iterated in as many stages as allowed by the input set, until no further information can be extracted from an inconclusive result. Within each stage of this measurement our confidence in identifying the input states is the highest possible, although it decreases from one stage to the next. In addition, the more stages we accomplish within the maximum allowed, the higher will be the probability of correct identification. We will discuss an explicit example of the optimal SMC measurement applied in the discrimination among four symmetric qutrit states and propose an optical network to implement it.

20. Emotor control: computations underlying bodily resource allocation, emotions, and confidence.

Science.gov (United States)

Kepecs, Adam; Mensh, Brett D

2015-12-01

Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience-approaching subjective behavior as the result of mental computations instantiated in the brain-to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This "emotor" control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on "confidence." Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior.

1. Nurses' training and confidence on deep venous catheterization.

Science.gov (United States)

Liachopoulou, A P; Synodinou-Kamilou, E E; Deligiannidi, P G; Giannakopoulou, M; Birbas, K N

2008-01-01

The rough estimation of the education and the self-confidence of nurses, both students and professionals, regarding deep venous catheterization in adult patients, the evaluation of the change in self-confidence of one team of students who were trained with a simulator on deep venous catheterization, and the correlation of their self-confidence with their performance recorded by the simulator. Seventy-six nurses and one hundred twenty-four undergraduate students participated in the study. Forty-four University students took part in a two-day educational seminar and were trained on subclavian and femoral vein paracentesis with a simulator and an anatomical model. Three questionnaires were filled in by the participants: one by the nurses and one by the students of Technological institutions, while the University students filled in the latter questionnaire before attending the seminar, and another questionnaire after having attended it. Impressive results in improving the participants' self-confidence were recorded. However, the weak correlation of their self-confidence with the score automatically provided by the simulator after each user's training obligates us to be particularly cautious about the ability of the users to repeat the action successfully in a clinical environment. Educational courses and simulators are useful educational tools that are likely to shorten, but in no case can efface, the early phase of the learning curve in the clinical setting, substituting the clinical training of inexperienced users.

2. Interval logic. Proof theory and theorem proving

DEFF Research Database (Denmark)

Rasmussen, Thomas Marthedal

2002-01-01

of a direction of an interval, and present a sound and complete Hilbert proof system for it. Because of its generality, SIL can conveniently act as a general formalism in which other interval logics can be encoded. We develop proof theory for SIL including both a sequent calculus system and a labelled natural...

3. Risk factors for QTc interval prolongation

NARCIS (Netherlands)

Heemskerk, Charlotte P.M.; Pereboom, Marieke; van Stralen, Karlijn; Berger, Florine A.; van den Bemt, Patricia M.L.A.; Kuijper, Aaf F.M.; van der Hoeven, Ruud T M; Mantel-Teeuwisse, Aukje K.; Becker, Matthijs L

2018-01-01

Purpose: Prolongation of the QTc interval may result in Torsade de Pointes, a ventricular arrhythmia. Numerous risk factors for QTc interval prolongation have been described, including the use of certain drugs. In clinical practice, there is much debate about the management of the risks involved. In

4. Interval Forecast for Smooth Transition Autoregressive Model ...

African Journals Online (AJOL)

In this paper, we propose a simple method for constructing interval forecast for smooth transition autoregressive (STAR) model. This interval forecast is based on bootstrapping the residual error of the estimated STAR model for each forecast horizon and computing various Akaike information criterion (AIC) function. This new ...
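The residual-bootstrap idea behind this kind of forecast interval can be sketched as follows. For brevity the STAR model and the AIC-based selection step of the abstract are replaced by a plain least-squares AR(1); the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate an AR(1) series as stand-in data.
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.normal()

# Least-squares fit of y[t] = phi * y[t-1] + e[t].
phi = float(np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1]))
resid = y[1:] - phi * y[:-1]

def bootstrap_interval(last, horizon, reps=2000, coverage=0.95):
    """Propagate many forecast paths, resampling the fitted residuals at
    each step, and take empirical quantiles at the final horizon."""
    paths = np.full(reps, last)
    for _ in range(horizon):
        paths = phi * paths + rng.choice(resid, reps)
    tail = (1 - coverage) / 2
    return np.quantile(paths, [tail, 1 - tail])

lo, hi = bootstrap_interval(y[-1], horizon=3)
print(round(float(lo), 2), round(float(hi), 2))
```

For a STAR model the propagation step would apply the fitted smooth-transition function instead of the single coefficient `phi`.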

5. New interval forecast for stationary autoregressive models ...

African Journals Online (AJOL)

In this paper, we proposed a new forecasting interval for stationary Autoregressive, AR(p) models using the Akaike information criterion (AIC) function. Ordinarily, the AIC function is used to determine the order of an AR(p) process. In this study however, AIC forecast interval compared favorably with the theoretical forecast ...

6. QT interval in healthy dogs: which method of correcting the QT interval in dogs is appropriate for use in small animal clinics?

Directory of Open Access Journals (Sweden)

Maira S. Oliveira

2014-05-01

Full Text Available The electrocardiographic (ECG) QT interval is influenced by fluctuations in heart rate (HR), which may lead to misinterpretation of its length. Considering that alterations in QT interval length reflect abnormalities of ventricular repolarisation which predispose to the occurrence of arrhythmias, this variable must be properly evaluated. The aim of this work is to determine which method of correcting the QT interval is the most appropriate for dogs with regard to different ranges of normal HR (different breeds). Healthy adult dogs (n=130; German Shepherd, Boxer, Pit Bull Terrier, and Poodle) were submitted to ECG examination and QT intervals were determined in triplicate from the bipolar limb lead II and corrected for the effects of HR through the application of three published formulae involving quadratic, cubic or linear regression. The mean corrected QT values (QTc) obtained using the diverse formulae were significantly different (p<0.05), while those derived according to the equation QTcV = QT + 0.087(1 - RR) were the most consistent (linear regression). QTcV values were strongly correlated (r=0.83) with the QT interval and showed a coefficient of variation of 8.37% and a 95% confidence interval of 0.22-0.23 s. Owing to its simplicity and reliability, the QTcV was considered the most appropriate for the correction of the QT interval in dogs.
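The linear correction preferred in the abstract is a one-line formula; a minimal sketch, assuming (as is conventional for this formula) that QT and RR are expressed in seconds:

```python
def qtc_linear(qt_s, rr_s):
    """Linear QT correction from the abstract: QTcV = QT + 0.087 * (1 - RR),
    with the QT and RR intervals in seconds."""
    return qt_s + 0.087 * (1 - rr_s)

# e.g. QT = 0.21 s measured at HR 120 bpm (RR = 0.5 s)
print(round(qtc_linear(0.21, 0.5), 4))
```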

7. Expressing Intervals in Automated Service Negotiation

Science.gov (United States)

Clark, Kassidy P.; Warnier, Martijn; van Splunter, Sander; Brazier, Frances M. T.

During automated negotiation of services between autonomous agents, utility functions are used to evaluate the terms of negotiation. These terms often include intervals of values which are prone to misinterpretation. It is often unclear if an interval embodies a continuum of real numbers or a subset of natural numbers. Furthermore, it is often unclear if an agent is expected to choose only one value, multiple values, a sub-interval or even multiple sub-intervals. Additional semantics are needed to clarify these issues. Normally, these semantics are stored in a domain ontology. However, ontologies are typically domain specific and static in nature. For dynamic environments, in which autonomous agents negotiate resources whose attributes and relationships change rapidly, semantics should be made explicit in the service negotiation. This paper identifies issues that are prone to misinterpretation and proposes a notation for expressing intervals. This notation is illustrated using an example in WS-Agreement.

8. Reviewing interval cancers: Time well spent?

International Nuclear Information System (INIS)

Gower-Thomas, Kate; Fielder, Hilary M.P.; Branston, Lucy; Greening, Sarah; Beer, Helen; Rogers, Cerilan

2002-01-01

OBJECTIVES: To categorize interval cancers, and thus identify false-negatives, following prevalent and incident screens in the Welsh breast screening programme. SETTING: Breast Test Wales (BTW) Llandudno, Cardiff and Swansea breast screening units. METHODS: Five hundred and sixty interval breast cancers identified following negative mammographic screening between 1989 and 1997 were reviewed by eight screening radiologists. The blind review was achieved by mixing the screening films of women who subsequently developed an interval cancer with screen-negative films of women who did not develop cancer, in a ratio of 4:1. Another radiologist used patients' symptomatic films to record a reference against which the reviewers' reports of the screening films were compared. Interval cancers were categorized as 'true', 'occult', 'false-negative' or 'unclassified' interval cancers or interval cancers with minimal signs, based on the National Health Service breast screening programme (NHSBSP) guidelines. RESULTS: Of the classifiable interval films, 32% were false-negatives, 55% were true intervals and 12% occult. The proportion of false-negatives following incident screens was half that following prevalent screens (P = 0.004). Forty percent of the seed films were recalled by the panel. CONCLUSIONS: Low false-negative interval cancer rates following incident screens (18%) versus prevalent screens (36%) suggest that lower cancer detection rates at incident screens may have resulted from fewer cancers than expected being present, rather than from a failure to detect tumours. The panel method for categorizing interval cancers has significant flaws, as the results vary markedly with different protocols, and it is no more accurate than other, quicker and more timely methods. Gower-Thomas, K. et al. (2002)

9. Exploring Self-Confidence Level of High School Students Doing Sport

Directory of Open Access Journals (Sweden)

Nurullah Emir Ekinci

2014-10-01

Full Text Available The aim of this study was to investigate the self-confidence levels of high school students who do sport, with respect to their gender, sport branch (individual/team sports) and aim of participating in sport (professional/amateur). 185 active high school students from Kutahya voluntarily participated in the study. A self-confidence scale was used as the data gathering tool. In the evaluation of the data, the Mann-Whitney U non-parametric test was used as the hypothesis test. As a result, the self-confidence levels of participants showed significant differences according to their gender and sport branch, but there was no significant difference according to the aim of participating in sport.

10. Building and strengthening confidence and security in Asia

International Nuclear Information System (INIS)

Corden, P.S.

1992-01-01

This paper presents a few thoughts on the question of building and strengthening confidence and security in Asia, in particular in the area centred on the Korean peninsula. This question includes the process of establishing and implementing confidence- and security-building measures (CSBMs), some of which might involve States other than North and South Korea. The development of CSBMs is now well established in Europe, and there are encouraging signs that such measures are taking hold in other areas of the world, including Korea. Consequently there is a fairly rich mine of information, precedent and experience from which to draw in focusing on the particular subject at hand. In these remarks the concept of confidence- and security-building is briefly addressed, measures that have proven useful in other circumstances are examined, and some possibilities that appear of interest in the present context are reviewed

11. Perceptual learning effect on decision and confidence thresholds.

Science.gov (United States)

Solovey, Guillermo; Shalom, Diego; Pérez-Schuster, Verónica; Sigman, Mariano

2016-10-01

12. Label-Driven Learning Framework: Towards More Accurate Bayesian Network Classifiers through Discrimination of High-Confidence Labels

Directory of Open Access Journals (Sweden)

Yi Sun

2017-12-01

Full Text Available Bayesian network classifiers (BNCs) have demonstrated competitive classification accuracy in a variety of real-world applications. However, it is error-prone for BNCs to discriminate among high-confidence labels. To address this issue, we propose the label-driven learning framework, which incorporates instance-based learning and ensemble learning. For each testing instance, high-confidence labels are first selected by a generalist classifier, e.g., the tree-augmented naive Bayes (TAN) classifier. Then, by focusing on these labels, conditional mutual information is redefined to more precisely measure mutual dependence between attributes, thus leading to a refined generalist with a more reasonable network structure. To enable finer discrimination, an expert classifier is tailored for each high-confidence label. Finally, the predictions of the refined generalist and the experts are aggregated. We extend TAN to LTAN (Label-driven TAN) by applying the proposed framework. Extensive experimental results demonstrate that LTAN delivers superior classification accuracy compared to not only several state-of-the-art single-structure BNCs but also some established ensemble BNCs, at the expense of reasonable computation overhead.

13. Aerogels in Chemical Engineering: Strategies Toward Tailor-Made Aerogels.

Science.gov (United States)

Smirnova, Irina; Gurikov, Pavel

2017-06-07

The present review deals with recent advances in the rapidly growing field of aerogel research and technology. The major focus of the review lies in approaches that allow tailoring of aerogel properties to meet application-driven requirements. The decisive properties of aerogels are discussed with regard to existing and potential application areas. Various tailoring strategies, such as modulation of the pore structure, coating, surface modification, and post-treatment, are illustrated by results of the last decade. In view of commercialization of aerogel-based products, a panorama of current industrial aerogel suppliers is given, along with a discussion of possible alternative sources for raw materials and precursors. Finally, growing points and perspectives of the aerogel field are summarized.

14. Field profile tailoring in a-Si:H radiation detectors

International Nuclear Information System (INIS)

Fujieda, I.; Cho, G.; Conti, M.; Drewery, J.; Kaplan, S.N.; Perez-Mendez, V.; Quershi, S.; Wildermuth, D.; Street, R.A.

1990-03-01

The capability of tailoring the field profile in reverse-biased a-Si:H diodes by doping and/or manipulating electrode shapes opens a way to many interesting device structures. Charge collection in a-Si:H radiation detectors is improved for high LET particle detection by inserting thin doped layers into the i-layer of the usual p-i-n diode. This buried p-i-n structure enables us to apply higher reverse-bias and the electric field is enhanced in the mid i-layer. Field profiles of the new structures are calculated and the improved charge collection process is discussed. Also discussed is the possibility of field profile tailoring by utilizing the fixed space charges in i-layers and/or manipulating electrode shapes of the reverse-biased p-i-n diodes. 10 refs., 7 figs

15. Highly tailorable thiol-ene based emulsion-templated monoliths

DEFF Research Database (Denmark)

Lafleur, J. P.; Kutter, J. P.

2014-01-01

The attractive surface properties of thiol-ene polymers combined with their ease of processing make them ideal substrates in many bioanalytical applications. We report the synthesis of highly tailorable emulsion-templated porous polymers and beads in microfluidic devices based on off-stoichiometry thiol-ene chemistry. The method allows monolith synthesis and anchoring inside thiol-ene microchannels in a single step. Variations in the monomer stoichiometric ratios and/or amount of porogen used allow for the creation of extremely varied polymer morphologies, from foam-like materials to dense networks.

16. FSW of Aluminum Tailor Welded Blanks across Machine Platforms

Energy Technology Data Exchange (ETDEWEB)

Hovanski, Yuri; Upadhyay, Piyush; Carlson, Blair; Szymanski, Robert; Luzanski, Tom; Marshall, Dustin

2015-02-16

Development and characterization of friction stir welded aluminum tailor welded blanks was successfully carried out on three separate machine platforms. Each was a commercially available, gantry-style, multi-axis machine designed specifically for friction stir welding. Weld parameters were developed to support high volume production of dissimilar thickness aluminum tailor welded blanks at speeds of 3 m/min and greater. Parameters originally developed on an ultra-high stiffness servo driven machine were first transferred to a high stiffness servo-hydraulic friction stir welding machine, and subsequently transferred to a purpose-built machine designed to accommodate thin sheet aluminum welding. The inherent beam stiffness, bearing compliance, and control system of each machine were distinctly unique, which posed specific challenges in transferring welding parameters across machine platforms. This work documents the challenges of successfully transferring weld parameters across machines produced by different manufacturers and with unique control systems and interfaces.

17. Conservatism implications of shock test tailoring for multiple design environments

Science.gov (United States)

Baca, Thomas J.; Bell, R. Glenn; Robbins, Susan A.

1987-01-01

A method for analyzing shock conservatism in test specifications that have been tailored to qualify a structure for multiple design environments is discussed. Shock test conservatism is quantified for shock response spectra, shock intensity spectra and ranked peak acceleration data in terms of an Index of Conservatism (IOC) and an Overtest Factor (OTF). The multi-environment conservatism analysis addresses the issue of both absolute and average conservatism. The method is demonstrated in a case where four laboratory tests have been specified to qualify a component which must survive seven different field environments. Final judgment of the tailored test specification is shown to require an understanding of the predominant failure modes of the test item.

18. PULSED MODE LASER CUTTING OF SHEETS FOR TAILORED BLANKS

DEFF Research Database (Denmark)

Bagger, Claus; Olsen, Flemming Ove

1999-01-01

This paper describes how the laser cutting process can be optimised in such a way that the cut sheets can subsequently be used to laser weld tailored blanks. In a number of systematic laboratory experiments the effect of cutting speed, assist gas pressure, average laser power and pulse energy was analysed. For quality assessment the squareness, roughness and dross attachment of laser cut blanks were measured. In all tests, the medium strength steel GA 260 with a thickness of 1.8 mm was used. In this work it has been successfully demonstrated that the squareness of a cut can be used as a quality item for parameter optimisation of laser cut sheets used for tailored blanks. It was concluded that high quality cut edges with a squareness as small as 0.015 mm may be obtained. Such edges are well suited for subsequent laser welding.

19. Nearest unlike neighbor (NUN): an aid to decision confidence estimation

Science.gov (United States)

Dasarathy, Belur V.

1995-09-01

The concept of nearest unlike neighbor (NUN), proposed and explored previously in the design of nearest neighbor (NN) based decision systems, is further exploited in this study to develop a measure of confidence in the decisions made by NN-based decision systems. This measure of confidence, on the basis of comparison with a user-defined threshold, may be used to determine the acceptability of the decision provided by the NN-based decision system. The concepts, associated methodology, and some illustrative numerical examples using the now classical Iris data to bring out the ease of implementation and effectiveness of the proposed innovations are presented.
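As a sketch of the idea, one plausible NUN-based confidence measure (not necessarily Dasarathy's exact formulation; the function name and the particular ratio are our own) compares the distance to the nearest neighbour with the distance to the nearest unlike neighbour, i.e. the closest sample bearing a different label:

```python
import math

def nun_confidence(x, samples, labels):
    """Return (predicted_label, confidence) for a query point x.

    The decision is the nearest neighbour's (NN) label. Confidence is
    d_NUN / (d_NN + d_NUN), which lies in (0, 1] and approaches 1 when
    the nearest unlike neighbour (NUN) is far away relative to the NN.
    """
    dists = [math.dist(x, s) for s in samples]
    nn_idx = min(range(len(samples)), key=dists.__getitem__)
    pred = labels[nn_idx]
    d_nn = dists[nn_idx]
    d_nun = min(d for d, lab in zip(dists, labels) if lab != pred)
    return pred, d_nun / (d_nn + d_nun)

pts = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0)]
labs = ["a", "a", "b"]
label, conf = nun_confidence((0.0, 0.1), pts, labs)
print(label, round(conf, 3))
```

A user-defined threshold on this confidence value, as the abstract describes, then decides whether the NN-based decision is accepted or rejected.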

20. Building, measuring and improving public confidence in the nuclear regulator

International Nuclear Information System (INIS)

2006-01-01

An important factor for public confidence in the nuclear regulator is the general public trust of the government and its representatives, which is clearly not the same in all countries. Likewise, cultural differences between countries can be considerable, and similar means of communication between government authorities and the public may not be universally effective. Nevertheless, this workshop identified a number of common principles for the communication of nuclear regulatory decisions that can be recommended to all regulators. They have been cited in particular for their ability to help build, measure and/or improve overall public confidence in the nuclear regulator. (author)

1. Lower Confidence Bounds for the Probabilities of Correct Selection

Directory of Open Access Journals (Sweden)

2011-01-01

Full Text Available We extend the results of Gupta and Liang (1998), derived for location parameters, to obtain lower confidence bounds for the probability of correctly selecting the t best populations (PCSt) simultaneously for all t=1,…,k−1 for the general scale parameter models, where k is the number of populations involved in the selection problem. The application of the results to the exponential and normal probability models is discussed. The implementation of the simultaneous lower confidence bounds for PCSt is illustrated through real-life datasets.

2. Effect of False Confidence on Asset Allocation Decisions of Households

Directory of Open Access Journals (Sweden)

Swarn Chatterjee

2014-01-01

Full Text Available This paper investigates whether false confidence, as characterized by a high level of personal mastery and a low level of intelligence (IQ), results in frequent investor trading and subsequent investor wealth erosion across time. Using the National Longitudinal Survey (NLSY79), change in wealth and asset allocation across time is modeled as a function of various behavioral, socio-economic and demographic variables drawn from prior literature. Findings of this research reveal that false confidence is indeed a predictor of trading activity in individual investment assets, and it also has a negative impact on individual wealth creation across time.

3. Confidence bounds of recurrence-based complexity measures

International Nuclear Information System (INIS)

Schinkel, Stefan; Marwan, N.; Dimigen, O.; Kurths, J.

2009-01-01

In the recent past, recurrence quantification analysis (RQA) has gained an increasing interest in various research areas. The complexity measures the RQA provides have been useful in describing and analysing a broad range of data. It is known to be rather robust to noise and nonstationarities. Yet, one key question in empirical research concerns the confidence bounds of measured data. In the present Letter we suggest a method for estimating the confidence bounds of recurrence-based complexity measures. We study the applicability of the suggested method with model and real-life data.

4. Older widows and married women: their intimates and confidants.

Science.gov (United States)

Babchuk, N; Anderson, T B

1989-01-01

Interview data obtained from 132 women sixty-five and older reveals that the widows and married women have a comparable number of primary friends. Being over age seventy-four influences the size of the friendship network for widows but not married women. The primary friendships of widows and married women parallel each other in terms of endurance and stability. Primary ties with men are the exception rather than the norm, for both widows and married women. Widows do differ from married women in that the former rely on confidant friends to a greater extent. Ties between older women and their confidants are characterized by norms of reciprocity.

5. The effect of terrorism on public confidence : an exploratory study.

Energy Technology Data Exchange (ETDEWEB)

Berry, M. S.; Baldwin, T. E.; Samsa, M. E.; Ramaprasad, A.; Decision and Information Sciences

2008-10-31

A primary goal of terrorism is to instill a sense of fear and vulnerability in a population and to erode confidence in government and law enforcement agencies to protect citizens against future attacks. In recognition of its importance, the Department of Homeland Security includes public confidence as one of the metrics it uses to assess the consequences of terrorist attacks. Hence, several factors--including a detailed understanding of the variations in public confidence among individuals, by type of terrorist event, and as a function of time--are critical to developing this metric. In this exploratory study, a questionnaire was designed, tested, and administered to small groups of individuals to measure public confidence in the ability of federal, state, and local governments and their public safety agencies to prevent acts of terrorism. Data were collected from the groups before and after they watched mock television news broadcasts portraying a smallpox attack, a series of suicide bomber attacks, a refinery bombing, and cyber intrusions on financial institutions that resulted in identity theft and financial losses. Our findings include the following: (a) the subjects can be classified into at least three distinct groups on the basis of their baseline outlook--optimistic, pessimistic, and unaffected; (b) the subjects make discriminations in their interpretations of an event on the basis of the nature of a terrorist attack, the time horizon, and its impact; (c) the recovery of confidence after a terrorist event has an incubation period and typically does not return to its initial level in the long-term; (d) the patterns of recovery of confidence differ between the optimists and the pessimists; and (e) individuals are able to associate a monetary value with a loss or gain in confidence, and the value associated with a loss is greater than the value associated with a gain. These findings illustrate the importance the public places in their confidence in government

6. Quantum condensation from a tailored exciton population in a microcavity

International Nuclear Information System (INIS)

Eastham, P. R.; Phillips, R. T.

2009-01-01

An experiment is proposed on the coherent quantum dynamics of a semiconductor microcavity containing quantum dots. Modeling the experiment using a generalized Dicke model, we show that a tailored excitation pulse can create an energy-dependent population of excitons, which subsequently evolves to a quantum condensate of excitons and photons. The population is created by a generalization of adiabatic rapid passage and then condenses due to a dynamical analog of the BCS instability.

7. Towards collaborative filtering recommender systems for tailored health communications.

Science.gov (United States)

Marlin, Benjamin M; Adams, Roy J; Sadasivam, Rajani; Houston, Thomas K

2013-01-01

The goal of computer tailored health communications (CTHC) is to promote healthy behaviors by sending messages tailored to individual patients. Current CTHC systems collect baseline patient "profiles" and then use expert-written, rule-based systems to target messages to subsets of patients. Our main interest in this work is the study of collaborative filtering-based CTHC systems that can learn to tailor future message selections to individual patients based on explicit feedback about past message selections. This paper reports the results of a study designed to collect explicit feedback (ratings) regarding four aspects of messages from 100 subjects in the smoking cessation support domain. Our results show that most users have positive opinions of most messages and that the ratings for all four aspects of the messages are highly correlated with each other. Finally, we conduct a range of rating prediction experiments comparing several different model variations. Our results show that predicting future ratings based on each user's past ratings contributes the most to predictive accuracy.

8. A porous ceramic membrane tailored high-temperature supercapacitor

Science.gov (United States)

Zhang, Xin; He, Benlin; Zhao, Yuanyuan; Tang, Qunwei

2018-03-01

Supercapacitors that can operate at high temperature are promising for a marked increase in capacitance because of accelerated charge movement. However, state-of-the-art polymer-based membranes decompose at high temperature. Inspired by solid oxide fuel cells, we present here the experimental realization of high-temperature supercapacitors (HTSCs) tailored with a porous ceramic separator fabricated from yttria-stabilized zirconia (YSZ) and nickel oxide (NiO). Using activated carbon electrodes and a supporting electrolyte of potassium hydroxide (KOH) aqueous solution, a category of symmetrical HTSCs is built in comparison with a conventional polymer membrane based device. The dependence of capacitance performance on temperature is carefully studied, yielding a maximized specific capacitance of 272 F g-1 at 90 °C for the optimized HTSC tailored by the NiO/YSZ membrane. Moreover, the resultant HTSC shows relatively high durability under repeated measurement over 1000 cycles at 90 °C, while the polymer membrane based supercapacitor shows a significant reduction in capacitance at 60 °C. The high capacitance along with durability demonstrates that NiO/YSZ membrane tailored HTSCs are promising for future advanced energy storage devices.

9. Flavin-catalyzed redox tailoring reactions in natural product biosynthesis.

Science.gov (United States)

Teufel, Robin

2017-10-15

Natural products are distinct and often highly complex organic molecules that constitute not only an important drug source, but have also pushed the field of organic chemistry by providing intricate targets for total synthesis. How the astonishing structural diversity of natural products is enzymatically generated in biosynthetic pathways remains a challenging research area, which requires detailed and sophisticated approaches to elucidate the underlying catalytic mechanisms. Commonly, the diversification of precursor molecules into distinct natural products relies on the action of pathway-specific tailoring enzymes that catalyze, e.g., acylations, glycosylations, or redox reactions. This review highlights a selection of tailoring enzymes that employ riboflavin (vitamin B2)-derived cofactors (FAD and FMN) to facilitate unusual redox catalysis and steer the formation of complex natural product pharmacophores. Remarkably, several such recently reported flavin-dependent tailoring enzymes expand the classical paradigms of flavin biochemistry leading, e.g., to the discovery of the flavin-N5-oxide - a novel flavin redox state and oxygenating species. Copyright © 2017 Elsevier Inc. All rights reserved.

10. Manufacturing of tailored tubes with a process integrated heat treatment

Science.gov (United States)

Hordych, Illia; Boiarkin, Viacheslav; Rodman, Dmytro; Nürnberger, Florian

2017-10-01

The use of workpieces with tailored properties allows for reducing costs and material use. One example is tailored tubes that can be used as end parts, e.g. in the automotive industry or in domestic applications, as well as semi-finished products for subsequent controlled deformation processes. An innovative technology to manufacture such tubes is roll forming with subsequent inductive heating and adapted quenching to obtain tailored properties in the longitudinal direction. This processing offers great potential for the production of tubes with a wide range of properties, although this novel approach still requires a suitable process design. Based on experimental data, a process simulation is being developed. The simulation shall be suitable for a virtual design of the tubes and allow for gaining a deeper understanding of the required processing. The proposed model shall predict microstructural and mechanical tube properties by considering process parameters, different geometries, batch-related influences, etc. A validation is carried out using experimental data from tubes manufactured from various steel grades.

11. Tailor cutting of crystalline solar cells by laser micro jet

Science.gov (United States)

Bruckert, F.; Pilat, E.; Piron, P.; Torres, P.; Carron, B.; Richerzhagen, B.; Pirot, M.; Monna, R.

2012-03-01

Coupling a laser into a hair-thin water micro jet (Laser Micro Jet, LMJ) for cutting applications offers a wide range of processes that are quite unique. As the laser beam is guided by internal reflections inside of a liquid cylinder, the cuts are naturally straight and do not reflect any divergence as otherwise occurs with an unguided laser beam. Furthermore, having a liquid medium at the point of contact ensures fast removal of heat and any debris, ensuring clean cuts which are free of any burrs. Many applications have indeed been developed for a large variety of materials as different as e.g. diamond, silicon, aluminum, ceramic and hard metals. The photovoltaic industry has enjoyed tremendous growth rates in the last decades, which are still projected into the future. We focus here on the segment of Building Integrated PV (BIPV), which requires solutions tailored to actual buildings rather than one-size-fits-all standardized modules. Having the option to tailor-cut solar cells opens a new field of BIPV applications. For the first time, finished crystalline solar cells have been LMJ-cut into predetermined shapes. First results show that the cut is clean and neat. Preliminary solar performance measurements are positive. This opens a new avenue of tailor-made modules instead of having to rely on the one-fits-all approach used so far.

12. Feasibility of tailoring of press formed thermoplastic composite parts

Science.gov (United States)

Sinke, J.

2018-05-01

The Tailor Made Blank concept is widely accepted in the production of sheet metal parts. By joining, adding and subtracting materials, and sometimes even applying different alloys, parts can be produced more efficiently by cost and/or weight, and new design options have been discovered. This paper concerns the manufacture of press formed parts of Fibre Reinforced Thermoplastics (FRTP) and the evaluation of whether the Tailoring concept, though adapted to the material behavior of FRTP, can be applied to these composites as well. The first results and ideas from this research are presented. One of the ideas is the multistep forming process, creating parts with thickness variations and combinations of fibre orientations that are usually not feasible using common press forming strategies. Another idea is the blending of different prepreg materials in one component. This might be useful for specific details, such as areas of mechanical fastening, or to avoid carbon/metal contact, which otherwise results in severe corrosion. In a brief overview, future perspectives on the potential of the Tailoring concept are presented.

13. Change in Breast Cancer Screening Intervals Since the 2009 USPSTF Guideline.

Science.gov (United States)

Wernli, Karen J; Arao, Robert F; Hubbard, Rebecca A; Sprague, Brian L; Alford-Teaster, Jennifer; Haas, Jennifer S; Henderson, Louise; Hill, Deidre; Lee, Christoph I; Tosteson, Anna N A; Onega, Tracy

2017-08-01

In 2009, the U.S. Preventive Services Task Force (USPSTF) recommended biennial mammography for women aged 50-74 years and shared decision-making for women aged 40-49 years for breast cancer screening. We evaluated changes in mammography screening interval after the 2009 recommendations. We conducted a prospective cohort study of women aged 40-74 years who received 821,052 screening mammograms between 2006 and 2012 using data from the Breast Cancer Surveillance Consortium. We compared changes in screening intervals and stratified intervals based on whether the mammogram at the end of the interval occurred before or after the 2009 recommendation. Differences in mean interval length by woman-level characteristics were compared using linear regression. The mean interval (in months) minimally decreased after the 2009 USPSTF recommendations. Among women aged 40-49 years, the mean interval decreased from 17.2 months to 17.1 months (difference -0.16%, 95% confidence interval [CI] -0.30 to -0.01). Similar small reductions were seen for most age groups. The largest change in interval length in the post-USPSTF period was declines among women with a first-degree family history of breast cancer (difference -0.68%, 95% CI -0.82 to -0.54) or a 5-year breast cancer risk ≥2.5% (difference -0.58%, 95% CI -0.73 to -0.44). The 2009 USPSTF recommendation did not lengthen the average mammography interval among women routinely participating in mammography screening. Future studies should evaluate whether breast cancer screening intervals lengthen toward biennial intervals following new national 2016 breast cancer screening recommendations, particularly among women less than 50 years of age.

14. INTERVAL OBSERVER FOR A BIOLOGICAL REACTOR MODEL

Directory of Open Access Journals (Sweden)

T. A. Kharkovskaia

2014-05-01

Full Text Available The method of interval observer design for nonlinear systems with parametric uncertainties is considered. The interval observer synthesis problem for systems with varying parameters is as follows: given an uncertainty constraint on the state values of the system, limiting the initial conditions of the system and the set of admissible values for the vector of unknown parameters and inputs, the interval existence condition for the estimates of the system state variables, containing the actual state at a given time, must hold over the whole considered time segment. Conditions for the design of interval observers for the considered class of systems are shown. They are: boundedness of the input and state, the existence of a majorizing function defining the uncertainty vector for the system, Lipschitz continuity or finiteness of this function, and the existence of an observer gain with a suitable Lyapunov matrix. The main condition for the design of such a device is cooperativity of the interval estimation error dynamics. The problem of selecting an individual observer gain matrix is considered. In order to ensure the property of cooperativity for the interval estimation error dynamics, a static transformation of coordinates is proposed. The proposed algorithm is demonstrated by computer modeling of a biological reactor. Possible applications of these interval estimation systems are the spheres of robust control, where the presence of various types of uncertainties in the system dynamics is assumed, biotechnology and environmental systems and processes, mechatronics and robotics, etc.

15. Magnetic Resonance Fingerprinting with short relaxation intervals.

Science.gov (United States)

Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter

2017-09-01

The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially

16. Probability intervals for the top event unavailability of fault trees

International Nuclear Information System (INIS)

Lee, Y.T.; Apostolakis, G.E.

1976-06-01

The evaluation of probabilities of rare events is of major importance in the quantitative assessment of the risk from large technological systems. In particular, for nuclear power plants the complexity of the systems, their high reliability and the lack of significant statistical records have led to the extensive use of logic diagrams in the estimation of low probabilities. The estimation of probability intervals for the occurrence probability of the top event of a fault tree is examined. Given the uncertainties of the primary input data, a method is described for the evaluation of the first four moments of the top event occurrence probability. These moments are then used to estimate confidence bounds by several approaches based on standard inequalities (e.g., Tchebycheff, Cantelli, etc.) or on empirical distributions (the Johnson family). Several examples indicate that the Johnson family of distributions yields results which are in good agreement with those produced by Monte Carlo simulation.
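The moment-based bounding idea can be sketched numerically. The fault tree below is a toy (a single AND gate over two independent basic events with lognormally distributed probabilities), chosen only for illustration; it is not the paper's system. Cantelli's one-sided inequality turns the first two moments of the top-event probability into a conservative upper bound, which a Monte Carlo quantile of the same sample then puts in perspective.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Uncertain basic-event probabilities (assumed lognormal input data).
p1 = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n)
p2 = rng.lognormal(mean=np.log(2e-3), sigma=0.5, size=n)
top = p1 * p2  # AND gate: top event requires both basic events

mu = top.mean()
sigma = top.std()

# Cantelli's inequality: P(X >= mu + k*sigma) <= 1 / (1 + k^2).
# Setting 1/(1 + k^2) = 0.05 gives k = sqrt(19) for a 95% one-sided bound.
k = np.sqrt(19.0)
cantelli_95 = mu + k * sigma

# Empirical 95th percentile from the same Monte Carlo sample.
empirical_95 = np.quantile(top, 0.95)
```

As the abstract notes, distribution-free inequalities like Cantelli's are valid for any distribution but conservative; the empirical (or Johnson-family) bound is tighter when the distributional shape is trustworthy.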

17. Advanced Interval Management: A Benefit Analysis

Science.gov (United States)

Timer, Sebastian; Peters, Mark

2016-01-01

This document is the final report for the NASA Langley Research Center (LaRC)-sponsored task order 'Possible Benefits for Advanced Interval Management Operations.' Under this research project, Architecture Technology Corporation performed an analysis to determine the maximum potential benefit to be gained if specific Advanced Interval Management (AIM) operations were implemented in the National Airspace System (NAS). The motivation for this research is to guide NASA decision-making on which Interval Management (IM) applications offer the most potential benefit and warrant further research.

18. Generalized production planning problem under interval uncertainty

Directory of Open Access Journals (Sweden)

Samir A. Abass

2010-06-01

Data in many real-life engineering and economic problems suffer from inexactness. Here we assume that we are given intervals within which the data can perturb simultaneously and independently. We consider the generalized production planning problem with interval data, where the intervals appear in both the objective function and the constraints. We build on existing results concerning the qualitative and quantitative analysis of basic notions in the parametric production planning problem: the set of feasible parameters, the solvability set and the stability set of the first kind.
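A minimal numerical sketch of interval data in the objective, under assumptions not taken from the paper: for a fixed nonnegative production plan x, objective coefficients that perturb independently within intervals [c_lo, c_hi] yield an interval of attainable profits, attained coefficient-wise at the interval endpoints.

```python
import numpy as np

# Assumed illustrative data: two products with interval unit profits.
c_lo = np.array([3.0, 5.0])   # pessimistic unit profits
c_hi = np.array([4.0, 6.5])   # optimistic unit profits
x = np.array([10.0, 8.0])     # a fixed feasible production plan (x >= 0)

# Because x >= 0 and each coefficient perturbs independently, the profit
# interval is attained at the coefficient interval endpoints.
profit_lo = float(c_lo @ x)   # 3*10 + 5*8   = 70
profit_hi = float(c_hi @ x)   # 4*10 + 6.5*8 = 92
```

The full problem studied in the abstract also has interval data in the constraints, which changes the feasible set itself rather than just the objective value; this sketch covers only the objective side.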

19. Reconstruction of dynamical systems from interspike intervals

International Nuclear Information System (INIS)

Sauer, T.

1994-01-01

Attractor reconstruction from interspike interval (ISI) data is described, in rough analogy with Takens' theorem for attractor reconstruction from time series. Assuming a generic integrate-and-fire model coupling the dynamical system to the spike train, there is a one-to-one correspondence between the system states and interspike interval vectors of sufficiently large dimension. The correspondence has an important implication: interspike intervals can be forecast from past history. We show that deterministically driven ISI series can be distinguished from stochastically driven ISI series on the basis of prediction error.
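The determinism test described above can be sketched as follows. The driving signal, thresholds, and predictor below are assumptions for illustration, not the paper's experiment: interspike intervals from an integrate-and-fire model driven by a deterministic (quasi-periodic) input are embedded as delay vectors and forecast one step ahead with a nearest-neighbour predictor; the same predictor does much worse on a shuffled copy of the series, whose temporal structure has been destroyed.

```python
import numpy as np

def integrate_and_fire(signal, theta=1.0):
    """Fire when the integrated input crosses theta; return interspike intervals."""
    acc, last, isis = 0.0, 0, []
    for i, s in enumerate(signal):
        acc += s
        if acc >= theta:
            isis.append(i - last)
            last, acc = i, 0.0
    return np.array(isis, dtype=float)

def nn_forecast_error(isi, m=3):
    """Leave-one-out nearest-neighbour one-step forecast error (RMS)."""
    X = np.array([isi[i:i + m] for i in range(len(isi) - m)])  # delay vectors
    y = isi[m:]                                                # next intervals
    errs = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                  # exclude the point itself
        errs.append(y[i] - y[np.argmin(d)])
    return float(np.sqrt(np.mean(np.square(errs))))

# Deterministic quasi-periodic drive (assumed form, for illustration).
t = np.arange(50_000)
drive = 0.05 * (1.5 + np.sin(0.013 * t) + np.sin(0.031 * t))
isi = integrate_and_fire(drive)

rng = np.random.default_rng(1)
shuffled = rng.permutation(isi)  # surrogate with temporal order destroyed

err_det = nn_forecast_error(isi)
err_shuf = nn_forecast_error(shuffled)
```

A deterministically driven series yields a much smaller forecast error than its shuffled surrogate, which is the basis of the distinction the abstract describes.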

20. Are we there yet? An examination of online tailored health communication.

Science.gov (United States)

Suggs, L Suzanne; McIntyre, Chris

2009-04-01

Increasingly, the Internet is playing an important role in consumer health and patient-provider communication. Seventy-three percent of American adults are now online, and 79% have searched for health information on the Internet. This study provides a baseline understanding of the extent to which health consumers are able to find tailored communication online. It describes the current behavioral focus, the channels being used to deliver the tailored content, and the level of tailoring in online tailored communication. A content analysis of 497 health Web sites found few examples of personalized, targeted, or tailored health sites freely available online. Tailored content was provided in 13 Web sites, although 15 collected individual data. More health risk assessment (HRA) sites included tailored feedback than other topics. The patterns that emerged from the analysis demonstrate that online health users can access a number of Web sites with communication tailored to their needs.