WorldWideScience

Sample records for statistical test varying

  1. 100 statistical tests

    CERN Document Server

    Kanji, Gopal K

    2006-01-01

    This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.

  2. Vector-field statistics for the analysis of time varying clinical gait data.

    Science.gov (United States)

    Donnelly, C J; Alexander, C; Pataky, T C; Stannage, K; Reid, S; Robinson, M A

    2017-01-01

    In clinical settings, the time varying analysis of gait data relies heavily on the experience of the individual(s) assessing these biological signals. Though three dimensional kinematics are recognised as time varying waveforms (1D), exploratory statistical analysis of these data is commonly carried out with multiple discrete or 0D dependent variables. In the absence of an a priori 0D hypothesis, clinicians are at risk of making type I and II errors in their analysis of time varying gait signatures in the event statistics are used in concert with preferred subjective clinical assessment methods. The aim of this communication was to determine if vector field waveform statistics were capable of providing quantitative corroboration to practically significant differences in time varying gait signatures as determined by two clinically trained gait experts. The case study was a left hemiplegic Cerebral Palsy (GMFCS I) gait patient following a botulinum toxin (BoNT-A) injection to their left gastrocnemius muscle. When comparing subjective clinical gait assessments between two testers, they were in agreement with each other for 61% of the joint degrees of freedom and phases of motion analysed. For tester 1 and tester 2, they were in agreement with the vector-field analysis for 78% and 53% of the kinematic variables analysed. When the subjective analyses of tester 1 and tester 2 were pooled together and then compared to the vector-field analysis, they were in agreement for 83% of the time varying kinematic variables analysed. These outcomes demonstrate that in principle, vector-field statistics corroborates what a team of clinical gait experts would classify as practically meaningful pre- versus post time varying kinematic differences. The potential for vector-field statistics to be used as a useful clinical tool for the objective analysis of time varying clinical gait data is established. Future research is recommended to assess the usefulness of vector-field analyses

  3. Testing for Change in Mean of Independent Multivariate Observations with Time Varying Covariance

    Directory of Open Access Journals (Sweden)

    Mohamed Boutahar

    2012-01-01

    Full Text Available We consider a nonparametric CUSUM test for change in the mean of multivariate time series with time varying covariance. We prove that under the null, the test statistic has a Kolmogorov limiting distribution. The asymptotic consistency of the test against a large class of alternatives which contains abrupt, smooth and continuous changes is established. We also perform a simulation study to analyze the size distortion and the power of the proposed test.
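
    The record does not reproduce the statistic itself; as a rough illustration of the CUSUM idea (a hedged, univariate, constant-variance simplification of the multivariate, time-varying-covariance setting treated in the paper), a change-in-mean CUSUM statistic with its Kolmogorov-type limit can be sketched as follows. All data and names below are illustrative.

```python
import numpy as np

def cusum_change_in_mean(x):
    """Univariate CUSUM statistic for a change in mean.

    Returns sup_k |S_k - (k/n) S_n| / (sigma_hat * sqrt(n)); under the null of a
    constant mean this converges to the sup of a Brownian bridge, i.e. a
    Kolmogorov-type limiting distribution.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    bridge = s - (k / n) * s[-1]
    sigma_hat = x.std(ddof=1)          # long-run variance estimate (i.i.d. simplification)
    return np.max(np.abs(bridge)) / (sigma_hat * np.sqrt(n))

rng = np.random.default_rng(0)
x_null = rng.normal(0.0, 1.0, size=500)                    # constant mean
x_alt = np.concatenate([x_null[:250], x_null[250:] + 1.0])  # abrupt change in mean
print(cusum_change_in_mean(x_null), cusum_change_in_mean(x_alt))
# Values above ~1.36 are significant at the 5% level under the Kolmogorov limit.
```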

  4. On-line statistical processing of radiation detector pulse trains with time-varying count rates

    International Nuclear Information System (INIS)

    Apostolopoulos, G.

    2008-01-01

    Statistical analysis is of primary importance for the correct interpretation of nuclear measurements, due to the inherent random nature of radioactive decay processes. This paper discusses the application of statistical signal processing techniques to the random pulse trains generated by radiation detectors. The aims of the presented algorithms are: (i) continuous, on-line estimation of the underlying time-varying count rate θ(t) and its first-order derivative dθ/dt; (ii) detection of abrupt changes in both of these quantities and estimation of their new value after the change point. Maximum-likelihood techniques, based on the Poisson probability distribution, are employed for the on-line estimation of θ and dθ/dt. Detection of abrupt changes is achieved on the basis of the generalized likelihood ratio statistical test. The properties of the proposed algorithms are evaluated by extensive simulations and possible applications for on-line radiation monitoring are discussed
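
    The algorithms themselves are not given in the record; the sketch below (illustrative only, and fixed-window rather than truly on-line) shows the two ingredients named in the abstract: a Poisson maximum-likelihood estimate of the count rate and a generalized likelihood ratio test for an abrupt rate change. The bin width, rates and change point are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def poisson_rate_mle(counts, dt):
    """ML estimate of a constant count rate from per-bin counts of bin width dt."""
    return counts.sum() / (counts.size * dt)

def glr_rate_change(counts, dt, split):
    """Generalized likelihood ratio test for an abrupt rate change after bin `split`."""
    n1, n2 = counts[:split].sum(), counts[split:].sum()
    t1, t2 = split * dt, (counts.size - split) * dt
    lam0 = (n1 + n2) / (t1 + t2)           # MLE under the no-change hypothesis
    lam1, lam2 = n1 / t1, n2 / t2          # MLEs before and after the candidate change point
    term = lambda n, lam: n * np.log(lam / lam0) if n > 0 else 0.0
    glr = 2.0 * (term(n1, lam1) + term(n2, lam2))
    return glr, chi2.sf(glr, df=1)         # asymptotic p-value (one extra free parameter)

rng = np.random.default_rng(1)
dt = 0.1                                    # bin width in seconds (illustrative)
counts = np.concatenate([rng.poisson(5 * dt, 300),    # 5 counts/s before the change
                         rng.poisson(8 * dt, 300)])   # 8 counts/s after the change
print("rate estimate:", poisson_rate_mle(counts, dt))
print("GLR, p-value:", glr_rate_change(counts, dt, split=300))
```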

  5. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    Science.gov (United States)

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

  6. Testing statistical hypotheses

    CERN Document Server

    Lehmann, E L

    2005-01-01

    The third edition of Testing Statistical Hypotheses updates and expands upon the classic graduate text, emphasizing optimality theory for hypothesis testing and confidence sets. The principal additions include a rigorous treatment of large sample optimality, together with the requisite tools. In addition, an introduction to the theory of resampling methods such as the bootstrap is developed. The sections on multiple testing and goodness of fit testing are expanded. The text is suitable for Ph.D. students in statistics and includes over 300 new problems out of a total of more than 760. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands and the University of Chicago. He is the author of Elements of Large-Sample Theory and (with George Casella) he is also the author of Theory of Point Estimat...

  7. Properties of permutation-based gene tests and controlling type 1 error using a summary statistic based gene test.

    Science.gov (United States)

    Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph

    2013-11-07

    The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the
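
    The paper's summary-statistic test is described here only in outline; a minimal quadratic-form version of the idea (marker Z-statistics scaled by an estimate of their correlation matrix, without the misspecification-robust modification the authors propose) might look like the sketch below. The ridge term and toy numbers are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def summary_statistic_test(z, R, ridge=0.0):
    """Quadratic-form gene test built from marker Z-statistics and their correlation matrix.

    Under the null of no associated marker, Z is approximately N(0, R), so
    T = z' R^{-1} z is approximately chi-square with len(z) degrees of freedom.
    A small ridge term can stabilise R when it is estimated from a reference panel.
    """
    z = np.asarray(z, dtype=float)
    R = np.asarray(R, dtype=float) + ridge * np.eye(len(z))
    t = z @ np.linalg.solve(R, z)
    return t, chi2.sf(t, df=len(z))

# toy example: three markers in strong LD, only their univariate Z-statistics available
z = np.array([2.1, 2.3, 1.9])
R = np.array([[1.0, 0.8, 0.6],
              [0.8, 1.0, 0.7],
              [0.6, 0.7, 1.0]])
print(summary_statistic_test(z, R, ridge=0.05))
```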

  8. Statistical Estimation of Heterogeneities: A New Frontier in Well Testing

    Science.gov (United States)

    Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.

    2001-12-01

    Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.

  9. Determination of Geometrical REVs Based on Volumetric Fracture Intensity and Statistical Tests

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2018-05-01

    Full Text Available This paper presents a method to estimate a representative element volume (REV) of a fractured rock mass based on the volumetric fracture intensity P32 and statistical tests. A 150 m × 80 m × 50 m 3D fracture network model was generated based on field data collected at the Maji dam site by using the rectangular window sampling method. The volumetric fracture intensity P32 of each cube was calculated by varying the cube location in the generated 3D fracture network model and varying the cube side length from 1 to 20 m, and the distribution of the P32 values was described. The size effect and spatial effect of the fractured rock mass were studied; the P32 values from the same cube sizes and different locations were significantly different, and the fluctuation in P32 values clearly decreases as the cube side length increases. In this paper, a new method that comprehensively considers the anisotropy of rock masses, simplicity of calculation and differences between different methods was proposed to estimate the geometrical REV size. The geometrical REV size of the fractured rock mass was determined based on the volumetric fracture intensity P32 and two statistical test methods, namely, the likelihood ratio test and the Wald–Wolfowitz runs test. The results of the two statistical tests were substantially different; critical cube sizes of 13 m and 12 m were estimated by the Wald–Wolfowitz runs test and the likelihood ratio test, respectively. Because the different test methods emphasize different considerations and impact factors, and to obtain a result that both tests accept, the larger cube size, 13 m, was selected as the geometrical REV size of the fractured rock mass at the Maji dam site in China.
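
    Of the two statistical tests named, the Wald–Wolfowitz runs test is the simpler to illustrate; a minimal version applied to a sequence of P32 values dichotomised at the median (normal approximation, illustrative data rather than values from the study) is sketched below.

```python
import numpy as np
from scipy.stats import norm

def runs_test(x):
    """Wald-Wolfowitz runs test on a sequence dichotomised at its median.

    Too few or too many runs indicates departure from randomness.
    Uses the normal approximation for the number of runs.
    """
    x = np.asarray(x, dtype=float)
    signs = x > np.median(x)
    n1, n2 = signs.sum(), (~signs).sum()
    runs = 1 + np.count_nonzero(signs[1:] != signs[:-1])
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n1 - n2) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mu) / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# e.g. P32 values sampled from cubes of one side length, ordered by location
p32 = np.array([4.1, 4.3, 3.9, 4.6, 4.2, 4.4, 3.8, 4.5, 4.0, 4.7, 4.2, 4.3])
print(runs_test(p32))
```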

  10. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Full Text Available Parameter estimation with confidence intervals and statistical hypothesis testing are used in statistical analysis to draw conclusions about a population from an extracted sample. Through a case study, the paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained with confidence intervals and hypothesis tests. Whereas statistical hypothesis testing gives only a "yes" or "no" answer to the research question, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and places findings in the "marginally significant" or "almost significant" range (p very close to 0.05).

  11. The insignificance of statistical significance testing

    Science.gov (United States)

    Johnson, Douglas H.

    1999-01-01

    Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.

  12. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment. With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the

  13. Statistical hypothesis testing with SAS and R

    CERN Document Server

    Taeger, Dirk

    2014-01-01

    A comprehensive guide to statistical hypothesis testing with examples in SAS and R. When analyzing datasets the following questions often arise: Is there a shorthand procedure for a statistical test available in SAS or R? If so, how do I use it? If not, how do I program the test myself? This book answers these questions and provides an overview of the most common statistical test problems in a comprehensive way, making it easy to find and perform an appropriate statistical test. A general summary of statistical test theory is presented, along with a basic description for each test, including the

  14. Testing for time-varying loadings in dynamic factor models

    DEFF Research Database (Denmark)

    Mikkelsen, Jakob Guldbæk

    Abstract: In this paper we develop a test for time-varying factor loadings in factor models. The test is simple to compute and is constructed from estimated factors and residuals using the principal components estimator. The hypothesis is tested by regressing the squared residuals on the squared...... there is evidence of time-varying loadings on the risk factors underlying portfolio returns for around 80% of the portfolios....

  15. [The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].

    Science.gov (United States)

    Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel

    2017-01-01

    The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. An inference draws conclusions from the tests performed on the data obtained from a sample of a population. Statistical tests are used in order to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was obtained. However, choosing the appropriate statistical test in general poses a challenge for novice researchers. To choose the statistical test it is necessary to take into account three aspects: the research design, the number of measurements and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.

  16. The research protocol VI: How to choose the appropriate statistical test. Inferential statistics

    Directory of Open Access Journals (Sweden)

    Eric Flores-Ruiz

    2017-10-01

    Full Text Available The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. An inference draws conclusions from the tests performed on the data obtained from a sample of a population. Statistical tests are used in order to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was obtained. However, choosing the appropriate statistical test in general poses a challenge for novice researchers. To choose the statistical test it is necessary to take into account three aspects: the research design, the number of measurements and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.

  17. Polarimetric Segmentation Using Wishart Test Statistic

    DEFF Research Database (Denmark)

    Skriver, Henning; Schou, Jesper; Nielsen, Allan Aasbjerg

    2002-01-01

    A newly developed test statistic for equality of two complex covariance matrices following the complex Wishart distribution and an associated asymptotic probability for the test statistic have been used in a segmentation algorithm. The segmentation algorithm is based on the MUM (merge using moments......) approach, which is a merging algorithm for single channel SAR images. The polarimetric version described in this paper uses the above-mentioned test statistic for merging. The segmentation algorithm has been applied to polarimetric SAR data from the Danish dual-frequency, airborne polarimetric SAR, EMISAR...

  18. A simplification of the likelihood ratio test statistic for testing ...

    African Journals Online (AJOL)

    The traditional likelihood ratio test statistic for testing hypotheses about goodness of fit of multinomial probabilities in one-, two- and multi-dimensional contingency tables was simplified. Advantageously, using the simplified version of the statistic to test the null hypothesis is easier and faster because calculating the expected ...

  19. Controversy in the allometric application of fixed- versus varying-exponent models: a statistical and mathematical perspective.

    Science.gov (United States)

    Tang, Huadong; Hussain, Azher; Leal, Mauricio; Fluhler, Eric; Mayersohn, Michael

    2011-02-01

    This commentary is a reply to a recent article by Mahmood commenting on the authors' article on the use of fixed-exponent allometry in predicting human clearance. The commentary discusses eight issues that are related to criticisms made in Mahmood's article and examines the controversies (fixed-exponent vs. varying-exponent allometry) from the perspective of statistics and mathematics. The key conclusion is that any allometric method, which is to establish a power function based on a limited number of animal species and to extrapolate the resulting power function to human values (varying-exponent allometry), is infused with fundamental statistical errors. Copyright © 2010 Wiley-Liss, Inc.

  20. Assessment of the beryllium lymphocyte proliferation test using statistical process control.

    Science.gov (United States)

    Cher, Daniel J; Deubner, David C; Kelsh, Michael A; Chapman, Pamela S; Ray, Rose M

    2006-10-01

    Despite more than 20 years of surveillance and epidemiologic studies using the beryllium blood lymphocyte proliferation test (BeBLPT) as a measure of beryllium sensitization (BeS) and as an aid for diagnosing subclinical chronic beryllium disease (CBD), improvements in specific understanding of the inhalation toxicology of CBD have been limited. Although epidemiologic data suggest that BeS and CBD risks vary by process/work activity, it has proven difficult to reach specific conclusions regarding the dose-response relationship between workplace beryllium exposure and BeS or subclinical CBD. One possible reason for this uncertainty could be misclassification of BeS resulting from variation in BeBLPT testing performance. The reliability of the BeBLPT, a biological assay that measures beryllium sensitization, is unknown. To assess the performance of four laboratories that conducted this test, we used data from a medical surveillance program that offered testing for beryllium sensitization with the BeBLPT. The study population was workers exposed to beryllium at various facilities over a 10-year period (1992-2001). Workers with abnormal results were offered diagnostic workups for CBD. Our analyses used a standard statistical technique, statistical process control (SPC), to evaluate test reliability. The study design involved a repeated measures analysis of BeBLPT results generated from the company-wide, longitudinal testing. Analytical methods included use of (1) statistical process control charts that examined temporal patterns of variation for the stimulation index, a measure of cell reactivity to beryllium; (2) correlation analysis that compared prior perceptions of BeBLPT instability to the statistical measures of test variation; and (3) assessment of the variation in the proportion of missing test results and how time periods with more missing data influenced SPC findings. During the period of this study, all laboratories displayed variation in test results that

  1. Explorations in Statistics: Hypothesis Tests and P Values

    Science.gov (United States)

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…

  2. Robust inference from multiple test statistics via permutations: a better alternative to the single test statistic approach for randomized trials.

    Science.gov (United States)

    Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie

    2013-01-01

    Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
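
    A minimal sketch of the proposed procedure, assuming just two candidate statistics (a t statistic and a rank-sum statistic) and a plain permutation of group labels; the actual trials in the paper use other statistics, such as the logrank test, and the numbers below are illustrative.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

def min_p_value(x, y):
    """Smallest p-value over two candidate tests of 'no treatment effect'."""
    p1 = ttest_ind(x, y).pvalue
    p2 = mannwhitneyu(x, y, alternative="two-sided").pvalue
    return min(p1, p2)

def min_p_permutation_test(x, y, n_perm=2000, seed=0):
    """Permutation p-value of the minimum-p statistic (controls the type I error rate)."""
    rng = np.random.default_rng(seed)
    observed = min_p_value(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if min_p_value(perm[:len(x)], perm[len(x):]) <= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(42)
treated = rng.normal(0.5, 1.0, size=30)
control = rng.normal(0.0, 1.0, size=30)
print(min_p_permutation_test(treated, control))
```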

  3. Distinguish Dynamic Basic Blocks by Structural Statistical Testing

    DEFF Research Database (Denmark)

    Petit, Matthieu; Gotlieb, Arnaud

    Statistical testing aims at generating random test data that respect selected probabilistic properties. A probability distribution is associated with the program input space in order to achieve the statistical test purpose: to test the most frequent usage of software or to maximize the probability of...... control flow path) during the test data selection. We implemented this algorithm in a statistical test data generator for Java programs. A first experimental validation is presented...

  4. Statistical auditing and randomness test of lotto k/N-type games

    Science.gov (United States)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.

    2008-11-01

    One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
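
    For a draw of k numbers without replacement from {1, …, N}, each drawn value has mean (N+1)/2 and variance (N²−1)/12, and the covariance between two distinct drawn values is −(N+1)/12. A sketch of the audit idea, comparing these theoretical moments with a (here simulated) sample of historical 6/49 draws, is given below; the exact moment formulas and test procedure used in the paper should be checked against it.

```python
import numpy as np

def lotto_theoretical_moments(N):
    """Mean, variance and pairwise covariance of single drawn numbers in lotto k/N."""
    mean = (N + 1) / 2
    var = (N ** 2 - 1) / 12
    cov = -(N + 1) / 12          # = -var / (N - 1), sampling without replacement
    return mean, var, cov

def audit_draws(draws, N):
    """Compare sample moments of drawn numbers with the without-replacement model values."""
    draws = np.asarray(draws, dtype=float)
    mean_th, var_th, cov_th = lotto_theoretical_moments(N)
    sample_mean = draws.mean()
    sample_var = draws.var(ddof=1)
    # average covariance between the numbers drawn in the same ticket
    c = np.cov(draws, rowvar=False)
    sample_cov = c[np.triu_indices_from(c, k=1)].mean()
    return {"mean": (sample_mean, mean_th),
            "var": (sample_var, var_th),
            "cov": (sample_cov, cov_th)}

# simulated stand-in for a file of historical results of a 6/49 lottery
rng = np.random.default_rng(7)
history = np.array([rng.choice(np.arange(1, 50), size=6, replace=False) for _ in range(5000)])
print(audit_draws(history, N=49))
```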

  5. Simplified Freeman-Tukey test statistics for testing probabilities in ...

    African Journals Online (AJOL)

    This paper presents the simplified version of the Freeman-Tukey test statistic for testing hypothesis about multinomial probabilities in one, two and multidimensional contingency tables that does not require calculating the expected cell frequencies before test of significance. The simplified method established new criteria of ...

  6. Analysis of Preference Data Using Intermediate Test Statistic

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-06-01

    Jun 1, 2013 ... West African Journal of Industrial and Academic Research Vol.7 No. 1 June ... Keywords: Preference data, Friedman statistic, multinomial test statistic, intermediate test statistic. ... new method and consequently a new statistic ...

  7. New Graphical Methods and Test Statistics for Testing Composite Normality

    Directory of Open Access Journals (Sweden)

    Marc S. Paolella

    2015-07-01

    Full Text Available Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.

  8. Modified Distribution-Free Goodness-of-Fit Test Statistic.

    Science.gov (United States)

    Chun, So Yeon; Browne, Michael W; Shapiro, Alexander

    2018-03-01

    Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.

  9. Log-concave Probability Distributions: Theory and Statistical Testing

    DEFF Research Database (Denmark)

    An, Mark Yuing

    1996-01-01

    This paper studies the broad class of log-concave probability distributions that arise in economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete...... and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and new-is-better-than-used (NBU) property. The test for increasing hazard...... rates are based on normalized spacing of the sample order statistics. The tests for NBU property fall into the category of Hoeffding's U-statistics...

  10. Similar tests and the standardized log likelihood ratio statistic

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1986-01-01

    When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast to this there is a 'primitive......' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n -3...

  11. Two independent pivotal statistics that test location and misspecification and add-up to the Anderson-Rubin statistic

    NARCIS (Netherlands)

    Kleibergen, F.R.

    2002-01-01

    We extend the novel pivotal statistics for testing the parameters in the instrumental variables regression model. We show that these statistics result from a decomposition of the Anderson-Rubin statistic into two independent pivotal statistics. The first statistic is a score statistic that tests

  12. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

    controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice......This article raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly...... argue that applying statistical significance tests and mechanically adhering to their results are highly problematic and detrimental to critical thinking. We claim that the use of such tests does not provide any advantages in relation to deciding whether differences between citation indicators

  13. Teaching Statistics in Language Testing Courses

    Science.gov (United States)

    Brown, James Dean

    2013-01-01

    The purpose of this article is to examine the literature on teaching statistics for useful ideas that teachers of language testing courses can draw on and incorporate into their teaching toolkits as they see fit. To those ends, the article addresses eight questions: What is known generally about teaching statistics? Why are students so anxious…

  14. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Science.gov (United States)

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.

  15. Significance levels for studies with correlated test statistics.

    Science.gov (United States)

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.

  16. SPSS for applied sciences basic statistical testing

    CERN Document Server

    Davis, Cole

    2013-01-01

    This book offers a quick and basic guide to using SPSS and provides a general approach to solving problems using statistical tests. It is both comprehensive in terms of the tests covered and the applied settings it refers to, and yet is short and easy to understand. Whether you are a beginner or an intermediate level test user, this book will help you to analyse different types of data in applied settings. It will also give you the confidence to use other statistical software and to extend your expertise to more specific scientific settings as required.The author does not use mathematical form

  17. Statistical characteristics of mechanical heart valve cavitation in accelerated testing.

    Science.gov (United States)

    Wu, Changfu; Hwang, Ned H C; Lin, Yu-Kweng M

    2004-07-01

    Cavitation damage has been observed on mechanical heart valves (MHVs) undergoing accelerated testing. Cavitation itself can be modeled as a stochastic process, as it varies from beat to beat of the testing machine. This in-vitro study was undertaken to investigate the statistical characteristics of MHV cavitation. A 25-mm St. Jude Medical bileaflet MHV (SJM 25) was tested in an accelerated tester at various pulse rates, ranging from 300 to 1,000 bpm, with stepwise increments of 100 bpm. A miniature pressure transducer was placed near a leaflet tip on the inflow side of the valve, to monitor regional transient pressure fluctuations at instants of valve closure. The pressure trace associated with each beat was passed through a 70 kHz high-pass digital filter to extract the high-frequency oscillation (HFO) components resulting from the collapse of cavitation bubbles. Three intensity-related measures were calculated for each HFO burst: its time span; its local root-mean-square (LRMS) value; and the area enveloped by the absolute value of the HFO pressure trace and the time axis, referred to as cavitation impulse. These were treated as stochastic processes, of which the first-order probability density functions (PDFs) were estimated for each test rate. Both the LRMS value and cavitation impulse were log-normal distributed, and the time span was normal distributed. These distribution laws were consistent at different test rates. The present investigation was directed at understanding MHV cavitation as a stochastic process. The results provide a basis for establishing further the statistical relationship between cavitation intensity and time-evolving cavitation damage on MHV surfaces. These data are required to assess and compare the performance of MHVs of different designs.
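
    A rough sketch of the processing chain described above (70 kHz high-pass filter, local RMS of the HFO burst, log-normal fit of the per-beat values); the filter order, sampling rate and all data below are placeholders, not values from the study.

```python
import numpy as np
from scipy import signal, stats

FS = 1_000_000          # assumed sampling rate of the pressure transducer, Hz
CUTOFF = 70_000         # high-pass cutoff taken from the abstract, Hz

def hfo_lrms(pressure_trace):
    """Local RMS of the high-frequency oscillation (HFO) part of one closure event."""
    sos = signal.butter(4, CUTOFF, btype="highpass", fs=FS, output="sos")
    hfo = signal.sosfiltfilt(sos, pressure_trace)
    return np.sqrt(np.mean(hfo ** 2))

def fit_lognormal(lrms_values):
    """Fit a log-normal distribution to per-beat LRMS values (location fixed at 0)."""
    shape, loc, scale = stats.lognorm.fit(lrms_values, floc=0)
    return shape, scale          # sigma and exp(mu) of the underlying normal

rng = np.random.default_rng(3)
trace = rng.normal(size=5000)                       # placeholder samples around one closure
print("per-beat LRMS:", hfo_lrms(trace))

lrms = rng.lognormal(mean=0.0, sigma=0.4, size=200)  # toy per-beat LRMS values at one pulse rate
print("log-normal fit (sigma, exp(mu)):", fit_lognormal(lrms))
```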

  18. A comparison of test statistics for the recovery of rapid growth-based enumeration tests

    NARCIS (Netherlands)

    van den Heuvel, Edwin R.; IJzerman-Boon, Pieta C.

    This paper considers five test statistics for comparing the recovery of a rapid growth-based enumeration test with respect to the compendial microbiological method using a specific nonserial dilution experiment. The finite sample distributions of these test statistics are unknown, because they are

  19. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

    Full Text Available A statistically significant result, and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST, or confidence intervals (CIs. Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant, and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs respondents who mentioned NHST were 60% likely to conclude, unjustifiably, the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.

  20. Ensuring Positiveness of the Scaled Difference Chi-square Test Statistic.

    Science.gov (United States)

    Satorra, Albert; Bentler, Peter M

    2010-06-01

    A scaled difference test statistic [Formula: see text] that can be computed from standard software of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (2001). The statistic [Formula: see text] is asymptotically equivalent to the scaled difference test statistic T̄(d) introduced in Satorra (2000), which requires more involved computations beyond standard output of SEM software. The test statistic [Formula: see text] has been widely used in practice, but in some applications it is negative due to negativity of its associated scaling correction. Using the implicit function theorem, this note develops an improved scaling correction leading to a new scaled difference statistic T̄(d) that avoids negative chi-square values.

  1. A statistical theory of cell killing by radiation of varying linear energy transfer

    International Nuclear Information System (INIS)

    Hawkins, R.B.

    1994-01-01

    A theory is presented that provides an explanation for the observed features of the survival of cultured cells after exposure to densely ionizing high-linear energy transfer (LET) radiation. It starts from a phenomenological postulate based on the linear-quadratic form of cell survival observed for low-LET radiation and uses principles of statistics and fluctuation theory to demonstrate that the effect of varying LET on cell survival can be attributed to random variation of dose to small volumes contained within the nucleus. A simple relation is presented for surviving fraction of cells after exposure to radiation of varying LET that depends on the α and β parameters for the same cells in the limit of low-LET radiation. This relation implies that the value of β is independent of LET. Agreement of the theory with selected observations of cell survival from the literature is demonstrated. A relation is presented that gives relative biological effectiveness (RBE) as a function of the α and β parameters for low-LET radiation. Measurements from microdosimetry are used to estimate the size of the subnuclear volume to which the fluctuation pertains. 11 refs., 4 figs., 2 tabs
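
    The abstract describes but does not display the survival relation. A hedged reconstruction, using the linear-quadratic form the postulate starts from and common microdosimetric notation (the symbol z̄₁D for the dose-mean specific energy per event in the subnuclear volume is our assumption and should be checked against the paper), is:

```latex
% Hedged reconstruction of the survival relation described in the abstract
% (linear-quadratic form; \bar{z}_{1D} is assumed notation for the dose-mean
% specific energy deposited per event in the subnuclear volume).
\begin{align*}
  -\ln S(D) &= \alpha\,D + \beta\,D^{2}, \\
  \alpha    &= \alpha_{0} + \beta\,\bar{z}_{1D},
\end{align*}
% so that \beta is independent of LET while \alpha grows with the event-size
% fluctuations that accompany increasing LET; RBE then follows from the ratio
% of doses giving equal survival.
```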

  2. Statistical tests for person misfit in computerized adaptive testing

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Meijer, R.R.; van Krimpen-Stoop, Edith

    1998-01-01

    Recently, several person-fit statistics have been proposed to detect nonfitting response patterns. This study is designed to generalize an approach followed by Klauer (1995) to an adaptive testing system using the two-parameter logistic model (2PL) as a null model. The approach developed by Klauer

  3. [Clinical research IV. Relevancy of the statistical test chosen].

    Science.gov (United States)

    Talavera, Juan O; Rivas-Ruiz, Rodolfo

    2011-01-01

    When we look at the difference between two therapies or the association of a risk factor or prognostic indicator with its outcome, we need to evaluate the accuracy of the result. This assessment is based on a judgment that uses information about the study design and statistical management of the information. This paper specifically mentions the relevance of the statistical test selected. Statistical tests are chosen mainly from two characteristics: the objective of the study and type of variables. The objective can be divided into three test groups: a) those in which you want to show differences between groups or inside a group before and after a maneuver, b) those that seek to show the relationship (correlation) between variables, and c) those that aim to predict an outcome. The types of variables are divided into two: quantitative (continuous and discontinuous) and qualitative (ordinal and dichotomous). For example, if we seek to demonstrate differences in age (quantitative variable) among patients with systemic lupus erythematosus (SLE) with and without neurological disease (two groups), the appropriate test is the "Student t test for independent samples." But if the comparison is about the frequency of females (binomial variable), then the appropriate statistical test is the χ².
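
    The two examples in the abstract map directly onto standard library calls; a toy sketch (made-up numbers, not data from any study) is shown below.

```python
import numpy as np
from scipy.stats import ttest_ind, chi2_contingency

# quantitative outcome, two groups -> independent-samples Student t test
age_with_nd = np.array([34, 41, 29, 38, 45, 36, 40])       # SLE with neurological disease
age_without_nd = np.array([28, 33, 31, 27, 35, 30, 32])    # SLE without neurological disease
print(ttest_ind(age_with_nd, age_without_nd, equal_var=True))

# dichotomous outcome (female yes/no), two groups -> chi-square test
#                        female  male
contingency = np.array([[18,     4],    # with neurological disease
                        [25,     3]])   # without neurological disease
print(chi2_contingency(contingency))
```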

  4. The power of statistical tests using field trial count data of non-target organisms in enviromental risk assessment of genetically modified plants

    NARCIS (Netherlands)

    Voet, van der H.; Goedhart, P.W.

    2015-01-01

    Publications on power analyses for field trial count data comparing transgenic and conventional crops have reported widely varying requirements for the replication needed to obtain statistical tests with adequate power. These studies are critically reviewed and complemented with a new simulation

  5. Application of a Statistical Linear Time-Varying System Model of High Grazing Angle Sea Clutter for Computing Interference Power

    Science.gov (United States)

    2017-12-08

    Report excerpt (extraction-damaged; only fragments are recoverable): the introduction presents a statistical linear time-varying system model of high grazing angle sea clutter for computing interference power; one of the sinc factors is approximated by the Dirichlet kernel to facilitate computation of an integral, the resultant autocorrelation is obtained by substitution, and Python code is used to generate the report's figures.

  6. Statistical analysis and planning of multihundred-watt impact tests

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Waterman, M.S.

    1977-10-01

    Modular multihundred-watt (MHW) radioisotope thermoelectric generators (RTG's) are used as a power source for spacecraft. Due to possible environmental contamination by radioactive materials, numerous tests are required to determine and verify the safety of the RTG. There are results available from 27 fueled MHW impact tests regarding hoop failure, fingerprint failure, and fuel failure. Data from the 27 tests are statistically analyzed for relationships that exist between the test design variables and the failure types. Next, these relationships are used to develop a statistical procedure for planning and conducting either future MHW impact tests or similar tests on other RTG fuel sources. Finally, some conclusions are given

  7. Statistical tests to compare motif count exceptionalities

    Directory of Open Access Journals (Sweden)

    Vandewalle Vincent

    2007-03-01

    Full Text Available Abstract Background Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with a special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion The exact binomial test is particularly adapted for small counts. For large counts, we advise to use the likelihood ratio test which is asymptotic but strongly correlated with the exact binomial test and very simple to use.
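
    Of the two tests, the exact binomial one admits a very short sketch: modelling the motif counts in the two sequences as Poisson with given expectations and conditioning on their total, the count in the first sequence is binomial under the null of equal exceptionality. The version below ignores the overlap corrections and composition modelling discussed in the paper, and all numbers are illustrative.

```python
from scipy.stats import binomtest

def compare_motif_counts(n1, n2, expected1, expected2):
    """Exact binomial comparison of one motif's exceptionality in two sequences.

    Occurrences are modelled as Poisson counts with expectations expected1 and
    expected2 (e.g. from a Markov model of each sequence's composition).
    Conditionally on the total count n1 + n2, n1 is Binomial(n1 + n2, p0) under
    the null that the motif is equally exceptional in both sequences.
    """
    p0 = expected1 / (expected1 + expected2)
    return binomtest(n1, n1 + n2, p0, alternative="two-sided")

# toy numbers: observed and expected octamer counts in backbone vs. loops
print(compare_motif_counts(n1=52, n2=17, expected1=30.0, expected2=20.0))
```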

  8. The use of statistical tools in field testing of putative effects of genetically modified plants on nontarget organisms.

    Science.gov (United States)

    Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F

    2013-08-01

    To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTO's). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTO's. This review examines existing practices in GM plant field testing such as the way of randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTO's are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed to decide on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches - for example, analysis of variance (ANOVA) - are appropriate, for discontinuous data (counts) only generalized linear models (GLM) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and to assess whether such analyses were correctly applied. We offer generic advice to risk assessors and applicants that will

  9. Testing the statistical compatibility of independent data sets

    International Nuclear Information System (INIS)

    Maltoni, M.; Schwetz, T.

    2003-01-01

    We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ² minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistics is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed
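
    A minimal sketch of the idea (often called a parameter goodness-of-fit test): the statistic is the global χ² minimum minus the sum of the individual minima, referred to a χ² distribution whose degrees of freedom are given by the parameters shared between the data sets. The one-parameter toy functions below are assumptions, not the neutrino-oscillation application of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def parameter_goodness_of_fit(chi2_funcs, n_params_shared, bounds=(-10, 10)):
    """Compatibility test of independent data sets sharing common parameters.

    chi2_PG = min_theta sum_i chi2_i(theta) - sum_i min_theta chi2_i(theta),
    compared with a chi-square distribution whose degrees of freedom equal the
    number of parameters shared by the data sets (1 here, for simplicity).
    """
    total = lambda th: sum(f(th) for f in chi2_funcs)
    chi2_global = minimize_scalar(total, bounds=bounds, method="bounded").fun
    chi2_indiv = sum(minimize_scalar(f, bounds=bounds, method="bounded").fun
                     for f in chi2_funcs)
    stat = chi2_global - chi2_indiv
    return stat, chi2.sf(stat, df=n_params_shared)

# toy example: two experiments constraining the same parameter theta
chi2_a = lambda th: ((th - 1.0) / 0.5) ** 2      # experiment A prefers theta ~ 1.0
chi2_b = lambda th: ((th - 2.5) / 0.6) ** 2      # experiment B prefers theta ~ 2.5
print(parameter_goodness_of_fit([chi2_a, chi2_b], n_params_shared=1))
```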

  10. HOW TO SELECT APPROPRIATE STATISTICAL TEST IN SCIENTIFIC ARTICLES

    Directory of Open Access Journals (Sweden)

    Vladimir TRAJKOVSKI

    2016-09-01

    Full Text Available Statistics is the mathematical science dealing with the collection, analysis, interpretation, and presentation of masses of numerical data in order to draw relevant conclusions. Statistics is a form of mathematical analysis that uses quantified models, representations and synopses for a given set of experimental data or real-life studies. Students and young researchers in biomedical sciences and in special education and rehabilitation often declare that they have chosen to enroll in that study program because they lack knowledge of, or interest in, mathematics. This is a sad statement, but there is much truth in it. The aim of this editorial is to help young researchers to select statistics or statistical techniques and statistical software appropriate for the purposes and conditions of a particular analysis. The most important statistical tests are reviewed in the article. Knowing how to choose the right statistical test is an important asset and decision in the research data processing and in the writing of scientific papers. Young researchers and authors should know how to choose and how to use statistical methods. The competent researcher will need knowledge in statistical procedures. That might include an introductory statistics course, and it most certainly includes using a good statistics textbook. For this purpose, there is a need to return Statistics as a mandatory subject to the curriculum of the Institute of Special Education and Rehabilitation at the Faculty of Philosophy in Skopje. Young researchers need additional courses in statistics. They need to train themselves to use statistical software in an appropriate way.

  11. Monte Carlo testing in spatial statistics, with applications to spatial residuals

    DEFF Research Database (Denmark)

    Mrkvička, Tomáš; Soubeyrand, Samuel; Myllymäki, Mari

    2016-01-01

    This paper reviews recent advances made in testing in spatial statistics and discussed at the Spatial Statistics conference in Avignon 2015. The rank and directional quantile envelope tests are discussed and practical rules for their use are provided. These tests are global envelope tests...... with an appropriate type I error probability. Two novel examples are given on their usage. First, in addition to the test based on a classical one-dimensional summary function, the goodness-of-fit of a point process model is evaluated by means of the test based on a higher dimensional functional statistic, namely...

  12. Kolmogorov complexity, pseudorandom generators and statistical models testing

    Czech Academy of Sciences Publication Activity Database

    Šindelář, Jan; Boček, Pavel

    2002-01-01

    Roč. 38, č. 6 (2002), s. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords : Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002

  13. statistical tests for frequency distribution of mean gravity anomalies

    African Journals Online (AJOL)

    ES Obe

    1980-03-01

    Mar 1, 1980 ... STATISTICAL TESTS FOR FREQUENCY DISTRIBUTION OF MEAN GRAVITY ANOMALIES. By ... approach. Kaula [1,2] discussed the method of applying statistical techniques in the ... mathematical foundation of physical ...

  14. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Science.gov (United States)

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  15. A weighted generalized score statistic for comparison of predictive values of diagnostic tests.

    Science.gov (United States)

    Kosinski, Andrzej S

    2013-03-15

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.

  16. Statistical inferences for bearings life using sudden death test

    Directory of Open Access Journals (Sweden)

    Morariu Cristin-Olimpiu

    2017-01-01

    Full Text Available In this paper we propose a calculation method for estimating reliability indicators and complete statistical inference for the three-parameter Weibull distribution of bearing life. Using experimental values for the durability of bearings tested on stands by sudden death tests involves a series of particularities in the maximum likelihood estimation and in carrying out the statistical inference. The paper details these features and also provides an example calculation.
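
    As a rough illustration of this kind of analysis, the sketch below fits a three-parameter Weibull model to a small set of invented bearing lives with SciPy's maximum likelihood routine; it is not the calculation method proposed in the record and it ignores the censoring particularities of sudden death testing.

```python
# Minimal sketch: fit a three-parameter Weibull distribution to bearing-life
# data by maximum likelihood and read off basic reliability indicators.
# The life values below are invented purely for illustration.
import numpy as np
from scipy import stats

lives = np.array([61.0, 74.0, 88.0, 95.0, 102.0, 118.0, 131.0, 150.0])  # hypothetical lives (h)

# scipy's weibull_min supports a location (threshold) parameter, which gives
# the three-parameter form: shape c, location loc, scale.
c, loc, scale = stats.weibull_min.fit(lives)
print(f"shape = {c:.3f}, location = {loc:.1f}, scale = {scale:.1f}")

# Reliability indicators derived from the fitted model.
median_life = stats.weibull_min.median(c, loc=loc, scale=scale)
mttf = stats.weibull_min.mean(c, loc=loc, scale=scale)   # mean time to failure
print(f"median life ≈ {median_life:.1f} h, MTTF ≈ {mttf:.1f} h")
```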

  17. Selecting the most appropriate inferential statistical test for your quantitative research study.

    Science.gov (United States)

    Bettany-Saltikov, Josette; Whittaker, Victoria Jane

    2014-06-01

    To discuss the issues and processes relating to the selection of the most appropriate statistical test. A review of the basic research concepts together with a number of clinical scenarios is used to illustrate this. Quantitative nursing research generally features the use of empirical data which necessitates the selection of both descriptive and statistical tests. Different types of research questions can be answered by different types of research designs, which in turn need to be matched to a specific statistical test(s). Discursive paper. This paper discusses the issues relating to the selection of the most appropriate statistical test and makes some recommendations as to how these might be dealt with. When conducting empirical quantitative studies, a number of key issues need to be considered. Considerations for selecting the most appropriate statistical tests are discussed and flow charts provided to facilitate this process. When nursing clinicians and researchers conduct quantitative research studies, it is crucial that the most appropriate statistical test is selected to enable valid conclusions to be made. © 2013 John Wiley & Sons Ltd.

  18. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    Science.gov (United States)

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  19. Examining publication bias—a simulation-based evaluation of statistical tests on publication bias

    Directory of Open Access Journals (Sweden)

    Andreas Schneck

    2017-11-01

    Full Text Available Background Publication bias is a form of scientific misconduct. It threatens the validity of research results and the credibility of science. Although several tests on publication bias exist, no in-depth evaluations are available that examine which test performs best for different research settings. Methods Four tests on publication bias, Egger’s test (FAT), p-uniform, the test of excess significance (TES), as well as the caliper test, were evaluated in a Monte Carlo simulation. Two different types of publication bias and its degree (0%, 50%, 100%) were simulated. The type of publication bias was defined either as file-drawer, meaning the repeated analysis of new datasets, or p-hacking, meaning the inclusion of covariates in order to obtain a significant result. In addition, the underlying effect (β = 0, 0.5, 1, 1.5), effect heterogeneity, the number of observations in the simulated primary studies (N = 100, 500), and the number of observations for the publication bias tests (K = 100, 1,000) were varied. Results All tests evaluated were able to identify publication bias both in the file-drawer and p-hacking condition. The false positive rates were, with the exception of the 15%- and 20%-caliper test, unbiased. The FAT had the largest statistical power in the file-drawer conditions, whereas under p-hacking the TES was, except under effect heterogeneity, slightly better. The CTs were, however, inferior to the other tests under effect homogeneity and had a decent statistical power only in conditions with 1,000 primary studies. Discussion The FAT is recommended as a test for publication bias in standard meta-analyses with no or only small effect heterogeneity. If two-sided publication bias is suspected as well as under p-hacking the TES is the first alternative to the FAT. The 5%-caliper test is recommended under conditions of effect heterogeneity and a large number of primary studies, which may be found if publication bias is examined in a
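
    For readers unfamiliar with the FAT, the following sketch shows a plain Egger-type regression test on invented effect sizes and standard errors; it assumes a recent SciPy (for the intercept_stderr attribute) and is only a simplified stand-in for the simulation framework evaluated above.

```python
# Minimal sketch of an Egger-type regression test (the FAT) for funnel-plot
# asymmetry. Effect sizes and standard errors are invented for illustration.
import numpy as np
from scipy import stats

effects = np.array([0.42, 0.31, 0.55, 0.12, 0.48, 0.60, 0.25, 0.38])  # hypothetical study effects
ses     = np.array([0.20, 0.15, 0.25, 0.08, 0.22, 0.30, 0.10, 0.18])  # their standard errors

# Regress the standardized effects (z-values) on precision (1/SE); with no
# small-study effects the intercept should be close to zero.
z = effects / ses
precision = 1.0 / ses
res = stats.linregress(precision, z)

n = len(effects)
t_intercept = res.intercept / res.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_intercept), df=n - 2)
print(f"Egger intercept = {res.intercept:.3f}, t = {t_intercept:.2f}, p = {p_value:.3f}")
```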

  20. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    Science.gov (United States)

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.

  1. 688,112 statistical results : Content mining psychology articles for statistical test results

    NARCIS (Netherlands)

    Hartgerink, C.H.J.

    2016-01-01

    In this data deposit, I describe a dataset that is the result of content mining 167,318 published articles for statistical test results reported according to the standards prescribed by the American Psychological Association (APA). Articles published by the APA, Springer, Sage, and Taylor & Francis

  2. CUSUM-based person-fit statistics for adaptive testing

    NARCIS (Netherlands)

    van Krimpen-Stoop, Edith; Meijer, R.R.

    2001-01-01

    Item scores that do not fit an assumed item response theory model may cause the latent trait value to be inaccurately estimated. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT),

  3. CUSUM-based person-fit statistics for adaptive testing

    NARCIS (Netherlands)

    van Krimpen-Stoop, Edith; Meijer, R.R.

    1999-01-01

    Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT),

  4. Statistical test of anarchy

    International Nuclear Information System (INIS)

    Gouvea, Andre de; Murayama, Hitoshi

    2003-01-01

    'Anarchy' is the hypothesis that there is no fundamental distinction among the three flavors of neutrinos. It describes the mixing angles as random variables, drawn from well-defined probability distributions dictated by the group Haar measure. We perform a Kolmogorov-Smirnov (KS) statistical test to verify whether anarchy is consistent with all neutrino data, including the new result presented by KamLAND. We find a KS probability for Nature's choice of mixing angles equal to 64%, quite consistent with the anarchical hypothesis. In turn, assuming that anarchy is indeed correct, we compute lower bounds on |U_e3|^2, the remaining unknown 'angle' of the leptonic mixing matrix
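
    The core of such an analysis is an ordinary one-sample Kolmogorov-Smirnov test. The sketch below applies scipy.stats.kstest to a few invented stand-in values against a uniform reference distribution; the actual Haar-measure distributions and neutrino data used in the record are not reproduced here.

```python
# Minimal sketch of a one-sample Kolmogorov-Smirnov test: compare an observed
# sample against a hypothesized distribution. The three "observed" values are
# invented stand-ins; a real anarchy test would use the distributions implied
# by the Haar measure rather than a plain uniform.
import numpy as np
from scipy import stats

observed = np.array([0.31, 0.45, 0.02])          # hypothetical measured quantities in [0, 1]

# Null hypothesis: the sample is drawn from the reference distribution.
statistic, p_value = stats.kstest(observed, "uniform")
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.2f}")
```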

  5. Corrections of the NIST Statistical Test Suite for Randomness

    OpenAIRE

    Kim, Song-Ju; Umeno, Ken; Hasegawa, Akio

    2004-01-01

    It is well known that the NIST statistical test suite was used for the evaluation of AES candidate algorithms. We have found that the test settings of the Discrete Fourier Transform test and the Lempel-Ziv test in this test suite are wrong. We give four corrections of mistakes in the test settings. This suggests that re-evaluation of the test results is needed.

  6. Statistical alignment: computational properties, homology testing and goodness-of-fit

    DEFF Research Database (Denmark)

    Hein, J; Wiuf, Carsten; Møller, Martin

    2000-01-01

    The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical...... alignment algorithms several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity based alignment, to get good initial guesses of the evolutionary parameters and to apply an efficient numerical optimisation algorithm for finding the maximum...... analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test...

  7. Statistical treatment of fatigue test data

    International Nuclear Information System (INIS)

    Raske, D.T.

    1980-01-01

    This report discussed several aspects of fatigue data analysis in order to provide a basis for the development of statistically sound design curves. Included is a discussion on the choice of the dependent variable, the assumptions associated with least squares regression models, the variability of fatigue data, the treatment of data from suspended tests and outlying observations, and various strain-life relations

  8. Caracterização estatística de variáveis usadas para ensaiar uma semeadora-adubadora em semeadura direta e convencional = Statistical characterization of variables used to test a planter under direct and conventional sowing systems

    Directory of Open Access Journals (Sweden)

    Geraldo do Amaral Gravina

    2009-10-01

    Full Text Available The objective of this work was to statistically characterize the variables wheel slip, distance traveled per plot, worked plot area, and theoretical and effective field capacity of a planter in direct sowing (DS) and conventional sowing (CS) systems, based on verifying the fit of a series of data to a statistical distribution, in order to indicate the best form of representation and the values to be adopted so that these variables can be used in agricultural practice operations. The CS experiment was carried out at a speed of 1.5 m s-1, with 190 repetitions, during maize sowing on a soil classified as a Cambissolo; the DS experiment was carried out at 1.8 m s-1, with 58 repetitions, during sorghum sowing on a soil classified as a Latossolo Vermelho-Amarelo. It was concluded that no outlying values were detected and that the variables under study can be represented by the normal probability density function (Gaussian distribution), whose parameters can be used to represent them.

  9. Comparing statistical tests for detecting soil contamination greater than background

    International Nuclear Information System (INIS)

    Hardin, J.W.; Gilbert, R.O.

    1993-12-01

    The Washington State Department of Ecology (WSDE) recently issued a report that provides guidance on statistical issues regarding investigation and cleanup of soil and groundwater contamination under the Model Toxics Control Act Cleanup Regulation. Included in the report are procedures for determining a background-based cleanup standard and for conducting a 3-step statistical test procedure to decide if a site is contaminated greater than the background standard. The guidance specifies that the State test should only be used if the background and site data are lognormally distributed. The guidance in WSDE allows for using alternative tests on a site-specific basis if prior approval is obtained from WSDE. This report presents the results of a Monte Carlo computer simulation study conducted to evaluate the performance of the State test and several alternative tests for various contamination scenarios (background and site data distributions). The primary test performance criteria are (1) the probability the test will indicate that a contaminated site is indeed contaminated, and (2) the probability that the test will indicate an uncontaminated site is contaminated. The simulation study was conducted assuming the background concentrations were from lognormal or Weibull distributions. The site data were drawn from distributions selected to represent various contamination scenarios. The statistical tests studied are the State test, t test, Satterthwaite's t test, five distribution-free tests, and several tandem tests (wherein two or more tests are conducted using the same data set)
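
    A stripped-down version of such a Monte Carlo comparison can be written in a few lines. The sketch below, with invented lognormal scenario parameters, estimates how often a Satterthwaite t test and a distribution-free rank test declare a simulated site to exceed background; it is only an illustration of the simulation idea, not the WSDE procedure itself.

```python
# Minimal sketch of a Monte Carlo comparison of two tests for "site greater
# than background" when concentrations are lognormal. Scenario parameters
# are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sim, n_bg, n_site = 2000, 30, 30
shift = 0.5          # true elevation of the site log-mean (0.0 would give the false-positive rate)
alpha = 0.05

reject_t = reject_w = 0
for _ in range(n_sim):
    background = rng.lognormal(mean=0.0, sigma=1.0, size=n_bg)
    site = rng.lognormal(mean=shift, sigma=1.0, size=n_site)
    # One-sided tests of "site greater than background".
    t_p = stats.ttest_ind(site, background, equal_var=False, alternative="greater").pvalue
    w_p = stats.mannwhitneyu(site, background, alternative="greater").pvalue
    reject_t += t_p < alpha
    reject_w += w_p < alpha

print(f"Satterthwaite t-test rejection rate ≈ {reject_t / n_sim:.2f}")
print(f"Wilcoxon rank-sum rejection rate   ≈ {reject_w / n_sim:.2f}")
```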

  10. Testing and qualification of confidence in statistical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Serghiuta, D.; Tholammakkil, J.; Hammouda, N. [Canadian Nuclear Safety Commission (Canada); O' Hagan, A. [Sheffield Univ. (United Kingdom)

    2014-07-01

    This paper discusses a framework for designing artificial test problems, evaluation criteria, and two of the benchmark tests developed under a research project initiated by the Canadian Nuclear Safety Commission to investigate the approaches for qualification of tolerance limit methods and algorithms proposed for application in optimization of CANDU regional/neutron overpower protection trip setpoints for aged conditions. A significant component of this investigation has been the development of a series of benchmark problems of gradually increased complexity, from simple 'theoretical' problems up to complex problems closer to the real application. The first benchmark problem discussed in this paper is a simplified scalar problem which does not involve extremal, maximum or minimum, operations, typically encountered in the real applications. The second benchmark is a high dimensional, but still simple, problem for statistical inference of maximum channel power during normal operation. Bayesian algorithms have been developed for each benchmark problem to provide an independent way of constructing tolerance limits from the same data and allow assessing how well different methods make use of those data and, depending on the type of application, evaluating what the level of 'conservatism' is. The Bayesian method is not, however, used as a reference method, or 'gold' standard, but simply as an independent review method. The approach and the tests developed can be used as a starting point for developing a generic suite (generic in the sense of potentially applying whatever the proposed statistical method) of empirical studies, with clear criteria for passing those tests. Some lessons learned, in particular concerning the need to assure the completeness of the description of the application and the role of completeness of input information, are also discussed. It is concluded that a formal process which includes extended and detailed benchmark

  11. Test for the statistical significance of differences between ROC curves

    International Nuclear Information System (INIS)

    Metz, C.E.; Kronman, H.B.

    1979-01-01

    A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions
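
    The final step described above reduces to a quadratic form in the parameter differences. The sketch below, with invented binormal ROC parameter estimates and covariance matrices, computes that approximate two-degree-of-freedom chi-square statistic.

```python
# Minimal sketch: given ML estimates of the two binormal ROC parameters for
# each curve and their covariance matrices (independent samples), form the
# approximate chi-square statistic with 2 degrees of freedom. All numbers are
# invented for illustration.
import numpy as np
from scipy import stats

theta1 = np.array([1.20, 0.90])          # parameter estimates, curve 1
theta2 = np.array([1.55, 0.80])          # parameter estimates, curve 2
cov1 = np.array([[0.030, 0.004],
                 [0.004, 0.015]])        # estimated covariance, curve 1
cov2 = np.array([[0.025, 0.003],
                 [0.003, 0.012]])        # estimated covariance, curve 2

d = theta1 - theta2
chi2 = d @ np.linalg.inv(cov1 + cov2) @ d      # covariances add for independent data sets
p_value = stats.chi2.sf(chi2, df=2)
print(f"chi-square = {chi2:.2f}, df = 2, p = {p_value:.3f}")
```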

  12. Normality Tests for Statistical Analysis: A Guide for Non-Statisticians

    Science.gov (United States)

    Ghasemi, Asghar; Zahediasl, Saleh

    2012-01-01

    Statistical errors are common in scientific literature and about 50% of the published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to overview checking for normality in statistical analysis using SPSS. PMID:23843808
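
    Although the commentary is written around SPSS, the same checks are available in most statistical environments. The sketch below runs a Shapiro-Wilk test and reports sample skewness and kurtosis for an invented sample, as one might do before choosing a parametric test.

```python
# Minimal sketch of basic normality checks for a single sample.
# The sample is simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=100.0, scale=15.0, size=40)   # hypothetical measurements

w, p_shapiro = stats.shapiro(sample)
print(f"Shapiro-Wilk: W = {w:.3f}, p = {p_shapiro:.3f}")
print(f"skewness = {stats.skew(sample):.2f}, excess kurtosis = {stats.kurtosis(sample):.2f}")

# A p-value above the chosen alpha (commonly 0.05) gives no evidence against
# normality, supporting the use of parametric tests on this sample.
```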

  13. Comparison of small n statistical tests of differential expression applied to microarrays

    Directory of Open Access Journals (Sweden)

    Lee Anna Y

    2009-02-01

    Full Text Available Abstract Background DNA microarrays provide data for genome wide patterns of expression between observation classes. Microarray studies often have small sample sizes, however, due to cost constraints or specimen availability. This can lead to poor random error estimates and inaccurate statistical tests of differential expression. We compare the performance of the standard t-test, fold change, and four small n statistical test methods designed to circumvent these problems. We report results of various normalization methods for empirical microarray data and of various random error models for simulated data. Results Three Empirical Bayes methods (CyberT, BRB, and limma t-statistics) were the most effective statistical tests across simulated and both 2-colour cDNA and Affymetrix experimental data. The CyberT regularized t-statistic in particular was able to maintain expected false positive rates with simulated data showing high variances at low gene intensities, although at the cost of low true positive rates. The Local Pooled Error (LPE) test introduced a bias that lowered false positive rates below theoretically expected values and had lower power relative to the top performers. The standard two-sample t-test and fold change were also found to be sub-optimal for detecting differentially expressed genes. The generalized log transformation was shown to be beneficial in improving results with certain data sets, in particular high variance cDNA data. Conclusion Pre-processing of data influences performance and the proper combination of pre-processing and statistical testing is necessary for obtaining the best results. All three Empirical Bayes methods assessed in our study are good choices for statistical tests for small n microarray studies for both Affymetrix and cDNA data. Choice of method for a particular study will depend on software and normalization preferences.

  14. A critique of statistical hypothesis testing in clinical research

    Directory of Open Access Journals (Sweden)

    Somik Raha

    2011-01-01

    Full Text Available Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are that of the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability on an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. As a big reason for the prevalence of RCTs in academia is legislation requiring it, the ethics of legislating the use of statistical methods for clinical research is also examined.

  15. Statistical test theory for the behavioral sciences

    CERN Document Server

    de Gruijter, Dato N M

    2007-01-01

    Since the development of the first intelligence test in the early 20th century, educational and psychological tests have become important measurement techniques to quantify human behavior. Focusing on this ubiquitous yet fruitful area of research, Statistical Test Theory for the Behavioral Sciences provides both a broad overview and a critical survey of assorted testing theories and models used in psychology, education, and other behavioral science fields. Following a logical progression from basic concepts to more advanced topics, the book first explains classical test theory, covering true score, measurement error, and reliability. It then presents generalizability theory, which provides a framework to deal with various aspects of test scores. In addition, the authors discuss the concept of validity in testing, offering a strategy for evidence-based validity. In the two chapters devoted to item response theory (IRT), the book explores item response models, such as the Rasch model, and applications, incl...

  16. Estimation of In Situ Stresses with Hydro-Fracturing Tests and a Statistical Method

    Science.gov (United States)

    Lee, Hikweon; Ong, See Hong

    2018-03-01

    At great depths, where borehole-based field stress measurements such as hydraulic fracturing are challenging due to difficult downhole conditions or prohibitive costs, in situ stresses can be indirectly estimated using wellbore failures such as borehole breakouts and/or drilling-induced tensile failures detected by an image log. As part of such efforts, a statistical method has been developed in which borehole breakouts detected on an image log are used for this purpose (Song et al. in Proceedings on the 7th international symposium on in situ rock stress, 2016; Song and Chang in J Geophys Res Solid Earth 122:4033-4052, 2017). The method employs a grid-searching algorithm in which the least and maximum horizontal principal stresses (S_h and S_H) are varied, and the corresponding simulated depth-related breakout width distribution as a function of the breakout angle (θ_B = 90° - half of breakout width) is compared to that observed along the borehole to determine a set of S_h and S_H having the lowest misfit between them. An important advantage of the method is that S_h and S_H can be estimated simultaneously in vertical wells. To validate the statistical approach, the method is applied to a vertical hole where a set of field hydraulic fracturing tests have been carried out. The stress estimations using the proposed method were found to be in good agreement with the results interpreted from the hydraulic fracturing test measurements.
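
    The grid-search idea can be illustrated with a deliberately simplified forward model. The sketch below assumes a Kirsch-type hoop-stress criterion for a vertical well, ignores pore and mud pressure and thermal effects, and uses an invented rock strength and invented observed breakout widths; it is a toy version of the approach, not the authors' implementation.

```python
# Minimal sketch of a grid search over (S_h, S_H): predict breakout width from
# a simplified hoop-stress criterion and pick the pair with the lowest misfit
# against "observed" widths. All numbers are invented for illustration.
import numpy as np

UCS = 120.0                                       # hypothetical rock strength (MPa)
observed_widths = np.array([42.0, 38.0, 45.0])    # hypothetical breakout widths (deg)

def predicted_width(sh, sH, ucs):
    """Breakout width (deg) where hoop stress SH + Sh - 2(SH - Sh)cos(2θ) exceeds UCS."""
    if 3.0 * sH - sh <= ucs:                      # maximum hoop stress never reaches failure
        return 0.0
    arg = (sH + sh - ucs) / (2.0 * (sH - sh))
    if arg >= 1.0:                                # hoop stress exceeds UCS all around the hole
        return 180.0
    theta_b = 0.5 * np.degrees(np.arccos(arg))    # breakout edge angle from the S_H azimuth
    return 2.0 * (90.0 - theta_b)                 # full angular width of one breakout lobe

best = None
for sh in np.arange(30.0, 90.0, 1.0):
    for sH in np.arange(sh + 1.0, 160.0, 1.0):
        w = predicted_width(sh, sH, UCS)
        misfit = np.sum((observed_widths - w) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, sh, sH)

print(f"best-fit S_h ≈ {best[1]:.0f} MPa, S_H ≈ {best[2]:.0f} MPa (misfit {best[0]:.1f})")
```

    In this toy form many (S_h, S_H) pairs predict nearly the same width, so the best-fit pair is not unique; the depth-related width distribution used in the record is what makes the joint estimate identifiable in practice.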

  17. Efficient statistical tests to compare Youden index: accounting for contingency correlation.

    Science.gov (United States)

    Chen, Fangyao; Xue, Yuqiang; Tan, Ming T; Chen, Pingyan

    2015-04-30

    Youden index is widely utilized in studies evaluating accuracy of diagnostic tests and performance of predictive, prognostic, or risk models. However, both one and two independent sample tests on Youden index have been derived ignoring the dependence (association) between sensitivity and specificity, resulting in potentially misleading findings. Besides, paired sample test on Youden index is currently unavailable. This article develops efficient statistical inference procedures for one sample, independent, and paired sample tests on Youden index by accounting for contingency correlation, namely associations between sensitivity and specificity and paired samples typically represented in contingency tables. For one and two independent sample tests, the variances are estimated by Delta method, and the statistical inference is based on the central limit theory, which are then verified by bootstrap estimates. For paired samples test, we show that the estimated covariance of the two sensitivities and specificities can be represented as a function of kappa statistic so the test can be readily carried out. We then show the remarkable accuracy of the estimated variance using a constrained optimization approach. Simulation is performed to evaluate the statistical properties of the derived tests. The proposed approaches yield more stable type I errors at the nominal level and substantially higher power (efficiency) than does the original Youden's approach. Therefore, the simple explicit large sample solution performs very well. Because we can readily implement the asymptotic and exact bootstrap computation with common software like R, the method is broadly applicable to the evaluation of diagnostic tests and model performance. Copyright © 2015 John Wiley & Sons, Ltd.
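
    For orientation, the sketch below computes the Youden index and a naive one-sample Z test whose variance comes from the Delta method applied to two independent binomial proportions; it deliberately ignores the contingency correlation that the record's procedures account for, and all counts are invented.

```python
# Minimal sketch: point estimate and a large-sample one-sample test for the
# Youden index J = sensitivity + specificity - 1. Counts and the null value
# J0 are invented for illustration.
import numpy as np
from scipy import stats

tp, fn = 85, 15        # diseased subjects: test positive / negative (hypothetical)
tn, fp = 160, 40       # healthy subjects: test negative / positive (hypothetical)

se = tp / (tp + fn)                    # sensitivity
sp = tn / (tn + fp)                    # specificity
J = se + sp - 1.0

# Delta-method variance: sensitivity and specificity come from independent groups.
var_J = se * (1 - se) / (tp + fn) + sp * (1 - sp) / (tn + fp)

J0 = 0.5                               # hypothesized value under the null
z = (J - J0) / np.sqrt(var_J)
p_value = 2 * stats.norm.sf(abs(z))
print(f"J = {J:.3f}, z = {z:.2f}, p = {p_value:.3f}")
```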

  18. A Modified Jonckheere Test Statistic for Ordered Alternatives in Repeated Measures Design

    Directory of Open Access Journals (Sweden)

    Hatice Tül Kübra AKDUR

    2016-09-01

    Full Text Available In this article, a new test based on the Jonckheere test [1] is presented for randomized blocks with dependent observations within blocks. A weighted sum of the block statistics is used rather than the unweighted sum proposed by Jonckheere. For Jonckheere-type statistics, the main assumption is independence of observations within each block; in a repeated measures design this assumption is violated. The weighted Jonckheere-type statistic is therefore applied under dependence, for different variance-covariance structures and for the ordered alternative hypothesis defined across the blocks of the design. The proposed statistic is also compared with the existing Jonckheere-based test in terms of type I error rates in a Monte Carlo simulation. For strong correlations, a circular bootstrap version of the proposed Jonckheere test provides lower type I error rates.
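
    As a reference point, the sketch below computes the classical unweighted Jonckheere-Terpstra statistic with its no-ties normal approximation for independent groups; the weighted, dependence-adjusted version proposed in the record is not implemented here, and the data are invented.

```python
# Minimal sketch of the classical Jonckheere-Terpstra test for an ordered
# alternative across independent groups. Groups are listed in the hypothesized
# increasing order; the data are invented for illustration.
import numpy as np
from scipy import stats

groups = [np.array([4.2, 5.0, 3.8, 4.6]),      # dose 1 (hypothetical)
          np.array([5.1, 5.6, 4.9, 5.3]),      # dose 2
          np.array([5.8, 6.4, 6.0, 5.7])]      # dose 3

# J = sum over ordered pairs of groups of Mann-Whitney counts #{x_i < y_j} (+ 1/2 per tie).
J = 0.0
for i in range(len(groups)):
    for j in range(i + 1, len(groups)):
        xi, yj = groups[i], groups[j]
        J += np.sum(xi[:, None] < yj[None, :]) + 0.5 * np.sum(xi[:, None] == yj[None, :])

n = np.array([len(g) for g in groups])
N = n.sum()
mean_J = (N**2 - np.sum(n**2)) / 4.0
var_J = (N**2 * (2 * N + 3) - np.sum(n**2 * (2 * n + 3))) / 72.0   # no-ties null variance

z = (J - mean_J) / np.sqrt(var_J)
p_value = stats.norm.sf(z)             # one-sided: large J supports the ordered alternative
print(f"J = {J:.1f}, z = {z:.2f}, one-sided p = {p_value:.4f}")
```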

  19. Use of run statistics to validate tensile tests

    International Nuclear Information System (INIS)

    Eatherly, W.P.

    1981-01-01

    In tensile testing of irradiated graphites, it is difficult to assure alignment of sample and train for tensile measurements. By recording location of fractures, run (sequential) statistics can readily detect lack of randomness. The technique is based on partitioning binomial distributions

  20. Your Chi-Square Test Is Statistically Significant: Now What?

    Science.gov (United States)

    Sharpe, Donald

    2015-01-01

    Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
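
    The first of the four approaches, calculating residuals, is straightforward to automate. The sketch below runs a chi-square test on an invented contingency table and then prints Haberman's adjusted standardized residuals to show which cells drive the result.

```python
# Minimal sketch: chi-square test of independence followed by adjusted
# standardized residuals. The table counts are invented for illustration.
import numpy as np
from scipy import stats

table = np.array([[30, 10, 20],
                  [15, 25, 20]])              # hypothetical 2x3 contingency table

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")

# Adjusted (Haberman) standardized residuals: roughly N(0,1) under independence,
# so |residual| > 2 flags a cell as a likely source of the significant result.
n = table.sum()
row_p = table.sum(axis=1, keepdims=True) / n
col_p = table.sum(axis=0, keepdims=True) / n
adj_resid = (table - expected) / np.sqrt(expected * (1 - row_p) * (1 - col_p))
print(np.round(adj_resid, 2))
```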

  1. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik [Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of)

    2015-12-15

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles.

  2. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    International Nuclear Information System (INIS)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik

    2015-01-01

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles
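
    A minimal sketch of the quantities involved is given below: two-parameter Weibull fits, the implied B10 lives, and a crude bootstrap interval for the difference in shape parameters. The cycle data are invented and the code is only an illustration, not the reliability analysis reported in the record.

```python
# Minimal sketch: fit Weibull models to two samples, report characteristic life
# and B10 life, and bootstrap the difference in shape parameters.
# The failure-cycle data are invented for illustration.
import numpy as np
from scipy import stats

existing = np.array([11000, 14000, 16500, 19000, 21500, 24000, 27500, 31000], dtype=float)
improved = np.array([16000, 20000, 24000, 28000, 32000, 36500, 41000, 46000], dtype=float)

def fit_weibull(x):
    shape, _, scale = stats.weibull_min.fit(x, floc=0)        # two-parameter Weibull
    return shape, scale

for name, data in [("existing", existing), ("improved", improved)]:
    shape, scale = fit_weibull(data)
    b10 = scale * (-np.log(0.9)) ** (1.0 / shape)             # 10th percentile of life
    print(f"{name}: shape = {shape:.2f}, characteristic life = {scale:.0f}, B10 = {b10:.0f} cycles")

# Crude bootstrap interval for the difference in shape parameters (small samples,
# so this is only indicative).
rng = np.random.default_rng(0)
diffs = []
for _ in range(500):
    s1, _ = fit_weibull(rng.choice(existing, size=existing.size, replace=True))
    s2, _ = fit_weibull(rng.choice(improved, size=improved.size, replace=True))
    diffs.append(s2 - s1)
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% bootstrap CI for shape difference: [{lo:.2f}, {hi:.2f}]")
```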

  3. Evaluating statistical tests on OLAP cubes to compare degree of disease.

    Science.gov (United States)

    Ordonez, Carlos; Chen, Zhibo

    2009-09-01

    Statistical tests represent an important technique used to formulate and validate hypotheses on a dataset. They are particularly useful in the medical domain, where hypotheses link disease with medical measurements, risk factors, and treatment. In this paper, we propose to compute parametric statistical tests treating patient records as elements in a multidimensional cube. We introduce a technique that combines dimension lattice traversal and statistical tests to discover significant differences in the degree of disease within pairs of patient groups. In order to understand a cause-effect relationship, we focus on patient group pairs differing in one dimension. We introduce several optimizations to prune the search space, to discover significant group pairs, and to summarize results. We present experiments showing important medical findings and evaluating scalability with medical datasets.

  4. Statistical test for the distribution of galaxies on plates

    International Nuclear Information System (INIS)

    Garcia Lambas, D.

    1985-01-01

    A statistical test for the distribution of galaxies on plates is presented. We apply the test to synthetic astronomical plates obtained by means of numerical simulation (Garcia Lambas and Sersic 1983) with three different models for the 3-dimensional distribution; comparison with an observational plate suggests the presence of filamentary structure. (author)

  5. Statistical Analysis of the Polarimetric Cloud Analysis and Seeding Test (POLCAST) Field Projects

    Science.gov (United States)

    Ekness, Jamie Lynn

    The North Dakota farming industry brings in more than $4.1 billion annually in cash receipts. Unfortunately, agriculture sales vary significantly from year to year, which is due in large part to weather events such as hail storms and droughts. One method to mitigate drought is to use hygroscopic seeding to increase the precipitation efficiency of clouds. The North Dakota Atmospheric Research Board (NDARB) sponsored the Polarimetric Cloud Analysis and Seeding Test (POLCAST) research project to determine the effectiveness of hygroscopic seeding in North Dakota. The POLCAST field projects obtained airborne and radar observations, while conducting randomized cloud seeding. The Thunderstorm Identification Tracking and Nowcasting (TITAN) program is used to analyze radar data (33 usable cases) in determining differences in the duration of the storm, rain rate and total rain amount between seeded and non-seeded clouds. The single ratio of seeded to non-seeded cases is 1.56 (0.28 mm/0.18 mm) or 56% increase for the average hourly rainfall during the first 60 minutes after target selection. A seeding effect is indicated with the lifetime of the storms increasing by 41 % between seeded and non-seeded clouds for the first 60 minutes past seeding decision. A double ratio statistic, a comparison of radar derived rain amount of the last 40 minutes of a case (seed/non-seed), compared to the first 20 minutes (seed/non-seed), is used to account for the natural variability of the cloud system and gives a double ratio of 1.85. The Mann-Whitney test on the double ratio of seeded to non-seeded cases (33 cases) gives a significance (p-value) of 0.063. Bootstrapping analysis of the POLCAST set indicates that 50 cases would provide statistically significant results based on the Mann-Whitney test of the double ratio. All the statistical analysis conducted on the POLCAST data set show that hygroscopic seeding in North Dakota does increase precipitation. While an additional POLCAST field

  6. Study designs, use of statistical tests, and statistical analysis software choice in 2015: Results from two Pakistani monthly Medline indexed journals.

    Science.gov (United States)

    Shaikh, Masood Ali

    2017-09-01

    Assessment of research articles in terms of the study designs used, the statistical tests applied, and the use of statistical analysis programmes helps determine the research activity profile and trends in the country. In this descriptive study, all original articles published by the Journal of Pakistan Medical Association (JPMA) and the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) in the year 2015 were reviewed in terms of study designs used, application of statistical tests, and the use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in the year 2015. Results of this study indicate that the cross-sectional study design, bivariate inferential statistical analysis entailing comparison between two variables/groups, and use of the statistical software programme SPSS were the most common study design, inferential statistical analysis, and statistical analysis software programme, respectively. These results echo a previously published assessment of these two journals for the year 2014.

  7. Appropriate statistical methods are required to assess diagnostic tests for replacement, add-on, and triage

    NARCIS (Netherlands)

    Hayen, Andrew; Macaskill, Petra; Irwig, Les; Bossuyt, Patrick

    2010-01-01

    To explain which measures of accuracy and which statistical methods should be used in studies to assess the value of a new binary test as a replacement test, an add-on test, or a triage test. Selection and explanation of statistical methods, illustrated with examples. Statistical methods for

  8. THE ATKINSON INDEX, THE MORAN STATISTIC, AND TESTING EXPONENTIALITY

    OpenAIRE

    Nao, Mimoto; Ricardas, Zitikis; Department of Statistics and Probability, Michigan State University; Department of Statistical and Actuarial Sciences, University of Western Ontario

    2008-01-01

    Constructing tests for exponentiality has been an active and fruitful research area, with numerous applications in engineering, biology and other sciences concerned with life-time data. In the present paper, we construct and investigate powerful tests for exponentiality based on two well known quantities: the Atkinson index and the Moran statistic. We provide an extensive study of the performance of the tests and compare them with those already available in the literature.

  9. 688,112 statistical results: Content mining psychology articles for statistical test results

    OpenAIRE

    Hartgerink, C.H.J.

    2016-01-01

    In this data deposit, I describe a dataset that is the result of content mining 167,318 published articles for statistical test results reported according to the standards prescribed by the American Psychological Association (APA). Articles published by the APA, Springer, Sage, and Taylor & Francis were included (mining from Wiley and Elsevier was actively blocked). As a result of this content mining, 688,112 results from 50,845 articles were extracted. In order to provide a comprehensive set...

  10. Testing and estimating time-varying elasticities of Swiss gasoline demand

    International Nuclear Information System (INIS)

    Neto, David

    2012-01-01

    This paper tests and estimates time-varying elasticities for gasoline demand in Switzerland. For this purpose, a smooth time-varying cointegrating parameters model is investigated in order to describe smooth changes in Swiss gasoline demand. The methodology, based on Chebyshev polynomials, is rigorously outlined. Our empirical findings show that the time-invariance assumption does not hold for the long-run price and income elasticities. Furthermore, they highlight that gasoline demand passed through periods of sensitivity and non-sensitivity with respect to the price. These findings are important for assessing the performance of a gasoline tax as an instrument of CO2 reduction policy. Indeed, such an instrument can contribute to reducing greenhouse gas emissions only if demand is not fully inelastic with respect to the price. Our results suggest that such a carbon tax would not always be suitable, since the price elasticity is found to be unstable over time and not always significant.
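
    The Chebyshev device itself is easy to demonstrate. In the sketch below, a price elasticity that drifts linearly over (rescaled) time is written as a short Chebyshev series, the series coefficients enter an ordinary least-squares regression on simulated data, and the time path of the elasticity is recovered; this is a toy model, not the paper's cointegration estimator or its Swiss data.

```python
# Minimal sketch of a time-varying coefficient built from Chebyshev polynomials:
# beta(t) = sum_k b_k * T_k(t), with t rescaled to [-1, 1]. Data are simulated.
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)
T = 120                                    # number of periods (hypothetical)
t = np.linspace(-1.0, 1.0, T)              # rescaled time for the Chebyshev basis
log_price = rng.normal(0.0, 0.2, T).cumsum() * 0.1
log_income = np.linspace(0.0, 0.5, T) + rng.normal(0.0, 0.02, T)

true_beta = -0.6 + 0.3 * t                 # slowly changing "true" price elasticity
log_demand = 1.0 + 0.4 * log_income + true_beta * log_price + rng.normal(0.0, 0.03, T)

order = 2                                  # Chebyshev order for the time-varying coefficient
basis = np.column_stack([C.chebval(t, np.eye(order + 1)[k]) for k in range(order + 1)])
X = np.column_stack([np.ones(T), log_income] + [log_price * basis[:, k] for k in range(order + 1)])

coef, *_ = np.linalg.lstsq(X, log_demand, rcond=None)
beta_hat = basis @ coef[2:]                # recovered elasticity path over time
print("income elasticity estimate:", round(float(coef[1]), 3))
print("price elasticity at start / middle / end:",
      [round(float(b), 3) for b in (beta_hat[0], beta_hat[T // 2], beta_hat[-1])])
```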

  11. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    Science.gov (United States)

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority between a new treatment and a standard (control) treatment is discussed for ordinal categorical data. A measure of treatment effect is used, and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed in which the variance is estimated under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.

  12. Testing statistical isotropy in cosmic microwave background polarization maps

    Science.gov (United States)

    Rath, Pranati K.; Samal, Pramoda Kumar; Panda, Srikanta; Mishra, Debesh D.; Aluri, Pavan K.

    2018-04-01

    We apply our symmetry-based power tensor technique to test the conformity of PLANCK polarization maps with statistical isotropy. On a wide range of angular scales (l = 40-150), our preliminary analysis detects many statistically anisotropic multipoles in the foreground-cleaned full-sky PLANCK polarization maps, viz. COMMANDER and NILC. We also study the effect of residual foregrounds that may still be present in the Galactic plane, using both the common UPB77 polarization mask and the individual component separation method specific polarization masks. However, some of the statistically anisotropic modes still persist, albeit significantly in the NILC map. We further probed the data for any coherent alignments across multipoles in several bins within the chosen multipole range.

  13. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    Full Text Available During environment testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimated indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating the single flight testing of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The results show that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is shown to be 100% at the given confidence level.

  14. Kepler Planet Detection Metrics: Statistical Bootstrap Test

    Science.gov (United States)

    Jenkins, Jon M.; Burke, Christopher J.

    2016-01-01

    This document describes the data produced by the Statistical Bootstrap Test over the final three Threshold Crossing Event (TCE) deliveries to NExScI: SOC 9.1 (Q1-Q16) (Tenenbaum et al. 2014), SOC 9.2 (Q1-Q17), aka DR24 (Seader et al. 2015), and SOC 9.3 (Q1-Q17), aka DR25 (Twicken et al. 2016). The last few years have seen significant improvements in the SOC science data processing pipeline, leading to higher quality light curves and more sensitive transit searches. The statistical bootstrap analysis results presented here and the numerical results archived at NASA's Exoplanet Science Institute (NExScI) bear witness to these software improvements. This document attempts to introduce and describe the main features and differences between these three data sets as a consequence of the software changes.

  15. Statistical time lags in ac discharges

    International Nuclear Information System (INIS)

    Sobota, A; Kanters, J H M; Van Veldhuizen, E M; Haverlag, M; Manders, F

    2011-01-01

    The paper presents statistical time lags measured for breakdown events in near-atmospheric pressure argon and xenon. Ac voltage at 100, 400 and 800 kHz was used to drive the breakdown processes, and the voltage amplitude slope was varied between 10 and 1280 V ms-1. The values obtained for the statistical time lags are roughly between 1 and 150 ms. It is shown that the statistical time lags in ac-driven discharges follow the same general trends as the discharges driven by voltage of monotonic slope. In addition, the validity of the Cobine-Easton expression is tested at an alternating voltage form.

  16. Statistical time lags in ac discharges

    Energy Technology Data Exchange (ETDEWEB)

    Sobota, A; Kanters, J H M; Van Veldhuizen, E M; Haverlag, M [Eindhoven University of Technology, Department of Applied Physics, Postbus 513, 5600MB Eindhoven (Netherlands); Manders, F, E-mail: a.sobota@tue.nl [Philips Lighting, LightLabs, Mathildelaan 1, 5600JM Eindhoven (Netherlands)

    2011-04-06

    The paper presents statistical time lags measured for breakdown events in near-atmospheric pressure argon and xenon. Ac voltage at 100, 400 and 800 kHz was used to drive the breakdown processes, and the voltage amplitude slope was varied between 10 and 1280 V ms-1. The values obtained for the statistical time lags are roughly between 1 and 150 ms. It is shown that the statistical time lags in ac-driven discharges follow the same general trends as the discharges driven by voltage of monotonic slope. In addition, the validity of the Cobine-Easton expression is tested at an alternating voltage form.

  17. The Relationship between Test Anxiety and Academic Performance of Students in Vital Statistics Course

    Directory of Open Access Journals (Sweden)

    Shirin Iranfar

    2013-12-01

    Full Text Available Introduction: Test anxiety is a common phenomenon among students and is one of the problems of the educational system. The present study was conducted to investigate test anxiety in the vital statistics course and its association with the academic performance of students at Kermanshah University of Medical Sciences. This descriptive-analytical study sampled students in the nursing and midwifery, paramedicine, and health faculties who had taken the vital statistics course; they were selected through the census method. The Sarason questionnaire was used to measure test anxiety. Data were analyzed by descriptive and inferential statistics. The findings indicated no significant correlation between test anxiety and the score in the vital statistics course.

  18. Common pitfalls in statistical analysis: The perils of multiple testing

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2016-01-01

    Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
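
    Two of the standard remedies are the Bonferroni and Holm corrections. The sketch below applies both to a set of invented p-values; note how the step-down Holm procedure rejects more hypotheses at the same familywise error level.

```python
# Minimal sketch of Bonferroni and Holm corrections for multiple testing.
# The raw p-values are invented for illustration.
import numpy as np

p_values = np.array([0.004, 0.012, 0.030, 0.041, 0.200])
alpha = 0.05
m = len(p_values)

# Bonferroni: compare each p-value with alpha / m.
bonferroni_reject = p_values < alpha / m

# Holm: step down through the ordered p-values, stopping at the first failure.
order = np.argsort(p_values)
holm_reject = np.zeros(m, dtype=bool)
for rank, idx in enumerate(order):
    if p_values[idx] < alpha / (m - rank):
        holm_reject[idx] = True
    else:
        break

print("Bonferroni rejects:", bonferroni_reject)
print("Holm rejects:      ", holm_reject)
```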

  19. FADTTS: functional analysis of diffusion tensor tract statistics.

    Science.gov (United States)

    Zhu, Hongtu; Kong, Linglong; Li, Runze; Styner, Martin; Gerig, Guido; Lin, Weili; Gilmore, John H

    2011-06-01

    The aim of this paper is to present a functional analysis of a diffusion tensor tract statistics (FADTTS) pipeline for delineating the association between multiple diffusion properties along major white matter fiber bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these white matter tract properties in various diffusion tensor imaging studies. The FADTTS integrates five statistical tools: (i) a multivariate varying coefficient model for allowing the varying coefficient functions in terms of arc length to characterize the varying associations between fiber bundle diffusion properties and a set of covariates, (ii) a weighted least squares estimation of the varying coefficient functions, (iii) a functional principal component analysis to delineate the structure of the variability in fiber bundle diffusion properties, (iv) a global test statistic to test hypotheses of interest, and (v) a simultaneous confidence band to quantify the uncertainty in the estimated coefficient functions. Simulated data are used to evaluate the finite sample performance of FADTTS. We apply FADTTS to investigate the development of white matter diffusivities along the splenium of the corpus callosum tract and the right internal capsule tract in a clinical study of neurodevelopment. FADTTS can be used to facilitate the understanding of normal brain development, the neural bases of neuropsychiatric disorders, and the joint effects of environmental and genetic factors on white matter fiber bundles. The advantages of FADTTS compared with the other existing approaches are that they are capable of modeling the structured inter-subject variability, testing the joint effects, and constructing their simultaneous confidence bands. However, FADTTS is not crucial for estimation and reduces to the functional analysis method for the single measure. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. Testing statistical self-similarity in the topology of river networks

    Science.gov (United States)

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2010-01-01

    Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN) that combines self-similarity and randomness has been introduced to replicate important topological features observed in real river networks. We investigate if the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distribution of interior and exterior generators are tested to be statistically different and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.

  1. Time-varying Entry Heating Profile Replication with a Rotating Arc Jet Test Article

    Science.gov (United States)

    Grinstead, Jay Henderson; Venkatapathy, Ethiraj; Noyes, Eric A.; Mach, Jeffrey J.; Empey, Daniel M.; White, Todd R.

    2014-01-01

    A new approach for arc jet testing of thermal protection materials at conditions approximating the time-varying conditions of atmospheric entry was developed and demonstrated. The approach relies upon the spatial variation of heat flux and pressure over a cylindrical test model. By slowly rotating a cylindrical arc jet test model during exposure to an arc jet stream, each point on the test model will experience constantly changing applied heat flux. The predicted temporal profile of heat flux at a point on a vehicle can be replicated by rotating the cylinder at a prescribed speed and direction. An electromechanical test model mechanism was designed, built, and operated during an arc jet test to demonstrate the technique.

  2. Operational statistical analysis of the results of computer-based testing of students

    Directory of Open Access Journals (Sweden)

    Виктор Иванович Нардюжев

    2018-12-01

    Full Text Available The article is devoted to the statistical analysis of the results of computer-based testing for evaluating students' educational achievements. The issue is relevant because computer-based testing in Russian universities has become an important method for evaluating students' educational achievements and the quality of the modern educational process. Using modern methods and programs for the statistical analysis of computer-based testing results and for assessing the quality of developed tests is a pressing problem for every university teacher. The article shows how the authors solve this problem using their own program, “StatInfo”. For several years the program has been successfully applied in a credit system of education at such technological stages as loading computer-based testing protocols into a database, forming queries, and generating reports, lists, and matrices of answers for the statistical analysis of test item quality. The methodology, experience, and some results of its use by university teachers are described in the article. Related topics of test development, models, algorithms, technologies, and software for large-scale computer-based testing have been discussed by the authors in their previous publications, which are presented in the reference list.

  3. Statistical tests for the Gaussian nature of primordial fluctuations through CBR experiments

    International Nuclear Information System (INIS)

    Luo, X.

    1994-01-01

    Information about the physical processes that generate the primordial fluctuations in the early Universe can be gained by testing the Gaussian nature of the fluctuations through cosmic microwave background radiation (CBR) temperature anisotropy experiments. One of the crucial aspects of density perturbations that are produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects left over from an early cosmic phase transition tend to be non-Gaussian. To carry out this test, sophisticated statistical tools are required. In this paper, we will discuss several such statistical tools, including multivariate skewness and kurtosis, Euler-Poincare characteristics, the three-point temperature correlation function, and Hotelling's T² statistic defined through bispectral estimates of a one-dimensional data set. The effect of noise present in the current data is discussed in detail and the COBE 53 GHz data set is analyzed. Our analysis shows that, on the large angular scale to which COBE is sensitive, the statistics are probably Gaussian. On the small angular scales, the importance of Hotelling's T² statistic is stressed, and the minimum sample size required to test Gaussianity is estimated. Although the current data set available from various experiments at half-degree scales is still too small, improvement of the data set by roughly a factor of 2 will be enough to test the Gaussianity statistically. On the arcminute scale, we analyze the recent RING data through bispectral analysis, and the result indicates possible deviation from Gaussianity. Effects of point sources are also discussed. It is pointed out that the Gaussianity problem can be resolved in the near future by ground-based or balloon-borne experiments.

  4. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
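
    The procedure can be sketched as follows for the Euclidean distance; the data layout (one histogram per cloud object, one row each) and the pooled resampling scheme are assumptions made for illustration and may differ in detail from the paper's implementation.

        import numpy as np

        def summary_hist(h):
            s = h.sum(axis=0).astype(float)          # sum the individual histograms
            return s / s.sum()                       # normalize the summary histogram

        def bootstrap_histogram_test(h1, h2, n_boot=2000, seed=0):
            """h1, h2: arrays of shape (n_objects, n_bins), one histogram per object."""
            rng = np.random.default_rng(seed)
            observed = np.linalg.norm(summary_hist(h1) - summary_hist(h2))
            pooled = np.vstack([h1, h2])
            n1, n2, n = len(h1), len(h2), len(h1) + len(h2)
            dists = np.empty(n_boot)
            for b in range(n_boot):
                # resample individual histograms with replacement under the null of no difference
                d1 = pooled[rng.integers(0, n, n1)]
                d2 = pooled[rng.integers(0, n, n2)]
                dists[b] = np.linalg.norm(summary_hist(d1) - summary_hist(d2))
            return observed, np.mean(dists >= observed)   # observed distance and significance level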

  5. Statistical Redundancy Testing for Improved Gene Selection in Cancer Classification Using Microarray Data

    Directory of Open Access Journals (Sweden)

    J. Sunil Rao

    2007-01-01

    Full Text Available In gene selection for cancer classification using microarray data, we define an eigenvalue-ratio statistic to measure a gene’s contribution to the joint discriminability when this gene is included in a set of genes. Based on this eigenvalue-ratio statistic, we define a novel hypothesis test for gene statistical redundancy and propose two gene selection methods. Simulation studies illustrate the agreement between statistical redundancy testing and gene selection methods. Real data examples show the proposed gene selection methods can select a compact gene subset which can not only be used to build high-quality cancer classifiers but also show biological relevance.

  6. Effect of non-normality on test statistics for one-way independent groups designs.

    Science.gov (United States)

    Cribbie, Robert A; Fiksenbaum, Lisa; Keselman, H J; Wilcox, Rand R

    2012-02-01

    The data obtained from one-way independent groups designs are typically non-normal in form and rarely equally variable across treatment populations (i.e., population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e., the analysis of variance F test) typically provides invalid results (e.g., too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non-normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied either to the usual least squares estimators of central tendency and variability, or the Welch test with robust estimators (i.e., trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non-normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non-normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non-normal. © 2011 The British Psychological Society.
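
    For two groups, the Welch-type test on trimmed means (Yuen's procedure) can be sketched as below. The 20% trimming level and the restriction to two groups (rather than the one-way, multi-group designs studied in the article) are simplifying assumptions.

        import numpy as np
        from scipy import stats

        def yuen_welch(x, y, trim=0.2):
            """Two-sample Welch-type test on trimmed means (Yuen's procedure)."""
            def pieces(a):
                a = np.sort(np.asarray(a, dtype=float))
                n = len(a)
                g = int(np.floor(trim * n))
                h = n - 2 * g                        # effective sample size after trimming
                tmean = a[g:n - g].mean()            # trimmed mean
                w = a.copy()                         # Winsorized sample
                w[:g], w[n - g:] = a[g], a[n - g - 1]
                d = (n - 1) * w.var(ddof=1) / (h * (h - 1))
                return tmean, d, h
            m1, d1, h1 = pieces(x)
            m2, d2, h2 = pieces(y)
            t = (m1 - m2) / np.sqrt(d1 + d2)
            df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
            return t, df, 2 * stats.t.sf(abs(t), df)

        rng = np.random.default_rng(0)
        print(yuen_welch(rng.exponential(1.0, 30), rng.exponential(2.0, 40)))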

  7. A general statistical test for correlations in a finite-length time series.

    Science.gov (United States)

    Hanson, Jeffery A; Yang, Haw

    2008-06-07

    The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
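
    The two estimators mentioned can be written down directly. The NumPy sketch below is illustrative (and zero-pads the FFT version so the two agree numerically); it is not taken from the paper and does not reproduce its variance expressions.

        import numpy as np

        def autocorr_direct(x):
            """Lag-by-lag (moving-average) autocorrelation estimate."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            n = len(x)
            acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(n)])
            return acov / acov[0]

        def autocorr_fft(x):
            """Autocorrelation via the Fourier transform (zero-padded)."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            n = len(x)
            f = np.fft.rfft(x, 2 * n)                 # padding avoids circular wrap-around
            acov = np.fft.irfft(f * np.conj(f))[:n] / n
            return acov / acov[0]

        x = np.random.default_rng(1).normal(size=1000)
        print(np.allclose(autocorr_direct(x), autocorr_fft(x)))   # True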

  8. Statistical testing of association between menstruation and migraine.

    Science.gov (United States)

    Barra, Mathias; Dahl, Fredrik A; Vetvik, Kjersti G

    2015-02-01

    To repair and refine a previously proposed method for statistical analysis of association between migraine and menstruation. Menstrually related migraine (MRM) affects about 20% of female migraineurs in the general population. The exact pathophysiological link from menstruation to migraine is hypothesized to be through fluctuations in female reproductive hormones, but the exact mechanisms remain unknown. Therefore, the main diagnostic criterion today is concurrency of migraine attacks with menstruation. Methods aiming to exclude spurious associations are wanted, so that further research into these mechanisms can be performed on a population with a true association. The statistical method is based on a simple two-parameter null model of MRM (which allows for simulation modeling), and Fisher's exact test (with mid-p correction) applied to standard 2 × 2 contingency tables derived from the patients' headache diaries. Our method is a corrected version of a previously published flawed framework. To our best knowledge, no other published methods for establishing a menstruation-migraine association by statistical means exist today. The probabilistic methodology shows good performance when subjected to receiver operator characteristic curve analysis. Quick reference cutoff values for the clinical setting were tabulated for assessing association given a patient's headache history. In this paper, we correct a proposed method for establishing association between menstruation and migraine by statistical methods. We conclude that the proposed standard of 3-cycle observations prior to setting an MRM diagnosis should be extended with at least one perimenstrual window to obtain sufficient information for statistical processing. © 2014 American Headache Society.
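
    A one-sided Fisher exact test with the mid-p correction on a 2 × 2 table can be sketched as below. The table layout (attack days versus other days, crossed with perimenstrual versus other observation days) and the one-sided direction are illustrative assumptions.

        import numpy as np
        from scipy.stats import hypergeom

        def fisher_mid_p(table):
            """One-sided (greater) Fisher exact test with mid-p correction.
            table = [[a, b], [c, d]], with a = attack days falling in perimenstrual windows."""
            (a, b), (c, d) = table
            M, n, N = a + b + c + d, a + b, a + c    # total, row-1 margin, column-1 margin
            support = np.arange(max(0, n + N - M), min(n, N) + 1)
            pmf = hypergeom.pmf(support, M, n, N)
            p_geq = pmf[support >= a].sum()          # P(X >= a) under independence
            return p_geq - 0.5 * hypergeom.pmf(a, M, n, N)

        print(fisher_mid_p([[6, 3], [10, 60]]))      # hypothetical diary counts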

  9. Near-exact distributions for the block equicorrelation and equivariance likelihood ratio test statistic

    Science.gov (United States)

    Coelho, Carlos A.; Marques, Filipe J.

    2013-09-01

    In this paper the authors combine the equicorrelation and equivariance test introduced by Wilks [13] with the likelihood ratio test (l.r.t.) for independence of groups of variables to obtain the l.r.t. of block equicorrelation and equivariance. This test or its single block version may find applications in many areas as in psychology, education, medicine, genetics and they are important "in many tests of multivariate analysis, e.g. in MANOVA, Profile Analysis, Growth Curve analysis, etc" [12, 9]. By decomposing the overall hypothesis into the hypotheses of independence of groups of variables and the hypothesis of equicorrelation and equivariance we are able to obtain the expressions for the overall l.r.t. statistic and its moments. From these we obtain a suitable factorization of the characteristic function (c.f.) of the logarithm of the l.r.t. statistic, which enables us to develop highly manageable and precise near-exact distributions for the test statistic.

  10. Comment on the asymptotics of a distribution-free goodness of fit test statistic.

    Science.gov (United States)

    Browne, Michael W; Shapiro, Alexander

    2015-03-01

    In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them.

  11. Probability and statistics with integrated software routines

    CERN Document Server

    Deep, Ronald

    2005-01-01

    Probability & Statistics with Integrated Software Routines is a calculus-based treatment of probability concurrent with and integrated with statistics through interactive, tailored software applications designed to enhance the phenomena of probability and statistics. The software programs make the book unique. The book comes with a CD containing the interactive software leading to the Statistical Genie. The student can issue commands repeatedly while making parameter changes to observe the effects. Computer programming is an excellent skill for problem solvers, involving design, prototyping, data gathering, testing, redesign, validating, etc., all wrapped up in the scientific method. See also: CD to accompany Probability and Stats with Integrated Software Routines (0123694698). Incorporates more than 1,000 engaging problems with answers; includes more than 300 solved examples; uses varied problem-solving methods.

  12. Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods

    Science.gov (United States)

    Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan

    2016-01-01

    The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.

  13. Time-Varying Dynamic Properties of Offshore Wind Turbines Evaluated by Modal Testing

    DEFF Research Database (Denmark)

    Damgaard, Mads; Andersen, J. K. F.; Ibsen, Lars Bo

    2014-01-01

    resonance of the wind turbine structure. In this paper, free vibration tests and a numerical Winkler type approach are used to evaluate the dynamic properties of a total of 30 offshore wind turbines located in the North Sea. Analyses indicate time-varying eigenfrequencies and damping ratios of the lowest...... structural eigenmode. Isolating the oscillation oil damper performance, moveable seabed conditions may lead to the observed time dependency....

  14. A NEW TEST OF THE STATISTICAL NATURE OF THE BRIGHTEST CLUSTER GALAXIES

    International Nuclear Information System (INIS)

    Lin, Yen-Ting; Ostriker, Jeremiah P.; Miller, Christopher J.

    2010-01-01

    A novel statistic is proposed to examine the hypothesis that all cluster galaxies are drawn from the same luminosity distribution (LD). In such a 'statistical model' of galaxy LD, the brightest cluster galaxies (BCGs) are simply the statistical extreme of the galaxy population. Using a large sample of nearby clusters, we show that BCGs in high luminosity clusters (e.g., L_tot ≳ 4 × 10^11 h_70^-2 L_sun) are unlikely (probability ≤ 3 × 10^-4) to be drawn from the LD defined by all red cluster galaxies more luminous than M_r = -20. On the other hand, BCGs in less luminous clusters are consistent with being the statistical extreme. Applying our method to the second brightest galaxies, we show that they are consistent with being the statistical extreme, which implies that the BCGs are also distinct from non-BCG luminous, red, cluster galaxies. We point out some issues with the interpretation of the classical tests proposed by Tremaine and Richstone (TR) that are designed to examine the statistical nature of BCGs, investigate the robustness of both our statistical test and those of TR against difficulties in photometry of galaxies of large angular size, and discuss the implication of our findings on surveys that use the luminous red galaxies to measure the baryon acoustic oscillation features in the galaxy power spectrum.

  15. Statistical Requirements For Pass-Fail Testing Of Contraband Detection Systems

    International Nuclear Information System (INIS)

    Gilliam, David M.

    2011-01-01

    Contraband detection systems for homeland security applications are typically tested for probability of detection (PD) and probability of false alarm (PFA) using pass-fail testing protocols. Test protocols usually require specified values for PD and PFA to be demonstrated at a specified level of statistical confidence CL. Based on a recent more theoretical treatment of this subject [1], this summary reviews the definition of CL and provides formulas and spreadsheet functions for constructing tables of general test requirements and for determining the minimum number of tests required. The formulas and tables in this article may be generally applied to many other applications of pass-fail testing, in addition to testing of contraband detection systems.
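
    The core calculation, the smallest number of pass-fail trials that demonstrates a required detection probability at a stated confidence level, can be sketched with a generic binomial search; this is an illustrative version allowing f failures, not a transcription of the report's formulas or spreadsheet functions.

        from scipy.stats import binom

        def min_trials(pd_min, conf, max_failures=0, n_max=10000):
            """Smallest n such that passing (<= max_failures failures in n trials)
            demonstrates PD >= pd_min at confidence level conf."""
            for n in range(max_failures + 1, n_max + 1):
                # chance of passing the acceptance test if the true PD were only pd_min
                if binom.cdf(max_failures, n, 1.0 - pd_min) <= 1.0 - conf:
                    return n
            raise ValueError("no n <= n_max meets the requirement")

        print(min_trials(0.90, 0.95))                   # 29 zero-failure trials
        print(min_trials(0.90, 0.95, max_failures=1))   # allowing one failure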

  16. Caregiver Statistics: Demographics

    Science.gov (United States)

    ... needs and services are wide-ranging and complex, statistics may vary from study to study. Sources for ...

  17. P-Value, a true test of statistical significance? a cautionary note ...

    African Journals Online (AJOL)

    While it was not the intention of the founders of significance testing and hypothesis testing to have the two ideas intertwined as if they were complementary, the inconvenient marriage of the two practices into one coherent, convenient, incontrovertible and misinterpreted practice has dotted our standard statistics textbooks and ...

  18. Statistical approach for collaborative tests, reference material certification procedures

    International Nuclear Information System (INIS)

    Fangmeyer, H.; Haemers, L.; Larisse, J.

    1977-01-01

    The first part introduces the different aspects in organizing and executing intercomparison tests of chemical or physical quantities. This is followed by a description of a statistical procedure to handle the data collected in a circular analysis. Finally, an example demonstrates how the tool can be applied and which conclusions can be drawn from the results obtained.

  19. A test statistic in the complex Wishart distribution and its application to change detection in polarimetric SAR data

    DEFF Research Database (Denmark)

    Conradsen, Knut; Nielsen, Allan Aasbjerg; Schou, Jesper

    2003-01-01

    . Based on this distribution, a test statistic for equality of two such matrices and an associated asymptotic probability for obtaining a smaller value of the test statistic are derived and applied successfully to change detection in polarimetric SAR data. In a case study, EMISAR L-band data from April 17...... to HH, VV, or HV data alone, the derived test statistic reduces to the well-known gamma likelihood-ratio test statistic. The derived test statistic and the associated significance value can be applied as a line or edge detector in fully polarimetric SAR data also....

  20. Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.

    Science.gov (United States)

    Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas

    2002-01-01

    Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…

  1. A study of statistical tests for near-real-time materials accountancy using field test data of Tokai reprocessing plant

    International Nuclear Information System (INIS)

    Ihara, Hitoshi; Nishimura, Hideo; Ikawa, Koji; Miura, Nobuyuki; Iwanaga, Masayuki; Kusano, Toshitsugu.

    1988-03-01

    A Near-Real-Time Materials Accountancy (NRTA) system had been developed as an advanced safeguards measure for the PNC Tokai Reprocessing Plant; a minicomputer system for NRTA data processing was designed and constructed. A full-scale field test was carried out as a JASPAS (Japan Support Program for Agency Safeguards) project with the Agency's participation, and the NRTA data processing system was used. Using these field test data, an investigation of the detection power of statistical tests under real circumstances was carried out for five statistical tests, i.e., a significance test of MUF, the CUMUF test, the average loss test, the MUF residual test and Page's test on MUF residuals. The result shows that the CUMUF test, the average loss test, the MUF residual test and Page's test on MUF residuals are useful for detecting a significant loss or diversion. An unmeasured inventory estimation model for the PNC reprocessing plant was developed in this study. Using this model, the field test data from the C-1 to 85-2 campaigns were re-analyzed. (author)
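
    Of the tests listed, Page's test is a one-sided CUSUM that signals when the cumulative excess of the MUF residuals over a reference value crosses a decision threshold. The sketch below is the generic textbook form; the reference value k, the threshold h and the example sequence are assumptions, not values from the field test.

        import numpy as np

        def pages_test(residuals, k=0.5, h=4.0):
            """Generic one-sided Page (CUSUM) test on a sequence of standardized residuals.
            Returns the CUSUM path and the index of the first alarm (or None)."""
            s, path, alarm = 0.0, [], None
            for i, x in enumerate(residuals):
                s = max(0.0, s + x - k)          # accumulate only positive drift
                path.append(s)
                if alarm is None and s > h:
                    alarm = i                    # first balance period exceeding the threshold
            return np.array(path), alarm

        res = [0.2, -0.4, 0.1, 0.3, -0.2, 1.4, 1.1, 1.6, 1.2, 1.5]   # hypothetical shift after period 5
        path, alarm = pages_test(res)
        print(path.round(2), "first alarm at period:", alarm)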

  2. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  3. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Full Text Available Abstract Background In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.

  4. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    Science.gov (United States)

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarray (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.

  5. Statistical tests for power-law cross-correlated processes

    Science.gov (United States)

    Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene

    2011-12-01

    For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlations analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically calculated the Cauchy inequality -1≤ρDCCA(T,n)≤1. Here we derive -1≤ρDCCA(T,n)≤1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine—and for nonoverlapping windows we derive—that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.

  6. Implementing statistical analysis in multi-channel acoustic impact-echo testing of concrete bridge decks: Determining thresholds for delamination detection

    Science.gov (United States)

    Hendricks, Lorin; Spencer Guthrie, W.; Mazzeo, Brian

    2018-04-01

    An automated acoustic impact-echo testing device with seven channels has been developed for faster surveying of bridge decks. Due to potential variations in bridge deck overlay thickness, varying conditions between testing passes, and occasional imprecise equipment calibrations, a method that can account for variations in deck properties and testing conditions was necessary to correctly interpret the acoustic data. A new methodology involving statistical analyses was therefore developed. After acoustic impact-echo data are collected and analyzed, the results are normalized by the median for each channel, a Gaussian distribution is fit to the histogram of the data, and the Kullback-Leibler divergence test or Otsu's method is then used to determine the optimum threshold for differentiating between intact and delaminated concrete. The new methodology was successfully applied to individual channels of previously unusable acoustic impact-echo data obtained from a three-lane interstate bridge deck surfaced with a polymer overlay, and the resulting delamination map compared very favorably with the results of a manual deck sounding survey.
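
    Of the two thresholding options mentioned, Otsu's method is the simpler to sketch: it chooses the cut that maximizes the between-class variance of the median-normalized feature values. The NumPy-only version below is illustrative; the bin count and the synthetic mixture are assumptions.

        import numpy as np

        def otsu_threshold(values, bins=256):
            """Threshold maximizing the between-class variance of a 1-D sample."""
            hist, edges = np.histogram(values, bins=bins)
            centers = 0.5 * (edges[:-1] + edges[1:])
            w = hist / hist.sum()
            w0 = np.cumsum(w)                        # class-0 probability for every candidate cut
            w1 = 1.0 - w0
            mu = np.cumsum(w * centers)              # cumulative (unnormalized) class-0 mean
            with np.errstate(divide="ignore", invalid="ignore"):
                between = (mu[-1] * w0 - mu) ** 2 / (w0 * w1)
            between[~np.isfinite(between)] = 0.0
            return centers[np.argmax(between)]

        rng = np.random.default_rng(0)
        sample = np.concatenate([rng.normal(1.0, 0.1, 800),    # intact concrete
                                 rng.normal(1.8, 0.2, 200)])   # delaminated patches
        print(otsu_threshold(sample))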

  7. Statistical correlation of structural mode shapes from test measurements and NASTRAN analytical values

    Science.gov (United States)

    Purves, L.; Strang, R. F.; Dube, M. P.; Alea, P.; Ferragut, N.; Hershfeld, D.

    1983-01-01

    The software and procedures of a system of programs used to generate a report of the statistical correlation between NASTRAN modal analysis results and physical tests results from modal surveys are described. Topics discussed include: a mathematical description of statistical correlation, a user's guide for generating a statistical correlation report, a programmer's guide describing the organization and functions of individual programs leading to a statistical correlation report, and a set of examples including complete listings of programs, and input and output data.

  8. Using Relative Statistics and Approximate Disease Prevalence to Compare Screening Tests.

    Science.gov (United States)

    Samuelson, Frank; Abbey, Craig

    2016-11-01

    Schatzkin et al. and other authors demonstrated that the ratios of some conditional statistics such as the true positive fraction are equal to the ratios of unconditional statistics, such as disease detection rates, and therefore we can calculate these ratios between two screening tests on the same population even if negative test patients are not followed with a reference procedure and the true and false negative rates are unknown. We demonstrate that this same property applies to an expected utility metric. We also demonstrate that while simple estimates of relative specificities and relative areas under ROC curves (AUC) do depend on the unknown negative rates, we can write these ratios in terms of disease prevalence, and the dependence of these ratios on a posited prevalence is often weak particularly if that prevalence is small or the performance of the two screening tests is similar. Therefore we can estimate relative specificity or AUC with little loss of accuracy, if we use an approximate value of disease prevalence.

  9. Statistical test data selection for reliability evaluation of process computer software

    International Nuclear Information System (INIS)

    Volkmann, K.P.; Hoermann, H.; Ehrenberger, W.

    1976-01-01

    The paper presents a concept for converting knowledge about the characteristics of process states into practicable procedures for the statistical selection of test cases in testing process computer software. Process states are defined as vectors whose components consist of values of input variables lying in discrete positions or within given limits. Two approaches for test data selection, based on knowledge about cases of demand, are outlined referring to a purely probabilistic method and to the mathematics of stratified sampling. (orig.) [de]

  10. A Note on Three Statistical Tests in the Logistic Regression DIF Procedure

    Science.gov (United States)

    Paek, Insu

    2012-01-01

    Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…

  11. Comparison of Statistical Methods for Detector Testing Programs

    Energy Technology Data Exchange (ETDEWEB)

    Rennie, John Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Abhold, Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-14

    A typical goal for any detector testing program is to ascertain not only the performance of the detector systems under test, but also the confidence that systems accepted using that testing program’s acceptance criteria will exceed a minimum acceptable performance (which is usually expressed as the minimum acceptable success probability, p). A similar problem often arises in statistics, where we would like to ascertain the fraction, p, of a population of items that possess a property that may take one of two possible values. Typically, the problem is approached by drawing a fixed sample of size n, with the number of items out of n that possess the desired property, x, being termed successes. The sample mean gives an estimate of the population mean p ≈ x/n, although usually it is desirable to accompany such an estimate with a statement concerning the range within which p may fall and the confidence associated with that range. Procedures for establishing such ranges and confidence limits are described in detail by Clopper, Brown, and Agresti for two-sided symmetric confidence intervals.
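
    The exact (Clopper-Pearson) two-sided interval behind such tables can be sketched with beta-distribution quantiles; the example numbers are hypothetical and this is not a transcription of the report's own spreadsheet functions.

        from scipy.stats import beta

        def clopper_pearson(x, n, conf=0.95):
            """Exact two-sided confidence interval for a binomial proportion p,
            given x successes in n trials."""
            alpha = 1.0 - conf
            lower = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
            upper = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
            return lower, upper

        print(clopper_pearson(96, 100))   # roughly (0.90, 0.99) at 95% confidence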

  12. Jsub(Ic)-testing of A-533 B - statistical evaluation of some different testing techniques

    International Nuclear Information System (INIS)

    Nilsson, F.

    1978-01-01

    The purpose of the present study was to compare statistically some different methods for the evaluation of fracture toughness of the nuclear reactor material A-533 B. Since linear elastic fracture mechanics is not applicable to this material at the interesting temperature (275 °C), the so-called Jsub(Ic) testing method was employed. Two main difficulties are inherent in this type of testing. The first one is to determine the quantity J as a function of the deflection of the three-point bend specimens used. Three different techniques were used, the first two based on the experimentally observed input of energy to the specimen and the third employing finite element calculations. The second main problem is to determine the point when crack growth begins. For this, two methods were used, a direct electrical method and the indirect R-curve method. A total of forty specimens were tested at two laboratories. No statistically significant different results were obtained from the respective laboratories. The three methods of calculating J yielded somewhat different results, although the discrepancy was small. Also the two methods of determination of the growth initiation point yielded consistent results. The R-curve method, however, exhibited a larger uncertainty as measured by the standard deviation. The resulting Jsub(Ic) value also agreed well with earlier presented results. The relative standard deviation was of the order of 25%, which is quite small for this type of experiment. (author)

  13. Evaluating Two Models of Collaborative Tests in an Online Introductory Statistics Course

    Science.gov (United States)

    Björnsdóttir, Auðbjörg; Garfield, Joan; Everson, Michelle

    2015-01-01

    This study explored the use of two different types of collaborative tests in an online introductory statistics course. A study was designed and carried out to investigate three research questions: (1) What is the difference in students' learning between using consensus and non-consensus collaborative tests in the online environment?, (2) What is…

  14. Observations in the statistical analysis of NBG-18 nuclear graphite strength tests

    International Nuclear Information System (INIS)

    Hindley, Michael P.; Mitchell, Mark N.; Blaine, Deborah C.; Groenwold, Albert A.

    2012-01-01

    Highlights: ► Statistical analysis of NBG-18 nuclear graphite strength tests. ► A Weibull distribution and a normal distribution are tested for all data. ► A bimodal distribution in the CS data is confirmed. ► The CS data set has the lowest variance. ► A combined data set is formed and has a Weibull distribution. - Abstract: The purpose of this paper is to report on the selection of a statistical distribution chosen to represent the experimental material strength of NBG-18 nuclear graphite. Three large sets of samples were tested during the material characterisation of the Pebble Bed Modular Reactor and Core Structure Ceramics materials. These sets of samples are tensile strength, flexural strength and compressive strength (CS) measurements. A relevant statistical fit is determined and the goodness of fit is also evaluated for each data set. The data sets are also normalised for ease of comparison, and combined into one representative data set. The validity of this approach is demonstrated. A second failure mode distribution is found on the CS test data. Identifying this failure mode supports similar observations made in the past. The success of fitting the Weibull distribution through the normalised data sets allows us to improve the basis for the estimates of the variability. This could also imply that the variability in the graphite strength for the different strength measures is based on the same flaw distribution and thus a property of the material.
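
    A minimal sketch of fitting and checking a Weibull distribution against a strength data set is shown below; the two-parameter form (location fixed at zero), the synthetic data and the Kolmogorov-Smirnov check are illustrative assumptions, not the article's procedure.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        strength = stats.weibull_min.rvs(c=8.0, scale=1.0, size=200, random_state=rng)  # hypothetical normalized strengths

        # Two-parameter Weibull fit (location fixed at zero)
        shape, loc, scale = stats.weibull_min.fit(strength, floc=0)

        # Goodness of fit; the p-value is approximate because the parameters were estimated from the data
        ks_stat, p_value = stats.kstest(strength, "weibull_min", args=(shape, loc, scale))
        print(shape, scale, ks_stat, p_value)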

  15. Statistical testing and power analysis for brain-wide association study.

    Science.gov (United States)

    Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng

    2018-04-05

    The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, the multiple correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on the Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis testings using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR), it can reduce false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need of non-parametric permutation to correct for multiple comparison, thus, it can efficiently tackle large datasets with high resolution fMRI images. The utility of our method is shown in a case-control study. Our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. The Statistic Test on Influence of Surface Treatment to Fatigue Lifetime with Limited Data

    OpenAIRE

    Suhartono, Agus

    2009-01-01

    Justifying the influence of two or more parameters on fatigue strength is sometimes problematic due to the scattered nature of fatigue data. Statistical tests can facilitate the evaluation of whether the changes in material characteristics resulting from specific parameters of interest are significant. The statistical tests were applied to fatigue data of AISI 1045 steel specimens. The specimens consisted of as-received specimens and shot-peened specimens with 15 and 16 Almen intensity as ...

  17. Conducting tests for statistically significant differences using forest inventory data

    Science.gov (United States)

    James A. Westfall; Scott A. Pugh; John W. Coulston

    2013-01-01

    Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...

  18. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    Science.gov (United States)

    Kim, Yuneung; Lim, Johan; Park, DoHwan

    2015-11-01

    In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and an application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Statistical Methods for the detection of answer copying on achievement tests

    NARCIS (Netherlands)

    Sotaridona, Leonardo

    2003-01-01

    This thesis contains a collection of studies where statistical methods for the detection of answer copying on achievement tests in multiple-choice format are proposed and investigated. Although all methods are suited to detect answer copying, each method is designed to address specific

  20. Common pitfalls in statistical analysis: Understanding the properties of diagnostic tests - Part 1.

    Science.gov (United States)

    Ranganathan, Priya; Aggarwal, Rakesh

    2018-01-01

    In this article in our series on common pitfalls in statistical analysis, we look at some of the attributes of diagnostic tests (i.e., tests which are used to determine whether an individual does or does not have disease). The next article in this series will focus on further issues related to diagnostic tests.

  1. Testing University Rankings Statistically: Why this Perhaps is not such a Good Idea after All. Some Reflections on Statistical Power, Effect Size, Random Sampling and Imaginary Populations

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2012-01-01

    In this paper we discuss and question the use of statistical significance tests in relation to university rankings as recently suggested. We outline the assumptions behind and interpretations of statistical significance tests and relate this to examples from the recent SCImago Institutions Rankin...

  2. Monte Carlo Method to Study Properties of Acceleration Factor Estimation Based on the Test Results with Varying Load

    Directory of Open Access Journals (Sweden)

    N. D. Tiannikova

    2014-01-01

    Full Text Available G.D. Kartashov has developed a technique for determining the functions that scale rapid-testing results to the normal mode. Its feature is preliminary testing of products from one lot, including tests in alternating modes. The standard procedure of preliminary tests is as follows: n groups of products with m elements each start being tested in normal mode and, after a failure of one of the products in a group, the remaining products are tested in accelerated mode. In addition to tests in the alternating mode, tests conducted entirely in normal mode are carried out as well. The acceleration factor of rapid tests for this type of product, identical for any lot, is determined from these test results for products of the same lot. A drawback of this technique is that tests must be conducted in the alternating mode until all products fail, which is not always possible. To avoid this shortcoming, the Renyi criterion is offered: it allows the scaling functions to be determined from right-censored data, making it possible to stop testing before all products have failed. In this work, statistical modeling of the acceleration factor estimate obtained by minimizing the Renyi statistic is carried out with the Monte Carlo method. The results show that this estimate is acceptable for rather large n, but for small sample sizes a systematic bias of the acceleration factor estimate, which decreases as n grows, is observed for both distributions (exponential and Weibull). Therefore, the paper also presents calculated correction factors for the exponential and Weibull distributions.

  3. A Comparison of Several Statistical Tests of Reciprocity of Self-Disclosure.

    Science.gov (United States)

    Dindia, Kathryn

    1988-01-01

    Reports the results of a study that used several statistical tests of reciprocity of self-disclosure. Finds little evidence for reciprocity of self-disclosure, and concludes that either reciprocity is an illusion, or that different or more sophisticated methods are needed to detect it. (MS)

  4. Testing the statistical isotropy of large scale structure with multipole vectors

    International Nuclear Information System (INIS)

    Zunckel, Caroline; Huterer, Dragan; Starkman, Glenn D.

    2011-01-01

    A fundamental assumption in cosmology is that of statistical isotropy - that the Universe, on average, looks the same in every direction in the sky. Statistical isotropy has recently been tested stringently using cosmic microwave background data, leading to intriguing results on large angular scales. Here we apply some of the same techniques used in the cosmic microwave background to the distribution of galaxies on the sky. Using the multipole vector approach, where each multipole in the harmonic decomposition of galaxy density field is described by unit vectors and an amplitude, we lay out the basic formalism of how to reconstruct the multipole vectors and their statistics out of galaxy survey catalogs. We apply the algorithm to synthetic galaxy maps, and study the sensitivity of the multipole vector reconstruction accuracy to the density, depth, sky coverage, and pixelization of galaxy catalog maps.

  5. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  6. TRANSIT TIMING OBSERVATIONS FROM KEPLER. VI. POTENTIALLY INTERESTING CANDIDATE SYSTEMS FROM FOURIER-BASED STATISTICAL TESTS

    International Nuclear Information System (INIS)

    Steffen, Jason H.; Ford, Eric B.; Rowe, Jason F.; Borucki, William J.; Bryson, Steve; Caldwell, Douglas A.; Jenkins, Jon M.; Koch, David G.; Sanderfer, Dwight T.; Seader, Shawn; Twicken, Joseph D.; Fabrycky, Daniel C.; Holman, Matthew J.; Welsh, William F.; Batalha, Natalie M.; Ciardi, David R.; Kjeldsen, Hans; Prša, Andrej

    2012-01-01

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.

  7. Transit timing observations from Kepler. VI. Potentially interesting candidate systems from fourier-based statistical tests

    DEFF Research Database (Denmark)

    Steffen, J.H.; Ford, E.B.; Rowe, J.F.

    2012-01-01

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify...... several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies....

  8. Price limits and stock market efficiency: Evidence from rolling bicorrelation test statistic

    International Nuclear Information System (INIS)

    Lim, Kian-Ping; Brooks, Robert D.

    2009-01-01

    Using the rolling bicorrelation test statistic, the present paper compares the efficiency of stock markets from China, Korea and Taiwan in selected sub-periods with different price limits regimes. The statistical results do not support the claims that restrictive price limits and price limits per se are jeopardizing market efficiency. However, the evidence does not imply that price limits have no effect on the price discovery process but rather suggesting that market efficiency is not merely determined by price limits.

  9. A Statistical Perspective on Highly Accelerated Testing

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Highly accelerated life testing has been heavily promoted at Sandia (and elsewhere) as a means to rapidly identify product weaknesses caused by flaws in the product's design or manufacturing process. During product development, a small number of units are forced to fail at high stress. The failed units are then examined to determine the root causes of failure. The identification of the root causes of product failures exposed by highly accelerated life testing can instigate changes to the product's design and/or manufacturing process that result in a product with increased reliability. It is widely viewed that this qualitative use of highly accelerated life testing (often associated with the acronym HALT) can be useful. However, highly accelerated life testing has also been proposed as a quantitative means for "demonstrating" the reliability of a product where unreliability is associated with loss of margin via an identified and dominating failure mechanism. It is assumed that the dominant failure mechanism can be accelerated by changing the level of a stress factor that is assumed to be related to the dominant failure mode. In extreme cases, a minimal number of units (often from a pre-production lot) are subjected to a single highly accelerated stress relative to normal use. If no (or, sufficiently few) units fail at this high stress level, some might claim that a certain level of reliability has been demonstrated (relative to normal use conditions). Underlying this claim are assumptions regarding the level of knowledge associated with the relationship between the stress level and the probability of failure. The primary purpose of this document is to discuss (from a statistical perspective) the efficacy of using accelerated life testing protocols (and, in particular, "highly accelerated" protocols) to make quantitative inferences concerning the performance of a product (e.g., reliability) when in fact there is lack-of-knowledge and uncertainty concerning

  10. A robust statistical method for association-based eQTL analysis.

    Directory of Open Access Journals (Sweden)

    Ning Jiang

    Full Text Available It has been well established that the theoretical kernel of the recently surging genome-wide association study (GWAS) is statistical inference of linkage disequilibrium (LD) between a tested genetic marker and a putative locus affecting a disease trait. However, LD analysis is vulnerable to several confounding factors, of which population stratification is the most prominent. Whilst many methods have been proposed to correct for the influence, either through predicting the structure parameters or correcting inflation in the test statistic due to the stratification, these may not be feasible or may impose further statistical problems in practical implementation. We propose here a novel statistical method to control spurious LD in GWAS from population structure by incorporating a control marker into testing for significance of genetic association of a polymorphic marker with phenotypic variation of a complex trait. The method avoids the need for structure prediction, which may be infeasible or inadequate in practice, and accounts properly for a varying effect of population stratification on different regions of the genome under study. Utility and statistical properties of the new method were tested through an intensive computer simulation study and an association-based genome-wide mapping of expression quantitative trait loci in genetically divergent human populations. The analyses show that the new method confers improved statistical power for detecting genuine genetic association in subpopulations and effective control of spurious associations stemming from population structure when compared with two other popularly implemented methods in the GWAS literature.

  11. A testing procedure for wind turbine generators based on the power grid statistical model

    DEFF Research Database (Denmark)

    Farajzadehbibalan, Saber; Ramezani, Mohammad Hossein; Nielsen, Peter

    2017-01-01

    In this study, a comprehensive test procedure is developed to test wind turbine generators with a hardware-in-loop setup. The procedure employs the statistical model of the power grid considering the restrictions of the test facility and system dynamics. Given the model in the latent space...

  12. Person Fit Based on Statistical Process Control in an Adaptive Testing Environment. Research Report 98-13.

    Science.gov (United States)

    van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.

    Person-fit research in the context of paper-and-pencil tests is reviewed, and some specific problems regarding person fit in the context of computerized adaptive testing (CAT) are discussed. Some new methods are proposed to investigate person fit in a CAT environment. These statistics are based on Statistical Process Control (SPC) theory. A…

  13. Outcomes Definitions and Statistical Tests in Oncology Studies: A Systematic Review of the Reporting Consistency.

    Science.gov (United States)

    Rivoirard, Romain; Duplay, Vianney; Oriol, Mathieu; Tinquaut, Fabien; Chauvin, Franck; Magne, Nicolas; Bourmaud, Aurelie

    2016-01-01

    Quality of reporting for Randomized Clinical Trials (RCTs) in oncology was analyzed in several systematic reviews, but, in this setting, there is paucity of data for the outcomes definitions and consistency of reporting for statistical tests in RCTs and Observational Studies (OBS). The objective of this review was to describe those two reporting aspects, for OBS and RCTs in oncology. From a list of 19 medical journals, three were retained for analysis, after a random selection: British Medical Journal (BMJ), Annals of Oncology (AoO) and British Journal of Cancer (BJC). All original articles published between March 2009 and March 2014 were screened. Only studies whose main outcome was accompanied by a corresponding statistical test were included in the analysis. Studies based on censored data were excluded. Primary outcome was to assess quality of reporting for description of primary outcome measure in RCTs and of variables of interest in OBS. A logistic regression was performed to identify covariates of studies potentially associated with concordance of tests between Methods and Results parts. 826 studies were included in the review, and 698 were OBS. Variables were described in Methods section for all OBS studies and primary endpoint was clearly detailed in Methods section for 109 RCTs (85.2%). 295 OBS (42.2%) and 43 RCTs (33.6%) had perfect agreement for reported statistical test between Methods and Results parts. In multivariable analysis, variable "number of included patients in study" was associated with test consistency: aOR (adjusted Odds Ratio) for third group compared to first group was equal to: aOR Grp3 = 0.52 [0.31-0.89] (P value = 0.009). Variables in OBS and primary endpoint in RCTs are reported and described with a high frequency. However, statistical tests consistency between methods and Results sections of OBS is not always noted. Therefore, we encourage authors and peer reviewers to verify consistency of statistical tests in oncology studies.

  14. Explorations in statistics: the log transformation.

    Science.gov (United States)

    Curran-Everett, Douglas

    2018-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
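
    The stabilizing effect described, a standard deviation that grows in rough proportion to the mean being equalized by the log transform, can be seen in a few lines; the lognormal groups below are an illustrative assumption, not data from the article.

        import numpy as np

        rng = np.random.default_rng(0)
        # three groups whose standard deviation grows roughly in proportion to the mean
        groups = [rng.lognormal(mean=m, sigma=0.4, size=200) for m in (1.0, 2.0, 3.0)]

        for g in groups:
            print("raw: mean %7.2f sd %6.2f | log: mean %5.2f sd %5.2f"
                  % (g.mean(), g.std(ddof=1), np.log(g).mean(), np.log(g).std(ddof=1)))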

  15. A statistical test for outlier identification in data envelopment analysis

    Directory of Open Access Journals (Sweden)

    Morteza Khodabin

    2010-09-01

    In the use of peer group data to assess individual, typical or best practice performance, the effective detection of outliers is critical for achieving useful results. In these “deterministic” frontier models, statistical theory is now mostly available. This paper deals with the statistical pared sample method and its capability of detecting outliers in data envelopment analysis. In the presented method, each observation is deleted from the sample once and the resulting linear program is solved, leading to a distribution of efficiency estimates. Based on the achieved distribution, a pared test is designed to identify the potential outlier(s). We illustrate the method through a real data set. The method could be used in a first step, as an exploratory data analysis, before using any frontier estimation.
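
    The following Python sketch illustrates only the leave-one-out idea; for brevity the DEA linear program is replaced by a simple single-input/single-output ratio efficiency, which is an assumption made for illustration and not the method of the paper. The data, the 2.5 z-score cutoff and the helper name efficiency are likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
inputs = rng.uniform(5.0, 15.0, size=20)
outputs = 2.0 * inputs + rng.normal(0.0, 2.0, size=20)
outputs[3] *= 1.8                      # plant an artificial outlier unit

def efficiency(inp, out):
    """Stand-in efficiency score: output/input ratio scaled by the best ratio.
    In a real DEA application this would be the solution of a linear program."""
    ratio = out / inp
    return ratio / ratio.max()

# Pared-sample idea: delete each observation once, recompute the scores,
# and record how much the remaining units' efficiencies shift.
n = len(inputs)
full = efficiency(inputs, outputs)
shifts = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    pared = efficiency(inputs[keep], outputs[keep])
    shifts[i] = np.mean(pared - full[keep])   # frontier relaxation caused by dropping unit i

# A unit whose removal shifts the others far more than typical is a candidate outlier.
z = (shifts - shifts.mean()) / shifts.std(ddof=1)
print("candidate outliers:", np.where(z > 2.5)[0])
```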

  16. Association testing for next-generation sequencing data using score statistics

    DEFF Research Database (Denmark)

    Skotte, Line; Korneliussen, Thorfinn Sand; Albrechtsen, Anders

    2012-01-01

    computationally feasible due to the use of score statistics. As part of the joint likelihood, we model the distribution of the phenotypes using a generalized linear model framework, which works for both quantitative and discrete phenotypes. Thus, the method presented here is applicable to case-control studies...... of genotype calls into account have been proposed; most require numerical optimization which for large-scale data is not always computationally feasible. We show that using a score statistic for the joint likelihood of observed phenotypes and observed sequencing data provides an attractive approach...... to association testing for next-generation sequencing data. The joint model accounts for the genotype classification uncertainty via the posterior probabilities of the genotypes given the observed sequencing data, which gives the approach higher power than methods based on called genotypes. This strategy remains...

  17. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…
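
    A hedged sketch of such a sample-size sensitivity check in Python: hold an observed effect fixed (here an assumed correlation of r = 0.20, an illustrative value) and recompute the p-value "as if" the study had used other sample sizes.

```python
from math import sqrt
from scipy import stats

# Hold an observed effect fixed (an assumed correlation of r = 0.20) and ask
# "what if" the same effect had come from studies of different sizes.
r = 0.20
for n in (20, 50, 100, 200, 500, 1000):
    t = r * sqrt(n - 2) / sqrt(1 - r ** 2)    # t statistic for a correlation coefficient
    p = 2 * stats.t.sf(abs(t), df=n - 2)      # two-sided p value
    print(f"n = {n:4d}  t = {t:5.2f}  p = {p:.4f}")
```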

  18. Quantum Statistical Testing of a Quantum Random Number Generator

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL

    2014-01-01

    The unobservable elements in a quantum technology, e.g., the quantum state, complicate system verification against promised behavior. Using model-based system engineering, we present methods for verifying the operation of a prototypical quantum random number generator. We begin with the algorithmic design of the QRNG followed by the synthesis of its physical design requirements. We next discuss how quantum statistical testing can be used to verify device behavior as well as detect device bias. We conclude by highlighting how system design and verification methods must influence efforts to certify future quantum technologies.

  19. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    NARCIS (Netherlands)

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is obvious that Fisher’s statistic is more sensitive to smaller p-values than to larger p-values, and a small p-value may overrule the other p-values

  20. Mathematical statistics

    CERN Document Server

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  1. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    Directory of Open Access Journals (Sweden)

    Mu Zhou

    2014-01-01

    This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future.

  2. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    Science.gov (United States)

    Tian, Zengshan; Xu, Kunjie; Yu, Xiang

    2014-01-01

    This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future. PMID:24683349

  3. IEEE Std 101-1987: IEEE guide for the statistical analysis of thermal life test data

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This revision of IEEE Std 101-1972 describes statistical analyses for data from thermally accelerated aging tests. It explains the basis and use of statistical calculations for an engineer or scientist. Accelerated test procedures usually call for a number of specimens to be aged at each of several temperatures appreciably above normal operating temperatures. High temperatures are chosen to produce specimen failures (according to specified failure criteria) in typically one week to one year. The test objective is to determine the dependence of median life on temperature from the data, and to estimate, by extrapolation, the median life to be expected at service temperature. This guide presents methods for analyzing such data and for comparing test data on different materials

  4. Application of statistical methods to the testing of nuclear counting assemblies

    International Nuclear Information System (INIS)

    Gilbert, J.P.; Friedling, G.

    1965-01-01

    This report describes the application of the hypothesis test theory to the control of the 'statistical purity' and of the stability of the counting batteries used for measurements on activation detectors in research reactors. The principles involved and the experimental results obtained at Cadarache on batteries operating with the reactors PEGGY and AZUR are given. (authors) [fr

  5. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    OpenAIRE

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is obvious that Fisher’s statistic is more sensitive to smaller p-values than to larger p-values, and a small p-value may overrule the other p-values and decide the test result. This is, in some cases, viewed as a flaw. In order to overcome this flaw and improve the power of the test, the joint tail probability of a set of p-values is proposed as a ...
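
    A small Python illustration of the sensitivity being described, under the assumption that the p-values are independent: Fisher's statistic is computed for a set containing one tiny p-value among large ones and for a set of uniformly moderate p-values. The ordered-p-value statistic proposed in the paper is not reproduced here; the p-value sets are invented for illustration.

```python
import numpy as np
from scipy import stats

def fisher_combined(pvals):
    """Fisher's combined probability test: -2*sum(log p) ~ chi2 with 2k df."""
    pvals = np.asarray(pvals, dtype=float)
    x = -2.0 * np.sum(np.log(pvals))
    return x, stats.chi2.sf(x, df=2 * len(pvals))

# One tiny p-value among otherwise unremarkable ones still drives the combination
# to significance, illustrating the sensitivity discussed above.
print(fisher_combined([0.0001, 0.70, 0.80, 0.90, 0.95]))
# A set of uniformly moderate p-values, in contrast, does not reach significance.
print(fisher_combined([0.20, 0.20, 0.20, 0.20, 0.20]))
```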

  6. IMPLEMENTATION AND VALIDATION OF STATISTICAL TESTS IN RESEARCH'S SOFTWARE HELPING DATA COLLECTION AND PROTOCOLS ANALYSIS IN SURGERY.

    Science.gov (United States)

    Kuretzki, Carlos Henrique; Campos, Antônio Carlos Ligocki; Malafaia, Osvaldo; Soares, Sandramara Scandelari Kusano de Paula; Tenório, Sérgio Bernardo; Timi, Jorge Rufino Ribas

    2016-03-01

    Information technology is often applied in healthcare. With regard to scientific research, SINPE(c) - Integrated Electronic Protocols - was created as a tool to support researchers by offering clinical data standardization. Until then, SINPE(c) lacked statistical tests obtained by automatic analysis. The aim was to add to SINPE(c) features for the automatic execution of the main statistical methods used in medicine. The study was divided into four topics: checking users' interest in the implementation of the tests; surveying the frequency of their use in healthcare; carrying out the implementation; and validating the results with researchers and their protocols. It was applied to a group of users of this software working on their master's and doctoral theses in one postgraduate program in surgery. To assess the reliability of the statistics, the data obtained automatically by SINPE(c) were compared with those obtained manually by a statistician experienced in this type of study. There was interest in the use of automatic statistical tests, with good acceptance. The chi-square, Mann-Whitney, Fisher exact and Student's t tests were considered the tests most frequently used by participants in medical studies. These methods were implemented and thereafter approved as expected. The automatic statistical analysis incorporated into SINPE(c) was shown to be reliable and equal to the manual analysis, validating its use as a tool for medical research.
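
    The four tests named above are all available in SciPy; a minimal sketch follows. The data (two continuous groups and a 2x2 table) are invented for illustration and have nothing to do with the SINPE(c) protocols.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(50.0, 10.0, size=30)   # illustrative continuous outcomes
group_b = rng.normal(55.0, 10.0, size=30)
table = np.array([[12, 8],                  # illustrative 2x2 contingency table
                  [5, 15]])

# The four tests named in the abstract, as exposed by SciPy.
chi2_stat, p_chi2, dof, _ = stats.chi2_contingency(table)
u_stat, p_mw = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
odds, p_fisher = stats.fisher_exact(table)
t_stat, p_t = stats.ttest_ind(group_a, group_b)

print(f"chi-square p = {p_chi2:.3f}, Mann-Whitney p = {p_mw:.3f}, "
      f"Fisher exact p = {p_fisher:.3f}, Student's t p = {p_t:.3f}")
```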

  7. Robustness Property of Robust-BD Wald-Type Test for Varying-Dimensional General Linear Models

    Directory of Open Access Journals (Sweden)

    Xiao Guo

    2018-03-01

    An important issue for robust inference is to examine the stability of the asymptotic level and power of the test statistic in the presence of contaminated data. Most existing results are derived in finite-dimensional settings with some particular choices of loss functions. This paper re-examines this issue by allowing for a diverging number of parameters combined with a broader array of robust error measures, called “robust-BD”, for the class of “general linear models”. Under regularity conditions, we derive the influence function of the robust-BD parameter estimator and demonstrate that the robust-BD Wald-type test enjoys the robustness of validity and efficiency asymptotically. Specifically, the asymptotic level of the test is stable under a small amount of contamination of the null hypothesis, whereas the asymptotic power is large enough under a contaminated distribution in a neighborhood of the contiguous alternatives, thus lending support to the utility of the proposed robust-BD Wald-type test.

  8. An omnibus likelihood test statistic and its factorization for change detection in time series of polarimetric SAR data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Skriver, Henning

    2016-01-01

    Based on an omnibus likelihood ratio test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution with an associated p-value and a factorization of this test statistic, change analysis in a short sequence of multilook, polarimetric SAR data...... in the covariance matrix representation is carried out. The omnibus test statistic and its factorization detect if and when change(s) occur. The technique is demonstrated on airborne EMISAR L-band data but may be applied to Sentinel-1, Cosmo-SkyMed, TerraSAR-X, ALOS and RadarSat-2 or other dual- and quad...

  9. Change detection in a time series of polarimetric SAR data by an omnibus test statistic and its factorization

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Skriver, Henning

    2016-01-01

    Based on an omnibus likelihood ratio test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution with an associated p-value and a factorization of this test statistic, change analysis in a short sequence of multilook, polarimetric SAR data...... in the covariance matrix representation is carried out. The omnibus test statistic and its factorization detect if and when change(s) occur. The technique is demonstrated on airborne EMISAR L-band data but may be applied to Sentinel-1, Cosmo-SkyMed, TerraSAR-X, ALOS and RadarSat-2 or other dual- and quad...

  10. Testing for Statistical Discrimination based on Gender

    DEFF Research Database (Denmark)

    Lesner, Rune Vammen

    . It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure but increases in job transitions and that the fraction of women in high-ranking positions within a firm does......This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market...... not affect the level of statistical discrimination by gender....

  11. Statistics 101 for Radiologists.

    Science.gov (United States)

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.

  12. Computer processing of 14C data; statistical tests and corrections of data

    International Nuclear Information System (INIS)

    Obelic, B.; Planinic, J.

    1977-01-01

    The described computer program calculates the age of samples and performs statistical tests and corrections of data. Data are obtained from the proportional counter that measures anticoincident pulses per 20 minute intervals. After every 9th interval the counter measures total number of counts per interval. Input data are punched on cards. The output list contains input data schedule and the following results: mean CPM value, correction of CPM for normal pressure and temperature (NTP), sample age calculation based on 14C half life of 5570 and 5730 years, age correction for NTP, dendrochronological corrections and the relative radiocarbon concentration. All results are given with one standard deviation. Input data test (Chauvenet's criterion), gas purity test, standard deviation test and test of the data processor are also included in the program. (author)
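
    A hedged sketch of the core age calculation in Python, using the decay relation t = (T1/2 / ln 2) * ln(A_modern / A_sample) for both half-lives mentioned above. The count rates are invented illustrative values, and the program's NTP, dendrochronological and statistical corrections are not reproduced.

```python
import math

def radiocarbon_age(cpm_sample, cpm_modern, half_life):
    """Age from the activity ratio: t = (T_half / ln 2) * ln(A_modern / A_sample)."""
    return (half_life / math.log(2)) * math.log(cpm_modern / cpm_sample)

cpm_sample = 8.1     # illustrative NTP-corrected net count rates (counts per minute)
cpm_modern = 13.6
for half_life in (5570, 5730):
    age = radiocarbon_age(cpm_sample, cpm_modern, half_life)
    print(f"T1/2 = {half_life} yr -> age = {age:,.0f} yr")
```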

  13. A Note on Comparing the Power of Test Statistics at Low Significance Levels.

    Science.gov (United States)

    Morris, Nathan; Elston, Robert

    2011-01-01

    It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
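
    The point can be checked numerically. The Python sketch below computes the power of (noncentral) chi-square tests for several degrees of freedom across alpha levels; the noncentrality value and the df grid are illustrative assumptions and do not reproduce the scenarios of the note.

```python
from scipy import stats

def chi2_power(df, ncp, alpha):
    """Power of a chi-square test: P(noncentral chi2 exceeds the central critical value)."""
    crit = stats.chi2.ppf(1 - alpha, df)
    return stats.ncx2.sf(crit, df, ncp)

ncp = 35.0   # illustrative noncentrality (signal strength)
for alpha in (0.05, 1e-4, 5e-8):
    row = ", ".join(f"df={df}: {chi2_power(df, ncp, alpha):.3f}" for df in (1, 2, 5, 10))
    print(f"alpha = {alpha:g} -> {row}")
```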

  14. Cosmological Non-Gaussian Signature Detection: Comparing Performance of Different Statistical Tests

    Directory of Open Access Journals (Sweden)

    O. Forni

    2005-09-01

    Currently, it appears that the best method for non-Gaussianity detection in the cosmic microwave background (CMB) consists in calculating the kurtosis of the wavelet coefficients. We know that wavelet-kurtosis outperforms other methods such as the bispectrum, the genus, ridgelet-kurtosis, and curvelet-kurtosis on an empirical basis, but relatively few studies have compared other transform-based statistics, such as extreme values, or more recent tools such as higher criticism (HC), or proposed “best possible” choices for such statistics. In this paper, we consider two models for transform-domain coefficients: (a) a power-law model, which seems suited to the wavelet coefficients of simulated cosmic strings, and (b) a sparse mixture model, which seems suitable for the curvelet coefficients of filamentary structure. For model (a), if power-law behavior holds with finite 8th moment, excess kurtosis is an asymptotically optimal detector, but if the 8th moment is not finite, a test based on extreme values is asymptotically optimal. For model (b), if the transform coefficients are very sparse, a recent test, higher criticism, is an optimal detector, but if they are dense, kurtosis is an optimal detector. Empirical wavelet coefficients of simulated cosmic strings have power-law character with infinite 8th moment, while curvelet coefficients of the simulated cosmic strings are not very sparse. In all cases, excess kurtosis seems to be an effective test in moderate-resolution imagery.

  15. Testing Genetic Pleiotropy with GWAS Summary Statistics for Marginal and Conditional Analyses.

    Science.gov (United States)

    Deng, Yangqing; Pan, Wei

    2017-12-01

    There is growing interest in testing genetic pleiotropy, which is when a single genetic variant influences multiple traits. Several methods have been proposed; however, these methods have some limitations. First, all the proposed methods are based on the use of individual-level genotype and phenotype data; in contrast, for logistical, and other, reasons, summary statistics of univariate SNP-trait associations are typically only available based on meta- or mega-analyzed large genome-wide association study (GWAS) data. Second, existing tests are based on marginal pleiotropy, which cannot distinguish between direct and indirect associations of a single genetic variant with multiple traits due to correlations among the traits. Hence, it is useful to consider conditional analysis, in which a subset of traits is adjusted for another subset of traits. For example, in spite of substantial lowering of low-density lipoprotein cholesterol (LDL) with statin therapy, some patients still maintain high residual cardiovascular risk, and, for these patients, it might be helpful to reduce their triglyceride (TG) level. For this purpose, in order to identify new therapeutic targets, it would be useful to identify genetic variants with pleiotropic effects on LDL and TG after adjusting the latter for LDL; otherwise, a pleiotropic effect of a genetic variant detected by a marginal model could simply be due to its association with LDL only, given the well-known correlation between the two types of lipids. Here, we develop a new pleiotropy testing procedure based only on GWAS summary statistics that can be applied for both marginal analysis and conditional analysis. Although the main technical development is based on published union-intersection testing methods, care is needed in specifying conditional models to avoid invalid statistical estimation and inference. In addition to the previously used likelihood ratio test, we also propose using generalized estimating equations under the

  16. Evaluation of the Wishart test statistics for polarimetric SAR data

    DEFF Research Database (Denmark)

    Skriver, Henning; Nielsen, Allan Aasbjerg; Conradsen, Knut

    2003-01-01

    A test statistic for equality of two covariance matrices following the complex Wishart distribution has previously been used in new algorithms for change detection, edge detection and segmentation in polarimetric SAR images. Previously, the results for change detection and edge detection have been...... quantitatively evaluated. This paper deals with the evaluation of segmentation. A segmentation performance measure originally developed for single-channel SAR images has been extended to polarimetric SAR images, and used to evaluate segmentation for a merge-using-moment algorithm for polarimetric SAR data....

  17. Analysis of statistical misconception in terms of statistical reasoning

    Science.gov (United States)

    Maryati, I.; Priatna, N.

    2018-05-01

    Reasoning skills are needed by everyone in the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. Developing this skill can be done through various levels of education. However, the skill is often weak because many people, students included, assume that statistics is just counting and applying formulas. Students still have a negative attitude toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample was 32 students of the mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 (standard deviation 10.6), whereas the mean value of the statistical reasoning skill test was 51.8 (standard deviation 8.5). With a minimum value of 65 required to meet the course competence standard, the students' mean values fall below the standard. The misconception results indicate which sub-topics need attention. Based on the assessment, students' misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining the concept to be used in solving a problem. In statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  18. Partial discharge testing: a progress report. Statistical evaluation of PD data

    International Nuclear Information System (INIS)

    Warren, V.; Allan, J.

    2005-01-01

    It has long been known that comparing the partial discharge results obtained from a single machine is a valuable tool enabling companies to observe the gradual deterioration of a machine stator winding and thus plan appropriate maintenance for the machine. In 1998, at the annual Iris Rotating Machines Conference (IRMC), a paper was presented that compared thousands of PD test results to establish the criteria for comparing results from different machines and the expected PD levels. At subsequent annual Iris conferences, using similar analytical procedures, papers were presented that supported the previous criteria and: in 1999, established sensor location as an additional criterion; in 2000, evaluated the effect of insulation type and age on PD activity; in 2001, evaluated the effect of manufacturer on PD activity; in 2002, evaluated the effect of operating pressure for hydrogen-cooled machines; in 2003, evaluated the effect of insulation type and setting Trac alarms; in 2004, re-evaluated the effect of manufacturer on PD activity. Before going further in database analysis procedures, it would be prudent to statistically evaluate the anecdotal evidence observed to date. The goal was to determine which variables of machine conditions greatly influenced the PD results and which didn't. Therefore, this year's paper looks at the impact of operating voltage, machine type and winding type on the test results for air-cooled machines. Because of resource constraints, only data collected through 2003 was used; however, as before, it is still standardized for frequency bandwidth and pruned to include only full-load-hot (FLH) results collected for one sensor on operating machines. All questionable data, or data from off-line testing or unusual machine conditions was excluded, leaving 6824 results. Calibration of on-line PD test results is impractical; therefore, only results obtained using the same method of data collection and noise separation techniques are compared. For

  19. To test photon statistics by atomic beam deflection

    International Nuclear Information System (INIS)

    Wang Yuzhu; Chen Yudan; Huang Weigang; Liu Liang

    1985-02-01

    There exists a simple relation between the photon statistics in resonance fluorescence and the statistics of the momentum transferred to an atom by a plane travelling wave [Cook, R.J., Opt. Commun., 35, 347(1980)]. Using an atomic beam deflection by light pressure, we have observed sub-Poissonian statistics in resonance fluorescence of two-level atoms. (author)

  20. Development of modelling algorithm of technological systems by statistical tests

    Science.gov (United States)

    Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.

    2018-03-01

    The paper tackles the problem of economic assessment of design efficiency for various technological systems at the stage of their operation. The modelling algorithm of a technological system, based on statistical tests and taking account of the reliability index, allows estimating the level of technical excellence of the machinery and assessing the efficiency of design reliability against its performance. Economic feasibility of its application shall be determined on the basis of the service quality of a technological system, with further forecasting of the volumes and range of spare parts supply.

  1. Test-retest studies in quantitative sensory testing

    DEFF Research Database (Denmark)

    Werner, M U; Petersen, M A; Bischoff, J M

    2013-01-01

    Quantitative sensory testing (QST) investigates the graded psychophysical response to controlled thermal, mechanical, electrical or chemical stimuli, allowing quantification of clinically relevant perception and pain thresholds. The methods are ubiquitously used in experimental and clinical pain...... research, and therefore, the need for uniform assessment procedures has been emphasised. However, varying consistency and transparency in the statistical methodology seem to occur in the QST literature. Sixteen publications, evaluating aspects of QST variability, from 2010 to 2012, were critically reviewed...

  2. A statistical method for testing epidemiological results, as applied to the Hanford worker population

    International Nuclear Information System (INIS)

    Brodsky, A.

    1979-01-01

    Some recent reports of Mancuso, Stewart and Kneale claim findings of radiation-produced cancer in the Hanford worker population. These claims are based on statistical computations that use small differences in accumulated exposures between groups dying of cancer and groups dying of other causes; actual mortality and longevity were not reported. This paper presents a statistical method for evaluation of actual mortality and longevity longitudinally over time, as applied in a primary analysis of the mortality experience of the Hanford worker population. Although available, this method was not utilized in the Mancuso-Stewart-Kneale paper. The author's preliminary longitudinal analysis shows that the gross mortality experience of persons employed at Hanford during the 1943-70 interval did not differ significantly from that of certain controls, when both employees and controls were selected from families with two or more offspring and comparisons were matched by age, sex, race and year of entry into employment. This result is consistent with findings reported by Sanders (Health Phys. vol.35, 521-538, 1978). The method utilizes an approximate chi-square (1 D.F.) statistic for testing population subgroup comparisons, as well as the cumulation of chi-squares (1 D.F.) for testing the overall result of a particular type of comparison. The method is available for computer testing of the Hanford mortality data, and could also be adapted to morbidity or other population studies. (author)

  3. Proficiency Testing for Determination of Water Content in Toluene of Chemical Reagents by iteration robust statistic technique

    Science.gov (United States)

    Wang, Hao; Wang, Qunwei; He, Ming

    2018-05-01

    In order to investigate and improve the level of detection technology for water content in liquid chemical reagents in domestic laboratories, proficiency testing provider PT0031 (CNAS) organized a proficiency testing program for water content in toluene; 48 laboratories from 18 provinces/cities/municipalities took part in the PT. This paper introduces the implementation process of the proficiency testing for determination of water content in toluene, including sample preparation and homogeneity and stability testing, presents and analyzes the statistical results obtained with the iterative robust statistic technique, summarizes and analyzes the results for the different test standards widely used in the laboratories, and puts forward technical suggestions for improving the quality of water content testing. Satisfactory results were obtained by 43 laboratories, amounting to 89.6% of the total participating laboratories.
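
    A sketch of an iterative robust mean in the spirit of ISO 13528 Algorithm A, the kind of iteration robust statistic technique referred to above; the winsorizing constants (1.483, 1.5, 1.134) follow that algorithm, while the water-content values are invented for illustration and are not the PT0031 data.

```python
import numpy as np

def robust_mean_algorithm_a(x, tol=1e-6, max_iter=100):
    """Iterative robust mean/SD in the spirit of ISO 13528 Algorithm A:
    observations are repeatedly winsorized at x* +/- 1.5*s*."""
    x = np.asarray(x, dtype=float)
    x_star = np.median(x)
    s_star = 1.483 * np.median(np.abs(x - x_star))
    for _ in range(max_iter):
        delta = 1.5 * s_star
        w = np.clip(x, x_star - delta, x_star + delta)
        new_x, new_s = w.mean(), 1.134 * w.std(ddof=1)
        if abs(new_x - x_star) < tol and abs(new_s - s_star) < tol:
            break
        x_star, s_star = new_x, new_s
    return x_star, s_star

# Illustrative water-content results in % (not the real PT0031 data).
results = np.array([0.031, 0.029, 0.030, 0.033, 0.028, 0.045, 0.030, 0.032, 0.029, 0.031])
x_star, s_star = robust_mean_algorithm_a(results)
print(f"robust mean = {x_star:.4f}, robust SD = {s_star:.4f}")
print("z-scores:", np.round((results - x_star) / s_star, 1))
```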

  4. Do Time-Varying Covariances, Volatility Comovement and Spillover Matter?

    OpenAIRE

    Lakshmi Balasubramanyan

    2005-01-01

    Financial markets and their respective assets are so intertwined; analyzing any single market in isolation ignores important information. We investigate whether time varying volatility comovement and spillover impact the true variance-covariance matrix under a time-varying correlation set up. Statistically significant volatility spillover and comovement between US, UK and Japan is found. To demonstrate the importance of modelling volatility comovement and spillover, we look at a simple portfo...

  5. Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.

    Science.gov (United States)

    Deegear, James

    This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…

  6. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.

  7. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated

    Directory of Open Access Journals (Sweden)

    Elżbieta Sandurska

    2016-12-01

    Introduction: Application of statistical software typically does not require extensive statistical knowledge, allowing even complex analyses to be performed easily. Consequently, test selection criteria and important assumptions may be easily overlooked or given insufficient consideration. In such cases, the results are likely to lead to wrong conclusions. Aim: To discuss issues related to assumption violations in the case of Student's t-test and one-way ANOVA, two parametric tests frequently used in the field of sports science, and to recommend solutions. Description of the state of knowledge: Student's t-test and ANOVA are parametric tests, and therefore some of the assumptions that need to be satisfied include normal distribution of the data and homogeneity of variances in groups. If the assumptions are violated, the original design of the test is impaired, and the test may then be compromised, giving spurious results. A simple method to normalize the data and to stabilize the variance is to use transformations. If such an approach fails, a good alternative to consider is a nonparametric test, such as the Mann-Whitney, Kruskal-Wallis or Wilcoxon signed-rank tests. Summary: Thorough verification of the assumptions of parametric tests allows for correct selection of statistical tools, which is the basis of well-grounded statistical analysis. With a few simple rules, testing patterns in data characteristic of sports science studies comes down to a straightforward procedure.
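
    A Python sketch of the decision flow described above: check normality (Shapiro-Wilk) and homogeneity of variances (Levene), try a log transformation, and otherwise fall back to a nonparametric or Welch test. The simulated groups, the 0.05 cutoffs and the specific branching order are illustrative assumptions, not the authors' prescription.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group_a = rng.lognormal(3.0, 0.4, size=25)   # illustrative, skewed measurements
group_b = rng.lognormal(3.2, 0.4, size=25)

# Check the two key parametric assumptions mentioned above.
normal_ok = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))
variance_ok = stats.levene(group_a, group_b).pvalue > 0.05

if normal_ok and variance_ok:
    stat, p = stats.ttest_ind(group_a, group_b)
    used = "Student's t-test"
elif not normal_ok:
    # Try a normalizing / variance-stabilizing transformation first.
    log_a, log_b = np.log(group_a), np.log(group_b)
    if all(stats.shapiro(g).pvalue > 0.05 for g in (log_a, log_b)):
        stat, p = stats.ttest_ind(log_a, log_b)
        used = "t-test on log-transformed data"
    else:
        stat, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
        used = "Mann-Whitney U test"
else:
    stat, p = stats.ttest_ind(group_a, group_b, equal_var=False)
    used = "Welch's t-test"

print(f"{used}: statistic = {stat:.3f}, p = {p:.4f}")
```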

  8. Testing a statistical method of global mean paleotemperature estimations in a long climate simulation

    Energy Technology Data Exchange (ETDEWEB)

    Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2001-07-01

    Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods, the global mean temperature of the last 600 years has been recently estimated. In this work this method of reconstruction is tested using data from a very long simulation with a climate model. This testing allows the errors of the estimates to be quantified as a function of the number of proxy data and of the time scale at which the estimates are probably reliable. (orig.)

  9. Pivotal statistics for testing subsets of structural parameters in the IV Regression Model

    NARCIS (Netherlands)

    Kleibergen, F.R.

    2000-01-01

    We construct a novel statistic to test hypotheses on subsets of the structural parameters in an Instrumental Variables (IV) regression model. We derive the chi-squared limiting distribution of the statistic and show that it has a degrees of freedom parameter that is equal to the number of structural

  10. Statistical Diversions

    Science.gov (United States)

    Petocz, Peter; Sowey, Eric

    2008-01-01

    In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…

  11. Heteroscedastic Tests Statistics for One-Way Analysis of Variance: The Trimmed Means and Hall's Transformation Conjunction

    Science.gov (United States)

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2005-01-01

    To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…

  12. Hydroxyisohexyl 3-cyclohexene carboxaldehyde (lyral) in patch test preparations under varied storage conditions.

    Science.gov (United States)

    Hamann, Dathan; Hamann, Carsten R; Zimerson, Erik; Bruze, Magnus

    2013-01-01

    The common practice of preparing patch tests in advance has recently been called into question by researchers. It has been established that fragrance compounds are volatile and their testing efficacy may be affected by storage conditions and preparation. Allergens in fragrance mix I rapidly decrease in concentration after preapplication to test chambers. This study aimed to investigate the volatility of hydroxyisohexyl 3-cyclohexene carboxaldehyde (HICC) in petrolatum when stored in test chambers and to explore the correlation between vapor pressure and allergen loss in petrolatum during preparation and storage. Standardized HICC in petrolatum was prepared and stored in IQ Chambers and Finn Chambers with covers at 5°C, 25°C, and 35°C, and concentration was analyzed at intervals for up to 9 days using gel permeation chromatography. Changes in HICC concentrations were not statistically significant at 8 hours at 5°C, 25°C, and 35°C. After 9 days, HICC concentrations were found to fall approximately 30% when stored at 35°C, 10% at 25°C, and less than 5% at 5°C. There was no significant difference between IQ and Finn chambers. Hydroxyisohexyl 3-cyclohexene carboxaldehyde concentrations are more stable in petrolatum than many other studied fragrance allergens, but HICC is still at risk for decreasing concentration when exposed to ambient air or heat for prolonged periods.

  13. Reliability assessment for safety critical systems by statistical random testing

    International Nuclear Information System (INIS)

    Mills, S.E.

    1995-11-01

    In this report we present an overview of reliability assessment for software and focus on some basic aspects of assessing reliability for safety critical systems by statistical random testing. We also discuss possible deviations from some essential assumptions on which the general methodology is based. These deviations appear quite likely in practical applications. We present and discuss possible remedies and adjustments and then undertake applying this methodology to a portion of the SDS1 software. We also indicate shortcomings of the methodology and possible avenues to follow to address these problems. (author). 128 refs., 11 tabs., 31 figs

  14. Reliability assessment for safety critical systems by statistical random testing

    Energy Technology Data Exchange (ETDEWEB)

    Mills, S E [Carleton Univ., Ottawa, ON (Canada). Statistical Consulting Centre

    1995-11-01

    In this report we present an overview of reliability assessment for software and focus on some basic aspects of assessing reliability for safety critical systems by statistical random testing. We also discuss possible deviations from some essential assumptions on which the general methodology is based. These deviations appear quite likely in practical applications. We present and discuss possible remedies and adjustments and then undertake applying this methodology to a portion of the SDS1 software. We also indicate shortcomings of the methodology and possible avenues to follow to address these problems. (author). 128 refs., 11 tabs., 31 figs.

  15. Testing for Statistical Discrimination based on Gender

    OpenAIRE

    Lesner, Rune Vammen

    2016-01-01

    This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure...

  16. Statistical methods in epidemiology. VII. An overview of the chi2 test for 2 x 2 contingency table analysis.

    Science.gov (United States)

    Rigby, A S

    2001-11-10

    The odds ratio is an appropriate method of analysis for data in 2 x 2 contingency tables. However, other methods of analysis exist. One such method is based on the chi2 test of goodness-of-fit. Key players in the development of statistical theory include Pearson, Fisher and Yates. Data are presented in the form of 2 x 2 contingency tables and a method of analysis based on the chi2 test is introduced. There are many variations of the basic test statistic, one of which is the chi2 test with Yates' continuity correction. The usefulness (or not) of Yates' continuity correction is discussed. Problems of interpretation when the method is applied to k x m tables are highlighted. Some properties of the chi2 test are illustrated by taking examples from the author's teaching experiences. Journal editors should be encouraged to give both observed and expected cell frequencies so that better information comes out of the chi2 test statistic.
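
    A brief Python illustration of the two approaches mentioned above on a single 2 x 2 table: the chi2 statistic with and without Yates' continuity correction, alongside the odds ratio. The table entries are invented for illustration.

```python
import numpy as np
from scipy import stats

# Illustrative 2x2 contingency table (rows: exposure, columns: outcome).
table = np.array([[20, 10],
                  [12, 28]])

chi2_plain, p_plain, _, _ = stats.chi2_contingency(table, correction=False)
chi2_yates, p_yates, _, _ = stats.chi2_contingency(table, correction=True)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])

print(f"chi2 without correction = {chi2_plain:.2f}, p = {p_plain:.4f}")
print(f"chi2 with Yates' correction = {chi2_yates:.2f}, p = {p_yates:.4f}")
print(f"odds ratio = {odds_ratio:.2f}")
```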

  17. Contributions to early HIV diagnosis among patients linked to care vary by testing venue

    Directory of Open Access Journals (Sweden)

    Trott Alexander T

    2008-06-01

    Objective: Early HIV diagnosis reduces transmission and improves health outcomes; screening in non-traditional settings is increasingly advocated. We compared test venues by the number of new diagnoses successfully linked to the regional HIV treatment center and disease stage at diagnosis. Methods: We conducted a retrospective cohort study using structured chart review of newly diagnosed HIV patients successfully referred to the region's only HIV treatment center from 1998 to 2003. Demographics, testing indication, risk profile, and initial CD4 count were recorded. Results: There were 277 newly diagnosed patients meeting study criteria. Mean age was 33 years, 77% were male, and 46% were African-American. Median CD4 at diagnosis was 324. Diagnoses were earlier via partner testing at the HIV treatment center (N = 8, median CD4 648, p = 0.008) and with universal screening by the blood bank, military, and insurance companies (N = 13, median CD4 483, p = 0.05) than at other venues. Targeted testing by health care and public health entities based on patient request, risk profile, or patient condition led to later diagnosis. Conclusion: Test venues varied by the number of new diagnoses made and the stage of illness at diagnosis. To improve the rate of early diagnosis, scarce resources should be allocated to maximize the number of new diagnoses at screening venues where diagnoses are more likely to be early, or testing strategies should be altered at test venues where diagnoses are traditionally made late. Efforts to improve early diagnosis should be coordinated longitudinally on a regional basis according to this conceptual paradigm.

  18. Statistical Inference at Work: Statistical Process Control as an Example

    Science.gov (United States)

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…

  19. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

    2005-01-01

    The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented......, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient....

  20. Statistical power analysis a simple and general model for traditional and modern hypothesis tests

    CERN Document Server

    Murphy, Kevin R; Wolach, Allen

    2014-01-01

    Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and g

  1. Statistical Analysis of Compressive and Flexural Test Results on the Sustainable Adobe Reinforced with Steel Wire Mesh

    Science.gov (United States)

    Jokhio, Gul A.; Syed Mohsin, Sharifah M.; Gul, Yasmeen

    2018-04-01

    It has been established that Adobe provides, in addition to being sustainable and economic, a better indoor air quality without spending extensive amounts of energy as opposed to the modern synthetic materials. The material, however, suffers from weak structural behaviour when subjected to adverse loading conditions. A wide range of mechanical properties has been reported in literature owing to lack of research and standardization. The present paper presents the statistical analysis of the results that were obtained through compressive and flexural tests on Adobe samples. Adobe specimens with and without wire mesh reinforcement were tested and the results were reported. The statistical analysis of these results presents an interesting read. It has been found that the compressive strength of adobe increases by about 43% after adding a single layer of wire mesh reinforcement. This increase is statistically significant. The flexural response of Adobe has also shown improvement with the addition of wire mesh reinforcement, however, the statistical significance of the same cannot be established.

  2. Why the null matters: statistical tests, random walks and evolution.

    Science.gov (United States)

    Sheets, H D; Mitchell, C E

    2001-01-01

    A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test' in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests show that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
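
    The contrast between excursion-based tests and the runs test can be sketched in Python. Below, a Wald-Wolfowitz runs test on the signs of the steps and a simple maximum-excursion summary are applied to three simulated series (random walk, trend, stasis); the series lengths, noise levels and the use of these particular summaries are illustrative assumptions, not a reimplementation of the scaled-maximum, LRI or Hurst procedures.

```python
import numpy as np
from scipy import stats

def runs_test(steps):
    """Wald-Wolfowitz runs test on the signs of the steps of a series."""
    signs = np.sign(steps)
    signs = signs[signs != 0]
    n_pos = int(np.sum(signs > 0))
    n_neg = int(np.sum(signs < 0))
    n = n_pos + n_neg
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))
    mean = 1 + 2 * n_pos * n_neg / n
    var = 2 * n_pos * n_neg * (2 * n_pos * n_neg - n) / (n ** 2 * (n - 1))
    z = (runs - mean) / np.sqrt(var)
    return z, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(11)
n_steps = 200
walk = np.cumsum(rng.normal(0, 1, n_steps))                    # unbiased random walk
trend = 0.05 * np.arange(n_steps) + rng.normal(0, 1, n_steps)  # directional change plus noise
stasis = rng.normal(0, 1, n_steps)                             # fluctuation about a stable mean

for name, series in [("random walk", walk), ("trend", trend), ("stasis", stasis)]:
    z, p = runs_test(np.diff(series))
    excursion = series.max() - series.min()
    print(f"{name:11s}: runs z = {z:5.2f} (p = {p:.3f}), max excursion = {excursion:6.2f}")
```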

  3. Assessment of noise in a digital image using the join-count statistic and the Moran test

    International Nuclear Information System (INIS)

    Kehshih Chuang; Huang, H.K.

    1992-01-01

    It is assumed that data bits of a pixel in digital images can be divided into signal and noise bits. The signal bits occupy the most significant part of the pixel. The signal parts of each pixel are correlated while the noise parts are uncorrelated. Two statistical methods, the Moran test and the join-count statistic, are used to examine the noise parts. Images from computerized tomography, magnetic resonance and computed radiography are used for the evaluation of the noise bits. A residual image is formed by subtracting the original image from its smoothed version. The noise level in the residual image is then identical to that in the original image. Both statistical tests are then performed on the bit planes of the residual image. Results show that most digital images contain only 8-9 bits of correlated information. Both methods are easy to implement and fast to perform. (author)
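
    A rough Python sketch of the idea: build a residual image by subtracting a smoothed version from the original, then compute a spatial autocorrelation statistic (here a simple Moran's I with 4-neighbour weights) on each bit plane of the residual. The synthetic image, the smoothing filters and the restriction to the lower bit planes are illustrative assumptions; the join-count statistic is not implemented.

```python
import numpy as np
from scipy import ndimage

def morans_i(plane):
    """Moran's I for a 2D array using 4-neighbour (rook) adjacency.
    Values near 0 indicate spatially uncorrelated (noise-like) content."""
    z = plane.astype(float) - plane.mean()
    pair_sum = np.sum(z[:, :-1] * z[:, 1:]) + np.sum(z[:-1, :] * z[1:, :])
    n_pairs = z[:, :-1].size + z[:-1, :].size
    return (z.size / n_pairs) * pair_sum / np.sum(z ** 2)

rng = np.random.default_rng(5)
signal = ndimage.gaussian_filter(rng.normal(0.0, 1.0, (64, 64)), sigma=3) * 400 + 1024
image = (signal + rng.normal(0.0, 4.0, signal.shape)).astype(np.int16)

# Residual image: original minus its smoothed version, so its noise level
# matches that of the original image.
residual = image - ndimage.uniform_filter(image, size=3)
resid_u = residual - residual.min()          # shift to non-negative for bit-plane slicing

# Correlated (signal) bits give Moran's I well away from 0; noise bits stay near 0.
for bit in range(6):
    plane = (resid_u >> bit) & 1
    if plane.min() == plane.max():
        print(f"bit {bit}: constant plane")
        continue
    print(f"bit {bit}: Moran's I = {morans_i(plane):+.3f}")
```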

  4. Statistical testing of the full-range leadership theory in nursing.

    Science.gov (United States)

    Kanste, Outi; Kääriäinen, Maria; Kyngäs, Helvi

    2009-12-01

    The aim of this study is to test statistically the structure of the full-range leadership theory in nursing. The data were gathered by postal questionnaires from nurses and nurse leaders working in healthcare organizations in Finland. A follow-up study was performed 1 year later. The sample consisted of 601 nurses and nurse leaders, and the follow-up study had 78 respondents. Theory was tested through structural equation modelling, standard regression analysis and two-way ANOVA. Rewarding transformational leadership seems to promote and passive laissez-faire leadership to reduce willingness to exert extra effort, perceptions of leader effectiveness and satisfaction with the leader. Active management-by-exception seems to reduce willingness to exert extra effort and perception of leader effectiveness. Rewarding transformational leadership remained as a strong explanatory factor of all outcome variables measured 1 year later. The data supported the main structure of the full-range leadership theory, lending support to the universal nature of the theory.

  5. Statistical analysis of non-homogeneous Poisson processes. Statistical processing of a particle multidetector

    International Nuclear Information System (INIS)

    Lacombe, J.P.

    1985-12-01

    The statistical study of non-homogeneous and spatial Poisson processes forms the first part of this thesis. A Neyman-Pearson type test is defined for the intensity measurement of these processes. Conditions are given under which consistency of the test is assured, and others giving the asymptotic normality of the test statistic. Then some techniques of statistical processing of Poisson fields and their applications to a particle multidetector study are given. Quality tests of the device are proposed together with signal extraction methods [fr

  6. A practical model-based statistical approach for generating functional test cases: application in the automotive industry

    OpenAIRE

    Awédikian , Roy; Yannou , Bernard

    2012-01-01

    With the growing complexity of industrial software applications, industrial companies are looking for efficient and practical methods to validate the software. This paper develops a model-based statistical testing approach that automatically generates online and offline test cases for embedded software. It discusses an integrated framework that combines solutions for three major software testing research questions: (i) how to select test inputs; (ii) how to predict the expected...

  7. Cost-effectiveness of population based BRCA testing with varying Ashkenazi Jewish ancestry.

    Science.gov (United States)

    Manchanda, Ranjit; Patel, Shreeya; Antoniou, Antonis C; Levy-Lahad, Ephrat; Turnbull, Clare; Evans, D Gareth; Hopper, John L; Macinnis, Robert J; Menon, Usha; Jacobs, Ian; Legood, Rosa

    2017-11-01

    -adjusted life-years and $100,000 per quality-adjusted life-years willingness-to-pay thresholds for all 4 Ashkenazi-Jewish grandparent scenarios, with ≥95% simulations found to be cost-effective on probabilistic sensitivity analysis. Population-testing remains cost-effective in the absence of reduction in breast cancer risk from oophorectomy and at lower risk-reducing mastectomy (13%) or risk-reducing salpingo-oophorectomy (20%) rates. Population testing for BRCA mutations with varying levels of Ashkenazi-Jewish ancestry is cost-effective in the United Kingdom and the United States. These results support population testing in Ashkenazi-Jewish women with 1-4 Ashkenazi-Jewish grandparent ancestry. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Reliability Verification of DBE Environment Simulation Test Facility by using Statistics Method

    International Nuclear Information System (INIS)

    Jang, Kyung Nam; Kim, Jong Soeg; Jeong, Sun Chul; Kyung Heum

    2011-01-01

    In a nuclear power plant, all safety-related equipment, including cables, that must function in a harsh environment is subject to equipment qualification (EQ) according to IEEE Std 323. There are three qualification methods: type testing, operating experience and analysis. In order to environmentally qualify safety-related equipment by the type-testing method, rather than by analysis or operating experience, a representative sample of the equipment, including interfaces, is subjected to a series of tests. Among these, the Design Basis Event (DBE) environment simulation test is the most important. The DBE simulation test is performed in a DBE simulation test chamber according to the postulated DBE conditions, including the specified high-energy line break (HELB), loss of coolant accident (LOCA) and main steam line break (MSLB), after thermal and radiation ageing. Because most DBE conditions involve 100% humidity, high-temperature steam must be used to trace the temperature and pressure of the DBE condition. During the DBE simulation test, if high-temperature steam under high pressure is injected into the DBE test chamber, the temperature and pressure in the chamber rapidly rise above the target values; the temperature and pressure therefore keep fluctuating throughout the test as they are driven back towards the targets. The fairness and accuracy of the test results must be ensured by confirming the performance of the DBE environment simulation test facility. In this paper, statistical methods are used to verify the reliability of the DBE environment simulation test facility

  9. Filtering a statistically exactly solvable test model for turbulent tracers from partial observations

    International Nuclear Information System (INIS)

    Gershgorin, B.; Majda, A.J.

    2011-01-01

    A statistically exactly solvable model for passive tracers is introduced as a test model for the authors' Nonlinear Extended Kalman Filter (NEKF) as well as other filtering algorithms. The model involves a Gaussian velocity field and a passive tracer governed by the advection-diffusion equation with an imposed mean gradient. The model has direct relevance to engineering problems such as the spread of pollutants in the air or contaminants in the water as well as climate change problems concerning the transport of greenhouse gases such as carbon dioxide with strongly intermittent probability distributions consistent with the actual observations of the atmosphere. One of the attractive properties of the model is the existence of the exact statistical solution. In particular, this unique feature of the model provides an opportunity to design and test fast and efficient algorithms for real-time data assimilation based on rigorous mathematical theory for a turbulence model problem with many active spatiotemporal scales. Here, we extensively study the performance of the NEKF which uses the exact first and second order nonlinear statistics without any approximations due to linearization. The role of partial and sparse observations, the frequency of observations and the observation noise strength in recovering the true signal, its spectrum, and fat tail probability distribution are the central issues discussed here. The results of our study provide useful guidelines for filtering realistic turbulent systems with passive tracers through partial observations.

  10. Statistical energy as a tool for binning-free, multivariate goodness-of-fit tests, two-sample comparison and unfolding

    International Nuclear Information System (INIS)

    Aslan, B.; Zech, G.

    2005-01-01

    We introduce the novel concept of statistical energy as a statistical tool. We define the statistical energy of statistical distributions in analogy with the energy of electric charge distributions. Charges of opposite sign are in a state of minimum energy if they are equally distributed. This property is used to check whether two samples belong to the same parent distribution, to define goodness-of-fit tests, and to unfold distributions distorted by measurement. The approach is binning-free and especially powerful in multidimensional applications
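
    A minimal one-dimensional sketch of the two-sample use of such an energy statistic is given below, assuming the common energy-distance form with a plain |x - y| distance between points (the distance function actually used in the paper may differ). Values near zero suggest a common parent distribution; larger values suggest different parents.

    import numpy as np

    def energy_statistic(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        d_xy = np.abs(x[:, None] - y[None, :]).mean()   # cross-sample term
        d_xx = np.abs(x[:, None] - x[None, :]).mean()   # within-sample terms
        d_yy = np.abs(y[:, None] - y[None, :]).mean()
        return 2.0 * d_xy - d_xx - d_yy

    rng = np.random.default_rng(0)
    same = energy_statistic(rng.normal(size=200), rng.normal(size=200))
    shifted = energy_statistic(rng.normal(size=200), rng.normal(1.0, 1.0, size=200))
    print(f"same parent: {same:.3f}, shifted parent: {shifted:.3f}")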

  11. Empirical Statistical Power for Testing Multilocus Genotypic Effects under Unbalanced Designs Using a Gibbs Sampler

    Directory of Open Access Journals (Sweden)

    Chaeyoung Lee

    2012-11-01

    Full Text Available Epistasis, which may explain a large portion of the phenotypic variation for complex economic traits of animals, has been ignored in many genetic association studies. A Bayesian method was introduced to draw inferences about multilocus genotypic effects based on their marginal posterior distributions obtained by a Gibbs sampler. A simulation study was conducted to provide statistical powers under various unbalanced designs using this method. Data were simulated by combined designs of number of loci, within-genotype variance, and sample size in unbalanced designs with or without null combined-genotype cells. Mean empirical statistical power was estimated for testing the posterior mean estimate of the combined genotype effect. A practical example of obtaining empirical statistical power estimates with a given sample size was provided under unbalanced designs. The empirical statistical powers would be useful for determining an optimal design when interactive associations of multiple loci with complex phenotypes are examined.
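
    The general recipe behind such empirical power estimates can be shown with a deliberately simplified Monte Carlo sketch: simulate data under an assumed effect and design, run the test, and record the rejection rate. The t-test below merely stands in for the paper's Gibbs-sampler-based posterior test, and the effect size, variance and design are invented for illustration.

    import numpy as np
    from scipy import stats

    def empirical_power(effect, n_per_group, n_sim=2000, alpha=0.05, seed=1):
        """Fraction of simulated data sets in which the null is rejected."""
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(n_sim):
            control = rng.normal(0.0, 1.0, n_per_group)
            treated = rng.normal(effect, 1.0, n_per_group)
            _, p = stats.ttest_ind(control, treated)
            rejections += p < alpha
        return rejections / n_sim

    for n in (10, 20, 40, 80):
        print(f"n per group = {n:3d}: empirical power = {empirical_power(0.5, n):.2f}")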

  12. The extended statistical analysis of toxicity tests using standardised effect sizes (SESs): a comparison of nine published papers.

    Science.gov (United States)

    Festing, Michael F W

    2014-01-01

    The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretations of the results as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data pose problems due to the large number of statistical tests involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. The original authors' conclusions appear to be reached somewhat subjectively from the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than p = 0.05. By using standardised effect sizes (SESs), however, a range of graphical methods and an overall assessment of the mean absolute response can be made. The approach is an extension, not a replacement, of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare the findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.
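
    For a single biomarker, a standardised effect size of the kind discussed above can be computed as the difference between the treated and control group means divided by a pooled standard deviation. This is a minimal sketch assuming the familiar Cohen's-d-style form; the paper's exact standardisation may differ in detail, and the biomarker values are hypothetical.

    import numpy as np

    def standardized_effect_size(control, treated):
        """(treated mean - control mean) / pooled SD for one biomarker."""
        control, treated = np.asarray(control, float), np.asarray(treated, float)
        n1, n2 = len(control), len(treated)
        pooled_var = ((n1 - 1) * control.var(ddof=1) +
                      (n2 - 1) * treated.var(ddof=1)) / (n1 + n2 - 2)
        return (treated.mean() - control.mean()) / np.sqrt(pooled_var)

    # Hypothetical alanine aminotransferase values (arbitrary units).
    control_group = [32, 29, 35, 31, 30, 33]
    high_dose_group = [41, 38, 44, 39, 42, 40]
    print(f"SES = {standardized_effect_size(control_group, high_dose_group):.2f}")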

  13. The extended statistical analysis of toxicity tests using standardised effect sizes (SESs: a comparison of nine published papers.

    Directory of Open Access Journals (Sweden)

    Michael F W Festing

    Full Text Available The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretations of the results as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data pose problems due to the large number of statistical tests involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. The original authors' conclusions appear to be reached somewhat subjectively from the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than p = 0.05. By using standardised effect sizes (SESs), however, a range of graphical methods and an overall assessment of the mean absolute response can be made. The approach is an extension, not a replacement, of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare the findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.

  14. Online incidental statistical learning of audiovisual word sequences in adults: a registered report.

    Science.gov (United States)

    Kuppuraj, Sengottuvel; Duta, Mihaela; Thompson, Paul; Bishop, Dorothy

    2018-02-01

    Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real-word auditory-picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from a continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a function of the statistical complexity of the condition and exposure. Third, our novel approach to measuring online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test-retest reliability (r = 0.67). However, STM did not correlate with statistical learning. We discuss the findings, noting the benefits of online measures in tracking the learning process.

  15. A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.

    Science.gov (United States)

    Liu, Tung; Stone, Courtenay C.

    1999-01-01

    Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…

  16. Performance Testing of Suspension Plasma Sprayed Thermal Barrier Coatings Produced with Varied Suspension Parameters

    Directory of Open Access Journals (Sweden)

    Nicholas Curry

    2015-07-01

    Full Text Available Suspension plasma spraying has become an emerging technology for the production of thermal barrier coatings for the gas turbine industry. Though commercial systems for coating production are now available, the coatings themselves remain in the development stage. Suitable suspension parameters for coating production remain an outstanding question, and the influence of suspension properties on the final coatings is not well known. For this study, a number of suspensions were produced with varied solid loadings, powder size distributions and solvents. The suspensions were sprayed onto superalloy substrates coated with high velocity air fuel (HVAF)-sprayed bond coats. Plasma spray parameters were selected to generate columnar structures based on previous experiments and were held constant in order to isolate the influence of suspension behavior on coating microstructure. Testing of the produced thermal barrier coating (TBC) systems included thermal cyclic fatigue testing and thermal conductivity analysis. Pore size distribution was characterized by mercury infiltration porosimetry. Results show a strong influence of suspension viscosity and surface tension on the microstructure of the produced coatings.

  17. High-Throughput Nanoindentation for Statistical and Spatial Property Determination

    Science.gov (United States)

    Hintsala, Eric D.; Hangen, Ude; Stauffer, Douglas D.

    2018-04-01

    Standard nanoindentation tests are "high throughput" compared to nearly all other mechanical tests, such as tension or compression. However, the typical rates of tens of tests per hour can be significantly improved. These higher testing rates enable otherwise impractical studies requiring several thousands of indents, such as high-resolution property mapping and detailed statistical studies. However, care must be taken to avoid systematic errors in the measurement, including choosing the indentation depth/spacing to avoid overlap of plastic zones, pileup, and the influence of neighboring microstructural features in the material being tested. Furthermore, since fast loading rates are required, the strain rate sensitivity must also be considered. A review of these effects is given, with the emphasis placed on making complementary standard nanoindentation measurements to address these issues. Experimental applications of the technique, including mapping of welds, microstructures, and composites with varying length scales, along with studying the effect of surface roughness on nominally homogeneous specimens, are presented.

  18. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement for NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis...

  19. Using Cochran's Z Statistic to Test the Kernel-Smoothed Item Response Function Differences between Focal and Reference Groups

    Science.gov (United States)

    Zheng, Yinggan; Gierl, Mark J.; Cui, Ying

    2010-01-01

    This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…

  20. R for statistics

    CERN Document Server

    Cornillon, Pierre-Andre; Husson, Francois; Jegou, Nicolas; Josse, Julie; Kloareg, Maela; Matzner-Lober, Eric; Rouviere, Laurent

    2012-01-01

    An Overview of R; Main Concepts; Installing R; Work Session; Help; R Objects; Functions; Packages; Exercises; Preparing Data; Reading Data from File; Exporting Results; Manipulating Variables; Manipulating Individuals; Concatenating Data Tables; Cross-Tabulation; Exercises; R Graphics; Conventional Graphical Functions; Graphical Functions with lattice; Exercises; Making Programs with R; Control Flows; Predefined Functions; Creating a Function; Exercises; Statistical Methods; Introduction to the Statistical Methods; A Quick Start with R; Installing R; Opening and Closing R; The Command Prompt; Attribution, Objects, and Function; Selection; Other Rcmdr Package; Importing (or Inputting) Data; Graphs; Statistical Analysis; Hypothesis Test; Confidence Intervals for a Mean; Chi-Square Test of Independence; Comparison of Two Means; Testing Conformity of a Proportion; Comparing Several Proportions; The Power of a Test; Regression; Simple Linear Regression; Multiple Linear Regression; Partial Least Squares (PLS) Regression; Analysis of Variance and Covariance; One-Way Analysis of Variance; Multi-Way Analysis of Varian...

  1. Statistical inference based on divergence measures

    CERN Document Server

    Pardo, Leandro

    2005-01-01

    The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. However, despite the fact that divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach. Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data, with emphasis on problems in contingency tables and loglinear models, using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, prese...

  2. Ecosystem engineering varies spatially: a test of the vegetation modification paradigm for prairie dogs

    Science.gov (United States)

    Baker, Bruce W.; Augustine, David J.; Sedgwick, James A.; Lubow, Bruce C.

    2013-01-01

    Colonial, burrowing herbivores can be engineers of grassland and shrubland ecosystems worldwide. Spatial variation in landscapes suggests caution when extrapolating single-place studies of single species, but lack of data and the need to generalize often leads to ‘model system’ thinking and application of results beyond appropriate statistical inference. Generalizations about the engineering effects of prairie dogs (Cynomys sp.) developed largely from intensive study at a single complex of black-tailed prairie dogs C. ludovicianus in northern mixed prairie, but have been extrapolated to other ecoregions and prairie dog species in North America, and other colonial, burrowing herbivores. We tested the paradigm that prairie dogs decrease vegetation volume and the cover of grasses and tall shrubs, and increase bare ground and forb cover. We sampled vegetation on and off 279 colonies at 13 complexes of 3 prairie dog species widely distributed across 5 ecoregions in North America. The paradigm was generally supported at 7 black-tailed prairie dog complexes in northern mixed prairie, where vegetation volume, grass cover, and tall shrub cover were lower, and bare ground and forb cover were higher, on colonies than at paired off-colony sites. Outside the northern mixed prairie, all 3 prairie dog species consistently reduced vegetation volume, but their effects on cover of plant functional groups varied with prairie dog species and the grazing tolerance of dominant perennial grasses. White-tailed prairie dogs C. leucurus in sagebrush steppe did not reduce shrub cover, whereas black-tailed prairie dogs suppressed shrub cover at all complexes with tall shrubs in the surrounding habitat matrix. Black-tailed prairie dogs in shortgrass steppe and Gunnison's prairie dogs C. gunnisoni in Colorado Plateau grassland both had relatively minor effects on grass cover, which may reflect the dominance of grazing-tolerant shortgrasses at both complexes. Variation in modification of

  3. Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology.

    Science.gov (United States)

    McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad

    2011-07-01

    The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
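
    The limits-of-agreement calculation recommended above reduces to a mean difference (bias) and an interval of bias ± 1.96 standard deviations of the differences. A minimal sketch is shown below; the instrument readings are hypothetical values used only to illustrate the computation.

    import numpy as np

    def limits_of_agreement(method_a, method_b):
        """Bland-Altman bias and 95% limits of agreement for paired readings."""
        a, b = np.asarray(method_a, float), np.asarray(method_b, float)
        diff = a - b
        bias = diff.mean()                 # mean difference between the methods
        sd = diff.std(ddof=1)              # SD of the differences
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Hypothetical axial-length readings (mm) from two biometers on the same eyes.
    biometer_a = [23.1, 24.0, 22.8, 25.2, 23.7, 24.4]
    biometer_b = [23.0, 24.2, 22.9, 25.0, 23.9, 24.3]
    bias, (lower, upper) = limits_of_agreement(biometer_a, biometer_b)
    print(f"bias = {bias:+.3f} mm, 95% limits of agreement = [{lower:.3f}, {upper:.3f}] mm")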

  4. A wavenumber-partitioning scheme for two-dimensional statistical closures

    International Nuclear Information System (INIS)

    Bowman, J.C.

    1994-11-01

    One of the principal advantages of statistical closure approximations for fluid turbulence is that they involve smoothly varying functions of wavenumber. This suggests the possibility of modeling a flow by following the evolution of only a few representative wavenumbers. This work presents two new techniques for the implementation of two-dimensional isotropic statistical closures that for the first time allow the inertial-range scalings of these approximations to be demonstrated numerically. A technique of wavenumber partitioning that conserves both energy and enstrophy is developed for two-dimensional statistical closures. Coupled with a new time-stepping scheme based on a variable integrating factor, this advance facilitates the computation of energy spectra over seven wavenumber decades, a task that will clearly remain outside the realm of conventional numerical simulations for the foreseeable future. Within the context of the test-field model, the method is used to demonstrate Kraichnan's logarithmically corrected scaling for the enstrophy inertial range and to make a quantitative assessment of the effect of replacing the physical Laplacian viscosity with an enhanced hyperviscosity

  5. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    Science.gov (United States)

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on adaptive statistic test filter (ASTF) and Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise, ASTF is based on statistic hypothesis testing in the frequency domain to evaluate similarity between reference signal (noise signal) and original signal, and remove the component of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitive evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitiveness of symptom parameters (SPs) for condition diagnosis. By this way, the good SPs that have high sensitiveness for condition diagnosis can be selected. A three-layer DBN is developed to identify condition of rotation machinery based on the Bayesian Belief Network (BBN) theory. Condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method.
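
    The broad filtering idea described above (suppress spectral components that resemble a noise-only reference and keep the dissimilar, fault-related ones) can be caricatured as follows. This sketch uses a simple band-wise correlation threshold in place of the paper's hypothesis test and PSO-tuned significance level, and all parameter values are illustrative.

    import numpy as np

    def noise_similarity_filter(signal, noise_ref, fs, band_hz=50.0, threshold=0.9):
        """Zero out frequency bands whose magnitude spectrum closely matches a
        noise-only reference; keep the remaining (dissimilar) bands."""
        n = min(len(signal), len(noise_ref))
        sig_spec = np.fft.rfft(signal[:n])
        sig_mag = np.abs(sig_spec)
        noise_mag = np.abs(np.fft.rfft(noise_ref[:n]))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        keep = np.ones_like(sig_mag)
        for f0 in np.arange(0.0, freqs[-1], band_hz):
            band = (freqs >= f0) & (freqs < f0 + band_hz)
            if band.sum() > 2:
                similarity = np.corrcoef(sig_mag[band], noise_mag[band])[0, 1]
                if similarity > threshold:
                    keep[band] = 0.0          # band looks like pure noise: drop it
        return np.fft.irfft(sig_spec * keep, n)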

  6. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    Directory of Open Access Journals (Sweden)

    Ke Li

    2016-01-01

    Full Text Available A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise: it relies on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis, so that SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.

  7. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    Science.gov (United States)

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise: it relies on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis, so that SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  8. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI)

    Science.gov (United States)

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non–expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI’s robustness and sensitivity in capturing useful data relating to the students’ conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. PMID:26903497

  9. Statistics For Dummies

    CERN Document Server

    Rumsey, Deborah

    2011-01-01

    The fun and easy way to get down to business with statistics. Stymied by statistics? No fear: this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tracks to a typical first semester statistics cou

  10. Nuclear multifragmentation, its relation to general physics. A rich test ground of the fundamentals of statistical mechanics

    International Nuclear Information System (INIS)

    Gross, D.H.E.

    2006-01-01

    Heat can flow from cold to hot at any phase separation, even in macroscopic systems. Therefore also Lynden-Bell's famous gravo-thermal catastrophe must be reconsidered. In contrast to traditional canonical Boltzmann-Gibbs statistics, this is correctly described only by microcanonical statistics. Systems studied in chemical thermodynamics (ChTh) by using canonical statistics consist of several homogeneous macroscopic phases. Evidently, macroscopic statistics as in chemistry cannot and should not be applied to non-extensive or inhomogeneous systems like nuclei or galaxies. Nuclei are small and inhomogeneous. Multifragmented nuclei are even more inhomogeneous and the fragments even smaller. Phase transitions of first order, and especially phase separations, therefore cannot be described by a (homogeneous) canonical ensemble. Taking this seriously, fascinating perspectives open up for statistical nuclear fragmentation as a test ground for the basic principles of statistical mechanics, especially of phase transitions, without the use of the thermodynamic limit. Moreover, there is also a lot of similarity between the accessible phase space of fragmenting nuclei and inhomogeneous multistellar systems. This underlines the fundamental significance for statistical physics in general. (orig.)

  11. Elevated-temperature benchmark tests of simply supported beams and circular plates subjected to time-varying loadings

    International Nuclear Information System (INIS)

    Corum, J.M.; Richardson, M.; Clinard, J.A.

    1977-01-01

    This report presents the measured elastic-plastic-creep responses of eight simply supported type 304 stainless steel beams and circular plates that were subjected to time-varying loadings at elevated temperature. The tests were performed to provide experimental benchmark problem data suitable for assessing inelastic analysis methods and for validating computer programs. Beams and plates exhibit the essential features of inelastic structural behavior; yet they are relatively simple and the experimental results are generally easy to interpret. The stress fields are largely uniaxial in beams, while multiaxial effects are introduced in plates. The specimens tested were laterally loaded at the center and subjected to either a prescribed load or a center deflection history. The specimens were machined from a common well-characterized heat of material, and all the tests were performed at a temperature of 593°C (1100°F). Test results are presented in terms of the load and center deflection behaviors, which typify the overall structural behavior. Additional deflection data, as well as strain gage results and mechanical properties data for the beam and plate material, are provided in the appendices

  12. Automated collimation testing by determining the statistical correlation coefficient of Talbot self-images.

    Science.gov (United States)

    Rana, Santosh; Dhanotia, Jitendra; Bhatia, Vimal; Prakash, Shashi

    2018-04-01

    In this paper, we propose a simple, fast, and accurate technique for detecting the collimation position of an optical beam using the self-imaging phenomenon and correlation analysis. Herrera-Fernandez et al. [J. Opt. 18, 075608 (2016)] proposed an experimental arrangement for collimation testing by comparing the periods of two different self-images produced by a single diffraction grating. Following their approach, we propose a testing procedure based on the correlation coefficient (CC) for efficient detection of variation in the size and fringe width of the Talbot self-images and thereby of the collimation position. When the beam is collimated, the physical properties of the self-images of the grating, such as size and fringe width, do not vary from one Talbot plane to the other and are identical; the CC is maximum in this situation. At a de-collimated position, the size and fringe width of the self-images vary, and correspondingly the CC decreases. Hence, the magnitude of the CC is a measure of the degree of collimation. Using the method, we could set the collimation position to a resolution of 1 μm, which corresponds to ±0.25 μrad in terms of collimation angle (for testing a collimating lens of diameter 46 mm and focal length 300 mm). In contrast to most collimation techniques reported to date, the proposed technique does not require translation/rotation of the grating, complicated phase evaluation algorithms, or an intricate method for determining the period of the grating or its self-images. The technique is fully automated and provides high resolution and precision.
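
    The correlation-coefficient criterion itself is just a normalised (Pearson) correlation between two recorded self-images: it peaks when the two fringe patterns match and falls off as their period or size changes. A minimal sketch with synthetic fringe patterns standing in for the camera frames:

    import numpy as np

    def image_correlation(img1, img2):
        """Pearson correlation coefficient between two images of equal shape."""
        a = np.asarray(img1, float).ravel()
        b = np.asarray(img2, float).ravel()
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / np.sqrt((a @ a) * (b @ b)))

    # Identical fringe patterns give CC = 1; a small period mismatch lowers the CC.
    x = np.linspace(0, 10 * np.pi, 512)
    fringes = np.tile(np.cos(x), (64, 1))
    stretched = np.tile(np.cos(1.02 * x), (64, 1))
    print(image_correlation(fringes, fringes), image_correlation(fringes, stretched))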

  13. Selection of hidden layer nodes in neural networks by statistical tests

    International Nuclear Information System (INIS)

    Ciftcioglu, Ozer

    1992-05-01

    A statistical methodology for selecting the number of hidden layer nodes in feedforward neural networks is described. The method treats the network as an empirical model for the experimental data set subject to pattern classification, so that the selection process becomes model estimation through parameter identification. The solution is obtained for an overdetermined estimation problem using a nonlinear least-squares minimization technique. The number of hidden layer nodes is determined as a result of hypothesis testing. Accordingly, a network structure that is redundant with respect to the number of parameters is avoided, while the classification error is kept to a minimum. (author). 11 refs.; 4 figs.; 1 tab

  14. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    Science.gov (United States)

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being evaluated for water bioburden testing. The results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and give the same interpretation of equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc

  15. A statistical test for the habitable zone concept

    Science.gov (United States)

    Checlair, J.; Abbot, D. S.

    2017-12-01

    Traditional habitable zone theory assumes that the silicate-weathering feedback regulates the atmospheric CO2 of planets within the habitable zone to maintain surface temperatures that allow for liquid water. There is some non-definitive evidence that this feedback has worked in Earth history, but it is untested in an exoplanet context. A critical prediction of the silicate-weathering feedback is that, on average, within the habitable zone planets that receive a higher stellar flux should have a lower CO2 in order to maintain liquid water at their surface. We can test this prediction directly by using a statistical approach involving low-precision CO2 measurements on many planets with future instruments such as JWST, LUVOIR, or HabEx. The purpose of this work is to carefully outline the requirements for such a test. First, we use a radiative-transfer model to compute the amount of CO2 necessary to maintain surface liquid water on planets for different values of insolation and planetary parameters. We run a large ensemble of Earth-like planets with different masses, atmospheric masses, inert atmospheric composition, cloud composition and level, and other greenhouse gases. Second, we post-process this data to determine the precision with which future instruments such as JWST, LUVOIR, and HabEx could measure the CO2. We then combine the variation due to planetary parameters and observational error to determine the number of planet measurements that would be needed to effectively marginalize over uncertainties and resolve the predicted trend in CO2 vs. stellar flux. The results of this work may influence the usage of JWST and will enhance mission planning for LUVOIR and HabEx.
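
    The sample-size question posed above can be framed as a simple detection-rate simulation: generate noisy CO2 measurements that decline with stellar flux, fit a slope, and count how often the trend is recovered. The slope, scatter and noise level in the sketch below are invented purely for illustration and are not taken from the paper.

    import numpy as np
    from scipy import stats

    def detection_rate(n_planets, slope=-1.0, scatter=0.5, n_sim=2000, seed=2):
        """Fraction of simulated surveys in which a negative CO2-flux trend is
        detected at the 5% level (toy model, arbitrary units)."""
        rng = np.random.default_rng(seed)
        detected = 0
        for _ in range(n_sim):
            flux = rng.uniform(0.6, 1.4, n_planets)            # relative insolation
            log_co2 = slope * flux + rng.normal(0.0, scatter, n_planets)
            fit = stats.linregress(flux, log_co2)
            detected += (fit.pvalue < 0.05) and (fit.slope < 0)
        return detected / n_sim

    for n in (10, 20, 40, 80):
        print(f"{n:3d} planets: detection rate = {detection_rate(n):.2f}")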

  16. Statistical hypothesis testing and common misinterpretations: Should we abandon p-value in forensic science applications?

    Science.gov (United States)

    Taroni, F; Biedermann, A; Bozza, S

    2016-02-01

    Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel large debates in the literature. More recently, a controversial discussion was initiated by the editorial decision of a scientific journal [1] to refuse any paper submitted for publication containing null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  17. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Science.gov (United States)

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that

  18. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Directory of Open Access Journals (Sweden)

    Antje eHeinrich

    2015-06-01

    Full Text Available Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild SNHL were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and nonverbal IQ; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on

  19. Transfer of drug dissolution testing by statistical approaches: Case study

    Science.gov (United States)

    AL-Kamarany, Mohammed Amood; EL Karbane, Miloud; Ridouan, Khadija; Alanazi, Fars K.; Hubert, Philippe; Cherrah, Yahia; Bouklouze, Abdelaziz

    2011-01-01

    The analytical transfer is a complete process that consists in transferring an analytical procedure from a sending laboratory to a receiving laboratory, after it has been experimentally demonstrated that the receiving laboratory also masters the procedure, in order to avoid problems in the future. Method transfer is now commonplace during the life cycle of an analytical method in the pharmaceutical industry. No official guideline exists for a transfer methodology in pharmaceutical analysis, and the regulatory wording on transfer is more ambiguous than that for validation. Therefore, in this study, gauge repeatability and reproducibility (R&R) studies, together with other appropriate multivariate statistics, were successfully applied to the transfer of the dissolution test of diclofenac sodium, as a case study, from a sending laboratory A (accredited laboratory) to a receiving laboratory B. The HPLC method for the determination of the percent release of diclofenac sodium in solid pharmaceutical forms (one originator product and one generic) was validated using the accuracy profile (total error) approach in the sending laboratory A. The results showed that the receiving laboratory B masters the dissolution test process using the same HPLC analytical procedure developed in laboratory A. In conclusion, if the sender uses the total error approach to validate its analytical method, the dissolution test can be successfully transferred without the receiving laboratory B repeating the full analytical method validation, and the state of the analytical method should be maintained to ensure the same reliable results in the receiving laboratory. PMID:24109204

  20. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
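
    As a hedged illustration of one of the analyses listed above (the power of a two-sided test that a Pearson correlation differs from zero), the standard Fisher z approximation can be computed directly. G*Power's internal algorithms may differ; this sketch only shows the kind of calculation involved.

    import numpy as np
    from scipy.stats import norm

    def correlation_power(r, n, alpha=0.05):
        """Approximate power of a two-sided test of H0: rho = 0 (Fisher z)."""
        mu = np.arctanh(r) * np.sqrt(n - 3)      # mean of the z statistic under H1
        z_crit = norm.ppf(1 - alpha / 2)
        return norm.sf(z_crit - mu) + norm.cdf(-z_crit - mu)

    # Example: roughly 80% power to detect r = 0.3 with n = 84 at alpha = 0.05.
    print(f"power = {correlation_power(0.3, 84):.3f}")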

  1. Testing of a "smart-pebble" for measuring particle transport statistics

    Science.gov (United States)

    Kitsikoudis, Vasileios; Avgeris, Loukas; Valyrakis, Manousos

    2017-04-01

    This paper presents preliminary results from novel experiments aiming to assess coarse sediment transport statistics for a range of transport conditions via the use of an innovative "smart-pebble" device. This device is a waterproof sphere, 7 cm in diameter, equipped with a number of sensors that provide information about the velocity, acceleration and positioning of the "smart-pebble" within the flow field. A series of specifically designed experiments are carried out to monitor the entrainment of a "smart-pebble" for fully developed, uniform, turbulent flow conditions over a hydraulically rough bed. Specifically, the bed surface is configured in three sections, each consisting of well-packed glass beads of slightly increasing size in the downstream direction. The first section has a streamwise length of L1=150 cm and a bead size of D1=15 mm, the second section has a length of L2=85 cm and a bead size of D2=22 mm, and the third bed section has a length of L3=55 cm and a bead size of D3=25.4 mm. Two cameras monitor the area of interest to provide additional information regarding the "smart-pebble" movement. Three-dimensional flow measurements are obtained with the aid of an acoustic Doppler velocimeter along a measurement grid to assess the flow forcing field. A wide range of flow rates near and above the threshold of entrainment is tested, while four distinct densities are used for the "smart-pebble", which affect its transport speed and total momentum. The acquired data are analyzed to derive Lagrangian transport statistics, and the implications of such an experiment for the transport of particles by rolling are discussed. The flow conditions for the initiation of motion, particle accelerations and equilibrium particle velocities (translating into transport rates), and statistics of particle impact and motion can be extracted from the acquired data, which can be further compared to develop meaningful insights for sediment transport

  2. Statistical Decision Theory Estimation, Testing, and Selection

    CERN Document Server

    Liese, Friedrich

    2008-01-01

    Suitable for advanced graduate students and researchers in mathematical statistics and decision theory, this title presents an account of the concepts and a treatment of the major results of classical finite sample size decision theory and modern asymptotic decision theory

  3. Using statistical process control for monitoring the prevalence of hospital-acquired pressure ulcers.

    Science.gov (United States)

    Kottner, Jan; Halfens, Ruud

    2010-05-01

    Institutionally acquired pressure ulcers are used as outcome indicators to assess the quality of pressure ulcer prevention programs. Determining whether quality improvement projects that aim to decrease the proportion of institutionally acquired pressure ulcers lead to real changes in clinical practice depends on the measurement method and statistical analysis used. To examine whether nosocomial pressure ulcer prevalence rates in Dutch hospitals changed, a secondary data analysis using different statistical approaches was conducted of the annual (1998-2008) nationwide nursing-sensitive health problem prevalence studies in the Netherlands. Institutions that participated regularly in all survey years were identified. Risk-adjusted nosocomial pressure ulcer prevalence rates, grade 2 to 4 (European Pressure Ulcer Advisory Panel system), were calculated per year and hospital. Descriptive statistics, chi-square trend tests, and P charts based on statistical process control (SPC) were applied and compared. Six of the 905 healthcare institutions participated in every survey year, and 11,444 patients in these six hospitals were identified as being at risk for pressure ulcers. Prevalence rates per year ranged from 0.05 to 0.22. Chi-square trend tests revealed statistically significant downward trends in four hospitals, but based on SPC methods the prevalence rates of five hospitals varied by chance only. Results of chi-square trend tests and SPC methods were not comparable, making it impossible to decide which approach is more appropriate. P charts provide more valuable information than single P values and are more helpful for monitoring institutional performance. Empirical evidence about the decrease of nosocomial pressure ulcer prevalence rates in the Netherlands is contradictory and limited.
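
    The P-chart logic referred to above compares each year's observed proportion with 3-sigma binomial control limits around a centre line; only points outside the limits are treated as signals of special-cause variation. A minimal sketch with invented counts:

    import numpy as np

    def p_chart(events, at_risk):
        """Per-period proportions, centre line, 3-sigma limits and signal flags."""
        events = np.asarray(events, float)
        at_risk = np.asarray(at_risk, float)
        p_bar = events.sum() / at_risk.sum()               # centre line
        sigma = np.sqrt(p_bar * (1 - p_bar) / at_risk)     # per-period std. error
        ucl = np.clip(p_bar + 3 * sigma, 0, 1)
        lcl = np.clip(p_bar - 3 * sigma, 0, 1)
        p = events / at_risk
        return p, p_bar, lcl, ucl, (p < lcl) | (p > ucl)

    # Hypothetical yearly nosocomial ulcer counts and numbers of patients at risk.
    events = [22, 19, 25, 14, 12, 10]
    at_risk = [150, 160, 170, 155, 165, 158]
    p, centre, lcl, ucl, signal = p_chart(events, at_risk)
    print(np.round(p, 3), round(centre, 3), signal)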

  4. Statistical mechanics for a system with imperfections: pt. 1

    International Nuclear Information System (INIS)

    Choh, S.T.; Kahng, W.H.; Um, C.I.

    1982-01-01

    Statistical mechanics is extended to treat a system in which parts of the Hamiltonian vary randomly. As the starting point of the theory, the statistical correlation among energy levels is neglected, allowing use of the central limit theorem of probability theory. (Author)

  5. Decision Support Systems: Applications in Statistics and Hypothesis Testing.

    Science.gov (United States)

    Olsen, Christopher R.; Bozeman, William C.

    1988-01-01

    Discussion of the selection of appropriate statistical procedures by educators highlights a study conducted to investigate the effectiveness of decision aids in facilitating the use of appropriate statistics. Experimental groups and a control group using a printed flow chart, a computer-based decision aid, and a standard text are described. (11…

  6. Reduced risk of breast cancer associated with recreational physical activity varies by HER2 status

    International Nuclear Information System (INIS)

    Ma, Huiyan; Xu, Xinxin; Ursin, Giske; Simon, Michael S; Marchbanks, Polly A; Malone, Kathleen E; Lu, Yani; McDonald, Jill A; Folger, Suzanne G; Weiss, Linda K; Sullivan-Halley, Jane; Deapen, Dennis M; Press, Michael F; Bernstein, Leslie

    2015-01-01

    Convincing epidemiologic evidence indicates that physical activity is inversely associated with breast cancer risk. Whether this association varies by the tumor protein expression status of the estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), or p53 is unclear. We evaluated the effects of recreational physical activity on risk of invasive breast cancer classified by the four biomarkers, fitting multivariable unconditional logistic regression models to data from 1195 case and 2012 control participants in the population-based Women’s Contraceptive and Reproductive Experiences Study. Self-reported recreational physical activity at different life periods was measured as average annual metabolic equivalents of energy expenditure [MET]-hours per week. Our biomarker-specific analyses showed that lifetime recreational physical activity was negatively associated with the risks of ER-positive (ER+) and of HER2-negative (HER2−) subtypes (both P for trend ≤ 0.04), but not with other subtypes (all P for trend > 0.10). Analyses using combinations of biomarkers indicated that risk of invasive breast cancer varied only by HER2 status. Risk of HER2− breast cancer decreased with increasing number of MET-hours of recreational physical activity in each specific life period examined, although some trend tests were only marginally statistically significant (all P for trend ≤ 0.06). The test for homogeneity of trends (HER2− vs. HER2+) reached statistical significance only when evaluating physical activity during the first 10 years after menarche (P for homogeneity = 0.03). Our data suggest that physical activity reduces risk of invasive breast cancers that lack HER2 overexpression, increasing our understanding of the biological mechanisms by which physical activity acts

  7. A conceptual guide to statistics using SPSS

    CERN Document Server

    Berkman, Elliot T

    2011-01-01

    Bridging an understanding of Statistics and SPSS. This unique text helps students develop a conceptual understanding of a variety of statistical tests by linking the ideas learned in a statistics class from a traditional statistics textbook with the computational steps and output from SPSS. Each chapter begins with a student-friendly explanation of the concept behind each statistical test and how the test relates to that concept. The authors then walk through the steps to compute the test in SPSS and the output, clearly linking how the SPSS procedure and output connect back to the conceptual u

  8. Debate on GMOs health risks after statistical findings in regulatory tests.

    Science.gov (United States)

    de Vendômois, Joël Spiroux; Cellier, Dominique; Vélot, Christian; Clair, Emilie; Mesnage, Robin; Séralini, Gilles-Eric

    2010-10-05

    We summarize the major points of international debate on health risk studies for the main commercialized edible GMOs. These GMOs are soy, maize and oilseed rape designed to contain new pesticide residues since they have been modified to be herbicide-tolerant (mostly to Roundup) or to produce mutated Bt toxins. The debated chronic alimentary risks may come from unpredictable insertional mutagenesis effects, metabolic effects, or from the new pesticide residues. The most detailed regulatory tests on the GMOs are three-month-long feeding trials of laboratory rats, which are biochemically assessed. The tests are not compulsory, and are not independently conducted. The test data and the corresponding results are kept secret by the companies. Our previous analyses of regulatory raw data at these levels, taking as representative examples the three GM maize varieties NK 603, MON 810, and MON 863, led us to conclude that hepatorenal toxicities were possible, and that longer testing was necessary. Our study was criticized by the company developing the GMOs in question and the regulatory bodies, mainly on the divergent biological interpretations of statistically significant biochemical and physiological effects. We present the scientific reasons for the crucially different biological interpretations and also highlight the shortcomings in the experimental protocols designed by the company. The debate implies an enormous responsibility towards public health and is essential due to nonexistent traceability or epidemiological studies in the GMO-producing countries.

  9. Development and testing of improved statistical wind power forecasting methods.

    Energy Technology Data Exchange (ETDEWEB)

    Mendes, J.; Bessa, R.J.; Keko, H.; Sumaili, J.; Miranda, V.; Ferreira, C.; Gama, J.; Botterud, A.; Zhou, Z.; Wang, J. (Decision and Information Sciences); (INESC Porto)

    2011-12-06

    Wind power forecasting (WPF) provides important inputs to power system operators and electricity market participants. It is therefore not surprising that WPF has attracted increasing interest within the electric power industry. In this report, we document our research on improving statistical WPF algorithms for point, uncertainty, and ramp forecasting. Below, we provide a brief introduction to the research presented in the following chapters. For a detailed overview of the state-of-the-art in wind power forecasting, we refer to [1]. Our related work on the application of WPF in operational decisions is documented in [2]. Point forecasts of wind power are highly dependent on the training criteria used in the statistical algorithms that are used to convert weather forecasts and observational data to a power forecast. In Chapter 2, we explore the application of information theoretic learning (ITL) as opposed to the classical minimum square error (MSE) criterion for point forecasting. In contrast to the MSE criterion, ITL criteria do not assume a Gaussian distribution of the forecasting errors. We investigate to what extent ITL criteria yield better results. In addition, we analyze time-adaptive training algorithms and how they enable WPF algorithms to cope with non-stationary data and, thus, to adapt to new situations without requiring additional offline training of the model. We test the new point forecasting algorithms on two wind farms located in the U.S. Midwest. Although there have been advancements in deterministic WPF, a single-valued forecast cannot provide information on the dispersion of observations around the predicted value. We argue that it is essential to generate, together with (or as an alternative to) point forecasts, a representation of the wind power uncertainty. Wind power uncertainty representation can take the form of probabilistic forecasts (e.g., probability density function, quantiles), risk indices (e.g., prediction risk index) or scenarios
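
    As an illustration of the training-criterion contrast described above, the following sketch fits a linear forecaster by gradient descent under the classical MSE criterion and under a maximum-correntropy criterion (one common ITL criterion). The data, kernel width and learning rate are arbitrary assumptions, not the report's actual algorithms or settings:

```python
# Synthetic comparison of an MSE-trained and a correntropy-trained linear forecaster.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                            # stand-in forecast features
w_true = np.array([0.8, -0.3, 0.5])
y = X @ w_true + rng.standard_t(df=2, size=500) * 0.3    # heavy-tailed, non-Gaussian errors

def fit(criterion, lr=0.1, iters=3000, kernel_sigma=1.0):
    w = np.zeros(3)
    for _ in range(iters):
        e = y - X @ w
        if criterion == "mse":
            grad = -2 * X.T @ e / len(y)
        else:  # maximum correntropy: maximize the mean Gaussian kernel of the errors
            k = np.exp(-e**2 / (2 * kernel_sigma**2))
            grad = -(X.T @ (k * e)) / (len(y) * kernel_sigma**2)
        w -= lr * grad
    return w

print("MSE fit        :", fit("mse"))
print("Correntropy fit:", fit("mcc"))
print("True weights   :", w_true)
```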

  10. A statistical design for testing apomictic diversification through linkage analysis.

    Science.gov (United States)

    Zeng, Yanru; Hou, Wei; Song, Shuang; Feng, Sisi; Shen, Lin; Xia, Guohua; Wu, Rongling

    2014-03-01

    The capacity of apomixis to generate maternal clones through seed reproduction has made it a useful characteristic for the fixation of heterosis in plant breeding. It has been observed that apomixis displays pronounced intra- and interspecific diversification, but the genetic mechanisms underlying this diversification remain elusive, obstructing the exploitation of this phenomenon in practical breeding programs. By capitalizing on molecular information in mapping populations, we describe and assess a statistical design that deploys linkage analysis to estimate and test the pattern and extent of apomictic differences at various levels from genotypes to species. The design is based on two reciprocal crosses between two individuals each chosen from a hermaphrodite or monoecious species. A multinomial distribution likelihood is constructed by combining marker information from two crosses. The EM algorithm is implemented to estimate the rate of apomixis and test its difference between two plant populations or species as the parents. The design is validated by computer simulation. A real data analysis of two reciprocal crosses between hickory (Carya cathayensis) and pecan (C. illinoensis) demonstrates the usefulness of the design in practice. The design provides a tool to address fundamental and applied questions related to the evolution and breeding of apomixis.

  11. A simple and robust statistical framework for planning, analysing and interpreting faecal egg count reduction test (FECRT) studies

    DEFF Research Database (Denmark)

    Denwood, M.J.; McKendrick, I.J.; Matthews, L.

    Introduction. There is an urgent need for a method of analysing FECRT data that is computationally simple and statistically robust. A method for evaluating the statistical power of a proposed FECRT study would also greatly enhance the current guidelines. Methods. A novel statistical framework has been developed that evaluates observed FECRT data against two null hypotheses: (1) the observed efficacy is consistent with the expected efficacy, and (2) the observed efficacy is inferior to the expected efficacy. The method requires only four simple summary statistics of the observed data. Power ... that the notional type 1 error rate of the new statistical test is accurate. Power calculations demonstrate a power of only 65% with a sample size of 20 treatment and control animals, which increases to 69% with 40 control animals or 79% with 40 treatment animals. Discussion. The method proposed is simple ...
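
    For orientation, a generic FECRT efficacy calculation from group means and variances (a common four-summary-statistic computation, not necessarily the framework described above), with illustrative egg counts:

```python
# Generic FECRT efficacy estimate with a delta-method confidence interval.
import numpy as np

control   = np.array([320, 410, 150, 600, 280, 90, 510, 330, 220, 400])  # eggs per gram
treatment = np.array([ 10,   0,  30,  20,   0,  0,  40,  10,   0,  20])

n_c, n_t       = len(control), len(treatment)
mean_c, mean_t = control.mean(), treatment.mean()
var_c, var_t   = control.var(ddof=1), treatment.var(ddof=1)

efficacy = 100 * (1 - mean_t / mean_c)

# Approximate 95% CI on the log ratio of group means; wide intervals flag
# under-powered studies.
se_log_ratio = np.sqrt(var_t / (n_t * mean_t**2) + var_c / (n_c * mean_c**2))
lo = 100 * (1 - (mean_t / mean_c) * np.exp(1.96 * se_log_ratio))
hi = 100 * (1 - (mean_t / mean_c) * np.exp(-1.96 * se_log_ratio))
print(f"efficacy = {efficacy:.1f}%  (approx. 95% CI {lo:.1f}% to {hi:.1f}%)")
```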

  12. Influence of manufacturing parameters on the strength of PLA parts using Layered Manufacturing technique: A statistical approach

    Science.gov (United States)

    Jaya Christiyan, K. G.; Chandrasekhar, U.; Mathivanan, N. Rajesh; Venkateswarlu, K.

    2018-02-01

    3D printing was successfully used to fabricate samples of polylactic acid (PLA). Processing parameters such as lay-up speed, lay-up thickness, and printing nozzle diameter were varied. All samples were tested for flexural strength using a three-point load test. A statistical mathematical model was developed to correlate the processing parameters with flexural strength. The results clearly demonstrated that lay-up thickness and nozzle diameter influenced flexural strength significantly, whereas lay-up speed hardly influenced it.

  13. A review of statistical methods for testing genetic anticipation: looking for an answer in Lynch syndrome

    DEFF Research Database (Denmark)

    Boonstra, Philip S; Gruber, Stephen B; Raymond, Victoria M

    2010-01-01

    Anticipation, manifested through decreasing age of onset or increased severity in successive generations, has been noted in several genetic diseases. Statistical methods for genetic anticipation range from a simple use of the paired t-test for age of onset restricted to affected parent-child pairs......, and this right truncation effect is more pronounced in children than in parents. In this study, we first review different statistical methods for testing genetic anticipation in affected parent-child pairs that address the issue of bias due to right truncation. Using affected parent-child pair data, we compare...... the issue of multiplex ascertainment and its effect on the different methods. We then focus on exploring genetic anticipation in Lynch syndrome and analyze new data on the age of onset in affected parent-child pairs from families seen at the University of Michigan Cancer Genetics clinic with a mutation...

  14. Errors in 'BED'-derived estimates of HIV incidence will vary by place, time and age.

    Directory of Open Access Journals (Sweden)

    Timothy B Hallett

    2009-05-01

    Full Text Available The BED Capture Enzyme Immunoassay, believed to distinguish recent HIV infections, is being used to estimate HIV incidence, although an important property of the test--how specificity changes with time since infection--has not been measured. We construct hypothetical scenarios for the performance of the BED test, consistent with current knowledge, and explore how this could influence errors in BED estimates of incidence using a mathematical model of six African countries. The model is also used to determine the conditions and the sample sizes required for the BED test to reliably detect trends in HIV incidence. If the chance of misclassification by BED increases with time since infection, the overall proportion of individuals misclassified could vary widely between countries, over time, and across age-groups, in a manner determined by the historic course of the epidemic and the age-pattern of incidence. Under some circumstances, changes in BED estimates over time can approximately track actual changes in incidence, but large sample sizes (50,000+) will be required for recorded changes to be statistically significant. The relationship between BED test specificity and time since infection has not been fully measured, but, if it decreases, errors in estimates of incidence could vary by place, time and age-group. This means that post-assay adjustment procedures using parameters from different populations or at different times may not be valid. Further research is urgently needed into the properties of the BED test, and the rate of misclassification in a wide range of populations.

  15. Which statistics should tropical biologists learn?

    Science.gov (United States)

    Loaiza Velásquez, Natalia; González Lutz, María Isabel; Monge-Nájera, Julián

    2011-09-01

    Tropical biologists study the richest and most endangered biodiversity on the planet, and in these times of climate change and mega-extinctions, the need for efficient, good quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection, mediocre or bad experimental design, and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, during a year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student's T Test, Linear Regression, Pearson's Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon's Diversity Index, Tukey's Test, Cluster Analysis, Spearman's Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well-designed one-semester course should be enough for their basic requirements.
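
    Most of the procedures on this list are a single call in standard scientific Python; the sketch below, on toy data, is meant only to show how little of the mathematics a user needs to reimplement:

```python
# The survey's most frequent procedures mapped onto common Python calls (toy data).
import numpy as np
from scipy import stats, cluster
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
a, b, c = rng.normal(0, 1, 30), rng.normal(0.5, 1, 30), rng.normal(1, 1, 30)
counts  = np.array([[12, 8], [5, 15]])
species = np.array([0.4, 0.3, 0.2, 0.1])                  # relative abundances
X       = rng.normal(size=(30, 4))

stats.f_oneway(a, b, c)                                   # ANOVA
stats.chi2_contingency(counts)                            # chi-square test
stats.ttest_ind(a, b)                                     # Student's t test
stats.linregress(a, b)                                    # linear regression
stats.pearsonr(a, b)                                      # Pearson correlation
stats.mannwhitneyu(a, b)                                  # Mann-Whitney U
stats.kruskal(a, b, c)                                    # Kruskal-Wallis
-(species * np.log(species)).sum()                        # Shannon diversity index
stats.spearmanr(a, b)                                     # Spearman rank correlation
cluster.hierarchy.linkage(X, method="average")            # cluster analysis
PCA(n_components=2).fit(X)                                # principal component analysis
# Tukey's post hoc test is available in recent SciPy (stats.tukey_hsd) or statsmodels.
```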

  16. Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics

    Science.gov (United States)

    Dowding, Irene; Haufe, Stefan

    2018-01-01

    Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
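
    A simplified sketch of the contrast described above, comparing a naive t-test on subject means with a precision-weighted combination that uses the within-subject variances (the weighting here is a basic fixed-effect version, not necessarily the exact procedure of the paper):

```python
# Naive group-level t-test versus precision weighting on simulated nested data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials = rng.integers(20, 200, size=20)                  # unbalanced trial counts per subject
true_effect = 0.3
subj_means, subj_vars = [], []
for n in n_trials:
    x = rng.normal(true_effect, 2.0, size=n)               # noisy trials for one subject
    subj_means.append(x.mean())
    subj_vars.append(x.var(ddof=1) / n)                    # variance of the subject mean

subj_means, subj_vars = np.array(subj_means), np.array(subj_vars)

# Naive approach: ignore within-subject variance.
t_naive, p_naive = stats.ttest_1samp(subj_means, 0.0)

# Precision-weighted (inverse-variance) combination of subject means.
w     = 1.0 / subj_vars
est   = np.sum(w * subj_means) / np.sum(w)
se    = np.sqrt(1.0 / np.sum(w))
p_wgt = 2 * stats.norm.sf(abs(est / se))
print(f"naive t-test p = {p_naive:.4f}; precision-weighted p = {p_wgt:.4f}")
```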

  17. In vivo evaluation of the effect of stimulus distribution on FIR statistical efficiency in event-related fMRI.

    Science.gov (United States)

    Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L

    2013-05-15

    Technical developments in MRI have improved signal to noise, allowing the use of analysis methods such as finite impulse response (FIR) analysis of rapid event-related functional MRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is, however, vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting the stimulus distribution to specific sequences, while more design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo. Thus, it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal-to-noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task with five random and m-sequence-based protocols, with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, with the level being a function of multicollinearity. Experiment protocols varied by up to 55.4% in standard deviation. Results confirm that the quality of fMRI in an FIR analysis can significantly and substantially vary with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. Published by Elsevier B.V.
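
    FIR statistical efficiency can be made concrete with the usual design-efficiency formula 1/trace((X'X)^-1) applied to the FIR design matrix; the sketch below compares a jittered and a fixed inter-stimulus interval using invented timing parameters, not the study's protocols:

```python
# Design efficiency of an FIR model for two illustrative stimulus distributions.
import numpy as np

def fir_design(onsets, n_scans, n_bins):
    """One FIR regressor per post-stimulus time bin, plus an intercept column."""
    X = np.zeros((n_scans, n_bins))
    for t in onsets:
        for k in range(n_bins):
            if t + k < n_scans:
                X[t + k, k] = 1.0
    return np.column_stack([X, np.ones(n_scans)])

def efficiency(onsets, n_scans=500, n_bins=12):
    X = fir_design(onsets, n_scans, n_bins)
    return 1.0 / np.trace(np.linalg.inv(X.T @ X))

rng = np.random.default_rng(3)
n_events = 30
jittered = np.cumsum(rng.integers(8, 17, size=n_events))   # jittered ISIs of 8-16 scans
fixed    = np.arange(1, n_events + 1) * 12                  # constant ISI of 12 scans

print("jittered-ISI efficiency:", round(efficiency(jittered), 4))
print("fixed-ISI efficiency   :", round(efficiency(fixed), 4))
```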

  18. The Use of Statistical Process Control-Charts for Person-Fit Analysis on Computerized Adaptive Testing. LSAC Research Report Series.

    Science.gov (United States)

    Meijer, Rob R.; van Krimpen-Stoop, Edith M. L. A.

    In this study a cumulative-sum (CUSUM) procedure from the theory of Statistical Process Control was modified and applied in the context of person-fit analysis in a computerized adaptive testing (CAT) environment. Six person-fit statistics were proposed using the CUSUM procedure, and three of them could be used to investigate the CAT in online test…
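
    A minimal sketch of a CUSUM on standardized item-score residuals, one plausible form of such a person-fit statistic (the Rasch model, reference value and decision threshold below are illustrative assumptions):

```python
# Two one-sided CUSUMs on Rasch residuals for a simulated adaptive-test response string.
import numpy as np

rng = np.random.default_rng(4)
theta_hat = 0.5                              # examinee's ability estimate
b = rng.normal(theta_hat, 0.4, size=30)      # difficulties of administered items (CAT targets ability)
p = 1.0 / (1.0 + np.exp(-(theta_hat - b)))   # expected P(correct) under the Rasch model
x = rng.binomial(1, p)                       # observed 0/1 responses (simulated as fitting)

resid = (x - p) / np.sqrt(p * (1 - p))       # standardized residuals
k = 0.5                                      # reference value; flags drifts larger than this
c_plus = c_minus = 0.0
for r in resid:
    c_plus  = max(0.0, c_plus + r - k)       # accumulates unexpectedly correct answers
    c_minus = min(0.0, c_minus + r + k)      # accumulates unexpectedly incorrect answers

h = 3.0                                      # decision threshold (set by simulation in practice)
print("aberrant response pattern?", c_plus > h or -c_minus > h)
```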

  19. [''R"--project for statistical computing

    DEFF Research Database (Denmark)

    Dessau, R.B.; Pipper, Christian Bressen

    2008-01-01

    An introduction to the R project for statistical computing (www.R-project.org) is presented. The main topics are: 1. To make the professional community aware of "R" as a potent and free software for graphical and statistical analysis of medical data; 2. Simple well-known statistical tests are fairly easy to perform in R, but more complex modelling requires programming skills; 3. R is seen as a tool for teaching statistics and implementing complex modelling of medical data among medical professionals. Publication date: 2008/1/28

  20. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed that uses a face shape statistical model, with the pose parameters represented by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses. Shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, a mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using only a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  1. An investigation of the statistical power of neutrality tests based on comparative and population genetic data

    DEFF Research Database (Denmark)

    Zhai, Weiwei; Nielsen, Rasmus; Slatkin, Montgomery

    2009-01-01

    In this report, we investigate the statistical power of several tests of selective neutrality based on patterns of genetic diversity within and between species. The goal is to compare tests based solely on population genetic data with tests using comparative data or a combination of comparative and population genetic data. We show that in the presence of repeated selective sweeps on relatively neutral background, tests based on the d(N)/d(S) ratios in comparative data almost always have more power to detect selection than tests based on population genetic data, even if the overall level of divergence ... selection. The Hudson-Kreitman-Aguadé test is the most powerful test for detecting positive selection among the population genetic tests investigated, whereas the McDonald-Kreitman test typically has more power to detect negative selection. We discuss our findings in the light of the discordant results obtained...
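
    One of the comparative/population-genetic hybrids mentioned above, the McDonald-Kreitman test, reduces to a 2x2 contingency table; a minimal sketch with illustrative counts:

```python
# McDonald-Kreitman test: fixed differences versus polymorphisms,
# nonsynonymous versus synonymous sites (illustrative counts).
from scipy import stats

#                 nonsynonymous  synonymous
fixed       = [7, 17]    # divergence between species
polymorphic = [2, 42]    # variation within a species

odds_ratio, p_value = stats.fisher_exact([fixed, polymorphic])
print(f"Fisher's exact p = {p_value:.4f}")
# An excess of nonsynonymous fixed differences (relative to polymorphism) points to
# positive selection; an excess of nonsynonymous polymorphism points to weakly
# deleterious variants under negative (purifying) selection.
```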

  2. Designing experiments for maximum information from cyclic oxidation tests and their statistical analysis using half Normal plots

    International Nuclear Information System (INIS)

    Coleman, S.Y.; Nicholls, J.R.

    2006-01-01

    Cyclic oxidation testing at elevated temperatures requires careful experimental design and the adoption of standard procedures to ensure reliable data. This is a major aim of the 'COTEST' research programme. Further, as such tests are both time consuming and costly, in terms of human effort, to take measurements over a large number of cycles, it is important to gain maximum information from a minimum number of tests (trials). This search for standardisation of cyclic oxidation conditions leads to a series of tests to determine the relative effects of cyclic parameters on the oxidation process. Following a review of the available literature, databases and the experience of partners to the COTEST project, the most influential parameters, upper dwell temperature (oxidation temperature) and time (hot time), lower dwell time (cold time) and environment, were investigated in partners' laboratories. It was decided to test upper dwell temperature at 3 levels, at and equidistant from a reference temperature; to test upper dwell time at a reference, a higher and a lower time; to test lower dwell time at a reference and a higher time and wet and dry environments. Thus an experiment, consisting of nine trials, was designed according to statistical criteria. The results of the trial were analysed statistically, to test the main linear and quadratic effects of upper dwell temperature and hot time and the main effects of lower dwell time (cold time) and environment. The nine trials are a quarter fraction of the 36 possible combinations of parameter levels that could have been studied. The results have been analysed by half Normal plots as there are only 2 degrees of freedom for the experimental error variance, which is rather low for a standard analysis of variance. Half Normal plots give a visual indication of which factors are statistically significant. In this experiment each trial has 3 replications, and the data are analysed in terms of mean mass change, oxidation kinetics
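
    A minimal sketch of a half-normal plot of factorial effect estimates, the graphical check used when too few error degrees of freedom remain for a standard analysis of variance; the factors and simulated responses are illustrative, not COTEST data:

```python
# Half-normal plot of effect estimates from a small two-level factorial (toy data).
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
# Full 2^3 design in coded units: upper dwell temperature, hot time, cold time.
levels = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
y = 5 + 2.0 * levels[:, 0] + 0.2 * levels[:, 1] + rng.normal(0, 0.3, 8)  # mass change

# Effect estimate for each factor: mean at the high level minus mean at the low level.
effects = {name: y[levels[:, i] == 1].mean() - y[levels[:, i] == -1].mean()
           for i, name in enumerate(["temperature", "hot time", "cold time"])}

abs_eff = np.sort(np.abs(list(effects.values())))
m = len(abs_eff)
q = stats.halfnorm.ppf((np.arange(1, m + 1) - 0.5) / m)   # half-normal plotting positions

plt.scatter(q, abs_eff)
plt.xlabel("half-normal quantile")
plt.ylabel("|effect estimate|")
plt.title("Effects far above the line through the origin are likely real")
plt.show()
```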

  3. Do parental perceptions and motivations towards genetic testing and prenatal diagnosis for deafness vary in different cultures?

    Science.gov (United States)

    Nahar, Risha; Puri, Ratna D; Saxena, Renu; Verma, Ishwar C

    2013-01-01

    Surveys of attitudes of individuals with deafness and their families towards genetic testing or prenatal diagnosis have mostly been carried out in the West. It is expected that the perceptions and attitudes would vary amongst persons of different cultures and economic background. There is little information on the prevailing attitudes towards genetic testing and prenatal diagnosis for deafness in developing countries. Therefore, this study evaluates the motivations of Indian people with inherited hearing loss towards such testing. Twenty-eight families with a history of congenital hearing loss (23 hearing parents with child/family member with deafness, 4 couples with both partners having deafness and 1 parent and child with deafness) participated in a semi-structured survey investigating their interest, attitudes, and intentions for using genetic and prenatal testing for deafness. Participants opined that proper management and care of individuals with deafness were hampered by limited rehabilitation facilities and carried a significant financial and social burden. Nineteen (68%) opted for genetic testing. Twenty-six (93%) expressed high interest in prenatal diagnosis, while 19 (73%) would consider termination of an affected fetus. Three hearing couples, in whom the causative mutations were identified, opted for prenatal diagnosis. On testing, all three fetuses were affected and the hearing parents elected to terminate the pregnancies. This study provides an insight into the contrasting perceptions towards hearing disability in India and its influence on the desirability of genetic testing and prenatal diagnosis. Copyright © 2012 Wiley Periodicals, Inc.

  4. A Stochastic Fractional Dynamics Model of Rainfall Statistics

    Science.gov (United States)

    Kundu, Prasun; Travis, James

    2013-04-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous and isotropic and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment. Some data sets, containing periods of non-stationary behavior that involve occasional anomalously correlated rain events, present a challenge for the model.

  5. AP statistics crash course

    CERN Document Server

    D'Alessio, Michael

    2012-01-01

    AP Statistics Crash Course - Gets You a Higher Advanced Placement Score in Less Time Crash Course is perfect for the time-crunched student, the last-minute studier, or anyone who wants a refresher on the subject. AP Statistics Crash Course gives you: Targeted, Focused Review - Study Only What You Need to Know Crash Course is based on an in-depth analysis of the AP Statistics course description outline and actual Advanced Placement test questions. It covers only the information tested on the exam, so you can make the most of your valuable study time. Our easy-to-read format covers: exploring da

  6. Mutagenicity potential of commercial broth cubes at varying concentrations

    International Nuclear Information System (INIS)

    De Torres, Nelson Velasquez; Talain, Augusto Nicolas.

    1997-01-01

    Today, there has been growing concern about the mutagenicity potential of environmental chemical systems. These environmental chemicals such as pesticides, food additives, synthetic drugs, water and atmospheric pollutants are possible causes of mutagenic activity. Meat products and some meat flavorings were also reported to exhibit mutagenic activity. Since these products are a normal part of the daily human diet, there is a need for extensive studies regarding the possible mutagenic activity associated with these products. This study aimed to evaluate the mutagenicity potential of commercial broth cubes at varying concentrations. The researchers sought to answer the following questions: 1. Do beef, pork and chicken broth cubes exhibit mutagenic activity? 2. Are there significant differences in the mutagenic activity among the three samples? 3. Are there significant differences in the mutagenic activity exhibited by each of the samples compared to that of Mitomycin-C (positive control)? 4. Which of the samples at each specific concentration exhibits the greatest mutagenic activity? Three specific concentrations of beef, pork and chicken broth cubes were prepared and their mutagenicity potential was evaluated using the micronucleus test. The formation of micronucleated polychromatic and micronucleated normochromatic erythrocytes in the bone marrow cells of mice treated with these samples was detected using a Carl-Zeiss photomicroscope. The statistical tools used to test the validity of the null hypothesis were analysis of variance with a randomized complete block design and an independent t-test. (author)

  7. Mutagenicity potential of commercial broth cubes at varying concentrations

    Energy Technology Data Exchange (ETDEWEB)

    De Torres, Nelson Velasquez; Talain, Augusto Nicolas

    1998-12-31

    Today, there has been growing concern about the mutagenicity potential of environmental chemical systems. These environmental chemicals such as pesticides, food additives, synthetic drugs, water and atmospheric pollutants are possible causes of mutagenic activity. Meat products and some meat flavorings were also reported to exhibit mutagenic activity. Since these products are a normal part of the daily human diet, there is a need for extensive studies regarding the possible mutagenic activity associated with these products. This study aimed to evaluate the mutagenicity potential of commercial broth cubes at varying concentrations. The researchers sought to answer the following questions: 1. Do beef, pork and chicken broth cubes exhibit mutagenic activity? 2. Are there significant differences in the mutagenic activity among the three samples? 3. Are there significant differences in the mutagenic activity exhibited by each of the samples compared to that of Mitomycin-C (positive control)? 4. Which of the samples at each specific concentration exhibits the greatest mutagenic activity? Three specific concentrations of beef, pork and chicken broth cubes were prepared and their mutagenicity potential was evaluated using the micronucleus test. The formation of micronucleated polychromatic and micronucleated normochromatic erythrocytes in the bone marrow cells of mice treated with these samples was detected using a Carl-Zeiss photomicroscope. The statistical tools used to test the validity of the null hypothesis were analysis of variance with a randomized complete block design and an independent t-test. (author). 28 refs., 9 figs., 26 tabs.
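
    For the analysis-of-variance setup mentioned above (treatment factor plus blocks), a minimal sketch with made-up micronucleus counts; the column names and values are illustrative assumptions:

```python
# ANOVA for a randomized complete block design: broth-cube sample as treatment, batch as block.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "sample": ["beef", "pork", "chicken", "control"] * 3,
    "block":  [1] * 4 + [2] * 4 + [3] * 4,
    "mn_pce": [6, 5, 7, 3, 8, 6, 9, 4, 7, 6, 8, 2],   # micronucleated PCEs per animal
})

model = ols("mn_pce ~ C(sample) + C(block)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```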

  8. Statistical inferences under the Null hypothesis: Common mistakes and pitfalls in neuroimaging studies.

    Directory of Open Access Journals (Sweden)

    Jean-Michel eHupé

    2015-02-01

    Full Text Available Published studies using functional and structural MRI include many errors in the way data are analyzed and conclusions reported. This was observed when working on a comprehensive review of the neural bases of synesthesia, but these errors are probably endemic to neuroimaging studies. All studies reviewed had based their conclusions on Null Hypothesis Significance Tests (NHST). NHST has been criticized since its inception because it is more appropriate for taking decisions related to a null hypothesis (as in manufacturing) than for making inferences about behavioral and neuronal processes. Here I focus on a few key problems of NHST related to brain imaging techniques, and explain why or when we should not rely on significance tests. I also observed that the ill-posed logic of NHST was often not even correctly applied, and describe what I identified as common mistakes or at least problematic practices in published papers, in light of what could be considered the very basics of statistical inference. MRI statistics also involve much more complex issues than standard statistical inference. Analysis pipelines vary a lot between studies, even for those using the same software, and there is no consensus on which pipeline is best. I propose a synthetic view of the logic behind the possible methodological choices, and warn against the usage and interpretation of two statistical methods popular in brain imaging studies, the false discovery rate (FDR) procedure and permutation tests. I suggest that current models for the analysis of brain imaging data suffer from serious limitations and call for a revision taking into account the new statistics (confidence intervals) logic.
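
    Since the FDR procedure is named above, a minimal sketch of the Benjamini-Hochberg step-up rule applied to a vector of simulated p-values may help fix ideas (whether it is appropriate for a given imaging analysis is exactly the kind of question the abstract raises):

```python
# Benjamini-Hochberg false discovery rate procedure on simulated voxel-wise p-values.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0   # largest i with p_(i) <= q*i/m
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True                                   # reject the k smallest p-values
    return rejected

rng = np.random.default_rng(6)
pvals = np.concatenate([rng.uniform(0, 1, 950),      # null voxels
                        rng.beta(0.5, 20, 50)])      # "active" voxels with small p-values
print("voxels declared significant:", benjamini_hochberg(pvals).sum())
```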

  9. Assessing attitudes towards statistics among medical students: psychometric properties of the Serbian version of the Survey of Attitudes Towards Statistics (SATS.

    Directory of Open Access Journals (Sweden)

    Dejana Stanisavljevic

    Full Text Available BACKGROUND: Medical statistics has become important and relevant for future doctors, enabling them to practice evidence based medicine. Recent studies report that students' attitudes towards statistics play an important role in their statistics achievements. The aim of the study was to test the psychometric properties of the Serbian version of the Survey of Attitudes Towards Statistics (SATS) in order to acquire a valid instrument to measure attitudes inside the Serbian educational context. METHODS: The validation study was performed on a cohort of 417 medical students who were enrolled in an obligatory introductory statistics course. The SATS adaptation was based on an internationally accepted methodology for translation and cultural adaptation. Psychometric properties of the Serbian version of the SATS were analyzed through the examination of factorial structure and internal consistency. RESULTS: Most medical students held positive attitudes towards statistics. The average total SATS score was above neutral (4.3±0.8), and varied from 1.9 to 6.2. Confirmatory factor analysis validated the six-factor structure of the questionnaire (Affect, Cognitive Competence, Value, Difficulty, Interest and Effort). Values for fit indices TLI (0.940) and CFI (0.961) were above the cut-off of ≥0.90. The RMSEA value of 0.064 (0.051-0.078) was below the suggested value of ≤0.08. Cronbach's alpha of the entire scale was 0.90, indicating scale reliability. In a multivariate regression model, self-rating of ability in mathematics and current grade point average were significantly associated with the total SATS score after adjusting for age and gender. CONCLUSION: Present study provided the evidence for the appropriate metric properties of the Serbian version of SATS. Confirmatory factor analysis validated the six-factor structure of the scale. The SATS might be reliable and a valid instrument for identifying medical students' attitudes towards statistics in the

  10. Assessing attitudes towards statistics among medical students: psychometric properties of the Serbian version of the Survey of Attitudes Towards Statistics (SATS).

    Science.gov (United States)

    Stanisavljevic, Dejana; Trajkovic, Goran; Marinkovic, Jelena; Bukumiric, Zoran; Cirkovic, Andja; Milic, Natasa

    2014-01-01

    Medical statistics has become important and relevant for future doctors, enabling them to practice evidence based medicine. Recent studies report that students' attitudes towards statistics play an important role in their statistics achievements. The aim of the study was to test the psychometric properties of the Serbian version of the Survey of Attitudes Towards Statistics (SATS) in order to acquire a valid instrument to measure attitudes inside the Serbian educational context. The validation study was performed on a cohort of 417 medical students who were enrolled in an obligatory introductory statistics course. The SATS adaptation was based on an internationally accepted methodology for translation and cultural adaptation. Psychometric properties of the Serbian version of the SATS were analyzed through the examination of factorial structure and internal consistency. Most medical students held positive attitudes towards statistics. The average total SATS score was above neutral (4.3±0.8), and varied from 1.9 to 6.2. Confirmatory factor analysis validated the six-factor structure of the questionnaire (Affect, Cognitive Competence, Value, Difficulty, Interest and Effort). Values for fit indices TLI (0.940) and CFI (0.961) were above the cut-off of ≥0.90. The RMSEA value of 0.064 (0.051-0.078) was below the suggested value of ≤0.08. Cronbach's alpha of the entire scale was 0.90, indicating scale reliability. In a multivariate regression model, self-rating of ability in mathematics and current grade point average were significantly associated with the total SATS score after adjusting for age and gender. Present study provided the evidence for the appropriate metric properties of the Serbian version of SATS. Confirmatory factor analysis validated the six-factor structure of the scale. The SATS might be reliable and a valid instrument for identifying medical students' attitudes towards statistics in the Serbian educational context.

  11. Statistical assessment of numerous Monte Carlo tallies

    International Nuclear Information System (INIS)

    Kiedrowski, Brian C.; Solomon, Clell J.

    2011-01-01

    Four tests are developed to assess the statistical reliability of collections of tallies that number in thousands or greater. To this end, the relative-variance density function is developed and its moments are studied using simplified, non-transport models. The statistical tests are performed upon the results of MCNP calculations of three different transport test problems and appear to show that the tests are appropriate indicators of global statistical quality. (author)

  12. Orthoptic parameters and asthenopic symptoms analysis after 3D viewing at varying distances

    Directory of Open Access Journals (Sweden)

    Oleeviya Joseph

    2018-05-01

    Full Text Available AIM: To analyse visual modifications such as amplitude of accommodation, near point of convergence (NPC), stereopsis and near phoria associated with asthenopic symptoms after 3D viewing at varying distances. METHODS: A prospective study. Thirty young adults were randomly selected. Each individual was exposed to 3D viewing thrice in a day for a fixed distance, and the distance was varied on three consecutive days. The same video of equal duration and different screen sizes were used for every distance. The cyclic 3D mode of K-multimedia (KMplayer) was used for projecting the 3D video. Different variables, namely stereopsis, amplitude of accommodation, near point of convergence, near phoria and asthenopic symptoms, were recorded immediately after 3D video viewing. Stereopsis was measured with the "Toegepast Natuurwetenschappelijk Onderzoek" or "Netherlands Organisation for Applied Scientific Research" (TNO) test, amplitude of accommodation and NPC were measured using the RAF ruler, near phoria was measured using a prism bar, and a closed-ended sample questionnaire was used to record the occurrence of asthenopic symptoms. Statistical analyses were performed using descriptive statistics, paired t-tests, etc. Qualitative data were analyzed using the Chi-square test. RESULTS: For the distances of 40 cm, 3 m and 6 m, amplitude of accommodation was significantly reduced by 0.66 D, 1.12 D and 1.44 D. The NPC receded significantly by 0.63 cm, 0.93 cm and 1.23 cm, and the near phoria increased significantly by 0.87, and 2.2 prism dioptres (PD) base-in respectively. It was found that most of the subjects got pain around the eyes, headache and irritation for each viewing distance. This study also revealed that 3D video viewing in theaters may increase the symptoms of headache, watering and irritation. Symptoms like headache, watering, fatigue, irritation and nausea may increase considerably in the home environment, and symptoms such as headache and watering may cause significant discomfort by 3D

  13. Instruction of Statistics via Computer-Based Tools: Effects on Statistics' Anxiety, Attitude, and Achievement

    Science.gov (United States)

    Ciftci, S. Koza; Karadag, Engin; Akdal, Pinar

    2014-01-01

    The purpose of this study was to determine the effect of statistics instruction using computer-based tools, on statistics anxiety, attitude, and achievement. This study was designed as quasi-experimental research and the pattern used was a matched pre-test/post-test with control group design. Data was collected using three scales: a Statistics…

  14. Varying prior information in Bayesian inversion

    International Nuclear Information System (INIS)

    Walker, Matthew; Curtis, Andrew

    2014-01-01

    Bayes' rule is used to combine likelihood and prior probability distributions. The former represents knowledge derived from new data, the latter represents pre-existing knowledge; the Bayesian combination is the so-called posterior distribution, representing the resultant new state of knowledge. While varying the likelihood due to differing data observations is common, there are also situations where the prior distribution must be changed or replaced repeatedly. For example, in mixture density neural network (MDN) inversion, using current methods the neural network employed for inversion needs to be retrained every time prior information changes. We develop a method of prior replacement to vary the prior without re-training the network. Thus the efficiency of MDN inversions can be increased, typically by orders of magnitude when applied to geophysical problems. We demonstrate this for the inversion of seismic attributes in a synthetic subsurface geological reservoir model. We also present results which suggest that prior replacement can be used to control the statistical properties (such as variance) of the final estimate of the posterior in more general (e.g., Monte Carlo based) inverse problem solutions. (paper)
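
    The core idea of prior replacement can be illustrated without the MDN machinery: given samples obtained under an old prior, reweight them by the ratio of new to old prior densities. A one-dimensional toy sketch (all distributions below are arbitrary assumptions):

```python
# Prior replacement by importance reweighting of existing posterior samples (toy problem).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Posterior samples obtained under an old prior N(0, 2^2) and some likelihood.
old_prior = stats.norm(0, 2)
samples = rng.normal(1.2, 0.6, size=20000)        # stand-in for stored posterior draws

# New prior N(2, 1^2): reweight the stored samples instead of re-running inference.
new_prior = stats.norm(2, 1)
w = new_prior.pdf(samples) / old_prior.pdf(samples)
w /= w.sum()

mean_new = np.sum(w * samples)
var_new  = np.sum(w * (samples - mean_new) ** 2)
print(f"posterior mean under new prior ~ {mean_new:.2f}, variance ~ {var_new:.2f}")
```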

  15. Histoplasmosis Statistics

    Science.gov (United States)


  16. Comparative study of bio-operational variables among athletes from sports with different demands

    Directory of Open Access Journals (Sweden)

    Nilo Terra Arêas Neto

    2010-09-01

    Full Text Available This study aimed to measure and compare the scores of athletes from sports with different demands, here basketball and athletics sprinting, on bio-operational variables. Thirty (N=30) male athletes aged between 13 and 16 years were selected: 15 basketball players and 15 sprinters. The variables general coordination, kinesthetic perception and motor reaction time were measured by tests applied in the following order: the Burpee test, the Kinesthetic Perception Jump test and the Motor Reaction Time test. The data were processed and analyzed in SPSS 10. Descriptive statistics comprised minimum and maximum scores, means and standard deviations; inferential statistics used Student's t-test. The study hypothesis was tested against an alpha of p<0.05. Motor reaction time was the variable that reached statistical significance between the groups, with the basketball athletes obtaining the better score.

  17. Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm

    KAUST Repository

    Jin, Ick Hoon

    2013-10-01

    The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, while SAMCMC can do; for nondegenerate ERGMs, SAMCMC can work as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.

  18. Approximations to the distribution of a test statistic in covariance structure analysis: A comprehensive study.

    Science.gov (United States)

    Wu, Hao

    2018-05-01

    In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ2 distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ2 distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.

  19. Introductory statistics and analytics a resampling perspective

    CERN Document Server

    Bruce, Peter C

    2014-01-01

    Concise, thoroughly class-tested primer that features basic statistical concepts in the context of analytics, resampling, and the bootstrap. A uniquely developed presentation of key statistical topics, Introductory Statistics and Analytics: A Resampling Perspective provides an accessible approach to statistical analytics, resampling, and the bootstrap for readers with various levels of exposure to basic probability and statistics. Originally class-tested at one of the first online learning companies in the discipline, www.statistics.com, the book primarily focuses on application
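
    In the same resampling spirit, a minimal percentile-bootstrap confidence interval for a mean on toy data:

```python
# Percentile bootstrap confidence interval for a mean (toy, skewed sample).
import numpy as np

rng = np.random.default_rng(8)
sample = rng.exponential(scale=2.0, size=40)

boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(10000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean = {sample.mean():.2f}; 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```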

  20. Endogenous time-varying risk aversion and asset returns.

    Science.gov (United States)

    Berardi, Michele

    2016-01-01

    Stylized facts about statistical properties for short horizon returns in financial markets have been identified in the literature, but a satisfactory understanding for their manifestation is yet to be achieved. In this work, we show that a simple asset pricing model with representative agent is able to generate time series of returns that replicate such stylized facts if the risk aversion coefficient is allowed to change endogenously over time in response to unexpected excess returns under evolutionary forces. The same model, under constant risk aversion, would instead generate returns that are essentially Gaussian. We conclude that an endogenous time-varying risk aversion represents a very parsimonious way to make the model match real data on key statistical properties, and therefore deserves careful consideration from economists and practitioners alike.

  1. Learning Predictive Statistics: Strategies and Brain Mechanisms.

    Science.gov (United States)

    Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe

    2017-08-30

    When immersed in a new environment, we are challenged to decipher initially incomprehensible streams of sensory information. However, quite rapidly, the brain finds structure and meaning in these incoming signals, helping us to predict and prepare ourselves for future actions. This skill relies on extracting the statistics of event streams in the environment that contain regularities of variable complexity from simple repetitive patterns to complex probabilistic combinations. Here, we test the brain mechanisms that mediate our ability to adapt to the environment's statistics and predict upcoming events. By combining behavioral training and multisession fMRI in human participants (male and female), we track the corticostriatal mechanisms that mediate learning of temporal sequences as they change in structure complexity. We show that learning of predictive structures relates to individual decision strategy; that is, selecting the most probable outcome in a given context (maximizing) versus matching the exact sequence statistics. These strategies engage distinct human brain regions: maximizing engages dorsolateral prefrontal, cingulate, sensory-motor regions, and basal ganglia (dorsal caudate, putamen), whereas matching engages occipitotemporal regions (including the hippocampus) and basal ganglia (ventral caudate). Our findings provide evidence for distinct corticostriatal mechanisms that facilitate our ability to extract behaviorally relevant statistics to make predictions. SIGNIFICANCE STATEMENT Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. Past work has studied how humans identify repetitive patterns and associative pairings. However, the natural environment contains regularities that vary in complexity from simple repetition to complex probabilistic combinations. Here, we combine behavior and multisession fMRI to track the brain mechanisms that mediate our ability to adapt to

  2. Replicability of time-varying connectivity patterns in large resting state fMRI samples.

    Science.gov (United States)

    Abrol, Anees; Damaraju, Eswar; Miller, Robyn L; Stephen, Julia M; Claus, Eric D; Mayer, Andrew R; Calhoun, Vince D

    2017-12-01

    The past few years have seen an emergence of approaches that leverage temporal changes in whole-brain patterns of functional connectivity (the chronnectome). In this chronnectome study, we investigate the replicability of the human brain's inter-regional coupling dynamics during rest by evaluating two different dynamic functional network connectivity (dFNC) analysis frameworks using 7 500 functional magnetic resonance imaging (fMRI) datasets. To quantify the extent to which the emergent functional connectivity (FC) patterns are reproducible, we characterize the temporal dynamics by deriving several summary measures across multiple large, independent age-matched samples. Reproducibility was demonstrated through the existence of basic connectivity patterns (FC states) amidst an ensemble of inter-regional connections. Furthermore, application of the methods to conservatively configured (statistically stationary, linear and Gaussian) surrogate datasets revealed that some of the studied state summary measures were indeed statistically significant and also suggested that this class of null model did not explain the fMRI data fully. This extensive testing of reproducibility of similarity statistics also suggests that the estimated FC states are robust against variation in data quality, analysis, grouping, and decomposition methods. We conclude that future investigations probing the functional and neurophysiological relevance of time-varying connectivity assume critical importance. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
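
    A common dynamic-FNC recipe, sliding-window correlations followed by k-means clustering into recurring FC states, can be sketched as follows; window length, stride and the number of states are illustrative choices, not the study's settings:

```python
# Sliding-window connectivity matrices clustered into FC "states" (random time courses).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(10)
n_time, n_regions = 300, 10
ts = rng.normal(size=(n_time, n_regions))            # stand-in for component time courses

win, stride = 40, 5
windows = []
for start in range(0, n_time - win + 1, stride):
    c = np.corrcoef(ts[start:start + win].T)         # windowed connectivity matrix
    windows.append(c[np.triu_indices(n_regions, k=1)])
windows = np.array(windows)

k = 4
states = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(windows)
# Summary measures such as state occupancy and dwell times can then be compared
# across independent samples to assess replicability.
occupancy = np.bincount(states, minlength=k) / len(states)
print("fraction of windows in each state:", np.round(occupancy, 2))
```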

  3. The impact of reorienting cone-beam computed tomographic images in varied head positions on the coordinates of anatomical landmarks

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Hun; Jeong, Ho Gul; Hwang, Jae Joon; Lee, Jung Hee; Han, Sang Sun [Dept. of Oral and Maxillofacial Radiology, Yonsei University, College of Dentistry, Seoul (Korea, Republic of)

    2016-06-15

    The aim of this study was to compare the coordinates of anatomical landmarks on cone-beam computed tomographic (CBCT) images in varied head positions before and after reorientation using image analysis software. CBCT images were taken in a normal position and four varied head positions using a dry skull marked with 3 points where gutta percha was fixed. In each of the five radiographic images, reference points were set, 20 anatomical landmarks were identified, and each set of coordinates was calculated. Coordinates in the images from the normally positioned head were compared with those in the images obtained from varied head positions using statistical methods. Post-reorientation coordinates calculated using a three-dimensional image analysis program were also compared to the reference coordinates. In the original images, statistically significant differences were found between coordinates in the normal-position and varied-position images. However, post-reorientation, no statistically significant differences were found between coordinates in the normal-position and varied-position images. The changes in head position impacted the coordinates of the anatomical landmarks in three-dimensional images. However, reorientation using image analysis software allowed accurate superimposition onto the reference positions.

  4. Application of the modified chi-square ratio statistic in a stepwise procedure for cascade impactor equivalence testing.

    Science.gov (United States)

    Weber, Benjamin; Lee, Sau L; Delvadia, Renishkumar; Lionberger, Robert; Li, Bing V; Tsong, Yi; Hochhaus, Guenther

    2015-03-01

    Equivalence testing of aerodynamic particle size distribution (APSD) through multi-stage cascade impactors (CIs) is important for establishing bioequivalence of orally inhaled drug products. Recent work demonstrated that the median of the modified chi-square ratio statistic (MmCSRS) is a promising metric for APSD equivalence testing of test (T) and reference (R) products as it can be applied to a reduced number of CI sites that are more relevant for lung deposition. This metric is also less sensitive to the increased variability often observed for low-deposition sites. A method to establish critical values for the MmCSRS is described here. This method considers the variability of the R product by employing a reference variance scaling approach that allows definition of critical values as a function of the observed variability of the R product. A stepwise CI equivalence test is proposed that integrates the MmCSRS as a method for comparing the relative shapes of CI profiles and incorporates statistical tests for assessing equivalence of single actuation content and impactor sized mass. This stepwise CI equivalence test was applied to 55 published CI profile scenarios, which were classified as equivalent or inequivalent by members of the Product Quality Research Institute working group (PQRI WG). The results of the stepwise CI equivalence test using a 25% difference in MmCSRS as an acceptance criterion provided the best matching with those of the PQRI WG as decisions of both methods agreed in 75% of the 55 CI profile scenarios.

  5. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies

    Science.gov (United States)

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-01-01

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008–2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. PMID:27892471
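
    The two-step idea can be sketched in code roughly as follows (a hypothetical illustration, not the authors' COMBI implementation; the linear-SVM screen, the number of retained SNPs k and the per-SNP chi-square test are assumptions made for the sketch):

        import numpy as np
        from scipy.stats import chi2_contingency
        from sklearn.svm import LinearSVC

        def combi_like_screen(genotypes, phenotype, k=100, alpha=0.05):
            """Two-step screen: SVM-based SNP ranking, then association tests.

            genotypes: (n_samples, n_snps) matrix of minor-allele counts (0/1/2).
            phenotype: binary vector (0 = control, 1 = case).
            """
            # Step 1: train a linear SVM and keep the k SNPs with the largest |weight|.
            svm = LinearSVC(C=0.1, max_iter=10000).fit(genotypes, phenotype)
            candidates = np.argsort(-np.abs(svm.coef_[0]))[:k]

            # Step 2: chi-square test on the genotype-by-phenotype table of each
            # candidate SNP, with Bonferroni correction over the k candidates only.
            hits = []
            for j in candidates:
                table = np.zeros((3, 2))
                for g, y in zip(genotypes[:, j], phenotype):
                    table[int(g), int(y)] += 1
                table = table[table.sum(axis=1) > 0]   # drop empty genotype rows
                _, p, _, _ = chi2_contingency(table)
                if p < alpha / k:
                    hits.append((j, p))
            return hits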

  6. Statistics Clinic

    Science.gov (United States)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  7. Statistics & probability for dummies

    CERN Document Server

    Rumsey, Deborah J

    2013-01-01

    Two complete eBooks for one low price! Created and compiled by the publisher, this Statistics I and Statistics II bundle brings together two math titles in one, e-only bundle. With this special bundle, you'll get the complete text of the following two titles: Statistics For Dummies, 2nd Edition  Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tra

  8. Integrated Data Collection Analysis (IDCA) Program - Statistical Analysis of RDX Standard Data Sets

    Energy Technology Data Exchange (ETDEWEB)

    Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Phillips, Jason J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-30

    The Integrated Data Collection Analysis (IDCA) program is conducting a Proficiency Test for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Type II Class 5 standard. The material was tested as a well-characterized standard several times during the proficiency study to assess differences among participants and the range of results that may arise for well-behaved explosive materials. The analyses show that there are detectable differences among the results from IDCA participants. While these differences are statistically significant, most of them can be disregarded for comparison purposes to assess potential variability when laboratories attempt to measure identical samples using methods assumed to be nominally the same. The results presented in this report include the average sensitivity results for the IDCA participants and the ranges of values obtained. The ranges represent variation about the mean values of the tests of between 26% and 42%. The magnitude of this variation is attributed to differences in operator, method, and environment as well as the use of different instruments that are also of varying age. The results appear to be a good representation of the broader safety testing community based on the range of methods, instruments, and environments included in the IDCA Proficiency Test.

  9. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

    Basic approach and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Covers the concept of error and its classification into systematic and random errors; statistical fundamentals, probability theory, population distributions (Bernoulli, Poisson, Gauss), the t-test distribution, the χ² test, and error propagation based on analysis of variance. A bibliography and z, t-test, Poisson index and χ² tables are included.

  10. Visualizing the Bayesian 2-test case: The effect of tree diagrams on medical decision making.

    Science.gov (United States)

    Binder, Karin; Krauss, Stefan; Bruckmaier, Georg; Marienhagen, Jörg

    2018-01-01

    In medicine, diagnoses based on medical test results are probabilistic by nature. Unfortunately, cognitive illusions regarding the statistical meaning of test results are well documented among patients, medical students, and even physicians. There are two effective strategies that can foster insight into what is known as Bayesian reasoning situations: (1) translating the statistical information on the prevalence of a disease and the sensitivity and the false-alarm rate of a specific test for that disease from probabilities into natural frequencies, and (2) illustrating the statistical information with tree diagrams, for instance, or with other pictorial representation. So far, such strategies have only been empirically tested in combination for "1-test cases", where one binary hypothesis ("disease" vs. "no disease") has to be diagnosed based on one binary test result ("positive" vs. "negative"). However, in reality, often more than one medical test is conducted to derive a diagnosis. In two studies, we examined a total of 388 medical students from the University of Regensburg (Germany) with medical "2-test scenarios". Each student had to work on two problems: diagnosing breast cancer with mammography and sonography test results, and diagnosing HIV infection with the ELISA and Western Blot tests. In Study 1 (N = 190 participants), we systematically varied the presentation of statistical information ("only textual information" vs. "only tree diagram" vs. "text and tree diagram in combination"), whereas in Study 2 (N = 198 participants), we varied the kinds of tree diagrams ("complete tree" vs. "highlighted tree" vs. "pruned tree"). All versions were implemented in probability format (including probability trees) and in natural frequency format (including frequency trees). We found that natural frequency trees, especially when the question-related branches were highlighted, improved performance, but that none of the corresponding probabilistic visualizations did.
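
    A worked natural-frequency calculation for a 2-test scenario might look like the sketch below (all numbers are hypothetical and chosen only to illustrate the tree logic; it also assumes the two tests are conditionally independent given disease status, which the paper's scenarios need not satisfy):

        def natural_frequency_two_tests(population, prevalence, sens1, fpr1, sens2, fpr2):
            # Translate probabilities into expected counts ("natural frequencies").
            diseased = population * prevalence
            healthy = population - diseased

            # Follow the branch in which both tests come back positive.
            d_pos_pos = diseased * sens1 * sens2      # diseased, positive on both tests
            h_pos_pos = healthy * fpr1 * fpr2         # healthy, false positive on both tests

            # Positive predictive value after two positive results.
            return d_pos_pos / (d_pos_pos + h_pos_pos)

        # Hypothetical values: 1% prevalence; test 1: 80% sensitivity, 10% false-alarm
        # rate; test 2: 90% sensitivity, 7% false-alarm rate.
        # Of 10,000 women, 100 are diseased; 72 of them and about 69 of the 9,900
        # healthy women test positive twice, so P(disease | both positive) is about 0.51.
        print(natural_frequency_two_tests(10000, 0.01, 0.80, 0.10, 0.90, 0.07))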

  11. Nonparametric Statistics Test Software Package.

    Science.gov (United States)

    1983-09-01

    The recoverable portion of the abstract, extracted from the program documentation, describes an interactive front-end program, Lochinvar, whose purpose is to write the two types of files needed by the analysis program Crunch: the data file and the option file, the latter communicating the choice of test and test parameters to Crunch. After a data file is written, Lochinvar prompts the writing of the option file.

  12. Measurement and statistics for teachers

    CERN Document Server

    Van Blerkom, Malcolm

    2008-01-01

    Written in a student-friendly style, Measurement and Statistics for Teachers shows teachers how to use measurement and statistics wisely in their classes. Although there is some discussion of theory, emphasis is given to the practical, everyday uses of measurement and statistics. The second part of the text provides more complete coverage of basic descriptive statistics and their use in the classroom than in any text now available.Comprehensive and accessible, Measurement and Statistics for Teachers includes:Short vignettes showing concepts in action Numerous classroom examples Highlighted vocabulary Boxes summarizing related concepts End-of-chapter exercises and problems Six full chapters devoted to the essential topic of Classroom Tests Instruction on how to carry out informal assessments, performance assessments, and portfolio assessments, and how to use and interpret standardized tests A five-chapter section on Descriptive Statistics, giving instructors the option of more thoroughly teaching basic measur...

  13. Bayesian approach to inverse statistical mechanics

    Science.gov (United States)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  14. DWPF Sample Vial Insert Study-Statistical Analysis of DWPF Mock-Up Test Data

    International Nuclear Information System (INIS)

    Harris, S.P.

    1997-01-01

    This report is prepared as part of Technical/QA Task Plan WSRC-RP-97-351, which was issued in response to Technical Task Request HLW/DWPF/TTR-970132 submitted by DWPF. Presented in this report is a statistical analysis of DWPF Mock-up test data for evaluation of two new analytical methods which use insert samples from the existing Hydragard™ sampler. The first is a new hydrofluoric acid based method called the Cold Chemical Method (Cold Chem) and the second is a modified fusion method. Both new methods use the existing Hydragard™ sampler to collect a smaller insert sample from the process sampling system. The insert testing methodology applies to the DWPF Slurry Mix Evaporator (SME) and the Melter Feed Tank (MFT) samples. Samples in small 3 ml containers (inserts) are analyzed by either the cold chemical method or a modified fusion method. The current analytical method uses a Hydragard™ sample station to obtain nearly full 15 ml peanut vials. The samples are prepared by a multi-step process for Inductively Coupled Plasma (ICP) analysis by drying, vitrification, grinding and finally dissolution by either mixed acid or fusion. In contrast, the insert sample is placed directly in the dissolution vessel, thus eliminating the drying, vitrification and grinding operations for the Cold Chem method. Although the modified fusion method still requires drying and calcine conversion, the process is rapid due to the decreased sample size and the fact that no vitrification step is required. A slurry feed simulant material was acquired from the TNX pilot facility from the test run designated as PX-7. The Mock-up test data were gathered on the basis of a statistical design presented in SRT-SCS-97004 (Rev. 0). Simulant PX-7 samples were taken in the DWPF Analytical Cell Mock-up Facility using 3 ml inserts and 15 ml peanut vials. A number of the insert samples were analyzed by Cold Chem and compared with full peanut vial samples analyzed by the current methods. The remaining inserts were analyzed by

  15. Women’s Attitudes Regarding Prenatal Testing for a Range of Congenital Disorders of Varying Severity

    Directory of Open Access Journals (Sweden)

    Mary E. Norton

    2014-01-01

    Full Text Available Little is known about women’s comparative attitudes towards prenatal testing for different categories of genetic disorders. We interviewed women who delivered healthy infants within the past year and assessed attitudes towards prenatal screening and diagnostic testing, as well as pregnancy termination, for Down syndrome (DS), fragile X (FraX), cystic fibrosis (CF), spinal muscular atrophy (SMA), phenylketonuria (PKU) and congenital heart defects (CHD). Ninety-five women aged 21 to 48 years participated, of whom 60% were Caucasian, 23% Asian, 10% Latina and 7% African American; 82% were college graduates. Ninety-five to ninety-eight percent indicated that they would have screening for each condition, and the majority would have amniocentesis (64% for PKU to 72% for SMA). Inclinations regarding pregnancy termination varied by condition: Whereas only 10% reported they would probably or definitely terminate a pregnancy for CHD, 41% indicated they would do so for DS and 62% for SMA. Most women in this cohort reported that they would undergo screening for all six conditions presented, the majority without the intent to terminate an affected pregnancy. These women were least inclined to terminate treatable disorders (PKU, CHD) versus those associated with intellectual disability (DS, FraX) and were most likely to terminate for SMA, typically lethal in childhood.

  16. Statistical theory of signal detection

    CERN Document Server

    Helstrom, Carl Wilhelm; Costrell, L; Kandiah, K

    1968-01-01

    Statistical Theory of Signal Detection, Second Edition provides an elementary introduction to the theory of statistical testing of hypotheses that is related to the detection of signals in radar and communications technology. This book presents a comprehensive survey of digital communication systems. Organized into 11 chapters, this edition begins with an overview of the theory of signal detection and the typical detection problem. This text then examines the goals of the detection system, which are defined through an analogy with the testing of statistical hypotheses. Other chapters consider

  17. A method for statistically comparing spatial distribution maps

    Directory of Open Access Journals (Sweden)

    Reynolds Mary G

    2009-01-01

    Full Text Available Abstract Background Ecological niche modeling is a method for estimation of species distributions based on certain ecological parameters. Thus far, empirical determination of significant differences between independently generated distribution maps for a single species (maps which are created through equivalent processes, but with different ecological input parameters) has been challenging. Results We describe a method for comparing model outcomes, which allows a statistical evaluation of whether the strength of prediction and breadth of predicted areas is measurably different between projected distributions. To create ecological niche models for statistical comparison, we utilized GARP (Genetic Algorithm for Rule-Set Production) software to generate ecological niche models of human monkeypox in Africa. We created several models, keeping constant the case location input records for each model but varying the ecological input data. In order to assess the relative importance of each ecological parameter included in the development of the individual predicted distributions, we performed pixel-to-pixel comparisons between model outcomes and calculated the mean difference in pixel scores. We used a two-sample Student's t-test (assuming as null hypothesis that both maps were identical to each other regardless of which input parameters were used) to examine whether the mean difference in corresponding pixel scores from one map to another was greater than would be expected by chance alone. We also utilized weighted kappa statistics, frequency distributions, and percent difference to look at the disparities in pixel scores. Multiple independent statistical tests indicated precipitation as the single most important independent ecological parameter in the niche model for human monkeypox disease. Conclusion In addition to improving our understanding of the natural factors influencing the distribution of human monkeypox disease, such pixel-to-pixel comparison
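
    The pixel-to-pixel comparison described above can be sketched as follows (an illustrative simplification, not the authors' GARP workflow; the NaN masking convention and the use of SciPy's t-test are assumptions, and spatial autocorrelation between pixels is ignored):

        import numpy as np
        from scipy import stats

        def compare_distribution_maps(map_a, map_b):
            """Compare two niche-model outputs on the same pixel grid."""
            mask = ~np.isnan(map_a) & ~np.isnan(map_b)   # pixels inside both study areas
            a, b = map_a[mask], map_b[mask]

            mean_diff = (a - b).mean()                   # mean difference in pixel scores
            pct_diff = 100 * np.abs(a - b).sum() / np.abs(a).sum()

            # Two-sample Student's t-test under the null hypothesis that both
            # maps come from the same distribution of pixel scores.
            t_stat, p_value = stats.ttest_ind(a, b)
            return mean_diff, pct_diff, t_stat, p_value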

  18. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    Full Text Available Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions however, binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much weaker across frequency channels and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that the real-world sound localization must rely on mechanisms more complex than a mere cue extraction.

  19. Statistics of natural binaural sounds.

    Science.gov (United States)

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions however, binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much weaker across frequency channels and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that the real-world sound localization must rely on mechanisms more complex than a mere cue extraction.

  20. Time-Varying Value of Energy Efficiency in Michigan

    Energy Technology Data Exchange (ETDEWEB)

    Mims, Natalie; Eckman, Tom; Schwartz, Lisa C.

    2018-04-02

    Quantifying the time-varying value of energy efficiency is necessary to properly account for all of its benefits and costs and to identify and implement efficiency resources that contribute to a low-cost, reliable electric system. Historically, most quantification of the benefits of efficiency has focused largely on the economic value of annual energy reduction. Due to the lack of statistically representative metered end-use load shape data in Michigan (i.e., the hourly or seasonal timing of electricity savings), the ability to confidently characterize the time-varying value of energy efficiency savings in the state, especially for weather-sensitive measures such as central air conditioning, is limited. Still, electric utilities in Michigan can take advantage of opportunities to incorporate the time-varying value of efficiency into their planning. For example, end-use load research and hourly valuation of efficiency savings can be used for a variety of electricity planning functions, including load forecasting, demand-side management and evaluation, capacity planning, long-term resource planning, renewable energy integration, assessing potential grid modernization investments, establishing rates and pricing, and customer service (KEMA 2012). In addition, accurately calculating the time-varying value of efficiency may help energy efficiency program administrators prioritize existing offerings, set incentive or rebate levels that reflect the full value of efficiency, and design new programs.

  1. A goodness of fit statistic for the geometric distribution

    OpenAIRE

    Ferreira, J.A.

    2003-01-01

    We propose a goodness of fit statistic for the geometric distribution and compare it in terms of power, via simulation, with the chi-square statistic. The statistic is based on the Lau-Rao theorem and can be seen as a discrete analogue of the total time on test statistic. The results suggest that the test based on the new statistic is generally superior to the chi-square test.
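
    For reference, the classical comparator mentioned above (the chi-square goodness-of-fit test for a geometric distribution, not the proposed Lau-Rao-based statistic) can be sketched as follows; the binning into a final "k or more" cell and the plug-in estimate of the success probability are assumptions of the sketch:

        import numpy as np
        from scipy import stats

        def chisq_gof_geometric(counts):
            """Chi-square GOF test for a geometric distribution on {1, 2, ...}.

            counts[k] is the number of observations equal to k + 1; the last
            cell is treated as 'k_max or more' and is assumed to be small.
            """
            counts = np.asarray(counts, dtype=float)
            n = counts.sum()
            values = np.arange(1, len(counts) + 1)

            # Plug-in (maximum likelihood style) estimate of the success probability.
            p_hat = n / (counts * values).sum()

            # Expected probabilities; the last cell collects the upper tail.
            probs = p_hat * (1 - p_hat) ** (values - 1)
            probs[-1] = (1 - p_hat) ** (values[-1] - 1)
            expected = n * probs

            chi2 = ((counts - expected) ** 2 / expected).sum()
            dof = len(counts) - 1 - 1       # one parameter estimated from the data
            return chi2, stats.chi2.sf(chi2, dof)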

  2. Non-parametric tests for small samples of non-categorized variables: a study

    Directory of Open Access Journals (Sweden)

    José Luiz Contador

    2016-01-01

    Full Text Available Abstract This paper presents a study of non-parametric tests for verifying the similarity between two small samples of variables classified into multiple categories. It is shown that, for this situation, the only available tests are the chi-square test and exact tests. However, asymptotic tests (such as the chi-square test) may not perform well for small samples, leaving the application of exact tests as the alternative. But as the number of categories grows, applying these tests can become quite difficult, in addition to requiring specific algorithms that may demand considerable computational effort. Thus, a new test based on the difference of two uniform distributions is proposed as an alternative to the exact test. Computational experiments are carried out to evaluate the performance of these three tests. Although non-parametric tests have numerous applications in many areas of knowledge, this work was motivated by the need to verify whether the business strategy adopted by a company is a determining factor for its competitiveness.

  3. Mendelian randomization analysis of a time-varying exposure for binary disease outcomes using functional data analysis methods.

    Science.gov (United States)

    Cao, Ying; Rajan, Suja S; Wei, Peng

    2016-12-01

    A Mendelian randomization (MR) analysis is performed to analyze the causal effect of an exposure variable on a disease outcome in observational studies, by using genetic variants that affect the disease outcome only through the exposure variable. This method has recently gained popularity among epidemiologists given the success of genetic association studies. Many exposure variables of interest in epidemiological studies are time varying, for example, body mass index (BMI). Although longitudinal data have been collected in many cohort studies, current MR studies only use one measurement of a time-varying exposure variable, which cannot adequately capture the long-term time-varying information. We propose using the functional principal component analysis method to recover the underlying individual trajectory of the time-varying exposure from the sparsely and irregularly observed longitudinal data, and then conduct MR analysis using the recovered curves. We further propose two MR analysis methods. The first assumes a cumulative effect of the time-varying exposure variable on the disease risk, while the second assumes a time-varying genetic effect and employs functional regression models. We focus on statistical testing for a causal effect. Our simulation studies mimicking the real data show that the proposed functional data analysis based methods incorporating longitudinal data have substantial power gains compared to standard MR analysis using only one measurement. We used the Framingham Heart Study data to demonstrate the promising performance of the new methods as well as inconsistent results produced by the standard MR analysis that relies on a single measurement of the exposure at some arbitrary time point. © 2016 WILEY PERIODICALS, INC.
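
    A deliberately simplified version of the cumulative-exposure idea can be sketched as below (a hypothetical illustration only: it treats the outcome as continuous and uses a single-instrument ratio estimator, whereas the study analyzes binary outcomes and recovers trajectories with functional principal component analysis):

        import numpy as np

        def mr_cumulative_exposure(instrument, exposure_curves, times, outcome):
            """Ratio (Wald-type) MR estimate using cumulative exposure.

            instrument:      (n,) genetic instrument, e.g. an allele score
            exposure_curves: (n, T) recovered exposure trajectories over `times`
            outcome:         (n,) outcome, treated here as continuous
            """
            # Cumulative exposure: area under each subject's trajectory.
            cum_x = np.trapz(exposure_curves, times, axis=1)

            # Slope of exposure on instrument, and of outcome on instrument.
            g_on_x = np.polyfit(instrument, cum_x, 1)[0]
            g_on_y = np.polyfit(instrument, outcome, 1)[0]

            # Causal effect of cumulative exposure on the outcome (ratio estimator).
            return g_on_y / g_on_x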

  4. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI).

    Science.gov (United States)

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non-expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI's robustness and sensitivity in capturing useful data relating to the students' conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. © 2016 T. Deane et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  5. Statistics for experimentalists

    CERN Document Server

    Cooper, B E

    2014-01-01

    Statistics for Experimentalists aims to provide experimental scientists with a working knowledge of statistical methods and search approaches to the analysis of data. The book first elaborates on probability and continuous probability distributions. Discussions focus on properties of continuous random variables and normal variables, independence of two random variables, central moments of a continuous distribution, prediction from a normal distribution, binomial probabilities, and multiplication of probabilities and independence. The text then examines estimation and tests of significance. Topics include estimators and estimates, expected values, minimum variance linear unbiased estimators, sufficient estimators, methods of maximum likelihood and least squares, and the test of significance method. The manuscript ponders on distribution-free tests, Poisson process and counting problems, correlation and function fitting, balanced incomplete randomized block designs and the analysis of covariance, and experiment...

  6. Quality of reporting statistics in two Indian pharmacology journals.

    Science.gov (United States)

    Jaykaran; Yadav, Preeti

    2011-04-01

    To evaluate the reporting of statistical methods in articles published in two Indian pharmacology journals. All original articles published since 2002 were downloaded from the journals' (Indian Journal of Pharmacology (IJP) and Indian Journal of Physiology and Pharmacology (IJPP)) websites. These articles were evaluated on the basis of appropriateness of descriptive statistics and inferential statistics. Descriptive statistics was evaluated on the basis of reporting of method of description and central tendencies. Inferential statistics was evaluated on the basis of fulfilment of the assumptions of statistical methods and appropriateness of statistical tests. Values are described as frequencies, percentages, and 95% confidence intervals (CI) around the percentages. Inappropriate descriptive statistics was observed in 150 (78.1%, 95% CI 71.7-83.3%) articles. The most common reason for this inappropriate descriptive statistics was the use of mean ± SEM in place of "mean (SD)" or "mean ± SD." The most common statistical method used was one-way ANOVA (58.4%). Information regarding checking of the assumptions of statistical tests was mentioned in only two articles. An inappropriate statistical test was observed in 61 (31.7%, 95% CI 25.6-38.6%) articles. The most common reason for an inappropriate statistical test was the use of a two-group test for three or more groups. Articles published in these two Indian pharmacology journals are not devoid of statistical errors.

  7. Statistical considerations of graphite strength for assessing design allowable stresses

    International Nuclear Information System (INIS)

    Ishihara, M.; Mogi, H.; Ioka, I.; Arai, T.; Oku, T.

    1987-01-01

    Several aspects of statistics need to be considered to determine design allowable stresses for graphite structures. These include: 1) the statistical variation of graphite material strength, 2) uncertainty in the calculated stress, and 3) the reliability (survival probability) required from the operational and safety performance of graphite structures. This paper deals with some statistical considerations of structural graphite for assessing design allowable stress. Firstly, probability distribution functions of tensile and compressive strengths are investigated for experimental Very High Temperature candidate graphites. Normal, logarithmic normal and Weibull distribution functions are compared in terms of the coefficient of correlation to the measured strength data. This leads to the adoption of the normal distribution function. Then, the relation between factor of safety and fracture probability is discussed with respect to the following items: 1) as graphite strength is more variable than the strength of metallic materials, the effect of strength variation on the fracture probability is evaluated; 2) fracture probability corresponding to a survival probability of 99 ∼ 99.9 (%) with a confidence level of 90 ∼ 95 (%) is discussed; 3) as the material properties used in the design analysis are usually the mean values of their variation, the additional effect of these variations on the fracture probability is discussed. Finally, the way to assure the minimum ultimate strength with the required survival probability and confidence level is discussed in view of statistical treatment of the strength data from varying sample numbers in a material acceptance test. (author)

  8. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  9. Adaptive estimation of a time-varying phase with a power-law spectrum via continuous squeezed states

    Science.gov (United States)

    Dinani, Hossein T.; Berry, Dominic W.

    2017-06-01

    When measuring a time-varying phase, the standard quantum limit and Heisenberg limit as usually defined, for a constant phase, do not apply. If the phase has Gaussian statistics and a power-law spectrum 1/|ω|^p with p > 1, then the generalized standard quantum limit and Heisenberg limit have recently been found to have scalings of 1/N^((p-1)/p) and 1/N^(2(p-1)/(p+1)), respectively, where N is the mean photon flux. We show that this Heisenberg scaling can be achieved via adaptive measurements on squeezed states. We predict the experimental parameters analytically, and test them with numerical simulations. Previous work had considered the special case of p = 2.

  10. A Systematic Review of Statistical Methods Used to Test for Reliability of Medical Instruments Measuring Continuous Variables

    Directory of Open Access Journals (Sweden)

    Rafdzah Zaki

    2013-06-01

    Full Text Available Objective(s): Reliability measures precision or the extent to which test results can be replicated. This is the first ever systematic review to identify statistical methods used to measure reliability of equipment measuring continuous variables. This study also aims to highlight the inappropriate statistical methods used in reliability analysis and their implications for medical practice. Materials and Methods: In 2010, five electronic databases were searched between 2007 and 2009 to look for reliability studies. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 fitted the inclusion criteria. Results: The Intra-class Correlation Coefficient (ICC) is the most popular method, with 25 (60%) studies having used this method, followed by comparing means (8, or 19%). Out of 25 studies using the ICC, only 7 (28%) reported the confidence intervals and types of ICC used. Most studies (71%) also tested the agreement of instruments. Conclusion: This study finds that the Intra-class Correlation Coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue, and be able to correctly perform analysis in reliability studies.
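
    As a reminder of what the most frequently used index computes, a one-way random-effects ICC can be obtained directly from the ANOVA mean squares (a minimal sketch; other ICC forms discussed in the review, such as two-way or consistency versions, use different formulas):

        import numpy as np

        def icc_oneway(ratings):
            """ICC(1,1) for an (n_subjects, k_raters) array of measurements."""
            ratings = np.asarray(ratings, dtype=float)
            n, k = ratings.shape
            grand_mean = ratings.mean()
            subject_means = ratings.mean(axis=1)

            # Between-subject and within-subject mean squares.
            ms_between = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
            ms_within = ((ratings - subject_means[:, None]) ** 2).sum() / (n * (k - 1))

            return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)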

  11. [Statistical approach to evaluate the occurrence of out-of acceptable ranges and accuracy for antimicrobial susceptibility tests in inter-laboratory quality control program].

    Science.gov (United States)

    Ueno, Tamio; Matuda, Junichi; Yamane, Nobuhisa

    2013-03-01

    To evaluate the occurrence of out-of acceptable ranges and accuracy of antimicrobial susceptibility tests, we applied a new statistical tool to the Inter-Laboratory Quality Control Program established by the Kyushu Quality Control Research Group. First, we defined acceptable ranges of minimum inhibitory concentration (MIC) for broth microdilution tests and inhibitory zone diameter for disk diffusion tests on the basis of Clinical and Laboratory Standards Institute (CLSI) M100-S21. In the analysis, more than two out-of acceptable range results in the 20 tests were considered as not allowable according to the CLSI document. Of the 90 participating laboratories, 46 (51%) experienced one or more occurrences of out-of acceptable range results. Then, a binomial test was applied to each participating laboratory. The results indicated that the occurrences of out-of acceptable range results in the 11 laboratories were significantly higher when compared to the CLSI recommendation (allowable rate laboratory was statistically compared with zero using a Student's t-test. The results revealed that 5 of the 11 above laboratories reported erroneous test results that systematically drifted to the side of resistance. In conclusion, our statistical approach has enabled us to detect significantly higher occurrences and source of interpretive errors in antimicrobial susceptibility tests; therefore, this approach can provide us with additional information that can improve the accuracy of the test results in clinical microbiology laboratories.
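
    The binomial comparison described above can be illustrated roughly as follows (a sketch only; the allowable rate of 5% and the example counts are assumptions, not the CLSI-based values used in the study, and scipy.stats.binomtest requires SciPy 1.7 or later):

        from scipy.stats import binomtest

        def out_of_range_check(n_out, n_tests=20, allowable_rate=0.05):
            """One-sided binomial test: is the out-of-range rate higher than allowed?"""
            result = binomtest(n_out, n_tests, allowable_rate, alternative='greater')
            return result.pvalue

        # Example: 3 out-of-acceptable-range results among 20 tests.
        print(out_of_range_check(3))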

  12. Specification and testing of Multiplicative Time-Varying GARCH models with applications

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    2017-01-01

    In this article, we develop a specification technique for building multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component such that the unconditional variance component is allowed to evolve smooth...... is illustrated in practice with two real examples: an empirical application to daily exchange rate returns and another one to daily coffee futures returns....

  13. Optimal allocation of testing resources for statistical simulations

    Science.gov (United States)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.

  14. IEEE Std 101-1972: IEEE guide for the statistical analysis of thermal life test data

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Procedures for estimating the thermal life of electrical insulation systems and materials call for life tests at several temperatures, usually well above the expected normal operating temperature. By the selection of high temperatures for the tests, life of the insulation samples will be terminated, according to some selected failure criterion or criteria, within relatively short times -- typically one week to one year. The result of these thermally accelerated life tests will be a set of data of life values for a corresponding set of temperatures. Usually the data consist of a set of life values for each of two to four (occasionally more) test temperatures, 10 °C to 25 °C apart. The objective then is to establish from these data the mean life values at each temperature and the functional dependence of life on temperature, as well as the statistical consistency and the confidence to be attributed to the mean life values and the functional life-temperature dependence. The purpose of this guide is to assist in this objective and to give guidance for comparing the results of tests on different materials and of different tests on the same materials
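
    The core regression behind such an analysis is a straight-line fit of log life against reciprocal absolute temperature (the Arrhenius relation). The sketch below illustrates that step only, with hypothetical data; the guide's treatment of consistency checks and confidence limits is not reproduced:

        import numpy as np

        def arrhenius_fit(temps_c, mean_lives):
            """Fit log10(life) = a + b / T (T in kelvin) and return a predictor."""
            temps_k = np.asarray(temps_c, dtype=float) + 273.15
            log_life = np.log10(np.asarray(mean_lives, dtype=float))

            b, a = np.polyfit(1.0 / temps_k, log_life, 1)   # slope, intercept

            def predicted_life(temp_c):
                return 10 ** (a + b / (temp_c + 273.15))

            return a, b, predicted_life

        # Hypothetical mean lives (hours) at three aging temperatures (°C).
        a, b, life_at = arrhenius_fit([180, 200, 220], [8000, 2500, 900])
        print(life_at(130))   # extrapolated mean life at a 130 °C operating temperature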

  15. Antibody responses to Borrelia burgdorferi detected by western blot vary geographically in Canada.

    Science.gov (United States)

    Ogden, Nicholas H; Arsenault, Julie; Hatchette, Todd F; Mechai, Samir; Lindsay, L Robbin

    2017-01-01

    Lyme disease is emerging in eastern and central Canada, and most cases are diagnosed using the two-tier serological test (Enzyme Immuno Assay [EIA] followed by Western blot [WB]). Simplification of this algorithm would be advantageous unless it impacts test performance. In this study, accuracy of individual proteins of the IgG WB algorithm in predicting the overall test result in samples from Canadians was assessed. Because Borrelia burgdorferi strains vary geographically in Canada, geographic variations in serological responses were also explored. Metrics of relative sensitivity, specificity and the kappa statistic measure of concordance were used to assess the capacity of responses to individual proteins to predict the overall IgG WB result of 2524 EIA (C6)-positive samples from across Canada. Geographic and interannual variations in proportions of samples testing positive were explored by logistic regression. No one protein was highly concordant with the IgG WB result. Significant variations were found amongst years and geographic regions in the prevalence of samples testing positive using the overall IgG WB algorithm, and for individual proteins of the algorithm. In most cases the prevalence of samples testing positive were highest in Nova Scotia, and lower in samples from Manitoba westwards. These findings suggest that the current two tier test may not be simplified and continued use of the current two-tier test method and interpretation is recommended. Geographic and interannual variations in the prevalence of samples testing positive may be consistent with B. burgdorferi strain variation in Canada, and further studies are needed to explore this.

  16. Antibody responses to Borrelia burgdorferi detected by western blot vary geographically in Canada.

    Directory of Open Access Journals (Sweden)

    Nicholas H Ogden

    Full Text Available Lyme disease is emerging in eastern and central Canada, and most cases are diagnosed using the two-tier serological test (Enzyme Immuno Assay [EIA] followed by Western blot [WB]). Simplification of this algorithm would be advantageous unless it impacts test performance. In this study, accuracy of individual proteins of the IgG WB algorithm in predicting the overall test result in samples from Canadians was assessed. Because Borrelia burgdorferi strains vary geographically in Canada, geographic variations in serological responses were also explored. Metrics of relative sensitivity, specificity and the kappa statistic measure of concordance were used to assess the capacity of responses to individual proteins to predict the overall IgG WB result of 2524 EIA (C6)-positive samples from across Canada. Geographic and interannual variations in proportions of samples testing positive were explored by logistic regression. No one protein was highly concordant with the IgG WB result. Significant variations were found amongst years and geographic regions in the prevalence of samples testing positive using the overall IgG WB algorithm, and for individual proteins of the algorithm. In most cases the prevalence of samples testing positive were highest in Nova Scotia, and lower in samples from Manitoba westwards. These findings suggest that the current two-tier test may not be simplified and continued use of the current two-tier test method and interpretation is recommended. Geographic and interannual variations in the prevalence of samples testing positive may be consistent with B. burgdorferi strain variation in Canada, and further studies are needed to explore this.

  17. FADTTSter: accelerating hypothesis testing with functional analysis of diffusion tensor tract statistics

    Science.gov (United States)

    Noel, Jean; Prieto, Juan C.; Styner, Martin

    2017-03-01

    Functional Analysis of Diffusion Tensor Tract Statistics (FADTTS) is a toolbox for analysis of white matter (WM) fiber tracts. It allows associating diffusion properties along major WM bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these WM tract properties. However, to use this toolbox, a user must have an intermediate knowledge in scripting languages (MATLAB). FADTTSter was created to overcome this issue and make the statistical analysis accessible to any non-technical researcher. FADTTSter is actively being used by researchers at the University of North Carolina. FADTTSter guides non-technical users through a series of steps including quality control of subjects and fibers in order to setup the necessary parameters to run FADTTS. Additionally, FADTTSter implements interactive charts for FADTTS' outputs. This interactive chart enhances the researcher experience and facilitates the analysis of the results. FADTTSter's motivation is to improve usability and provide a new analysis tool to the community that complements FADTTS. Ultimately, by enabling FADTTS to a broader audience, FADTTSter seeks to accelerate hypothesis testing in neuroimaging studies involving heterogeneous clinical data and diffusion tensor imaging. This work is submitted to the Biomedical Applications in Molecular, Structural, and Functional Imaging conference. The source code of this application is available in NITRC.

  18. Diagnosis of students' ability in a statistical course based on Rasch probabilistic outcome

    Science.gov (United States)

    Mahmud, Zamalia; Ramli, Wan Syahira Wan; Sapri, Shamsiah; Ahmad, Sanizah

    2017-06-01

    Measuring students' ability and performance is important in assessing how well students have learned and mastered a statistics course. Any improvement in learning will depend on the students' approaches to learning, which are related to several learning factors, namely the assessment methods used for tasks consisting of quizzes, tests, assignments and a final examination. This study attempts an alternative approach to measuring students' ability in an undergraduate statistics course based on the Rasch probabilistic model. Firstly, this study aims to explore the learning outcome patterns of students in a statistics course (Applied Probability and Statistics) based on an Entrance-Exit survey. This is followed by investigating students' perceived learning ability based on four Course Learning Outcomes (CLOs) and students' actual learning ability based on their final examination scores. Rasch analysis revealed that students perceived themselves as lacking the ability to understand about 95% of the statistics concepts at the beginning of the class, but eventually they had a good understanding at the end of the 14-week class. In terms of students' performance in the final examination, their ability to understand the topics varies, with different probability values given the ability of the students and the difficulty of the questions. The majority found the probability and counting rules topic to be the most difficult to learn.
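
    The probability statement underlying the Rasch analysis can be written as a one-line function (a standard dichotomous Rasch model; the ability and difficulty values below are illustrative, not estimates from the study):

        import numpy as np

        def rasch_probability(ability, difficulty):
            """P(correct response) when ability and difficulty are on the logit scale."""
            return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

        # A student of average ability (0 logits) facing an easy item (-1 logit)
        # and a hard item (+2 logits).
        print(rasch_probability(0.0, -1.0))   # about 0.73
        print(rasch_probability(0.0, 2.0))    # about 0.12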

  19. Statistical refinements for data analysis of mollusc reproduction tests: an example with Lymnaea stagnalis

    DEFF Research Database (Denmark)

    Holbech, Henrik

    -contribution of each individual to the measured response. Furthermore, the combination of a Gamma-Poisson stochastic part with a Weibull concentration-response model allowed accounting for the inter-replicate variability. Second, we checked for the possibility of optimizing the initial experimental design through...... was twofold. First, we refined the statistical analyses of reproduction data accounting for mortality all along the test period. The variable “number of clutches/eggs produced per individual-day” was used for EC x modelling, as classically done in epidemiology in order to account for the time...

  20. Statistical inference and Aristotle's Rhetoric.

    Science.gov (United States)

    Macdonald, Ranald R

    2004-11-01

    Formal logic operates in a closed system where all the information relevant to any conclusion is present, whereas this is not the case when one reasons about events and states of the world. Pollard and Richardson drew attention to the fact that the reasoning behind statistical tests does not lead to logically justifiable conclusions. In this paper statistical inferences are defended not by logic but by the standards of everyday reasoning. Aristotle invented formal logic, but argued that people mostly get at the truth with the aid of enthymemes--incomplete syllogisms which include arguing from examples, analogies and signs. It is proposed that statistical tests work in the same way--in that they are based on examples, invoke the analogy of a model and use the size of the effect under test as a sign that the chance hypothesis is unlikely. Of existing theories of statistical inference only a weak version of Fisher's takes this into account. Aristotle anticipated Fisher by producing an argument of the form that there were too many cases in which an outcome went in a particular direction for that direction to be plausibly attributed to chance. We can therefore conclude that Aristotle would have approved of statistical inference and there is a good reason for calling this form of statistical inference classical.

  1. A statistical skull geometry model for children 0-3 years old.

    Directory of Open Access Journals (Sweden)

    Zhigang Li

    Full Text Available Head injury is the leading cause of fatality and long-term disability for children. Pediatric heads change rapidly in both size and shape during growth, especially for children under 3 years old (YO). To accurately assess the head injury risks for children, it is necessary to understand the geometry of the pediatric head and how morphologic features influence injury causation within the 0-3 YO population. In this study, head CT scans from fifty-six 0-3 YO children were used to develop a statistical model of pediatric skull geometry. Geometric features important for injury prediction, including skull size and shape, skull thickness and suture width, along with their variations among the sample population, were quantified through a series of image and statistical analyses. The size and shape of the pediatric skull change significantly with age and head circumference. The skull thickness and suture width vary with age, head circumference and location, which will have important effects on skull stiffness and injury prediction. The statistical geometry model developed in this study can provide a geometrical basis for future development of child anthropomorphic test devices and pediatric head finite element models.

  2. A statistical skull geometry model for children 0-3 years old.

    Science.gov (United States)

    Li, Zhigang; Park, Byoung-Keon; Liu, Weiguo; Zhang, Jinhuan; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2015-01-01

    Head injury is the leading cause of fatality and long-term disability for children. Pediatric heads change rapidly in both size and shape during growth, especially for children under 3 years old (YO). To accurately assess the head injury risks for children, it is necessary to understand the geometry of the pediatric head and how morphologic features influence injury causation within the 0-3 YO population. In this study, head CT scans from fifty-six 0-3 YO children were used to develop a statistical model of pediatric skull geometry. Geometric features important for injury prediction, including skull size and shape, skull thickness and suture width, along with their variations among the sample population, were quantified through a series of image and statistical analyses. The size and shape of the pediatric skull change significantly with age and head circumference. The skull thickness and suture width vary with age, head circumference and location, which will have important effects on skull stiffness and injury prediction. The statistical geometry model developed in this study can provide a geometrical basis for future development of child anthropomorphic test devices and pediatric head finite element models.

  3. An R package "VariABEL" for genome-wide searching of potentially interacting loci by testing genotypic variance heterogeneity

    Directory of Open Access Journals (Sweden)

    Struchalin Maksim V

    2012-01-01

    Full Text Available Abstract Background Hundreds of new loci have been discovered by genome-wide association studies of human traits. These studies mostly focused on associations between a single locus and a trait. Interactions between genes and between genes and environmental factors are of interest as they can improve our understanding of the genetic background underlying complex traits. Genome-wide testing of complex genetic models is a computationally demanding task. Moreover, testing of such models leads to multiple comparison problems that reduce the probability of new findings. Assuming that the genetic model underlying a complex trait can include hundreds of genes and environmental factors, testing of these models in genome-wide association studies represents substantial difficulties. We and Pare with colleagues (2010) developed a method allowing us to overcome such difficulties. The method is based on the fact that loci which are involved in interactions can show genotypic variance heterogeneity of a trait. Genome-wide testing of such heterogeneity can be a fast scanning approach which can point to the interacting genetic variants. Results In this work we present a new method, SVLM, allowing for variance heterogeneity analysis of imputed genetic variation. Type I error and power of this test are investigated and contrasted with those of Levene's test. We also present an R package, VariABEL, implementing existing and newly developed tests. Conclusions Variance heterogeneity analysis is a promising method for detection of potentially interacting loci. The new method and software package developed in this work will facilitate such analysis in a genome-wide context.
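
    The classical comparator discussed above, a Levene-type test of trait variance across genotype groups, can be sketched per SNP as follows (the SVLM statistic implemented in VariABEL is not reproduced here; the median-centred Brown-Forsythe variant is an assumption of the sketch):

        import numpy as np
        from scipy.stats import levene

        def genotype_variance_heterogeneity(genotypes, trait):
            """Levene-type test of variance heterogeneity for one SNP.

            genotypes: vector of 0/1/2 genotype codes; trait: continuous phenotype.
            """
            groups = [trait[genotypes == g] for g in (0, 1, 2) if np.any(genotypes == g)]
            stat, p_value = levene(*groups, center='median')
            return stat, p_value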

  4. Goodness of Fit Test and Test of Independence by Entropy

    Directory of Open Access Journals (Sweden)

    M. Sharifdoost

    2009-06-01

    Full Text Available To test whether a set of data has a specific distribution or not, we can use the goodness of fit test. This test can be done with either the Pearson X² statistic or the likelihood ratio statistic G², which are asymptotically equal, and also with the Kolmogorov-Smirnov statistic for continuous distributions. In this paper, we introduce a new test statistic for the goodness of fit test which is based on an entropy distance, and which can be applied for large sample sizes. We compare this new statistic with the classical test statistics X², G², and Tn through simulation studies. We conclude that the new statistic is more sensitive than the usual statistics to the rejection of distributions which are close to the desired distribution. Also, for testing independence, a new test statistic based on mutual information is introduced.
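
    To make the connection concrete, an entropy-distance statistic of this flavour can be computed from the Kullback-Leibler divergence between observed and hypothesized cell probabilities; as sketched below it reduces to the likelihood ratio statistic G² (the statistic actually proposed in the paper, and its null distribution, are defined there and are not reproduced):

        import numpy as np
        from scipy.stats import chi2

        def entropy_gof(observed_counts, expected_probs):
            """GOF statistic 2*n*KL(observed || expected), referred to chi-square."""
            obs = np.asarray(observed_counts, dtype=float)
            n = obs.sum()
            p_obs = obs / n
            p_exp = np.asarray(expected_probs, dtype=float)

            nonzero = p_obs > 0                     # empty cells contribute nothing
            kl = np.sum(p_obs[nonzero] * np.log(p_obs[nonzero] / p_exp[nonzero]))

            g2 = 2.0 * n * kl                       # identical to the G² statistic
            dof = len(obs) - 1
            return g2, chi2.sf(g2, dof)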

  5. Time-varying surrogate data to assess nonlinearity in nonstationary time series: application to heart rate variability.

    Science.gov (United States)

    Faes, Luca; Zhao, He; Chon, Ki H; Nollo, Giandomenico

    2009-03-01

    We propose a method to extend to time-varying (TV) systems the procedure for generating typical surrogate time series, in order to test the presence of nonlinear dynamics in potentially nonstationary signals. The method is based on fitting a TV autoregressive (AR) model to the original series and then regressing the model coefficients with random replacements of the model residuals to generate TV AR surrogate series. The proposed surrogate series were used in combination with a TV sample entropy (SE) discriminating statistic to assess nonlinearity in both simulated and experimental time series, in comparison with traditional time-invariant (TIV) surrogates combined with the TIV SE discriminating statistic. Analysis of simulated time series showed that using TIV surrogates, linear nonstationary time series may be erroneously regarded as nonlinear and weak TV nonlinearities may remain unrevealed, while the use of TV AR surrogates markedly increases the probability of a correct interpretation. Application to short (500 beats) heart rate variability (HRV) time series recorded at rest (R), after head-up tilt (T), and during paced breathing (PB) showed: 1) modifications of the SE statistic that were well interpretable with the known cardiovascular physiology; 2) significant contribution of nonlinear dynamics to HRV in all conditions, with significant increase during PB at 0.2 Hz respiration rate; and 3) a disagreement between TV AR surrogates and TIV surrogates in about a quarter of the series, suggesting that nonstationarity may affect HRV recordings and bias the outcome of the traditional surrogate-based nonlinearity test.
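
    The constrained-randomization idea behind AR-model surrogates can be sketched as follows; this is a time-invariant simplification (fit an AR(p) model by least squares, shuffle its residuals, regenerate the series), whereas the method described above uses time-varying AR coefficients. Parameters and data are illustrative only.

```python
# Sketch of AR-model-based surrogate generation: fit an AR(p) model by least
# squares, shuffle its residuals, and re-run the model to obtain a surrogate
# series. The paper extends this idea to time-varying AR coefficients; this
# time-invariant version only illustrates the principle.
import numpy as np

def ar_surrogate(x, p=8, rng=None):
    rng = np.random.default_rng(rng)
    n = len(x)
    # Lagged design matrix: column k holds the values at lag k+1.
    X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    # Shuffle residuals and regenerate the series from the fitted model.
    shuffled = rng.permutation(resid)
    s = np.array(x[:p], dtype=float)
    for e in shuffled:
        nxt = s[-1:-p - 1:-1] @ coef + e     # newest lag first, matching coef order
        s = np.append(s, nxt)
    return s

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=500)) * 0.05 + rng.normal(size=500)
surrogate = ar_surrogate(series, p=8, rng=1)
```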

  6. Uniting statistical and individual-based approaches for animal movement modelling.

    Science.gov (United States)

    Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel

    2014-01-01

    The dynamic nature of their internal states and the environment directly shape animals' spatial behaviours and give rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, because field work rarely gives access to internal states and current statistical models cannot produce dynamic outputs. To address these issues, we developed a robust method which combines statistical modelling with an individual-based model (IBM). Using a statistical technique for forward modelling of the IBM has the advantage of being faster to parameterize than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM was validated using both k-fold cross-validation and emergent-pattern validation, and was tested for two different scenarios with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method was able to capture the complexity of the natural system, and adequately provided projections of future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems.

  7. Statistical inference: an integrated Bayesian/likelihood approach

    CERN Document Server

    Aitkin, Murray

    2010-01-01

    Filling a gap in current Bayesian theory, Statistical Inference: An Integrated Bayesian/Likelihood Approach presents a unified Bayesian treatment of parameter inference and model comparisons that can be used with simple diffuse prior specifications. This novel approach provides new solutions to difficult model comparison problems and offers direct Bayesian counterparts of frequentist t-tests and other standard statistical methods for hypothesis testing.After an overview of the competing theories of statistical inference, the book introduces the Bayes/likelihood approach used throughout. It pre

  8. Varying Constants, Gravitation and Cosmology

    Directory of Open Access Journals (Sweden)

    Jean-Philippe Uzan

    2011-03-01

    Full Text Available Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.

  9. Testing for changes using permutations of U-statistics

    Czech Academy of Sciences Publication Activity Database

    Horvath, L.; Hušková, Marie

    2005-01-01

    Roč. 2005, č. 128 (2005), s. 351-371 ISSN 0378-3758 R&D Projects: GA ČR GA201/00/0769 Institutional research plan: CEZ:AV0Z10750506 Keywords : U-statistics * permutations * change-point * weighted approximation * Brownian bridge Subject RIV: BD - Theory of Information Impact factor: 0.481, year: 2005

  10. Critical analysis of adsorption data statistically

    Science.gov (United States)

    Kaushal, Achla; Singh, S. K.

    2017-10-01

    Experimental data can be presented, computed, and critically analysed in different ways using statistics. A variety of statistical tests are used to make decisions about the significance and validity of experimental data. In the present study, adsorption was carried out to remove zinc ions from contaminated aqueous solution using mango leaf powder. The experimental data were analysed statistically by hypothesis testing, applying the t test, paired t test and chi-square test, to (a) test the optimum value of the process pH, (b) verify the success of the experiment and (c) study the effect of adsorbent dose on zinc ion removal from aqueous solutions. Comparison of the calculated and tabulated values of t and χ² supported the data collected from the experiment, and this has been shown on probability charts. The K value obtained for the Langmuir isotherm was 0.8582 and the m value for the Freundlich adsorption isotherm was 0.725, both for mango leaf powder.

  11. Lectures on algebraic statistics

    CERN Document Server

    Drton, Mathias; Sullivant, Seth

    2009-01-01

    How does an algebraic geometer studying secant varieties further the understanding of hypothesis tests in statistics? Why would a statistician working on factor analysis raise open problems about determinantal varieties? Connections of this type are at the heart of the new field of "algebraic statistics". In this field, mathematicians and statisticians come together to solve statistical inference problems using concepts from algebraic geometry as well as related computational and combinatorial techniques. The goal of these lectures is to introduce newcomers from the different camps to algebraic statistics. The introduction will be centered around the following three observations: many important statistical models correspond to algebraic or semi-algebraic sets of parameters; the geometry of these parameter spaces determines the behaviour of widely used statistical inference procedures; computational algebraic geometry can be used to study parameter spaces and other features of statistical models.

  12. BrightStat.com: free statistics online.

    Science.gov (United States)

    Stricker, Daniel

    2008-10-01

    Powerful software for statistical analysis is expensive. Here I present BrightStat, statistical software running on the Internet which is free of charge. BrightStat's goals and its main capabilities and functionalities are outlined. Three different sample runs (a Friedman test, a chi-square test, and a step-wise multiple regression) are presented. The results obtained by BrightStat are compared with results computed by SPSS, one of the global leaders in providing statistical software, and VassarStats, a collection of scripts for data analysis running on the Internet. Elementary statistics is an inherent part of academic education and BrightStat is an alternative to commercial products.

  13. Medical Statistics – Mathematics or Oracle? Farewell Lecture

    Directory of Open Access Journals (Sweden)

    Gaus, Wilhelm

    2005-06-01

    Full Text Available Certainty is rare in medicine. This is a direct consequence of the individuality of each and every human being and the reason why we need medical statistics. However, statistics have their pitfalls, too. Fig. 1 shows that the suicide rate peaks in youth, while in Fig. 2 the rate is highest in midlife and Fig. 3 in old age. Which of these contradictory messages is right? After an introduction to the principles of statistical testing, this lecture examines the probability with which statistical test results are correct. For this purpose the level of significance and the power of the test are compared with the sensitivity and specificity of a diagnostic procedure. The probability of obtaining correct statistical test results is the same as that for the positive and negative correctness of a diagnostic procedure and therefore depends on prevalence. The focus then shifts to the problem of multiple statistical testing. The lecture demonstrates that for each data set of reasonable size at least one test result proves to be significant - even if the data set is produced by a random number generator. It is extremely important that a hypothesis is generated independently from the data used for its testing. These considerations enable us to understand the gradation of "lame excuses, lies and statistics" and the difference between pure truth and the full truth. Finally, two historical oracles are cited.
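
    The lecture's point about multiple statistical testing can be made concrete with a short, purely illustrative calculation: with m independent tests of true null hypotheses at a 5% significance level, the chance that at least one result comes out "significant" grows rapidly with m.

```python
# Worked example of the multiple-testing point made above: with m independent
# tests of true null hypotheses at alpha = 0.05, the probability of at least
# one "significant" result is 1 - (1 - alpha)^m.
alpha = 0.05
for m in (1, 10, 20, 50, 100):
    p_any = 1 - (1 - alpha) ** m
    print(f"m = {m:3d} tests -> P(at least one false positive) = {p_any:.2f}")
# m = 20 already gives about 0.64; m = 100 gives about 0.99.
```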

  14. Statistical analysis applied to safety culture self-assessment

    International Nuclear Information System (INIS)

    Macedo Soares, P.P.

    2002-01-01

    Interviews and opinion surveys are instruments used to assess the safety culture in an organization as part of the Safety Culture Enhancement Programme. Specific statistical tools are used to analyse the survey results. This paper presents an example of an opinion survey with the corresponding application of the statistical analysis and the conclusions obtained. Survey validation, frequency statistics, the Kolmogorov-Smirnov non-parametric test, Student's t-test and ANOVA means-comparison tests, and the LSD post-hoc multiple comparison test are discussed. (author)

  15. A statistical analysis of the impact of advertising signs on road safety.

    Science.gov (United States)

    Yannis, George; Papadimitriou, Eleonora; Papantoniou, Panagiotis; Voulgari, Chrisoula

    2013-01-01

    This research aims to investigate the impact of advertising signs on road safety. An exhaustive review of the international literature was carried out on the effect of advertising signs on driver behaviour and safety. Moreover, a before-and-after statistical analysis with control groups was applied to several road sites with different characteristics in the Athens metropolitan area, in Greece, in order to investigate the correlation between the placement or removal of advertising signs and the related occurrence of road accidents. Road accident data for the 'before' and 'after' periods on the test sites and the control sites were extracted from the database of the Hellenic Statistical Authority, and the selected 'before' and 'after' periods vary from 2.5 to 6 years. The statistical analysis shows no statistical correlation between road accidents and advertising signs at any of the nine sites examined, as the confidence intervals of the estimated safety effects are non-significant at the 95% confidence level. This can be explained by the fact that, at the examined road sites, drivers are already overloaded with information (traffic signs, direction signs, shop labels, pedestrians and other vehicles, etc.), so the additional information load from advertising signs may not further distract them.

  16. Statistics: The stethoscope of a thinking urologist

    Directory of Open Access Journals (Sweden)

    Arun S Sivanandam

    2009-01-01

    Full Text Available Understanding statistical terminology and the ability to appraise clinical research findings and statistical tests are critical to the practice of evidence-based medicine. Urologists require statistics in their toolbox of skills in order to successfully sift through increasingly complex studies and realize the drawbacks of statistical tests. Currently, the level of evidence in the urology literature is low, and the majority of research abstracts published for the American Urological Association (AUA) meetings lag behind in full-text publication because of a lack of statistical reporting. Underlying these issues is a distinct deficiency in solid comprehension of statistics in the literature and a discomfort with the application of statistics for clinical decision-making. This review examines the plight of statistics in urology and investigates the reason behind the white-coat aversion to biostatistics. Resources such as evidence-based medicine websites, primers in statistics, and guidelines for statistical reporting exist for quick reference by urologists. Ultimately, educators should take charge of monitoring statistical knowledge among trainees by bolstering competency requirements and creating sustained opportunities for statistics and methodology exposure.

  17. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    Full Text Available For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to an NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic undetermination). The demonstration includes a complete example.
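
    The three-value logic described above can be sketched with a small example (simulated data; the value of the minimal substantial effect delta is purely illustrative and would be fixed in advance from subject-matter considerations): compute a 95% CI for a mean difference and compare it with delta.

```python
# Sketch of the CI-based "three-value logic" described above: compare a 95% CI
# for a mean difference with a pre-specified minimal substantial effect delta.
# The data and the value of delta are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.normal(0.0, 1.0, 40)
b = rng.normal(0.6, 1.0, 40)
delta = 0.3                                   # minimal substantial effect

diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
df = len(a) + len(b) - 2                      # simple (equal-variance) df
lo, hi = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se

if lo > delta:
    verdict = "probable presence of a substantial effect"
elif hi < delta:
    verdict = "probable absence of a substantial effect"
else:
    verdict = "probabilistic undetermination"
print(f"95% CI = [{lo:.2f}, {hi:.2f}] -> {verdict}")
```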

  18. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants.

    Science.gov (United States)

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-04-01

    Genetic modification of plants may result in unintended effects with potentially adverse consequences for the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these tests, for example the power to detect an environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model accommodates completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, allows genotype-by-environment interaction by adding random variety effects, and includes repeated measures in time following a constant, linear or quadratic pattern, possibly with some form of autocorrelation. The model also allows a set of reference varieties to be added to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
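
    A minimal sketch of the kind of data such a framework generates is given below (zero-inflated negative binomial counts for a GM variety versus its comparator in a randomized block layout). All parameter values are illustrative assumptions; this is not the published simulation package.

```python
# Sketch: simulate zero-inflated negative binomial counts for a GM variety vs.
# its comparator in a randomized block design, the kind of data the framework
# described above generates. Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n_blocks, mean_count, dispersion, p_extra_zero = 8, 20.0, 2.0, 0.1
effect = 1.2                                   # multiplicative GM effect

def zinb(mean, k, p_zero, size):
    """Negative binomial (gamma-Poisson mixture) counts with excess zeros."""
    lam = rng.gamma(shape=k, scale=mean / k, size=size)
    counts = rng.poisson(lam)
    counts[rng.random(size) < p_zero] = 0      # inflate zeros
    return counts

block_effects = rng.lognormal(mean=0.0, sigma=0.25, size=n_blocks)
comparator = zinb(mean_count * block_effects, dispersion, p_extra_zero, n_blocks)
gm_variety = zinb(mean_count * effect * block_effects, dispersion, p_extra_zero, n_blocks)
print(np.column_stack([comparator, gm_variety]))
```

    Repeating such a simulation many times and applying the intended difference or equivalence test to each replicate gives the prospective power analysis referred to above.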

  19. Descriptive and inferential statistical methods used in burns research.

    Science.gov (United States)

    Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars

    2010-05-01

    Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11(22%) were randomised controlled trials, 18(35%) were cohort studies, 11(22%) were case control studies and 11(22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49(96%) articles. Data dispersion was calculated by standard deviation in 30(59%). Standard error of the mean was quoted in 19(37%). The statistical software product was named in 33(65%). Of the 49 articles that used inferential statistics, the tests were named in 47(96%). The 6 most common tests used (Student's t-test (53%), analysis of variance/co-variance (33%), chi(2) test (27%), Wilcoxon & Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43(88%) and the exact significance levels were reported in 28(57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals

  20. Statistical analysis of angular correlation measurements

    International Nuclear Information System (INIS)

    Oliveira, R.A.A.M. de.

    1986-01-01

    Obtaining the multipole mixing ratio, δ, of γ transitions in angular correlation measurements is a statistical problem characterized by the small number of angles in which the observation is made and by the limited statistic of counting, α. The inexistence of a sufficient statistics for the estimator of δ, is shown. Three different estimators for δ were constructed and their properties of consistency, bias and efficiency were tested. Tests were also performed in experimental results obtained in γ-γ directional correlation measurements. (Author) [pt

  1. A Discussion of the Statistical Investigation Process in the Australian Curriculum

    Science.gov (United States)

    McQuade, Vivienne

    2013-01-01

    Statistics and statistical literacy can be found in the Learning Areas of Mathematics, Geography, Science, History and the upcoming Business and Economics, as well as in the General Capability of Numeracy and all three Cross-curriculum Priorities. The Australian Curriculum affords many exciting and varied entry points for the teaching of…

  2. On the Computation of the RMSEA and CFI from the Mean-And-Variance Corrected Test Statistic with Nonnormal Data in SEM.

    Science.gov (United States)

    Savalei, Victoria

    2018-01-01

    A new type of nonnormality correction to the RMSEA has recently been developed, which has several advantages over existing corrections. In particular, the new correction adjusts the sample estimate of the RMSEA for the inflation due to nonnormality, while leaving its population value unchanged, so that established cutoff criteria can still be used to judge the degree of approximate fit. A confidence interval (CI) for the new robust RMSEA based on the mean-corrected ("Satorra-Bentler") test statistic has also been proposed. Follow up work has provided the same type of nonnormality correction for the CFI (Brosseau-Liard & Savalei, 2014). These developments have recently been implemented in lavaan. This note has three goals: a) to show how to compute the new robust RMSEA and CFI from the mean-and-variance corrected test statistic; b) to offer a new CI for the robust RMSEA based on the mean-and-variance corrected test statistic; and c) to caution that the logic of the new nonnormality corrections to RMSEA and CFI is most appropriate for the maximum likelihood (ML) estimator, and cannot easily be generalized to the most commonly used categorical data estimators.
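
    For orientation, the usual sample RMSEA computed from a chi-square model test statistic is sketched below. The robust corrections discussed above start from a nonnormality-corrected statistic and additionally rescale the result; that rescaling is not reproduced here, and the exact denominator convention (N versus N - 1) varies across software, so treat the details as assumptions.

```python
# Sketch: the usual sample RMSEA from a model test statistic T with df degrees
# of freedom and sample size N. Robust versions of the kind discussed above
# start from a nonnormality-corrected T and rescale the result; only the basic
# uncorrected formula is shown here. Some software uses N rather than N - 1.
import math

def sample_rmsea(T, df, N):
    return math.sqrt(max(T - df, 0.0) / (df * (N - 1)))

print(sample_rmsea(T=123.4, df=48, N=350))   # illustrative values
```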

  3. The Concise Encyclopedia of Statistics

    CERN Document Server

    Dodge, Yadolah

    2008-01-01

    The Concise Encyclopedia of Statistics presents the essential information about statistical tests, concepts, and analytical methods in language that is accessible to practitioners and students of the vast community using statistics in medicine, engineering, physical science, life science, social science, and business/economics. The reference is alphabetically arranged to provide quick access to the fundamental tools of statistical methodology and biographies of famous statisticians. The more than 500 entries include definitions, history, mathematical details, limitations, examples, references,

  4. Statistics for X-chromosome associations.

    Science.gov (United States)

    Özbek, Umut; Lin, Hui-Min; Lin, Yan; Weeks, Daniel E; Chen, Wei; Shaffer, John R; Purcell, Shaun M; Feingold, Eleanor

    2018-06-13

    In a genome-wide association study (GWAS), association between genotype and phenotype at autosomal loci is generally tested by regression models. However, X-chromosome data are often excluded from published analyses of autosomes because of the difference between males and females in number of X chromosomes. Failure to analyze X-chromosome data at all is obviously less than ideal, and can lead to missed discoveries. Even when X-chromosome data are included, they are often analyzed with suboptimal statistics. Several mathematically sensible statistics for X-chromosome association have been proposed. The optimality of these statistics, however, is based on very specific simple genetic models. In addition, while previous simulation studies of these statistics have been informative, they have focused on single-marker tests and have not considered the types of error that occur even under the null hypothesis when the entire X chromosome is scanned. In this study, we comprehensively tested several X-chromosome association statistics using simulation studies that include the entire chromosome. We also considered a wide range of trait models for sex differences and phenotypic effects of X inactivation. We found that models that do not incorporate a sex effect can have large type I error in some cases. We also found that many of the best statistics perform well even when there are modest deviations, such as trait variance differences between the sexes or small sex differences in allele frequencies, from assumptions. © 2018 WILEY PERIODICALS, INC.
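
    One commonly used modelling choice for X-chromosome markers is sketched below: females carry 0/1/2 copies of the minor allele, males are recoded 0/2 under an X-inactivation assumption, and sex is included as a covariate. The study above compares several such statistics, so this simulated example only illustrates the kind of coding decision involved; it is not a recommendation of a particular statistic.

```python
# Sketch of one common X-chromosome coding: females 0/1/2, males recoded 0/2
# under an X-inactivation assumption, with sex as a covariate in a regression.
# Simulated data; illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
sex = rng.integers(0, 2, n)                        # 0 = female, 1 = male
geno = np.where(sex == 0, rng.integers(0, 3, n),   # females: 0/1/2 copies
                2 * rng.integers(0, 2, n))         # males: 0/2 under X inactivation
y = 0.1 * geno + 0.3 * sex + rng.normal(size=n)    # simulated trait

X = sm.add_constant(np.column_stack([geno, sex]))
fit = sm.OLS(y, X).fit()
print(fit.params, fit.pvalues)
```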

  5. Interpreting the gamma statistic in phylogenetic diversification rate studies: a rate decrease does not necessarily indicate an early burst.

    Science.gov (United States)

    Fordyce, James A

    2010-07-23

    Phylogenetic hypotheses are increasingly being used to elucidate historical patterns of diversification rate-variation. Hypothesis testing is often conducted by comparing the observed vector of branching times to a null, pure-birth expectation. A popular method for inferring a decrease in speciation rate, which might suggest an early burst of diversification followed by a decrease in diversification rate is the gamma statistic. Using simulations under varying conditions, I examine the sensitivity of gamma to the distribution of the most recent branching times. Using an exploratory data analysis tool for lineages through time plots, tree deviation, I identified trees with a significant gamma statistic that do not appear to have the characteristic early accumulation of lineages consistent with an early, rapid rate of cladogenesis. I further investigated the sensitivity of the gamma statistic to recent diversification by examining the consequences of failing to simulate the full time interval following the most recent cladogenic event. The power of gamma to detect rate decrease at varying times was assessed for simulated trees with an initial high rate of diversification followed by a relatively low rate. The gamma statistic is extraordinarily sensitive to recent diversification rates, and does not necessarily detect early bursts of diversification. This was true for trees of various sizes and completeness of taxon sampling. The gamma statistic had greater power to detect recent diversification rate decreases compared to early bursts of diversification. Caution should be exercised when interpreting the gamma statistic as an indication of early, rapid diversification.

  6. Interpreting the gamma statistic in phylogenetic diversification rate studies: a rate decrease does not necessarily indicate an early burst.

    Directory of Open Access Journals (Sweden)

    James A Fordyce

    Full Text Available BACKGROUND: Phylogenetic hypotheses are increasingly being used to elucidate historical patterns of diversification rate-variation. Hypothesis testing is often conducted by comparing the observed vector of branching times to a null, pure-birth expectation. A popular method for inferring a decrease in speciation rate, which might suggest an early burst of diversification followed by a decrease in diversification rate is the gamma statistic. METHODOLOGY: Using simulations under varying conditions, I examine the sensitivity of gamma to the distribution of the most recent branching times. Using an exploratory data analysis tool for lineages through time plots, tree deviation, I identified trees with a significant gamma statistic that do not appear to have the characteristic early accumulation of lineages consistent with an early, rapid rate of cladogenesis. I further investigated the sensitivity of the gamma statistic to recent diversification by examining the consequences of failing to simulate the full time interval following the most recent cladogenic event. The power of gamma to detect rate decrease at varying times was assessed for simulated trees with an initial high rate of diversification followed by a relatively low rate. CONCLUSIONS: The gamma statistic is extraordinarily sensitive to recent diversification rates, and does not necessarily detect early bursts of diversification. This was true for trees of various sizes and completeness of taxon sampling. The gamma statistic had greater power to detect recent diversification rate decreases compared to early bursts of diversification. Caution should be exercised when interpreting the gamma statistic as an indication of early, rapid diversification.

  7. Extending multivariate distance matrix regression with an effect size measure and the asymptotic null distribution of the test statistic.

    Science.gov (United States)

    McArtor, Daniel B; Lubke, Gitta H; Bergeman, C S

    2017-12-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains.

  8. Nonparametric tests for equality of psychometric functions.

    Science.gov (United States)

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2017-12-07

    Many empirical studies measure psychometric functions (curves describing how observers' performance varies with stimulus magnitude) because these functions capture the effects of experimental conditions. To assess these effects, parametric curves are often fitted to the data and comparisons are carried out by testing for equality of mean parameter estimates across conditions. This approach is parametric and, thus, vulnerable to violations of the implied assumptions. Furthermore, testing for equality of means of parameters may be misleading: Psychometric functions may vary meaningfully across conditions on an observer-by-observer basis with no effect on the mean values of the estimated parameters. Alternative approaches to assess equality of psychometric functions per se are thus needed. This paper compares three nonparametric tests that are applicable in all situations of interest: The existing generalized Mantel-Haenszel test, a generalization of the Berry-Mielke test that was developed here, and a split variant of the generalized Mantel-Haenszel test also developed here. Their statistical properties (accuracy and power) are studied via simulation and the results show that all tests are indistinguishable as to accuracy but they differ non-uniformly as to power. Empirical use of the tests is illustrated via analyses of published data sets and practical recommendations are given. The computer code in MATLAB and R to conduct these tests is available as Electronic Supplemental Material.

  9. Statistically derived factors of varied importance to audiologists when making a hearing aid brand preference decision.

    Science.gov (United States)

    Johnson, Earl E; Mueller, H Gustav; Ricketts, Todd A

    2009-01-01

    To determine the amount of importance audiologists place on various items related to their selection of a preferred hearing aid brand manufacturer. Three hundred forty-three hearing aid-dispensing audiologists rated a total of 32 randomized items by survey methodology. Principal component analysis identified seven orthogonal statistical factors of importance. In rank order, these factors were Aptitude of the Brand, Image, Cost, Sales and Speed of Delivery, Exposure, Colleague Recommendations, and Contracts and Incentives. While it was hypothesized that differences among audiologists in the importance ratings of these factors would dictate their preference for a given brand, that was not our finding. Specifically, mean ratings for the six most important factors did not differ among audiologists preferring different brands. A statistically significant difference among audiologists preferring different brands was present, however, for one factor: Contracts and Incentives. Its assigned importance, though, was always lower than that for the other six factors. Although most audiologists have a preferred hearing aid brand, differences in the perceived importance of common factors attributed to brands do not largely determine preference for a particular brand.

  10. Statistical Tutorial | Center for Cancer Research

    Science.gov (United States)

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. This Statistical Tutorial (ST) is designed as a follow-up to Statistical Analysis of Research Data (SARD) held in April 2018. The tutorial will apply the general principles of statistical analysis of research data, including descriptive statistics, z- and t-tests of means and mean

  11. Scalable Video Streaming Adaptive to Time-Varying IEEE 802.11 MAC Parameters

    Science.gov (United States)

    Lee, Kyung-Jun; Suh, Doug-Young; Park, Gwang-Hoon; Huh, Jae-Doo

    This letter proposes a QoS control method for video streaming service over wireless networks. Based on statistical analysis, the time-varying MAC parameters highly related to channel condition are selected to predict available bitrate. Adaptive bitrate control of scalably-encoded video guarantees continuity in streaming service even if the channel condition changes abruptly.

  12. Mapping cell populations in flow cytometry data for cross‐sample comparison using the Friedman–Rafsky test statistic as a distance measure

    Science.gov (United States)

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu

    2015-01-01

    Abstract Flow cytometry (FCM) is a fluorescence‐based single‐cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap‐FR, a novel method for cell population mapping across FCM samples. FlowMap‐FR is based on the Friedman–Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap‐FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap‐FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap‐FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap‐FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap‐FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback–Leibler divergence measure used in a previous population matching method with both simulated and real data. The FR statistic outperforms the symmetric version of KL‐distance in distinguishing

  13. Mapping cell populations in flow cytometry data for cross-sample comparison using the Friedman-Rafsky test statistic as a distance measure.

    Science.gov (United States)

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu; Scheuermann, Richard H

    2016-01-01

    Flow cytometry (FCM) is a fluorescence-based single-cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap-FR, a novel method for cell population mapping across FCM samples. FlowMap-FR is based on the Friedman-Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap-FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap-FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap-FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap-FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap-FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback-Leibler divergence measure used in a previous population matching method with both simulated and real data. The FR statistic outperforms the symmetric version of KL-distance in distinguishing equivalent from nonequivalent cell
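
    The Friedman-Rafsky idea underlying FlowMap-FR can be sketched in a few lines (simulated data; this bare-bones illustration is not the FlowMap-FR implementation): pool two samples, build a Euclidean minimal spanning tree, and count MST edges that join points from different samples. Markedly fewer cross-sample edges than expected under pooling suggests the two distributions differ.

```python
# Sketch of the Friedman-Rafsky idea used by FlowMap-FR: pool two samples,
# build a Euclidean minimal spanning tree, and count MST edges joining points
# from different samples. Illustrative only; not the FlowMap-FR code.
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def fr_cross_edges(a, b):
    pooled = np.vstack([a, b])
    labels = np.r_[np.zeros(len(a)), np.ones(len(b))]
    mst = minimum_spanning_tree(distance_matrix(pooled, pooled))
    rows, cols = mst.nonzero()
    return int(np.sum(labels[rows] != labels[cols]))   # cross-sample edge count

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(200, 3))
b = rng.normal(0.5, 1.0, size=(200, 3))                # shifted population
print(fr_cross_edges(a, b), "cross-sample MST edges out of", 2 * 200 - 1)
```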

  14. Two-Sample Statistics for Testing the Equality of Survival Functions Against Improper Semi-parametric Accelerated Failure Time Alternatives: An Application to the Analysis of a Breast Cancer Clinical Trial

    Science.gov (United States)

    BROËT, PHILIPPE; TSODIKOV, ALEXANDER; DE RYCKE, YANN; MOREAU, THIERRY

    2010-01-01

    This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests. PMID:15293627

  15. Two-sample statistics for testing the equality of survival functions against improper semi-parametric accelerated failure time alternatives: an application to the analysis of a breast cancer clinical trial.

    Science.gov (United States)

    Broët, Philippe; Tsodikov, Alexander; De Rycke, Yann; Moreau, Thierry

    2004-06-01

    This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests.

  16. A generalized Grubbs-Beck test statistic for detecting multiple potentially influential low outliers in flood series

    Science.gov (United States)

    Cohn, T.A.; England, J.F.; Berenbrock, C.E.; Mason, R.R.; Stedinger, J.R.; Lamontagne, J.R.

    2013-01-01

    The Grubbs-Beck test is recommended by the federal guidelines for detection of low outliers in flood flow frequency computation in the United States. This paper presents a generalization of the Grubbs-Beck test for normal data (similar to the Rosner (1983) test; see also Spencer and McCuen (1996)) that can provide a consistent standard for identifying multiple potentially influential low flows. In cases where low outliers have been identified, they can be represented as “less-than” values, and a frequency distribution can be developed using censored-data statistical techniques, such as the Expected Moments Algorithm. This approach can improve the fit of the right-hand tail of a frequency distribution and provide protection from lack-of-fit due to unimportant but potentially influential low flows (PILFs) in a flood series, thus making the flood frequency analysis procedure more robust.
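
    The basic single-low-outlier screen that the paper generalizes can be sketched as follows. This uses a generic one-sided Grubbs criterion on log-transformed flows with made-up data; the federal-guidelines test uses tabulated 10-percent critical values, and the generalization above handles multiple low outliers, so treat the critical-value formula here as an illustrative assumption.

```python
# Sketch of a generic one-sided low-outlier screen in the spirit of the
# Grubbs-Beck test: on log-transformed flows, flag the smallest value if
# (mean - min)/s exceeds a Grubbs-type critical value. Illustrative only.
import numpy as np
from scipy import stats

def grubbs_low_outlier(flows, alpha=0.10):
    x = np.log10(np.asarray(flows, dtype=float))
    n = len(x)
    g = (x.mean() - x.min()) / x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / n, n - 2)                 # one-sided critical t
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return g > g_crit, g, g_crit

flows = [3200, 2800, 4100, 3900, 2500, 3600, 45, 3000, 3300, 2900]  # made-up series
print(grubbs_low_outlier(flows))
```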

  17. A Statistical Toolkit for Data Analysis

    International Nuclear Information System (INIS)

    Donadio, S.; Guatelli, S.; Mascialino, B.; Pfeiffer, A.; Pia, M.G.; Ribon, A.; Viarengo, P.

    2006-01-01

    The present project aims to develop an open-source and object-oriented software Toolkit for statistical data analysis. Its statistical testing component contains a variety of Goodness-of-Fit tests, from Chi-squared to Kolmogorov-Smirnov, to lesser-known but generally much more powerful tests such as Anderson-Darling, Goodman, Fisz-Cramer-von Mises, Kuiper and Tiku. Thanks to the component-based design and the usage of standard abstract interfaces for data analysis, this tool can be used by other data analysis systems or integrated in experimental software frameworks. This Toolkit has been released and is downloadable from the web. In this paper we describe the statistical details of the algorithms, the computational features of the Toolkit and the code validation.

  18. Statistical literacy for clinical practitioners

    CERN Document Server

    Holmes, William H

    2014-01-01

    This textbook on statistics is written for students in medicine, epidemiology, and public health. It builds on the important role evidence-based medicine now plays in the clinical practice of physicians, physician assistants and allied health practitioners. By bringing research design and statistics to the fore, this book can integrate these skills into the curricula of professional programs. Students, particularly practitioners-in-training, will learn statistical skills that are required of today’s clinicians. Practice problems at the end of each chapter and downloadable data sets provided by the authors ensure readers get practical experience that they can then apply to their own work.  Topics covered include:   Functions of Statistics in Clinical Research Common Study Designs Describing Distributions of Categorical and Quantitative Variables Confidence Intervals and Hypothesis Testing Documenting Relationships in Categorical and Quantitative Data Assessing Screening and Diagnostic Tests Comparing Mean...

  19. Classical model of intermediate statistics

    International Nuclear Information System (INIS)

    Kaniadakis, G.

    1994-01-01

    In this work we present a classical kinetic model of intermediate statistics. In the case of Brownian particles we show that the Fermi-Dirac (FD) and Bose-Einstein (BE) distributions can be obtained, just as the Maxwell-Boltzmann (MB) distribution, as steady states of a classical kinetic equation that intrinsically takes into account an exclusion-inclusion principle. In our model the intermediate statistics are obtained as steady states of a system of coupled nonlinear kinetic equations, where the coupling constants are the transmutational potentials η_κκ'. We show that, besides the FD-BE intermediate statistics extensively studied from the quantum point of view, we can also study the MB-FD and MB-BE ones. Moreover, our model allows us to treat the three-state mixing FD-MB-BE intermediate statistics. For boson and fermion mixing in a D-dimensional space, we obtain a family of FD-BE intermediate statistics by varying the transmutational potential η_BF. This family contains, as a particular case when η_BF = 0, the quantum statistics recently proposed by L. Wu, Z. Wu, and J. Sun [Phys. Lett. A 170, 280 (1992)]. When we consider the two-dimensional FD-BE statistics, we derive an analytic expression for the fraction of fermions. When the temperature T→∞, the system is composed of an equal number of bosons and fermions, regardless of the value of η_BF. On the contrary, when T=0, η_BF becomes important and, according to its value, the system can be completely bosonic or fermionic, or composed of both bosons and fermions.

  20. Change detection in a time series of polarimetric SAR data by an omnibus test statistic and its factorization (Conference Presentation)

    Science.gov (United States)

    Nielsen, Allan A.; Conradsen, Knut; Skriver, Henning

    2016-10-01

    Test statistics for comparison of real (as opposed to complex) variance-covariance matrices exist in the statistics literature [1]. In earlier publications we have described a test statistic for the equality of two variance-covariance matrices following the complex Wishart distribution with an associated p-value [2]. We showed their application to bitemporal change detection and to edge detection [3] in multilook, polarimetric synthetic aperture radar (SAR) data in the covariance matrix representation [4]. The test statistic and the associated p-value are also described in [5]. In [6] we focussed on the block-diagonal case, we elaborated on some computer implementation issues, and we gave examples of the application to change detection in both full and dual polarization bitemporal, bifrequency, multilook SAR data. In [7] we described an omnibus test statistic Q for the equality of k variance-covariance matrices following the complex Wishart distribution. We also described a factorization of Q = R_2 R_3 … R_k, where Q and R_j determine if and when a difference occurs. Additionally, we gave p-values for Q and R_j. Finally, we demonstrated the use of Q and R_j and the p-values for change detection in truly multitemporal, full polarization SAR data. Here we illustrate the methods by means of airborne L-band SAR data (EMISAR) [8,9]. The methods may be applied to other polarimetric SAR data as well, such as data from Sentinel-1, COSMO-SkyMed, TerraSAR-X, ALOS, and RadarSat-2, and also to single-pol data. The account given here closely follows that given in our recent IEEE TGRS paper [7]. Selected References [1] Anderson, T. W., An Introduction to Multivariate Statistical Analysis, John Wiley, New York, third ed. (2003). [2] Conradsen, K., Nielsen, A. A., Schou, J., and Skriver, H., "A test statistic in the complex Wishart distribution and its application to change detection in polarimetric SAR data," IEEE Transactions on Geoscience and Remote Sensing 41(1): 4-19, 2003. [3] Schou, J
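
    A minimal sketch of the omnibus ln Q computation is given below, assuming the log-determinant form n(pk ln k + Σ ln|X_i| − k ln|X|) with X the sum of the k Wishart matrices; the degrees of freedom, the small-sample correction factor and the p-value machinery of the cited papers are omitted, so the exact form should be checked against [7] before use.

```python
# Sketch of an omnibus ln(Q) computation for testing equality of k complex
# covariance matrices (multilook polarimetric SAR, n looks each). The
# small-sample correction and p-value details from the cited work are omitted.
import numpy as np

def omnibus_lnq(covs, n_looks):
    covs = [np.asarray(c) for c in covs]
    k, p = len(covs), covs[0].shape[0]
    xs = [n_looks * c for c in covs]               # un-normalised Wishart matrices
    x_sum = sum(xs)
    logdet = lambda m: np.linalg.slogdet(m)[1]     # log|.| for Hermitian PD matrices
    return n_looks * (p * k * np.log(k)
                      + sum(logdet(x) for x in xs)
                      - k * logdet(x_sum))

# Illustrative 2x2 dual-pol covariance matrices for k = 3 dates, 13 looks.
c = np.array([[1.0, 0.2 + 0.1j], [0.2 - 0.1j, 0.5]])
lnq = omnibus_lnq([c, c, 1.3 * c], n_looks=13)
print(lnq)   # equal matrices give ln Q = 0; differences make it negative
```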

  1. Applied statistical designs for the researcher

    CERN Document Server

    Paulson, Daryl S

    2003-01-01

    Research and Statistics Basic Review of Parametric Statistics Exploratory Data Analysis Two Sample Tests Completely Randomized One-Factor Analysis of Variance One and Two Restrictions on Randomization Completely Randomized Two-Factor Factorial Designs Two-Factor Factorial Completely Randomized Blocked Designs Useful Small Scale Pilot Designs Nested Statistical Designs Linear Regression Nonparametric Statistics Introduction to Research Synthesis and "Meta-Analysis" and Conclusory Remarks References Index.

  2. Conversion factors and oil statistics

    International Nuclear Information System (INIS)

    Karbuz, Sohbet

    2004-01-01

    World oil statistics, in scope and accuracy, are often far from perfect. They can easily lead to misguided conclusions regarding the state of market fundamentals. Without proper attention directed at statistical caveats, the ensuing interpretation of oil market data opens the door to unnecessary volatility, and can distort perception of market fundamentals. Among the numerous caveats associated with the compilation of oil statistics, conversion factors, used to produce aggregated data, play a significant role. Interestingly enough, little attention is paid to conversion factors, i.e. to the relation between different units of measurement for oil. Additionally, the underlying information regarding the choice of a specific factor when trying to produce measurements of aggregated data remains scant. The aim of this paper is to shed some light on the impact of conversion factors for two commonly encountered issues: mass-to-volume equivalencies (barrels to tonnes) and the broad energy measures encountered in world oil statistics. This paper will seek to demonstrate how inappropriate and misused conversion factors can yield wildly varying results and ultimately distort oil statistics. Examples will show that while discrepancies in commonly used conversion factors may seem trivial, their impact on the assessment of a world oil balance is far from negligible. A unified and harmonised convention for conversion factors is necessary to achieve accurate comparisons and aggregate oil statistics for the benefit of both end-users and policy makers.
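
    A short worked example of the barrels-to-tonnes caveat follows. The factors used are approximate, illustrative values (around 7.33 barrels per tonne is a commonly quoted world average, with heavier and lighter crudes falling below and above it); the point is only how the choice of factor shifts an aggregate figure.

```python
# Worked example of the caveat above: converting tonnes of crude to barrels
# depends on density, so the choice of factor materially changes aggregates.
# The factors used here are approximate, illustrative values only.
barrels_per_tonne = {
    "commonly quoted world average": 7.33,
    "heavier crude (illustrative)": 6.8,
    "lighter crude (illustrative)": 7.9,
}
tonnes = 100_000_000                      # 100 million tonnes of crude
for label, factor in barrels_per_tonne.items():
    print(f"{label}: {tonnes * factor / 1e6:,.0f} million barrels")
# The spread between the low and high factor is on the order of 100 million
# barrels for this single aggregate, illustrating how conversion-factor
# choice can distort oil statistics.
```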

  3. Notices about using elementary statistics in psychology

    OpenAIRE

    松田, 文子; 三宅, 幹子; 橋本, 優花里; 山崎, 理央; 森田, 愛子; 小嶋, 佳子

    2003-01-01

    Improper uses of elementary statistics that were often observed in beginners' manuscripts and papers were collected and better ways were suggested. This paper consists of three parts: About descriptive statistics, multivariate analyses, and statistical tests.

  4. Nonparametric statistics a step-by-step approach

    CERN Document Server

    Corder, Gregory W

    2014-01-01

    "…a very useful resource for courses in nonparametric statistics in which the emphasis is on applications rather than on theory.  It also deserves a place in libraries of all institutions where introductory statistics courses are taught."" -CHOICE This Second Edition presents a practical and understandable approach that enhances and expands the statistical toolset for readers. This book includes: New coverage of the sign test and the Kolmogorov-Smirnov two-sample test in an effort to offer a logical and natural progression to statistical powerSPSS® (Version 21) software and updated screen ca

  5. DWPF Sample Vial Insert Study-Statistical Analysis of DWPF Mock-Up Test Data

    Energy Technology Data Exchange (ETDEWEB)

    Harris, S.P. [Westinghouse Savannah River Company, AIKEN, SC (United States)

    1997-09-18

    This report is prepared as part of Technical/QA Task Plan WSRC-RP-97-351, which was issued in response to Technical Task Request HLW/DWPF/TTR-970132 submitted by DWPF. Presented in this report is a statistical analysis of DWPF Mock-up test data for evaluation of two new analytical methods which use insert samples from the existing Hydragard™ sampler. The first is a new hydrofluoric acid based method called the Cold Chemical Method (Cold Chem) and the second is a modified fusion method. Either new DWPF analytical method could result in a two- to three-fold improvement in sample analysis time. Both new methods use the existing Hydragard™ sampler to collect a smaller insert sample from the process sampling system. The insert testing methodology applies to the DWPF Slurry Mix Evaporator (SME) and the Melter Feed Tank (MFT) samples. The insert sample is named after the initial trials, which placed the container inside the sample (peanut) vials. Samples in small 3 ml containers (inserts) are analyzed by either the cold chemical method or a modified fusion method. The current analytical method uses a Hydragard™ sample station to obtain nearly full 15 ml peanut vials. The samples are prepared by a multi-step process for Inductively Coupled Plasma (ICP) analysis by drying, vitrification, grinding and finally dissolution by either mixed acid or fusion. In contrast, the insert sample is placed directly in the dissolution vessel, thus eliminating the drying, vitrification and grinding operations for the Cold Chem method. Although the modified fusion still requires drying and calcine conversion, the process is rapid due to the decreased sample size and because no vitrification step is required. A slurry feed simulant material was acquired from the TNX pilot facility from the test run designated as PX-7. The Mock-up test data were gathered on the basis of a statistical design presented in SRT-SCS-97004 (Rev. 0). Simulant PX-7 samples were taken in the DWPF Analytical Cell Mock

  6. Multivariate statistical process control (MSPC) using Raman spectroscopy for in-line culture cell monitoring considering time-varying batches synchronized with correlation optimized warping (COW).

    Science.gov (United States)

    Liu, Ya-Juan; André, Silvère; Saint Cristau, Lydia; Lagresle, Sylvain; Hannas, Zahia; Calvosa, Éric; Devos, Olivier; Duponchel, Ludovic

    2017-02-01

    Multivariate statistical process control (MSPC) is increasingly popular as a way to handle the challenge posed by the large multivariate datasets produced by analytical instruments such as Raman spectroscopy for the monitoring of complex cell cultures in the biopharmaceutical industry. However, Raman spectroscopy for in-line monitoring often produces unsynchronized data sets, resulting in time-varying batches. Moreover, unsynchronized data sets are common in cell culture monitoring because spectroscopic measurements are generally recorded in an alternating fashion, with more than one optical probe connected in parallel to the same spectrometer. Synchronized batches are a prerequisite for the application of multivariate analyses such as multi-way principal component analysis (MPCA) for MSPC monitoring. Correlation optimized warping (COW) is a popular method for data alignment with satisfactory performance; however, it had not previously been applied to synchronize the acquisition times of spectroscopic datasets in an MSPC application. In this paper we propose, for the first time, to use COW to synchronize batches with varying durations analyzed with Raman spectroscopy. In a second step, we developed MPCA models at different time intervals based on the normal operating condition (NOC) batches synchronized by COW. New batches are finally projected onto the corresponding MPCA model. We monitored the evolution of the batches using two multivariate control charts based on Hotelling's T² and Q. As illustrated by the results, the MSPC model was able to identify abnormal operating conditions, including contaminated batches, which is of prime importance in cell culture monitoring. We proved that Raman-based MSPC monitoring can be used to diagnose batches deviating from the normal condition, with higher efficacy than traditional diagnosis, which would save time and money in the biopharmaceutical industry. Copyright © 2016 Elsevier B.V. All rights reserved.
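
    The monitoring step can be sketched as follows (random stand-in data; the COW synchronisation, the multi-way unfolding and the control-limit calculations used in the paper are not reproduced): fit a PCA model on normal operating condition data and compute Hotelling's T² and the Q (squared prediction error) statistic for new observations.

```python
# Sketch of the monitoring step described above: fit a PCA model on normal
# operating condition (NOC) data and compute Hotelling's T^2 and Q (squared
# prediction error) for new observations. Illustrative stand-in data only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
noc = rng.normal(size=(200, 50))                    # stand-in for NOC spectra
pca = PCA(n_components=3).fit(noc)

def t2_and_q(x):
    scores = pca.transform(x)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)   # Hotelling's T^2
    resid = x - pca.inverse_transform(scores)
    q = np.sum(resid**2, axis=1)                                # SPE / Q statistic
    return t2, q

new_batch = rng.normal(size=(5, 50)) + 0.5           # shifted, "abnormal" observations
print(t2_and_q(new_batch))
```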

  7. Practical Statistics for the LHC

    CERN Document Server

    Cranmer, Kyle

    2015-05-22

    This document is a pedagogical introduction to statistics for particle physics. Emphasis is placed on the terminology, concepts, and methods being used at the Large Hadron Collider. The document addresses both the statistical tests applied to a model of the data and the modeling itself.

  8. Beginning R The Statistical Programming Language

    CERN Document Server

    Gardener, Mark

    2012-01-01

    Conquer the complexities of this open source statistical language. R is fast becoming the de facto standard for statistical computing and analysis in science, business, engineering, and related fields. This book examines this complex language using simple statistical examples, showing how R operates in a user-friendly context. Both students and workers in fields that require extensive statistical analysis will find this book helpful as they learn to use R for simple summary statistics, hypothesis testing, creating graphs, regression, and much more. It covers formula notation, complex statistics

  9. The natural statistics of audiovisual speech.

    Directory of Open Access Journals (Sweden)

    Chandramouli Chandrasekaran

    2009-07-01

    Full Text Available Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.

  10. Fundamentals of statistics

    CERN Document Server

    Mulholland, Henry

    1968-01-01

    Fundamentals of Statistics covers topics on the introduction, fundamentals, and science of statistics. The book discusses the collection, organization and representation of numerical data; elementary probability; the binomial and Poisson distributions; and the measures of central tendency. The text describes measures of dispersion for measuring the spread of a distribution; continuous distributions for measuring on a continuous scale; the properties and use of the normal distribution; and tests involving the normal or Student's t distributions. The use of control charts for sample means; the ranges

  11. Statistical Analysis of Geo-electric Imaging and Geotechnical Test ...

    Indian Academy of Sciences (India)

    12

    On the other hand, cost-effective geoelectric imaging methods provide 2-D / 3-D .... SPSS (Statistical Package for the Social Sciences) has been used to carry out linear ..... P W J 1997 Theory of ionic surface electrical conduction in porous media;.

  12. A new efficient statistical test for detecting variability in the gene expression data.

    Science.gov (United States)

    Mathur, Sunil; Dolo, Samuel

    2008-08-01

    DNA microarray technology allows researchers to monitor the expressions of thousands of genes under different conditions. The detection of differential gene expression under two different conditions is very important in microarray studies. Microarray experiments are multi-step procedures and each step is a potential source of variance. This makes the measurement of variability difficult because an approach based on gene-by-gene estimation of variance will have few degrees of freedom. It is highly possible that the assumption of equal variance for all the expression levels may not hold. Also, the assumption of normality of gene expressions may not hold. Thus it is essential to have a statistical procedure which is not based on the normality assumption and which can detect genes with differential variance efficiently. The detection of differential gene expression variance will allow us to identify experimental variables that affect different biological processes and the accuracy of DNA microarray measurements. In this article, a new nonparametric test for scale is developed based on the arctangent of the ratio of two expression levels. Most of the tests available in the literature require the assumption of normal distribution, which makes them inapplicable in many situations, and it is also hard to verify the suitability of the normal distribution assumption for the given data set. The proposed test does not require the assumption of the distribution for the underlying population and hence makes it more practical and widely applicable. The asymptotic relative efficiency is calculated under different distributions, which shows that the proposed test is very powerful when the assumption of normality breaks down. Monte Carlo simulation studies are performed to compare the power of the proposed test with some of the existing procedures. It is found that the proposed test is more powerful than commonly used tests under almost all the distributions considered in the study. A
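
    The exact form of the proposed arctangent-based statistic is not reproduced in this record, so the sketch below only illustrates the kind of Monte Carlo power comparison the abstract describes, using two established scale tests available in SciPy (Ansari-Bradley and Levene) on simulated expression-like data; the distributions, sample sizes and scale ratio are made-up placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sim, n, alpha = 2000, 30, 0.05
scale_ratio = 2.0          # alternative: second condition is twice as variable

def power(test, sampler):
    """Fraction of simulated datasets in which the scale test rejects at level alpha."""
    rejections = 0
    for _ in range(n_sim):
        x = sampler(n)                     # condition 1
        y = sampler(n) * scale_ratio       # condition 2, inflated scale
        if test(x, y) < alpha:
            rejections += 1
    return rejections / n_sim

tests = {
    "Ansari-Bradley": lambda x, y: stats.ansari(x, y).pvalue,
    "Levene (median)": lambda x, y: stats.levene(x, y, center="median").pvalue,
}
samplers = {
    "normal": lambda size: rng.normal(size=size),
    "heavy-tailed (t, df=3)": lambda size: rng.standard_t(3, size=size),
}

for dist, sampler in samplers.items():
    for name, test in tests.items():
        print(f"{dist:>22} | {name:<16} power ~ {power(test, sampler):.2f}")
```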

  13. Rényi statistics for testing composite hypotheses in general exponential models

    Czech Academy of Sciences Publication Activity Database

    Morales, D.; Pardo, L.; Pardo, M. C.; Vajda, Igor

    2004-01-01

    Roč. 38, č. 2 (2004), s. 133-147 ISSN 0233-1888 R&D Projects: GA ČR GA201/02/1391 Grant - others:BMF(ES) 2003-00892; BMF(ES) 2003-04820 Institutional research plan: CEZ:AV0Z1075907 Keywords : natural exponential models * Levy processes * generalized Wald statistics Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.323, year: 2004

  14. Extending the Reach of Statistical Software Testing

    National Research Council Canada - National Science Library

    Weber, Robert

    2004-01-01

    .... In particular, as system complexity increases, the matrices required to generate test cases and perform model analysis can grow dramatically, even exponentially, overwhelming the test generation...

  15. Reply: Birnbaum's (2012) statistical tests of independence have unknown Type-I error rates and do not replicate within participant

    Directory of Open Access Journals (Sweden)

    Yun-shil Cha

    2013-01-01

    Full Text Available Birnbaum (2011, 2012) questioned the iid (independent and identically distributed) sampling assumptions used by state-of-the-art statistical tests in Regenwetter, Dana and Davis-Stober's (2010, 2011) analysis of the "linear order model". Birnbaum (2012) cited, but did not use, a test of iid by Smith and Batchelder (2008) with analytically known properties. Instead, he created two new test statistics with unknown sampling distributions. Our rebuttal has five components: (1) We demonstrate that the Regenwetter et al. data pass Smith and Batchelder's test of iid with flying colors. (2) We provide evidence from Monte Carlo simulations that Birnbaum's (2012) proposed tests have unknown Type-I error rates, which depend on the actual choice probabilities and on how data are coded, as well as on the null hypothesis of iid sampling. (3) Birnbaum analyzed only a third of Regenwetter et al.'s data. We show that his two new tests fail to replicate on the other two-thirds of the data, within participants. (4) Birnbaum selectively picked data of one respondent to suggest that choice probabilities may have changed partway into the experiment. Such nonstationarity could potentially cause a seemingly good fit to be a Type-II error. We show that the linear order model fits equally well if we allow for warm-up effects. (5) Using hypothetical data, Birnbaum (2012) claimed to show that "true-and-error" models for binary pattern probabilities overcome the alleged shortcomings of Regenwetter et al.'s approach. We disprove this claim on the same data.

  16. Modelling Conditional and Unconditional Heteroskedasticity with Smoothly Time-Varying Structure

    DEFF Research Database (Denmark)

    Amado, Christina; Teräsvirta, Timo

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the conditional variance to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterizations describe both nonlinearity and structural change in the conditional and unconditional variances where the transition between regimes over time is smooth. A modelling strategy for these new time-varying parameter GARCH models is developed. It relies on a sequence of Lagrange multiplier tests, and the adequacy of the estimated models is investigated by Lagrange multiplier type misspecification tests. Finite-sample properties of these procedures and tests are examined by simulation. An empirical application to daily stock returns and another one to daily exchange rate returns illustrate the functioning and properties of our modelling strategy in practice.
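
    To make the idea of a multiplicative, smoothly time-varying unconditional variance concrete, the following sketch simulates a standard GARCH(1,1) process whose variance is modulated by a smooth logistic transition over time. This is only a toy illustration of the kind of structure the paper models; the parameter values are invented and no Lagrange multiplier testing is implemented.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 2000
omega, alpha, beta = 0.05, 0.08, 0.90        # GARCH(1,1) parameters (toy values)

# Smooth multiplicative component g(t): logistic transition from 1 to about 3
# halfway through the sample, mimicking a gradual structural change.
t = np.arange(T) / T
g = 1.0 + 2.0 / (1.0 + np.exp(-20 * (t - 0.5)))

eps = np.empty(T)
h = np.empty(T)                              # conditional variance of the GARCH part
h[0] = omega / (1 - alpha - beta)            # unconditional variance as start value
z = rng.standard_normal(T)

for i in range(T):
    if i > 0:
        h[i] = omega + alpha * eps[i - 1] ** 2 + beta * h[i - 1]
    eps[i] = np.sqrt(h[i]) * z[i]

returns = np.sqrt(g) * eps                   # multiplicative time-varying structure

# Sample variance in the first and last quarter reflects the smooth change in g(t).
print("variance, first quarter :", returns[: T // 4].var())
print("variance, last quarter  :", returns[-T // 4:].var())
```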

  17. Statistical evaluation of cleanup: How should it be done?

    International Nuclear Information System (INIS)

    Gilbert, R.O.

    1993-02-01

    This paper discusses statistical issues that must be addressed when conducting statistical tests for the purpose of evaluating if a site has been remediated to guideline values or standards. The importance of using the Data Quality Objectives (DQO) process to plan and design the sampling plan is emphasized. Other topics discussed are: (1) accounting for the uncertainty of cleanup standards when conducting statistical tests, (2) determining the number of samples and measurements needed to attain specified DQOs, (3) considering whether the appropriate testing philosophy in a given situation is "guilty until proven innocent" or "innocent until proven guilty" when selecting a statistical test for evaluating the attainment of standards, (4) conducting tests using data sets that contain measurements that have been reported by the laboratory as less than the minimum detectable activity, and (5) selecting statistical tests that are appropriate for risk-based or background-based standards. A recent draft report by Berger that provides guidance on sampling plans and data analyses for final status surveys at US Nuclear Regulatory Commission licensed facilities serves as a focal point for discussion

  18. Age related neuromuscular changes in sEMG of m. Tibialis Anterior using higher order statistics (Gaussianity & linearity test).

    Science.gov (United States)

    Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh K

    2016-08-01

    Age-associated changes in the surface electromyogram (sEMG) of the Tibialis Anterior (TA) muscle can be attributable to neuromuscular alterations that precede strength loss. We have used our sEMG model of the Tibialis Anterior to interpret the age-related changes and compared the model output with the experimental sEMG. Eighteen young (20-30 years) and 18 older (60-85 years) participants performed isometric dorsiflexion at 6 different percentage levels of maximum voluntary contraction (MVC), and their sEMG from the TA muscle was recorded. Six different age-related changes in the neuromuscular system were simulated using the sEMG model at the same MVCs as the experiment. The maximal power of the spectrum and the Gaussianity and Linearity Test Statistics were computed from the simulated and experimental sEMG. A correlation analysis at α=0.05 was performed between the simulated and experimental age-related changes in the sEMG features. The results show that the loss of motor units was distinguished by the Gaussianity and Linearity test statistics, while the maximal power of the PSD distinguished between the muscular factors. The simulated condition of a 40% loss of motor units with the number of fast fibers halved best correlated with the age-related change observed in the experimental sEMG higher order statistical features. The simulated aging condition found by this study corresponds with the moderate motor unit remodelling and negligible strength loss reported in the literature for cohorts aged 60-70 years.
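
    The Gaussianity and linearity tests used in the paper are based on higher-order statistics and are not reproduced here; as a loose stand-in, the sketch below applies SciPy's D'Agostino-Pearson normality test (a skewness/kurtosis-based Gaussianity check) to two synthetic signals, one Gaussian and one deliberately non-Gaussian. Everything about the signals is invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 5000

# Synthetic stand-ins for sEMG-like signals: one Gaussian, one non-Gaussian
# (a Laplace-distributed signal has heavier tails, i.e. excess kurtosis).
gaussian_like = rng.normal(size=n)
non_gaussian = rng.laplace(size=n)

for label, signal in [("Gaussian-like", gaussian_like), ("non-Gaussian", non_gaussian)]:
    stat, p = stats.normaltest(signal)       # D'Agostino-Pearson K^2 test
    print(f"{label:>14}: K^2 = {stat:.1f}, p = {p:.3g}")
```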

  19. Statistics II essentials

    CERN Document Server

    Milewski, Emil G

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Statistics II discusses sampling theory, statistical inference, independent and dependent variables, correlation theory, experimental design, count data, chi-square test, and time se

  20. Sensometrics: Thurstonian and Statistical Models

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen

    This thesis is concerned with the development and bridging of Thurstonian and statistical models for sensory discrimination testing as applied in the scientific discipline of sensometrics. In sensory discrimination testing sensory differences between products are detected and quantified by the use... and sensory discrimination testing in particular in a series of papers by advancing Thurstonian models for a range of sensory discrimination protocols in addition to facilitating their application by providing software for fitting these models. The main focus is on identifying Thurstonian models... sensR is a package for sensory discrimination testing with Thurstonian models, and ordinal supports analysis of ordinal data with cumulative link (mixed) models. While sensR is closely connected to the sensometrics field, the ordinal package has developed into a generic statistical package applicable...

  1. Statistical yearbook 2005. Data available as of March 2006. 50 ed

    International Nuclear Information System (INIS)

    2006-08-01

    The Statistical Yearbook is an annual compilation of a wide range of international economic, social and environmental statistics on over 200 countries and areas, compiled from sources including UN agencies and other international, national and specialized organizations. The 50th issue contains data available to the Statistics Division as of March 2006 and presents them in 76 tables. The number of years of data shown in the tables varies from one to ten, with the ten-year tables covering 1994 to 2003 or 1995 to 2004. Accompanying the tables are technical notes providing brief descriptions of major statistical concepts, definitions and classifications

  2. Statistical methods for the analysis of a screening test for chronic beryllium disease

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L.; Neubert, R.L. [Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Smith, M.H.; Littlefield, L.G.; Colyer, S.P. [Oak Ridge Inst. for Science and Education, TN (United States). Medical Sciences Div.

    1994-10-01

    The lymphocyte proliferation test (LPT) is a noninvasive screening procedure used to identify persons who may have chronic beryllium disease. A practical problem in the analysis of LPT well counts is the occurrence of outlying data values (approximately 7% of the time). A log-linear regression model is used to describe the expected well counts for each set of test conditions. The variance of the well counts is proportional to the square of the expected counts, and two resistant regression methods are used to estimate the parameters of interest. The first approach uses least absolute values (LAV) on the log of the well counts to estimate beryllium stimulation indices (SIs) and the coefficient of variation. The second approach uses a resistant regression version of maximum quasi-likelihood estimation. A major advantage of the resistant regression methods is that it is not necessary to identify and delete outliers. These two new methods for the statistical analysis of the LPT data and the outlier rejection method that is currently being used are applied to 173 LPT assays. The authors strongly recommend the LAV method for routine analysis of the LPT.

  3. Statistical assessment of the learning curves of health technologies.

    Science.gov (United States)

    Ramsay, C R; Grant, A M; Wallace, S A; Garthwaite, P H; Monk, A F; Russell, I T

    2001-01-01

    (1) To describe systematically studies that directly assessed the learning curve effect of health technologies. (2) Systematically to identify 'novel' statistical techniques applied to learning curve data in other fields, such as psychology and manufacturing. (3) To test these statistical techniques in data sets from studies of varying designs to assess health technologies in which learning curve effects are known to exist. METHODS - STUDY SELECTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): For a study to be included, it had to include a formal analysis of the learning curve of a health technology using a graphical, tabular or statistical technique. METHODS - STUDY SELECTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): For a study to be included, it had to include a formal assessment of a learning curve using a statistical technique that had not been identified in the previous search. METHODS - DATA SOURCES: Six clinical and 16 non-clinical biomedical databases were searched. A limited amount of handsearching and scanning of reference lists was also undertaken. METHODS - DATA EXTRACTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): A number of study characteristics were abstracted from the papers such as study design, study size, number of operators and the statistical method used. METHODS - DATA EXTRACTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): The new statistical techniques identified were categorised into four subgroups of increasing complexity: exploratory data analysis; simple series data analysis; complex data structure analysis, generic techniques. METHODS - TESTING OF STATISTICAL METHODS: Some of the statistical methods identified in the systematic searches for single (simple) operator series data and for multiple (complex) operator series data were illustrated and explored using three data sets. The first was a case series of 190 consecutive laparoscopic fundoplication procedures performed by a single surgeon; the second

  4. Statistics Using Just One Formula

    Science.gov (United States)

    Rosenthal, Jeffrey S.

    2018-01-01

    This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc) from that one formula. It is argued that this approach will…

  5. Listening Strategies of L2 Learners with Varied Test Tasks

    Science.gov (United States)

    Chang, Anna Ching-Shyang

    2008-01-01

    This article investigates the strategies that EFL students used and how they adjusted these strategies in response to various listening test tasks. The test tasks involved four forms of listening support: previewing questions, repeated input, background information preparation, and vocabulary instruction. Twenty-two participants were enlisted and…

  6. Using the method of statistic tests for determining the pressure in the UNC-600 vacuum chamber

    International Nuclear Information System (INIS)

    Kiver, A.M.; Mirzoev, K.G.

    1998-01-01

    The aim of the paper is to simulate the process of pumping out the UNC-600 vacuum chamber. The simulation is carried out by the Monte Carlo method of statistical tests. It is shown that the pressure value in every liner of the chamber may be determined from the pressure in the pump branch pipe, which in turn is determined by the discharge current of this pump. It is therefore possible to refine the working pressure in the ion guide of the UNC-600 vacuum chamber [ru

  7. Introduction to Statistics - eNotes

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Møller, Jan Kloppenborg; Andersen, Elisabeth Wreford

    2015-01-01

    Online textbook used in the introductory statistics courses at DTU. It provides a basic introduction to applied statistics for engineers. The necessary elements from probability theory are introduced (stochastic variable, density and distribution function, mean and variance, etc.) and thereafter the most basic statistical analysis methods are presented: confidence bands, hypothesis testing, simulation, simple and multiple regression, ANOVA and analysis of contingency tables. Examples with the software R are included for all presented theory and methods.

  8. Some challenges with statistical inference in adaptive designs.

    Science.gov (United States)

    Hung, H M James; Wang, Sue-Jane; Yang, Peiling

    2014-01-01

    Adaptive designs have generated a great deal of attention to clinical trial communities. The literature contains many statistical methods to deal with added statistical uncertainties concerning the adaptations. Increasingly encountered in regulatory applications are adaptive statistical information designs that allow modification of sample size or related statistical information and adaptive selection designs that allow selection of doses or patient populations during the course of a clinical trial. For adaptive statistical information designs, a few statistical testing methods are mathematically equivalent, as a number of articles have stipulated, but arguably there are large differences in their practical ramifications. We pinpoint some undesirable features of these methods in this work. For adaptive selection designs, the selection based on biomarker data for testing the correlated clinical endpoints may increase statistical uncertainty in terms of type I error probability, and most importantly the increased statistical uncertainty may be impossible to assess.

  9. Investigating the Investigative Task: Testing for Skewness--An Investigation of Different Test Statistics and Their Power to Detect Skewness

    Science.gov (United States)

    Tabor, Josh

    2010-01-01

    On the 2009 AP® Statistics Exam, students were asked to create a statistic to measure skewness in a distribution. This paper explores several of the most popular student responses and evaluates which statistic performs best when sampling from various skewed populations. (Contains 8 figures, 3 tables, and 4 footnotes.)

  10. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    The Poisson distribution is a discrete distribution for count data, and it has one parameter that defines both the mean and the variance. Poisson regression therefore assumes that the mean and variance are equal (equidispersion). Nonetheless, in some cases count data do not satisfy this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, consequently, to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution. If there is over-dispersion, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a single global model for all locations. On the other hand, each location has different geographic, social, cultural and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimates of the GWBPIGR model are obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing of the GWBPIGR model is carried out by the Maximum Likelihood Ratio Test (MLRT) method.
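
    The record describes geographically weighted estimation only in general terms. As a small illustration of the weighting step that makes each local model different, the sketch below computes Gaussian kernel weights for every location relative to one regression point; the coordinates and bandwidth are hypothetical and the BPIGR likelihood itself is not implemented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical location coordinates (e.g. projected easting/northing in km).
coords = rng.uniform(0, 100, size=(50, 2))
bandwidth = 25.0                      # fixed Gaussian kernel bandwidth (km), assumed

def gwr_weights(coords, focal_index, bandwidth):
    """Gaussian kernel weights of all observations w.r.t. one regression point."""
    d = np.linalg.norm(coords - coords[focal_index], axis=1)   # Euclidean distances
    return np.exp(-0.5 * (d / bandwidth) ** 2)

# In a GWR-type model these weights would enter the local (here: BPIGR) likelihood,
# so that nearby observations influence the local parameter estimates the most.
w = gwr_weights(coords, focal_index=0, bandwidth=bandwidth)
print("weight of the focal location itself:", w[0])            # always 1.0
print("five smallest weights:", np.sort(w)[:5].round(4))
```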

  11. Statistical Power in Meta-Analysis

    Science.gov (United States)

    Liu, Jin

    2015-01-01

    Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…

  12. Statistical and extra-statistical considerations in differential item functioning analyses

    Directory of Open Access Journals (Sweden)

    G. K. Huysamen

    2004-10-01

    Full Text Available This article briefly describes the main procedures for performing differential item functioning (DIF) analyses and points out some of the statistical and extra-statistical implications of these methods. Research findings on the sources of DIF, including those associated with translated tests, are reviewed. As DIF analyses are oblivious of correlations between a test and relevant criteria, the elimination of differentially functioning items does not necessarily improve predictive validity or reduce any predictive bias. The implications of the results of past DIF research for test development in the multilingual and multi-cultural South African society are considered.

  13. Statistical modeling of urban air temperature distributions under different synoptic conditions

    Science.gov (United States)

    Beck, Christoph; Breitner, Susanne; Cyrys, Josef; Hald, Cornelius; Hartz, Uwe; Jacobeit, Jucundus; Richter, Katja; Schneider, Alexandra; Wolf, Kathrin

    2015-04-01

    Within urban areas air temperature may vary distinctly between different locations. These intra-urban air temperature variations partly reach magnitudes that are relevant with respect to human thermal comfort. Therefore, and also taking into account potential interrelations with other health-related environmental factors (e.g. air quality), it is important to estimate spatial patterns of intra-urban air temperature distributions that may be incorporated into urban planning processes. In this contribution we present an approach to estimate spatial temperature distributions in the urban area of Augsburg (Germany) by means of statistical modeling. At 36 locations in the urban area of Augsburg air temperatures have been measured with high temporal resolution (4 min.) since December 2012. These 36 locations represent different typical urban land use characteristics in terms of varying percentage coverages of different land cover categories (e.g. impervious, built-up, vegetated). Percentage coverages of these land cover categories have been extracted from different sources (Open Street Map, European Urban Atlas, Urban Morphological Zones) for regular grids of varying size (50, 100, 200 meter horizontal resolution) for the urban area of Augsburg. It is well known from numerous studies that land use characteristics have a distinct influence on air temperature, as well as on other climatic variables, at a certain location. Therefore air temperatures at the 36 locations are modeled utilizing land use characteristics (percentage coverages of land cover categories) as predictor variables in stepwise multiple regression models and in Random Forest based model approaches. After model evaluation via cross-validation, appropriate statistical models are applied to gridded land use data to derive spatial urban air temperature distributions. Varying models are tested and applied for different seasons and times of the day and also for different synoptic conditions (e.g. clear and calm
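
    As a rough sketch of the modeling step described here (predicting station air temperature from land-cover percentage coverages and evaluating by cross-validation), the following uses scikit-learn on synthetic data. The predictors, sample size and model settings are invented for illustration and do not reproduce the Augsburg analysis.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# 36 hypothetical stations with percentage coverages of three land cover classes.
n_stations = 36
cover = rng.dirichlet(alpha=[2, 2, 2], size=n_stations)   # impervious, built-up, vegetated
impervious, built, vegetated = cover.T

# Synthetic "measured" mean air temperature: warmer where sealed, cooler where green.
temperature = (20 + 3 * impervious + 2 * built - 2.5 * vegetated
               + rng.normal(scale=0.3, size=n_stations))

models = {
    "linear regression": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=300, random_state=0),
}
for name, model in models.items():
    # 5-fold cross-validated RMSE as a simple model-evaluation criterion.
    scores = cross_val_score(model, cover, temperature,
                             scoring="neg_root_mean_squared_error", cv=5)
    print(f"{name:>18}: CV RMSE = {-scores.mean():.2f} K")
```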

  14. Goodness of Fit Test and Test of Independence by Entropy

    OpenAIRE

    M. Sharifdoost; N. Nematollahi; E. Pasha

    2009-01-01

    To test whether a set of data has a specific distribution or not, we can use the goodness of fit test. This test can be done with either the Pearson χ²-statistic or the likelihood ratio statistic G², which are asymptotically equal, and also by using the Kolmogorov-Smirnov statistic in continuous distributions. In this paper, we introduce a new test statistic for the goodness of fit test which is based on entropy distance, and which can be applied for large sample sizes...
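
    The entropy-based statistic proposed in the paper is not given in this record, but the two classical statistics it is compared against are standard. The sketch below computes the Pearson χ² and the likelihood-ratio G² statistics for the same observed counts with SciPy; the counts and the uniform null hypothesis are made up.

```python
from scipy import stats

# Hypothetical observed counts in six categories and a uniform null hypothesis.
observed = [18, 22, 16, 25, 20, 19]
expected = [sum(observed) / len(observed)] * len(observed)

# Pearson chi-square statistic.
chi2 = stats.chisquare(observed, f_exp=expected)

# Likelihood-ratio statistic G^2 (the "log-likelihood" case of the
# Cressie-Read power divergence family); asymptotically equivalent to chi-square.
g2 = stats.power_divergence(observed, f_exp=expected, lambda_="log-likelihood")

print(f"Pearson chi2 = {chi2.statistic:.3f}, p = {chi2.pvalue:.3f}")
print(f"G^2          = {g2.statistic:.3f}, p = {g2.pvalue:.3f}")
```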

  15. A statistical approach to plasma profile analysis

    International Nuclear Information System (INIS)

    Kardaun, O.J.W.F.; McCarthy, P.J.; Lackner, K.; Riedel, K.S.

    1990-05-01

    A general statistical approach to the parameterisation and analysis of tokamak profiles is presented. The modelling of the profile dependence on both the radius and the plasma parameters is discussed, and pertinent, classical as well as robust, methods of estimation are reviewed. Special attention is given to statistical tests for discriminating between the various models, and to the construction of confidence intervals for the parameterised profiles and the associated global quantities. The statistical approach is shown to provide a rigorous approach to the empirical testing of plasma profile invariance. (orig.)

  16. Statistical comparative study on a combined radioiodine test and extended protirelin test and correlation with the common in vitro parameters of hyroid function

    International Nuclear Information System (INIS)

    Kraemer, H.A.

    1982-01-01

    Using the data of 339 patients, the following parameters of thyroid function were statistically evaluated: the in vitro parameters ET3U, TT4(D), FT4-index and PB-127I, and the radioiodine test with determination of PB-131I before i.v. injection of 400 μg protirelin (DHP) and 120 minutes after the injection. There was no correlation between the percentage change of the PB-131I level 120 min after protirelin (DHP) administration and the percentage change of the TSH level 30 min after protirelin (DTP1) administration. The accuracies of the in vitro parameters ET3U, TT4(D) and FT4-index on the one hand and the extended protirelin test on the other hand were compared. (orig./MG) [de

  17. Wilcoxon's signed-rank statistic: what null hypothesis and why it matters.

    Science.gov (United States)

    Li, Heng; Johnson, Terri

    2014-01-01

    In statistical literature, the term 'signed-rank test' (or 'Wilcoxon signed-rank test') has been used to refer to two distinct tests: a test for symmetry of distribution and a test for the median of a symmetric distribution, sharing a common test statistic. To avoid potential ambiguity, we propose to refer to those two tests by different names, as 'test for symmetry based on signed-rank statistic' and 'test for median based on signed-rank statistic', respectively. The utility of such terminological differentiation should become evident through our discussion of how those tests connect and contrast with sign test and one-sample t-test. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
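
    To make the contrast drawn here concrete, the sketch below runs the signed-rank test, the sign test and the one-sample t-test on the same simulated paired differences using SciPy; the data are arbitrary and serve only to show how the three procedures are invoked.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical paired differences (e.g. post minus pre measurements).
d = rng.normal(loc=0.4, scale=1.0, size=25)

# Wilcoxon signed-rank test: sensitive to a shift when the distribution of the
# differences is symmetric (the "test for median" reading discussed above).
w = stats.wilcoxon(d)

# Sign test: only uses the signs of the differences (via a binomial test).
n_pos = int(np.sum(d > 0))
n_nonzero = int(np.sum(d != 0))
sign = stats.binomtest(n_pos, n_nonzero, p=0.5)

# One-sample t-test: assumes approximately normal differences.
t = stats.ttest_1samp(d, popmean=0.0)

print(f"signed-rank: p = {w.pvalue:.3f}")
print(f"sign test  : p = {sign.pvalue:.3f}")
print(f"t-test     : p = {t.pvalue:.3f}")
```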

  18. Statistical inference a short course

    CERN Document Server

    Panik, Michael J

    2012-01-01

    A concise, easily accessible introduction to descriptive and inferential techniques. Statistical Inference: A Short Course offers a concise presentation of the essentials of basic statistics for readers seeking to acquire a working knowledge of statistical concepts, measures, and procedures. The author conducts tests on the assumptions of randomness and normality, and provides nonparametric methods for when parametric approaches might not work. The book also explores how to determine a confidence interval for a population median while also providing coverage of ratio estimation, randomness, and causal

  19. Resemblance profiles as clustering decision criteria: Estimating statistical power, error, and correspondence for a hypothesis test for multivariate structure.

    Science.gov (United States)

    Kilborn, Joshua P; Jones, David L; Peebles, Ernst B; Naar, David F

    2017-04-01

    Clustering data continues to be a highly active area of data analysis, and resemblance profiles are being incorporated into ecological methodologies as a hypothesis testing-based approach to clustering multivariate data. However, these new clustering techniques have not been rigorously tested to determine the performance variability based on the algorithm's assumptions or any underlying data structures. Here, we use simulation studies to estimate the statistical error rates for the hypothesis test for multivariate structure based on dissimilarity profiles (DISPROF). We concurrently tested a widely used algorithm that employs the unweighted pair group method with arithmetic mean (UPGMA) to estimate the proficiency of clustering with DISPROF as a decision criterion. We simulated unstructured multivariate data from different probability distributions with increasing numbers of objects and descriptors, and grouped data with increasing overlap, overdispersion for ecological data, and correlation among descriptors within groups. Using simulated data, we measured the resolution and correspondence of clustering solutions achieved by DISPROF with UPGMA against the reference grouping partitions used to simulate the structured test datasets. Our results highlight the dynamic interactions between dataset dimensionality, group overlap, and the properties of the descriptors within a group (i.e., overdispersion or correlation structure) that are relevant to resemblance profiles as a clustering criterion for multivariate data. These methods are particularly useful for multivariate ecological datasets that benefit from distance-based statistical analyses. We propose guidelines for using DISPROF as a clustering decision tool that will help future users avoid potential pitfalls during the application of methods and the interpretation of results.
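
    The DISPROF criterion itself is not shown in this record; the sketch below only illustrates the UPGMA step it is paired with, using SciPy's hierarchical clustering (method='average' is UPGMA) on a small synthetic multivariate dataset with Bray-Curtis dissimilarities. All data and the cluster count are arbitrary.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)

# Two synthetic groups of "sites" described by 10 non-negative descriptors
# (e.g. species abundances), the second group shifted to create structure.
group1 = rng.gamma(shape=2.0, scale=1.0, size=(15, 10))
group2 = rng.gamma(shape=2.0, scale=1.0, size=(15, 10)) + 2.0
X = np.vstack([group1, group2])

# Bray-Curtis dissimilarities, commonly used for ecological abundance data.
d = pdist(X, metric="braycurtis")

# UPGMA = unweighted pair group method with arithmetic mean = 'average' linkage.
Z = linkage(d, method="average")

# Cut the dendrogram into two clusters; with DISPROF, the decision of whether a
# split is supported would instead come from the dissimilarity-profile test.
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster labels:", labels)
```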

  20. TESTING MODELS OF MAGNETIC FIELD EVOLUTION OF NEUTRON STARS WITH THE STATISTICAL PROPERTIES OF THEIR SPIN EVOLUTIONS

    International Nuclear Information System (INIS)

    Zhang Shuangnan; Xie Yi

    2012-01-01

    We test models for the evolution of neutron star (NS) magnetic fields (B). Our model for the evolution of the NS spin is taken from an analysis of pulsar timing noise presented by Hobbs et al. We first test the standard model of a pulsar's magnetosphere in which B does not change with time and magnetic dipole radiation is assumed to dominate the pulsar's spin-down. We find that this model fails to predict both the magnitudes and signs of the second derivatives of the spin frequencies (ν-double dot). We then construct a phenomenological model of the evolution of B, which contains a long-term decay (LTD) modulated by short-term oscillations; a pulsar's spin is thus modified by its B-evolution. We find that an exponential LTD is not favored by the observed statistical properties of ν-double dot for young pulsars and fails to explain the fact that ν-double dot is negative for roughly half of the old pulsars. A simple power-law LTD can explain all the observed statistical properties of ν-double dot. Finally, we discuss some physical implications of our results for models of the B-decay of NSs and suggest that reliable determination of the true ages of many young NSs is needed in order to constrain further the physical mechanisms of their B-decay. Our model can be further tested with the measured evolutions of ν-dot and ν-double dot for an individual pulsar; the decay index, oscillation amplitude, and period can also be determined this way for the pulsar.

  1. A New Statistical Approach to Characterize Chemical-Elicited Behavioral Effects in High-Throughput Studies Using Zebrafish.

    Directory of Open Access Journals (Sweden)

    Guozhu Zhang

    Full Text Available Zebrafish have become an important alternative model for characterizing chemical bioactivity, partly due to the efficiency with which systematic, high-dimensional data can be generated. However, these new data present analytical challenges associated with scale and diversity. We developed a novel, robust statistical approach to characterize chemical-elicited effects in behavioral data from high-throughput screening (HTS) of all 1,060 Toxicity Forecaster (ToxCast™) chemicals across 5 concentrations at 120 hours post-fertilization (hpf). Taking advantage of the immense scale of data for a global view, we show that this new approach reduces bias introduced by extreme values yet allows for diverse response patterns that confound the application of traditional statistics. We have also shown that, as a summary measure of response for local tests of chemical-associated behavioral effects, it achieves a significant reduction in coefficient of variation compared to many traditional statistical modeling methods. This effective increase in signal-to-noise ratio augments statistical power and is observed across experimental periods (light/dark conditions) that display varied distributional response patterns. Finally, we integrated results with data from concomitant developmental endpoint measurements to show that appropriate statistical handling of HTS behavioral data can add important biological context that informs mechanistic hypotheses.

  2. Varying hemin concentrations affect Porphyromonas gingivalis strains differently.

    Science.gov (United States)

    Ohya, Manabu; Cueno, Marni E; Tamura, Muneaki; Ochiai, Kuniyasu

    2016-05-01

    Porphyromonas gingivalis requires heme to grow; however, heme availability and concentration in the periodontal pockets vary. Fluctuations in heme concentration may affect each P. gingivalis strain differently, but this has never been fully demonstrated. Here, we elucidated the effects of varying hemin concentrations in representative P. gingivalis strains. Throughout this study, representative P. gingivalis strains [FDC381 (type I), MPWIb-01 (type Ib), TDC60 (type II), ATCC49417 (type III), W83 (type IV), and HNA99 (type V)] were used and grown for 24 h in growth media under varying hemin concentrations (5×, 1×, 0.5×, 0.1×). Samples were lysed and protein standardized. Arg-gingipain (Rgp), H2O2, and superoxide dismutase (SOD) levels were subsequently measured. We focused our study on 24 h-grown strains, which excluded MPWIb-01 and HNA99. Rgp activity among the 4 remaining strains varied, with Rgp peaking at 1× for FDC381, 5× for TDC60, 0.5× for ATCC49417, and 5× and 0.5× for W83. With regards to H2O2 and SOD amounts: FDC381 had similar H2O2 amounts at all hemin concentrations while SOD levels varied; TDC60 had the lowest H2O2 amount at 1× while SOD levels became higher in relation to hemin concentration; ATCC49417 also had similar H2O2 amounts at all hemin concentrations while SOD levels were higher at 1× and 0.5×; and W83 had statistically similar H2O2 and SOD amounts regardless of hemin concentration. Our results show that variations in hemin concentration affect each P. gingivalis strain differently. Published by Elsevier Ltd.

  3. Information theory and statistics

    CERN Document Server

    Kullback, Solomon

    1968-01-01

    Highly useful text studies logarithmic measures of information and their application to testing statistical hypotheses. Includes numerous worked examples and problems. References. Glossary. Appendix. 1968 2nd, revised edition.

  4. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  5. Multi-disciplinary techniques for understanding time-varying space-based imagery

    Science.gov (United States)

    Casasent, D.; Sanderson, A.; Kanade, T.

    1984-06-01

    A multidisciplinary program for space-based image processing is reported. This project combines optical and digital processing techniques and pattern recognition, image understanding and artificial intelligence methodologies. Time change image processing was recognized as the key issue to be addressed. Three time change scenarios were defined based on the frame rate of the data change. This report details the recent research on: various statistical and deterministic image features, recognition of sub-pixel targets in time varying imagery, and 3-D object modeling and recognition.

  6. Beyond P Values and Hypothesis Testing: Using the Minimum Bayes Factor to Teach Statistical Inference in Undergraduate Introductory Statistics Courses

    Science.gov (United States)

    Page, Robert; Satake, Eiki

    2017-01-01

    While interest in Bayesian statistics has been growing in statistics education, the treatment of the topic is still inadequate in both textbooks and the classroom. Because so many fields of study lead to careers that involve a decision-making process requiring an understanding of Bayesian methods, it is becoming increasingly clear that Bayesian…

  7. Vector field statistical analysis of kinematic and force trajectories.

    Science.gov (United States)

    Pataky, Todd C; Robinson, Mark A; Vanrenterghem, Jos

    2013-09-27

    When investigating the dynamics of three-dimensional multi-body biomechanical systems it is often difficult to derive spatiotemporally directed predictions regarding experimentally induced effects. A paradigm of 'non-directed' hypothesis testing has emerged in the literature as a result. Non-directed analyses typically consist of ad hoc scalar extraction, an approach which substantially simplifies the original, highly multivariate datasets (many time points, many vector components). This paper describes a commensurately multivariate method as an alternative to scalar extraction. The method, called 'statistical parametric mapping' (SPM), uses random field theory to objectively identify field regions which co-vary significantly with the experimental design. We compared SPM to scalar extraction by re-analyzing three publicly available datasets: 3D knee kinematics, a ten-muscle force system, and 3D ground reaction forces. Scalar extraction was found to bias the analyses of all three datasets by failing to consider sufficient portions of the dataset, and/or by failing to consider covariance amongst vector components. SPM overcame both problems by conducting hypothesis testing at the (massively multivariate) vector trajectory level, with random field corrections simultaneously accounting for temporal correlation and vector covariance. While SPM has been widely demonstrated to be effective for analyzing 3D scalar fields, the current results are the first to demonstrate its effectiveness for 1D vector field analysis. It was concluded that SPM offers a generalized, statistically comprehensive solution to scalar extraction's over-simplification of vector trajectories, thereby making it useful for objectively guiding analyses of complex biomechanical systems. © 2013 Published by Elsevier Ltd. All rights reserved.
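
    Statistical parametric mapping proper requires random field theory corrections, which are not reproduced here. The sketch below only shows the first stage, computing a paired t-statistic at every node of two sets of synthetic 1D trajectories, so that the result is itself a trajectory of test statistics rather than a single extracted scalar; the data and the naive threshold are fabricated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_subjects, n_nodes = 12, 101            # e.g. 0-100% of stance phase

# Synthetic trajectories for two conditions; condition B differs mid-trajectory.
base = np.sin(np.linspace(0, np.pi, n_nodes))
cond_a = base + rng.normal(scale=0.2, size=(n_subjects, n_nodes))
cond_b = base + rng.normal(scale=0.2, size=(n_subjects, n_nodes))
cond_b[:, 40:60] += 0.3                  # localized experimental effect

# Node-wise paired t-statistic over the whole trajectory (the "vector field" view).
t_traj = stats.ttest_rel(cond_b, cond_a, axis=0).statistic

# A naive uncorrected threshold; SPM would replace this with a random-field-theory
# threshold that accounts for the temporal smoothness of the trajectories.
naive_threshold = stats.t.ppf(1 - 0.05 / 2, df=n_subjects - 1)
print("nodes exceeding the naive threshold:",
      np.where(np.abs(t_traj) > naive_threshold)[0])
```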

  8. Statistical and Conceptual Model Testing Geomorphic Principles through Quantification in the Middle Rio Grande River, NM.

    Science.gov (United States)

    Posner, A. J.

    2017-12-01

    The Middle Rio Grande River (MRG) traverses New Mexico from Cochiti to Elephant Butte reservoirs. Since the 1100s, cultivating and inhabiting the valley of this alluvial river has required various river training works. The mid-20th century saw a concerted effort to tame the river through channelization, Jetty Jacks, and dam construction. A challenge for river managers is to better understand the interactions between river training works, dam construction, and the geomorphic adjustments of a desert river driven by spring snowmelt and summer thunderstorms carrying water and large sediment inputs from upstream and ephemeral tributaries. Due to its importance to the region, a vast wealth of data exists for conditions along the MRG. The investigation presented herein builds upon previous efforts by combining hydraulic model results, digitized planforms, and stream gage records in various statistical and conceptual models in order to test our understanding of this complex system. Spatially continuous variables were clipped by a set of river cross section data that has been collected at decadal intervals since the early 1960s, creating a spatially homogeneous database upon which various statistical testing was implemented. Conceptual models relate forcing variables and response variables to estimate river planform changes. The developed database represents a unique opportunity to quantify and test geomorphic conceptual models under the unique characteristics of the MRG. The results of this investigation provide a spatially distributed characterization of planform variable changes, permitting managers to predict planform at a much higher resolution than previously available, and a better understanding of the relationship between flow regime and planform changes such as changes to longitudinal slope, sinuosity, and width. Lastly, data analysis and model interpretation led to the development of a new conceptual model for the impact of ephemeral tributaries in alluvial rivers.

  9. Biering-Sorensen test scores in coal miners

    Energy Technology Data Exchange (ETDEWEB)

    Tekin, Y.; Ortancil, O.; Ankarali, H.; Basaran, A.; Sarikaya, S.; Ozdolap, S. [Zonguldak Karaelmas University, Zonguldak (Turkey)

    2009-05-15

    The Biering-Sorensen test is an isometric back endurance test. Biering-Sorensen test scores have varied across different cultural and occupational groups. The aims of this study were to collect normative data on Biering-Sorensen holding times, to determine the discriminative ability of the Biering-Sorensen test in Turkish coal miners, and to examine the association between the Biering-Sorensen test result and functional disability. One hundred and fifty male coal miners participated in this study. Trunk extensor muscle strength was measured using the Biering-Sorensen test. The Oswestry disability index was used to measure the functional disability level of low back pain. The mean Biering-Sorensen holding time for the total subject group was 107.3 ± 22.5 s. The mean Biering-Sorensen holding times of the subjects with and without low back pain were 99.9 ± 19.8 and 128.6 ± 15.2 s, respectively. The difference between the subjects with and without low back pain was statistically significant (p < 0.001). There was a statistically significant negative correlation between the Oswestry functional disability score and the Biering-Sorensen holding time (R = -0.824, p < 0.001). Turkish coal miners have low mean back extensor endurance holding times. The Biering-Sorensen test had good discriminative ability in our study group. Trunk muscle strength has a significant effect on the disability level of low back pain. Thus trunk muscle endurance training exercise therapy may be effective for the reduction of disability in patients with low back pain.

  10. Practical Statistics for LHC Physicists: Descriptive Statistics, Probability and Likelihood (1/3)

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    These lectures cover those principles and practices of statistics that are most relevant for work at the LHC. The first lecture discusses the basic ideas of descriptive statistics, probability and likelihood. The second lecture covers the key ideas in the frequentist approach, including confidence limits, profile likelihoods, p-values, and hypothesis testing. The third lecture covers inference in the Bayesian approach. Throughout, real-world examples will be used to illustrate the practical application of the ideas. No previous knowledge is assumed.

  11. Nonparametric statistics for social and behavioral sciences

    CERN Document Server

    Kraska-MIller, M

    2013-01-01

    Introduction to Research in Social and Behavioral Sciences; Basic Principles of Research; Planning for Research; Types of Research Designs; Sampling Procedures; Validity and Reliability of Measurement Instruments; Steps of the Research Process; Introduction to Nonparametric Statistics; Data Analysis; Overview of Nonparametric Statistics and Parametric Statistics; Overview of Parametric Statistics; Overview of Nonparametric Statistics; Importance of Nonparametric Methods; Measurement Instruments; Analysis of Data to Determine Association and Agreement; Pearson Chi-Square Test of Association and Independence; Contingency

  12. TESTING TESTS ON ACTIVE GALACTIC NUCLEI MICROVARIABILITY

    International Nuclear Information System (INIS)

    De Diego, Jose A.

    2010-01-01

    Literature on optical and infrared microvariability in active galactic nuclei (AGNs) reflects a diversity of statistical tests and strategies to detect tiny variations in the light curves of these sources. Comparison between the results obtained using different methodologies is difficult, and the pros and cons of each statistical method are often badly understood or even ignored. Even worse, improperly tested methodologies are becoming more and more common, and biased results may be misleading with regard to the origin of the AGN microvariability. This paper intends to point future research on AGN microvariability toward the use of powerful and well-tested statistical methodologies, providing a reference for choosing the best strategy to obtain unbiased results. Light curve monitoring has been simulated for quasars and for reference and comparison stars. Changes for the quasar light curves include both Gaussian fluctuations and linear variations. Simulated light curves have been analyzed using χ² tests, F tests for variances, one-way analyses of variance and C-statistics. Statistical Type I and Type II errors, which indicate the robustness and the power of the tests, have been obtained in each case. One-way analyses of variance and χ² prove to be powerful and robust estimators for microvariations, while the C-statistic is not a reliable methodology and its use should be avoided.
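
    The comparison reported here can be mimicked on toy data. The sketch below applies two of the estimators discussed, a one-way ANOVA on binned observations and an F test of variances against a comparison star, to simulated differential light curves; magnitudes, noise levels and the injected variability are all invented and do not reproduce the paper's simulations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
n_obs = 60

# Simulated differential light curves: the comparison star is pure measurement
# noise, while the "quasar" also carries a slow, low-amplitude variation.
noise = 0.01
star = rng.normal(scale=noise, size=n_obs)
quasar = (0.015 * np.sin(np.linspace(0, 4 * np.pi, n_obs))
          + rng.normal(scale=noise, size=n_obs))

# One-way ANOVA on the quasar light curve: bin consecutive observations and test
# whether the bin means differ (i.e. variability during the monitoring session).
groups = np.split(quasar, 12)                      # 12 bins of 5 consecutive points
anova = stats.f_oneway(*groups)

# F test for variances: compare the quasar's variance with the comparison star's.
F = quasar.var(ddof=1) / star.var(ddof=1)
p_f = stats.f.sf(F, n_obs - 1, n_obs - 1)          # one-sided: excess variance only

print(f"one-way ANOVA:   F = {anova.statistic:.2f}, p = {anova.pvalue:.3g}")
print(f"variance F test: F = {F:.2f}, p = {p_f:.3g}")
```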

  13. Statistics available for site studies in registers and surveys at Statistics Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Haldorson, Marie [Statistics Sweden, Oerebro (Sweden)

    2000-03-01

    Statistics Sweden (SCB) has produced this report on behalf of the Swedish Nuclear Fuel and Waste Management Company (SKB), as part of the data to be used by SKB in conducting studies of potential sites. The report goes over the statistics obtainable from SCB in the form of registers and surveys. The purpose is to identify the variables that are available, and to specify their degree of geographical detail and the time series that are available. Chapter two describes the statistical registers available at SCB, registers that share the common feature that they provide total coverage, i.e. they contain all 'objects' of a given type, such as population, economic activities (e.g. from statements of employees' earnings provided to the tax authorities), vehicles, enterprises or real estate. SCB has exclusive responsibility for seven of the nine registers included in the chapter, while two registers are ordered by public authorities with statistical responsibilities. Chapter three describes statistical surveys that are conducted by SCB, with the exception of the National Forest Inventory, which is carried out by the Swedish University of Agricultural Sciences. In terms of geographical breakdown, the degree of detail in the surveys varies, but all provide some possibility of reporting data at lower than the national level. The level involved may be county, municipality, yield district, coastal district or category of enterprises, e.g. aquaculture. Six of the nine surveys included in the chapter have been ordered by public authorities with statistical responsibilities, while SCB has exclusive responsibility for the others. Chapter four presents an overview of the statistics on land use maintained by SCB. This chapter does not follow the same pattern as chapters two and three but instead gives a more general account. The conclusion can be drawn that there are good prospects that SKB can make use of SCB's data as background information or in other ways when

  14. Statistics available for site studies in registers and surveys at Statistics Sweden

    International Nuclear Information System (INIS)

    Haldorson, Marie

    2000-03-01

    Statistics Sweden (SCB) has produced this report on behalf of the Swedish Nuclear Fuel and Waste Management Company (SKB), as part of the data to be used by SKB in conducting studies of potential sites. The report goes over the statistics obtainable from SCB in the form of registers and surveys. The purpose is to identify the variables that are available, and to specify their degree of geographical detail and the time series that are available. Chapter two describes the statistical registers available at SCB, registers that share the common feature that they provide total coverage, i.e. they contain all 'objects' of a given type, such as population, economic activities (e.g. from statements of employees' earnings provided to the tax authorities), vehicles, enterprises or real estate. SCB has exclusive responsibility for seven of the nine registers included in the chapter, while two registers are ordered by public authorities with statistical responsibilities. Chapter three describes statistical surveys that are conducted by SCB, with the exception of the National Forest Inventory, which is carried out by the Swedish University of Agricultural Sciences. In terms of geographical breakdown, the degree of detail in the surveys varies, but all provide some possibility of reporting data at lower than the national level. The level involved may be county, municipality, yield district, coastal district or category of enterprises, e.g. aquaculture. Six of the nine surveys included in the chapter have been ordered by public authorities with statistical responsibilities, while SCB has exclusive responsibility for the others. Chapter four presents an overview of the statistics on land use maintained by SCB. This chapter does not follow the same pattern as chapters two and three but instead gives a more general account. The conclusion can be drawn that there are good prospects that SKB can make use of SCB's data as background information or in other ways when undertaking future

  15. Statistics available for site studies in registers and surveys at Statistics Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Haldorson, Marie [Statistics Sweden, Oerebro (Sweden)

    2000-03-01

    Statistics Sweden (SCB) has produced this report on behalf of the Swedish Nuclear Fuel and Waste Management Company (SKB), as part of the data to be used by SKB in conducting studies of potential sites. The report goes over the statistics obtainable from SCB in the form of registers and surveys. The purpose is to identify the variables that are available, and to specify their degree of geographical detail and the time series that are available. Chapter two describes the statistical registers available at SCB, registers that share the common feature that they provide total coverage, i.e. they contain all 'objects' of a given type, such as population, economic activities (e.g. from statements of employees' earnings provided to the tax authorities), vehicles, enterprises or real estate. SCB has exclusive responsibility for seven of the nine registers included in the chapter, while two registers are ordered by public authorities with statistical responsibilities. Chapter three describes statistical surveys that are conducted by SCB, with the exception of the National Forest Inventory, which is carried out by the Swedish University of Agricultural Sciences. In terms of geographical breakdown, the degree of detail in the surveys varies, but all provide some possibility of reporting data at lower than the national level. The level involved may be county, municipality, yield district, coastal district or category of enterprises, e.g. aquaculture. Six of the nine surveys included in the chapter have been ordered by public authorities with statistical responsibilities, while SCB has exclusive responsibility for the others. Chapter four presents an overview of the statistics on land use maintained by SCB. This chapter does not follow the same pattern as chapters two and three but instead gives a more general account. The conclusion can be drawn that there are good prospects that SKB can make use of SCB's data as background information or in other ways when undertaking future

  16. Study of selected phenotype switching strategies in time varying environment

    Energy Technology Data Exchange (ETDEWEB)

    Horvath, Denis, E-mail: horvath.denis@gmail.com [Centre of Interdisciplinary Biosciences, Institute of Physics, Faculty of Science, P.J. Šafárik University in Košice, Jesenná 5, 040 01 Košice (Slovakia); Brutovsky, Branislav, E-mail: branislav.brutovsky@upjs.sk [Department of Biophysics, Institute of Physics, P.J. Šafárik University in Košice, Jesenná 5, 040 01 Košice (Slovakia)

    2016-03-22

    Population heterogeneity plays an important role in many research problems, as well as in real-world ones. Population heterogeneity relates to the ability of a population to cope with environmental change (or uncertainty) and thereby avoid extinction. However, this ability is not always desirable, as exemplified by intratumor heterogeneity, which positively correlates with the development of resistance to therapy. The causes of population heterogeneity are therefore an intensively studied topic in biology and medicine. In this paper the evolution of a specific strategy of population diversification, phenotype switching, is studied at a conceptual level. The presented simulation model studies the evolution of a large population of asexual organisms in a time-varying environment represented by a stochastic Markov process. Each organism is equipped with a stochastic or nonlinear deterministic switching strategy realized by discrete-time models with evolvable parameters. We demonstrate that under rapidly varying exogenous conditions organisms operate in the vicinity of the bet-hedging strategy, while deterministic patterns become relevant as the environmental variations become less frequent. Statistical characterization of the steady-state regimes of the populations is done using the Hellinger and Kullback–Leibler functional distances and the Hamming distance. - Highlights: • The relation between phenotype switching and the environment is studied. • A Markov chain Monte Carlo based model is developed. • Stochastic and deterministic strategies of phenotype switching are utilized. • Statistical measures of the dynamic heterogeneity reveal universal properties. • The results extend to higher lattice dimensions.
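
    As a rough illustration of the kind of dynamics described above (not the authors' model), the sketch below, under assumed fitness values and switching rates, simulates a two-state Markov environment and compares the long-run growth of populations with different stochastic phenotype-switching rates; rapid environmental flipping favours intermediate, bet-hedging-like rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from the paper)
T = 5000                      # generations
p_env_flip = 0.4              # probability the two-state Markov environment flips
fitness = np.array([[1.5, 0.5],   # fitness[env, phenotype]; phenotype 0 thrives in env 0
                    [0.5, 1.5]])

def long_run_growth(switch_rate):
    """Log growth rate of a population using a fixed stochastic switching rate."""
    frac = np.array([0.5, 0.5])       # phenotype fractions
    env, log_growth = 0, 0.0
    for _ in range(T):
        if rng.random() < p_env_flip:         # Markovian environment change
            env = 1 - env
        w = fitness[env] * frac               # selection step
        log_growth += np.log(w.sum())
        frac = w / w.sum()
        # stochastic phenotype switching between the two phenotypes
        frac = (1 - switch_rate) * frac + switch_rate * frac[::-1]
    return log_growth / T

for rate in (0.0, 0.05, 0.2, 0.5):
    print(f"switching rate {rate:.2f}: long-run growth {long_run_growth(rate):+.3f}")
```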

  17. Study of selected phenotype switching strategies in time varying environment

    International Nuclear Information System (INIS)

    Horvath, Denis; Brutovsky, Branislav

    2016-01-01

    Population heterogeneity plays an important role in many research problems, as well as in real-world ones. Population heterogeneity relates to the ability of a population to cope with environmental change (or uncertainty) and thereby avoid extinction. However, this ability is not always desirable, as exemplified by intratumor heterogeneity, which positively correlates with the development of resistance to therapy. The causes of population heterogeneity are therefore an intensively studied topic in biology and medicine. In this paper the evolution of a specific strategy of population diversification, phenotype switching, is studied at a conceptual level. The presented simulation model studies the evolution of a large population of asexual organisms in a time-varying environment represented by a stochastic Markov process. Each organism is equipped with a stochastic or nonlinear deterministic switching strategy realized by discrete-time models with evolvable parameters. We demonstrate that under rapidly varying exogenous conditions organisms operate in the vicinity of the bet-hedging strategy, while deterministic patterns become relevant as the environmental variations become less frequent. Statistical characterization of the steady-state regimes of the populations is done using the Hellinger and Kullback–Leibler functional distances and the Hamming distance. - Highlights: • The relation between phenotype switching and the environment is studied. • A Markov chain Monte Carlo based model is developed. • Stochastic and deterministic strategies of phenotype switching are utilized. • Statistical measures of the dynamic heterogeneity reveal universal properties. • The results extend to higher lattice dimensions.

  18. Improving the Crossing-SIBTEST Statistic for Detecting Non-uniform DIF.

    Science.gov (United States)

    Chalmers, R Philip

    2018-06-01

    This paper demonstrates that, after applying a simple modification to Li and Stout's (Psychometrika 61(4):647-677, 1996) CSIBTEST statistic, an improved variant of the statistic could be realized. It is shown that this modified version of CSIBTEST has a more direct association with the SIBTEST statistic presented by Shealy and Stout (Psychometrika 58(2):159-194, 1993). In particular, the asymptotic sampling distributions and general interpretation of the effect size estimates are the same for SIBTEST and the new CSIBTEST. Given the more natural connection to SIBTEST, it is shown that Li and Stout's hypothesis testing approach is insufficient for CSIBTEST; thus, an improved hypothesis testing procedure is required. Based on the presented arguments, a new chi-squared-based hypothesis testing approach is proposed for the modified CSIBTEST statistic. Positive results from a modest Monte Carlo simulation study strongly suggest the original CSIBTEST procedure and randomization hypothesis testing approach should be replaced by the modified statistic and hypothesis testing method.

  19. Computation of the Molenaar Sijtsma Statistic

    Science.gov (United States)

    Andries van der Ark, L.

    The Molenaar Sijtsma statistic is an estimate of the reliability of a test score. In some special cases, computation of the Molenaar Sijtsma statistic requires provisional measures. These provisional measures have not been fully described in the literature, and we show that they have not been implemented in the software. We describe the required provisional measures so as to allow the computation of the Molenaar Sijtsma statistic for all data sets.

  20. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    Science.gov (United States)

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.

  1. Is There a Common Summary Statistical Process for Representing the Mean and Variance? A Study Using Illustrations of Familiar Items.

    Science.gov (United States)

    Yang, Yi; Tokita, Midori; Ishiguchi, Akira

    2018-01-01

    A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed.

  2. Polychronakos fractional statistics with a complex-valued parameter

    International Nuclear Information System (INIS)

    Rovenchak, Andrij

    2012-01-01

    A generalization of quantum statistics is proposed in a fashion similar to the suggestion of Polychronakos [Phys. Lett. B 365, 202 (1996)] with the parameter α varying between −1 (fermionic case) and +1 (bosonic case). However, unlike the original formulation, it is suggested that intermediate values are located on the unit circle in the complex plane. In doing so one can avoid the case α = 0 corresponding to the Boltzmann statistics, which is not a quantum one. The limits of α → +1 and α → −1 reproducing small deviations from the Bose and Fermi statistics, respectively, are studied in detail. The equivalence between the statistics parameter and a possible dissipative part of the excitation spectrum is established. The case of a non-conserving number of excitations is analyzed. It is defined from the condition that the real part of the chemical potential equals zero. Thermodynamic quantities of a model system of two-dimensional harmonic oscillators are calculated.
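
    For reference, the occupation numbers behind such an interpolation can be written in the standard Polychronakos form; the placement of intermediate values on the unit circle is sketched here under an assumed parametrization α = e^(iπν), as an illustration of the abstract's proposal rather than a quotation of it.

```latex
% Polychronakos-type occupation number: alpha = +1, -1 and 0 recover the
% Bose-Einstein, Fermi-Dirac and Boltzmann distributions, respectively.
\[
  n_i(\alpha) = \frac{1}{e^{\beta(\varepsilon_i - \mu)} - \alpha},
  \qquad
  \alpha = e^{i\pi\nu}, \quad 0 \le \nu \le 1 ,
\]
% so that \nu \to 0 gives the bosonic limit, \nu \to 1 the fermionic limit,
% and the non-quantum Boltzmann point \alpha = 0 is never crossed.
```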

  3. Statistical data fusion for cross-tabulation

    NARCIS (Netherlands)

    Kamakura, W.A.; Wedel, M.

    The authors address the situation in which a researcher wants to cross-tabulate two sets of discrete variables collected in independent samples, but a subset of the variables is common to both samples. The authors propose a statistical data-fusion model that allows for statistical tests of

  4. Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science.

    Science.gov (United States)

    Veldkamp, Coosje L S; Nuijten, Michèle B; Dominguez-Alvarez, Linda; van Assen, Marcel A L M; Wicherts, Jelte M

    2014-01-01

    Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
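
    The consistency check described here can be sketched as follows; this is not the authors' automated procedure (a statcheck-style tool), only an illustration of recomputing a p-value from a reported test statistic and degrees of freedom.

```python
from scipy import stats

def p_value_consistent(test, stat, df, reported_p, two_tailed=True, tol=0.0005):
    """Recompute the p-value implied by a reported test statistic and df,
    and flag reports that disagree with the stated p-value beyond rounding."""
    if test == "t":
        p = stats.t.sf(abs(stat), df) * (2 if two_tailed else 1)
    elif test == "F":
        p = stats.f.sf(stat, df[0], df[1])
    elif test == "chi2":
        p = stats.chi2.sf(stat, df)
    else:
        raise ValueError(f"unsupported test: {test}")
    return abs(p - reported_p) <= tol, p

# Example: "t(28) = 2.20, p = .04" -- the recomputed p is about .036, consistent within rounding
ok, recomputed = p_value_consistent("t", 2.20, 28, 0.04, tol=0.005)
print(ok, round(recomputed, 4))
```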

  5. SOCR: Statistics Online Computational Resource

    Directory of Open Access Journals (Sweden)

    Ivo D. Dinov

    2006-10-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

  6. Back to basics: an introduction to statistics.

    Science.gov (United States)

    Halfens, R J G; Meijers, J M M

    2013-05-01

    In the second in the series, Professor Ruud Halfens and Dr Judith Meijers give an overview of statistics, both descriptive and inferential. They describe the first principles of statistics, including some relevant inferential tests.

  7. Statistical methods for ranking data

    CERN Document Server

    Alvo, Mayer

    2014-01-01

    This book introduces advanced undergraduate, graduate students and practitioners to statistical methods for ranking data. An important aspect of nonparametric statistics is oriented towards the use of ranking data. Rank correlation is defined through the notion of distance functions and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, EM algorithm and factor analysis. This book deals with statistical methods used for analyzing such data and provides a novel and unifying approach for hypotheses testing. The techniques described in the book are illustrated with examples and the statistical software is provided on the authors’ website.

  8. Using the Δ3 statistic to test for missed levels in mixed sequence neutron resonance data

    International Nuclear Information System (INIS)

    Mulhall, Declan

    2009-01-01

    The Δ3(L) statistic is studied as a tool to detect missing levels in the neutron resonance data where two sequences are present. These systems are problematic because there is no level repulsion, and the resonances can be too close to resolve. Δ3(L) is a measure of the fluctuations in the number of levels in an interval of length L on the energy axis. The method used is tested on ensembles of mixed Gaussian orthogonal ensemble spectra, with a known fraction of levels (x%) randomly depleted, and can accurately return x. The accuracy of the method as a function of spectrum size is established. The method is used on neutron resonance data for 11 isotopes with either s-wave neutrons on odd-A isotopes, or p-wave neutrons on even-A isotopes. The method compares favorably with a maximum likelihood method applied to the level spacing distribution. Nuclear data ensembles were made from 20 isotopes in total, and their Δ3(L) statistics are discussed in the context of random matrix theory.
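
    A minimal numerical sketch of the Dyson–Mehta Δ3(L) statistic is given below, assuming an already unfolded spectrum (mean spacing of one); the window placement, grid resolution and Poisson comparison are illustrative simplifications, not the paper's procedure.

```python
import numpy as np

def delta3(levels, L, n_windows=200, grid=200):
    """Numerical Dyson-Mehta Delta_3(L): mean-square deviation of the level
    staircase N(E) from the best-fit straight line over windows of length L.
    `levels` is assumed to be an unfolded spectrum (mean spacing ~ 1)."""
    levels = np.sort(levels)
    rng = np.random.default_rng(1)
    starts = rng.uniform(levels[0], levels[-1] - L, n_windows)
    vals = []
    for e0 in starts:
        x = np.linspace(e0, e0 + L, grid)
        staircase = np.searchsorted(levels, x)          # N(E) evaluated on the grid
        a, b = np.polyfit(x, staircase, 1)              # best-fit straight line
        vals.append(np.mean((staircase - (a * x + b)) ** 2))
    return np.mean(vals)

# Toy check: a Poisson (uncorrelated) spectrum should give roughly L/15,
# noticeably larger than the logarithmic GOE behaviour.
poisson = np.cumsum(np.random.default_rng(2).exponential(1.0, 5000))
print(delta3(poisson, L=20.0), 20.0 / 15.0)
```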

  9. Statistical theory of dynamo

    Science.gov (United States)

    Kim, E.; Newton, A. P.

    2012-04-01

    One major problem in dynamo theory is the multi-scale nature of the MHD turbulence, which requires statistical theory in terms of probability distribution functions. In this contribution, we present the statistical theory of magnetic fields in a simplified mean field α-Ω dynamo model by varying the statistical property of alpha, including marginal stability and intermittency, and then utilize observational data of solar activity to fine-tune the mean field dynamo model. Specifically, we first present a comprehensive investigation into the effect of the stochastic parameters in a simplified α-Ω dynamo model. Through considering the manifold of marginal stability (the region of parameter space where the mean growth rate is zero), we show that stochastic fluctuations are conducive to dynamo action. Furthermore, by considering the cases of fluctuating alpha that are periodic and Gaussian coloured random noise with identical characteristic time-scales and fluctuating amplitudes, we show that the transition to dynamo is significantly facilitated for stochastic alpha with random noise. Moreover, we show that probability density functions (PDFs) of the growth-rate, magnetic field and magnetic energy can provide a wealth of useful information regarding the dynamo behaviour/intermittency. Finally, precise statistical properties of the dynamo, such as the temporal correlation and fluctuating amplitude, are found to depend on the distribution of the fluctuations of the stochastic parameters. We then use observations of solar activity to constrain parameters relating to the effect in stochastic α-Ω nonlinear dynamo models. This is achieved through performing a comprehensive statistical comparison by computing PDFs of solar activity from observations and from our simulation of the mean field dynamo model. The observational data that are used are the time history of solar activity inferred from C14 data in the past 11000 years on a long time scale and direct observations of the sun spot

  10. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
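
    The robustness experiment can be illustrated (though not reproduced) with OpenCV's stock HOG people detector and a small additive perturbation; the frame path and noise level below are placeholders, and the symbolic/statistical machinery of the paper is not part of this sketch.

```python
import cv2
import numpy as np

# Stock OpenCV HOG + linear SVM people detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.png")            # placeholder path to a test frame
assert frame is not None, "provide a test frame"

def count_people(img):
    rects, _ = hog.detectMultiScale(img, winStride=(8, 8), padding=(8, 8), scale=1.05)
    return len(rects)

# Perturb with noise small enough to be visually indistinguishable
noise = np.random.default_rng(0).normal(0, 2.0, frame.shape)
perturbed = np.clip(frame.astype(np.float64) + noise, 0, 255).astype(np.uint8)

print("detections (original): ", count_people(frame))
print("detections (perturbed):", count_people(perturbed))
```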

  11. Evaluation of undergraduate nursing students' attitudes towards statistics courses, before and after a course in applied statistics.

    Science.gov (United States)

    Hagen, Brad; Awosoga, Olu; Kellett, Peter; Dei, Samuel Ofori

    2013-09-01

    Undergraduate nursing students must often take a course in statistics, yet there is scant research to inform teaching pedagogy. The objectives of this study were to assess nursing students' overall attitudes towards statistics courses - including (among other things) overall fear and anxiety, preferred learning and teaching styles, and the perceived utility and benefit of taking a statistics course - before and after taking a mandatory course in applied statistics. The authors used a pre-experimental research design (a one-group pre-test/post-test research design), by administering a survey to nursing students at the beginning and end of the course. The study was conducted at a University in Western Canada that offers an undergraduate Bachelor of Nursing degree. Participants included 104 nursing students, in the third year of a four-year nursing program, taking a course in statistics. Although students only reported moderate anxiety towards statistics, student anxiety about statistics had dropped by approximately 40% by the end of the course. Students also reported a considerable and positive change in their attitudes towards learning in groups by the end of the course, a potential reflection of the team-based learning that was used. Students identified preferred learning and teaching approaches, including the use of real-life examples, visual teaching aids, clear explanations, timely feedback, and a well-paced course. Students also identified preferred instructor characteristics, such as patience, approachability, in-depth knowledge of statistics, and a sense of humor. Unfortunately, students only indicated moderate agreement with the idea that statistics would be useful and relevant to their careers, even by the end of the course. Our findings validate anecdotal reports on statistics teaching pedagogy, although more research is clearly needed, particularly on how to increase students' perceptions of the benefit and utility of statistics courses for their nursing

  12. Regionally-varying and regionally-uniform electricity pricing policies compared across four usage categories

    International Nuclear Information System (INIS)

    Cho, Seong-Hoon; Kim, Taeyoung; Kim, Hyun Jae; Park, Kihyun; Roberts, Roland K.

    2015-01-01

    The objective of our research is to predict how electricity demand varies spatially between status quo regionally-uniform electricity pricing and hypothetical regionally-varying electricity pricing across usage categories. We summarize the empirical results of a case study of electricity demand in South Korea with three key findings and their related implications. First, the price elasticities of electricity demand differ across usage categories. Specifically, electricity demands for manufacturing and retail uses are price inelastic and close to unit elastic, respectively, while those for agricultural and residential uses are not statistically significant. This information is important in designing energy policy, because higher electricity prices could reduce electricity demands for manufacturing and retail uses, resulting in slower growth in those sectors. Second, spatial spillovers in electricity demand vary across uses. Understanding the spatial structure of electricity demand provides useful information to energy policy makers for anticipating changes in demand across regions via regionally-varying electricity pricing for different uses. Third, simulation results suggest that spatial variations among electricity demands by usage category under a regionally-varying electricity-pricing policy differ from those under a regionally-uniform electricity-pricing policy. Differences in spatial changes between the policies provide information for developing a realistic regionally-varying electricity-pricing policy according to usage category. - Highlights: • We compare regionally-varying and regionally-uniform electricity pricing policies. • We summarize empirical results of a case study of electricity demand in South Korea. • We confirm that spatial spillovers in electricity demands vary across different uses. • We find positive spatial spillovers in the manufacturing and residential sectors. • Our methods help policy makers evaluate regionally-varying pricing

  13. RILEM technical committee 195-DTD recommendation for test methods for AD and TD of early age concrete Round Robin documentation report : program, test results and statistical evaluation

    CERN Document Server

    Bjøntegaard, Øyvind; Krauss, Matias; Budelmann, Harald

    2015-01-01

    This report presents the Round-Robin (RR) program and test results including a statistical evaluation of the RILEM TC195-DTD committee named “Recommendation for test methods for autogenous deformation (AD) and thermal dilation (TD) of early age concrete”. The task of the committee was to investigate the linear test set-up for AD and TD measurements (Dilation Rigs) in the period from setting to the end of the hardening phase some weeks after. These are the stress-inducing deformations in a hardening concrete structure subjected to restraint conditions. The main task was to carry out an RR program on testing of AD of one concrete at 20 °C isothermal conditions in Dilation Rigs. The concrete part materials were distributed to 10 laboratories (Canada, Denmark, France, Germany, Japan, The Netherlands, Norway, Sweden and USA), and in total 30 tests on AD were carried out. Some supporting tests were also performed, as well as a smaller RR on cement paste. The committee has worked out a test procedure recommenda...

  14. Stable statistical representations facilitate visual search.

    Science.gov (United States)

    Corbett, Jennifer E; Melcher, David

    2014-10-01

    Observers represent the average properties of object ensembles even when they cannot identify individual elements. To investigate the functional role of ensemble statistics, we examined how modulating statistical stability affects visual search. We varied the mean and/or individual sizes of an array of Gabor patches while observers searched for a tilted target. In "stable" blocks, the mean and/or local sizes of the Gabors were constant over successive displays, whereas in "unstable" baseline blocks they changed from trial to trial. Although there was no relationship between the context and the spatial location of the target, observers found targets faster (as indexed by faster correct responses and fewer saccades) as the global mean size became stable over several displays. Building statistical stability also facilitated scanning the scene, as measured by larger saccadic amplitudes, faster saccadic reaction times, and shorter fixation durations. These findings suggest a central role for peripheral visual information, creating context to free resources for detailed processing of salient targets and maintaining the illusion of visual stability.

  15. Design of durability test protocol for vehicular fuel cell systems operated in power-follow mode based on statistical results of on-road data

    Science.gov (United States)

    Xu, Liangfei; Reimer, Uwe; Li, Jianqiu; Huang, Haiyan; Hu, Zunyan; Jiang, Hongliang; Janßen, Holger; Ouyang, Minggao; Lehnert, Werner

    2018-02-01

    City buses using polymer electrolyte membrane (PEM) fuel cells are considered to be the most likely fuel cell vehicles to be commercialized in China. The technical specifications of the fuel cell systems (FCSs) these buses are equipped with will differ based on the powertrain configurations and vehicle control strategies, but can generally be classified into the power-follow and soft-run modes. Each mode imposes different levels of electrochemical stress on the fuel cells. Evaluating the aging behavior of fuel cell stacks under the conditions encountered in fuel cell buses requires new durability test protocols based on statistical results obtained during actual driving tests. In this study, we propose a systematic design method for fuel cell durability test protocols that correspond to the power-follow mode based on three parameters for different fuel cell load ranges. The powertrain configurations and control strategy are described herein, followed by a presentation of the statistical data for the duty cycles of FCSs in one city bus in the demonstration project. Assessment protocols are presented based on the statistical results using mathematical optimization methods, and are compared to existing protocols with respect to common factors, such as time at open circuit voltage and root-mean-square power.

  16. Kolmogorov-Smirnov statistical test for analysis of ZAP-70 expression in B-CLL, compared with quantitative PCR and IgV(H) mutation status.

    Science.gov (United States)

    Van Bockstaele, Femke; Janssens, Ann; Piette, Anne; Callewaert, Filip; Pede, Valerie; Offner, Fritz; Verhasselt, Bruno; Philippé, Jan

    2006-07-15

    ZAP-70 has been proposed as a surrogate marker for immunoglobulin heavy-chain variable region (IgV(H)) mutation status, which is known as a prognostic marker in B-cell chronic lymphocytic leukemia (CLL). The flow cytometric analysis of ZAP-70 suffers from difficulties in standardization and interpretation. We applied the Kolmogorov-Smirnov (KS) statistical test to make analysis more straightforward. We examined ZAP-70 expression by flow cytometry in 53 patients with CLL. Analysis was performed as initially described by Crespo et al. (New England J Med 2003; 348:1764-1775) and alternatively by application of the KS statistical test comparing T cells with B cells. Receiver-operating-characteristics (ROC)-curve analyses were performed to determine the optimal cut-off values for ZAP-70 measured by the two approaches. ZAP-70 protein expression was compared with ZAP-70 mRNA expression measured by a quantitative PCR (qPCR) and with the IgV(H) mutation status. Both flow cytometric analyses correlated well with the molecular technique and proved to be of equal value in predicting the IgV(H) mutation status. Applying the KS test is reproducible, simple, straightforward, and overcomes a number of difficulties encountered in the Crespo-method. The KS statistical test is an essential part of the software delivered with modern routine analytical flow cytometers and is well suited for analysis of ZAP-70 expression in CLL. (c) 2006 International Society for Analytical Cytology.
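
    A minimal sketch of the KS comparison between T cells and B cells, using synthetic fluorescence intensities, is shown below; the resulting D statistic is the per-patient readout for which a cut-off would then be chosen by ROC analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic log-fluorescence intensities for one patient (illustrative only)
t_cells = rng.normal(loc=3.0, scale=0.5, size=2000)   # ZAP-70 bright internal reference
b_cells = rng.normal(loc=2.4, scale=0.5, size=5000)   # CLL B cells

# Two-sample Kolmogorov-Smirnov comparison of the two distributions;
# a small D statistic means the B cells resemble the T cells (ZAP-70 positive).
result = stats.ks_2samp(t_cells, b_cells)
print(f"D = {result.statistic:.3f}, p = {result.pvalue:.2e}")
```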

  17. A generalization of Friedman's rank statistic

    NARCIS (Netherlands)

    Kroon, de J.; Laan, van der P.

    1983-01-01

    In this paper a very natural generalization of the two-way analysis of variance rank statistic of FRIEDMAN is given. The general distribution-free test procedure based on this statistic for the effect of J treatments in a random block design can be applied in general two-way layouts without
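
    For orientation, the classical Friedman test that this statistic generalizes is available in SciPy for a complete randomized block design; the data below are illustrative.

```python
import numpy as np
from scipy import stats

# Rows = blocks, columns = J treatments (illustrative data)
blocks = np.array([
    [7.0, 9.0, 8.0],
    [6.0, 5.0, 8.0],
    [9.0, 7.0, 6.0],
    [8.0, 5.0, 6.0],
    [6.0, 8.0, 7.0],
])

# Friedman's rank test for a treatment effect in a two-way (block x treatment) layout
stat, p = stats.friedmanchisquare(*blocks.T)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```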

  18. Comments on statistical issues in numerical modeling for underground nuclear test monitoring

    International Nuclear Information System (INIS)

    Nicholson, W.L.; Anderson, K.K.

    1993-01-01

    The Symposium concluded with prepared summaries by four experts in the involved disciplines. These experts made no mention of statistics and/or the statistical content of issues. The first author contributed an extemporaneous statement at the Symposium because there are important issues associated with conducting and evaluating numerical modeling that are familiar to statisticians and often treated successfully by them. This note expands upon these extemporaneous remarks

  19. Computationally efficient statistical differential equation modeling using homogenization

    Science.gov (United States)

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  20. Search Databases and Statistics

    DEFF Research Database (Denmark)

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

    having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database...... searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here....

  1. [The relationship between ischemic preconditioning-induced infarction size limitation and duration of test myocardial ischemia].

    Science.gov (United States)

    Blokhin, I O; Galagudza, M M; Vlasov, T D; Nifontov, E M; Petrishchev, N N

    2008-07-01

    Traditionally, the infarction size reduction produced by ischemic preconditioning is estimated at a single duration of test ischemia. This approach limits the understanding of the real anti-ischemic efficacy of ischemic preconditioning. The present study was performed in an in vivo rat model of regional myocardial ischemia-reperfusion and showed that the protective effect afforded by ischemic preconditioning progressively decreased as the duration of test ischemia was prolonged. There were no statistically significant differences in infarction size between control and preconditioned animals when the duration of test ischemia was increased to 1 hour. Preconditioning ensured a maximal infarction-limiting effect at test ischemia durations of 20 to 40 minutes.

  2. A behavioral asset pricing model with a time-varying second moment

    International Nuclear Information System (INIS)

    Chiarella, Carl; He Xuezhong; Wang, Duo

    2006-01-01

    We develop a simple behavioral asset pricing model with fundamentalists and chartists in order to study price behavior in financial markets when chartists estimate both conditional mean and variance by using a weighted averaging process. Through a stability, bifurcation, and normal form analysis, the market impact of the weighting process and time-varying second moment are examined. It is found that the fundamental price becomes stable (unstable) when the activities from both types of traders are balanced (unbalanced). When the fundamental price becomes unstable, the weighting process leads to different price dynamics, depending on whether the chartists act as either trend followers or contrarians. It is also found that a time-varying second moment of the chartists does not change the stability of the fundamental price, but it does influence the stability of the bifurcations. The bifurcation becomes stable (unstable) when the chartists are more (less) concerned about the market risk characterized by the time-varying second moment. Different routes to complicated price dynamics are also observed. The analysis provides an analytical foundation for the statistical analysis of the corresponding stochastic version of this type of behavioral model
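
    The chartists' weighted averaging process can be sketched, under the assumption of geometrically declining weights (the paper's exact recursion may differ), as a pair of recursive estimates of the conditional mean and second moment:

```python
import numpy as np

def chartist_estimates(prices, delta=0.85):
    """Recursive geometric-weight estimates of the conditional mean and
    variance of the price, as a stand-in for the chartists' weighting process."""
    m = prices[0]      # running mean estimate
    v = 0.0            # running second-moment (variance) estimate
    means, variances = [], []
    for p in prices[1:]:
        m = delta * m + (1 - delta) * p
        v = delta * v + (1 - delta) * (p - m) ** 2
        means.append(m)
        variances.append(v)
    return np.array(means), np.array(variances)

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))      # illustrative price path
m, v = chartist_estimates(prices)
print(m[-1], v[-1])
```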

  3. Statistical analysis of random pulse trains

    International Nuclear Information System (INIS)

    Da Costa, G.

    1977-02-01

    Some experimental and theoretical results concerning the statistical properties of optical beams formed by a finite number of independent pulses are presented. The considered waves (corresponding to each pulse) present important spatial variations of the illumination distribution in a cross-section of the beam, due to the time-varying random refractive index distribution in the active medium. Some examples of this kind of emission are: (a) Free-running ruby laser emission; (b) Mode-locked pulse trains; (c) Randomly excited nonlinear media

  4. Significant variables in the assessment of intelligence

    OpenAIRE

    Alves, Irai Cristina Boccato

    1998-01-01

    Some variables have shown a significant influence on the results of intelligence tests, which makes it necessary to consider and control them in studies using such tests. Among them we can highlight socioeconomic level, age, sex, and level of schooling. Socioeconomic level proved to be relevant in the Goodenough Test, the Raven Coloured Progressive Matrices, and the Columbia Mental Maturity Scale. As for the sex variable, significant differences were found...

  5. Application of pedagogy reflective in statistical methods course and practicum statistical methods

    Science.gov (United States)

    Julie, Hongki

    2017-08-01

    The courses Elementary Statistics, Statistical Methods, and Statistical Methods Practicum aim to equip Mathematics Education students with descriptive and inferential statistics. Students' understanding of descriptive and inferential statistics is important in the Mathematics Education Department, especially for those whose final project involves quantitative research. In quantitative research, students are required to present and describe quantitative data in an appropriate manner, to draw conclusions from their data, and to establish the relationships between the independent and dependent variables defined in their research. In fact, when students carried out final projects involving quantitative research, it was still not rare to find them making mistakes in the steps of drawing conclusions and errors in choosing the hypothesis testing procedure; as a result, they reached incorrect conclusions. This is a fatal mistake for those doing quantitative research. The implementation of reflective pedagogy in the teaching and learning process of the Statistical Methods and Statistical Methods Practicum courses yielded several outcomes, namely: 1. Twenty-two students passed the course and one student did not. 2. The highest grade was an A, achieved by 18 students. 3. According to all students, they could develop their critical stance and build care for one another through the learning process in this course. 4. All students agreed that, through the learning process they underwent in the course, they could build care for one another.

  6. Permutation statistical methods an integrated approach

    CERN Document Server

    Berry, Kenneth J; Johnston, Janis E

    2016-01-01

    This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...
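
    A minimal permutation test for a two-group difference in means illustrates the book's central point that the reference distribution is built from the data themselves rather than from a theoretical distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_p_value(x, y, n_perm=10_000):
    """Two-sided permutation test for a difference in means between x and y."""
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)       # add-one correction

x = rng.normal(0.0, 1.0, 20)
y = rng.normal(0.6, 1.0, 20)
print(permutation_p_value(x, y))
```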

  7. Exploiting the full power of temporal gene expression profiling through a new statistical test: Application to the analysis of muscular dystrophy data

    Directory of Open Access Journals (Sweden)

    Turk Rolf

    2006-04-01

    Background: The identification of biologically interesting genes in a temporal expression profiling dataset is challenging and complicated by high levels of experimental noise. Most statistical methods used in the literature do not fully exploit the temporal ordering in the dataset and are not suited to the case where temporal profiles are measured for a number of different biological conditions. We present a statistical test that makes explicit use of the temporal order in the data by fitting polynomial functions to the temporal profile of each gene and for each biological condition. A Hotelling T2-statistic is derived to detect the genes for which the parameters of these polynomials are significantly different from each other. Results: We validate the temporal Hotelling T2-test on muscular gene expression data from four mouse strains which were profiled at different ages: dystrophin-, beta-sarcoglycan and gamma-sarcoglycan deficient mice, and wild-type mice. The first three are animal models for different muscular dystrophies. Extensive biological validation shows that the method is capable of finding genes with temporal profiles significantly different across the four strains, as well as identifying potential biomarkers for each form of the disease. The added value of the temporal test compared to an identical test which does not make use of temporal ordering is demonstrated via a simulation study, and through confirmation of the expression profiles from selected genes by quantitative PCR experiments. The proposed method maximises the detection of the biologically interesting genes, whilst minimising false detections. Conclusion: The temporal Hotelling T2-test is capable of finding relatively small and robust sets of genes that display different temporal profiles between the conditions of interest. The test is simple, it can be used on gene expression data generated from any experimental design and for any number of conditions, and it
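
    The core of the test can be sketched as follows: fit a polynomial to each replicate's temporal profile and compare the coefficient vectors of two conditions with a two-sample Hotelling T2 statistic. The polynomial degree, replicate structure and F reference below are simplifying assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy import stats

def poly_coeffs(times, replicate_profiles, degree=2):
    """Fit a polynomial of the given degree to each replicate's temporal profile."""
    return np.array([np.polyfit(times, y, degree) for y in replicate_profiles])

def hotelling_t2(A, B):
    """Two-sample Hotelling T2 on the rows of A and B, with the usual F conversion."""
    n1, n2, p = len(A), len(B), A.shape[1]
    diff = A.mean(axis=0) - B.mean(axis=0)
    S = (((n1 - 1) * np.cov(A, rowvar=False) + (n2 - 1) * np.cov(B, rowvar=False))
         / (n1 + n2 - 2))                                  # pooled covariance
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S, diff)
    f = t2 * (n1 + n2 - p - 1) / (p * (n1 + n2 - 2))
    p_val = stats.f.sf(f, p, n1 + n2 - p - 1)
    return t2, p_val

# Illustrative data: one gene, two mouse strains, 4 replicates, 5 time points
rng = np.random.default_rng(0)
times = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
wild_type = rng.normal(2 + 0.1 * times, 0.2, size=(4, 5))
mutant    = rng.normal(2 + 0.4 * times, 0.2, size=(4, 5))
t2, p = hotelling_t2(poly_coeffs(times, wild_type), poly_coeffs(times, mutant))
print(f"T2 = {t2:.2f}, p = {p:.4f}")
```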

  8. Statistical methods for evaluating the attainment of cleanup standards

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, R.O.; Simpson, J.C.

    1992-12-01

    This document is the third volume in a series of volumes sponsored by the US Environmental Protection Agency (EPA), Statistical Policy Branch, that provide statistical methods for evaluating the attainment of cleanup standards at Superfund sites. Volume 1 (USEPA 1989a) provides sampling designs and tests for evaluating attainment of risk-based standards for soils and solid media. Volume 2 (USEPA 1992) provides designs and tests for evaluating attainment of risk-based standards for groundwater. The purpose of this third volume is to provide statistical procedures for designing sampling programs and conducting statistical tests to determine whether pollution parameters in remediated soils and solid media at Superfund sites attain site-specific reference-based standards. This document is written for individuals who may not have extensive training or experience with statistical methods. The intended audience includes EPA regional remedial project managers, Superfund-site potentially responsible parties, state environmental protection agencies, and contractors for these groups.
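
    One reference-based comparison of the kind covered by this volume can be sketched with the Wilcoxon rank sum (Mann–Whitney) test, asking whether remediated-area concentrations are shifted above reference-area concentrations; the data and one-sided framing below are illustrative, not the document's prescribed procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative soil concentration measurements (e.g. mg/kg)
reference_area  = rng.lognormal(mean=1.0, sigma=0.4, size=30)
remediated_area = rng.lognormal(mean=1.2, sigma=0.4, size=30)

# One-sided Wilcoxon rank sum test: are remediated-site concentrations
# shifted above the site-specific reference-based standard (the reference area)?
u, p = stats.mannwhitneyu(remediated_area, reference_area, alternative="greater")
print(f"U = {u:.1f}, p = {p:.4f}")
```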

  9. ArrayVigil: a methodology for statistical comparison of gene signatures using segregated-one-tailed (SOT) Wilcoxon's signed-rank test.

    Science.gov (United States)

    Khan, Haseeb Ahmad

    2005-01-28

    Due to their versatile diagnostic and prognostic fidelity, molecular signatures or fingerprints are anticipated to be the most powerful tools for cancer management in the near future. Notwithstanding the experimental advancements in microarray technology, methods for analyzing either whole arrays or gene signatures have not been firmly established. Recently, an algorithm, ArraySolver, has been reported by Khan for two-group comparison of microarray gene expression data using the two-tailed Wilcoxon signed-rank test. Most of the molecular signatures are composed of two sets of genes (hybrid signatures) wherein up-regulation of one set and down-regulation of the other set collectively define the purpose of a gene signature. Since the direction of a selected gene's expression (positive or negative) with respect to a particular disease condition is known, application of one-tailed statistics could be a more relevant choice. A novel method, ArrayVigil, is described for comparing hybrid signatures using the segregated-one-tailed (SOT) Wilcoxon signed-rank test, and the results are compared with integrated-two-tailed (ITT) procedures (SPSS and ArraySolver). ArrayVigil resulted in lower P values than those obtained from ITT statistics while comparing real data from four signatures.
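
    The segregated-one-tailed idea can be sketched by applying SciPy's Wilcoxon signed-rank test separately to the up- and down-regulated halves of a hybrid signature, each with the appropriate one-sided alternative; this is a re-implementation sketch with synthetic data, not the ArrayVigil code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Paired expression values (case vs. control) for a hybrid signature:
# 12 genes expected up-regulated and 12 expected down-regulated in the disease.
control_up, case_up = rng.normal(5, 1, 12), rng.normal(6, 1, 12)
control_dn, case_dn = rng.normal(5, 1, 12), rng.normal(4, 1, 12)

# Segregated one-tailed Wilcoxon signed-rank tests, one per gene set
_, p_up = stats.wilcoxon(case_up, control_up, alternative="greater")
_, p_dn = stats.wilcoxon(case_dn, control_dn, alternative="less")
print(f"up-regulated set:   p = {p_up:.4f}")
print(f"down-regulated set: p = {p_dn:.4f}")
```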

  10. Possible Solution to Publication Bias Through Bayesian Statistics, Including Proper Null Hypothesis Testing

    NARCIS (Netherlands)

    Konijn, Elly A.; van de Schoot, Rens; Winter, Sonja D.; Ferguson, Christopher J.

    2015-01-01

    The present paper argues that an important cause of publication bias resides in traditional frequentist statistics forcing binary decisions. An alternative approach through Bayesian statistics provides various degrees of support for any hypothesis allowing balanced decisions and proper null

  11. Statistical theory and inference

    CERN Document Server

    Olive, David J

    2014-01-01

    This text is for a one semester graduate course in statistical theory and covers minimal and complete sufficient statistics, maximum likelihood estimators, method of moments, bias and mean square error, uniform minimum variance estimators and the Cramer-Rao lower bound, an introduction to large sample theory, likelihood ratio tests and uniformly most powerful tests and the Neyman Pearson Lemma. A major goal of this text is to make these topics much more accessible to students by using the theory of exponential families. Exponential families, indicator functions and the support of the distribution are used throughout the text to simplify the theory. More than 50 "brand name" distributions are used to illustrate the theory with many examples of exponential families, maximum likelihood estimators and uniformly minimum variance unbiased estimators. There are many homework problems with over 30 pages of solutions.

  12. Time-varying surface electromyography topography as a prognostic tool for chronic low back pain rehabilitation.

    Science.gov (United States)

    Hu, Yong; Kwok, Jerry Weilun; Tse, Jessica Yuk-Hang; Luk, Keith Dip-Kei

    2014-06-01

    Nonsurgical rehabilitation therapy is a commonly used strategy to treat chronic low back pain (LBP). The selection of the most appropriate therapeutic options is still a big challenge in clinical practices. Surface electromyography (sEMG) topography has been proposed to be an objective assessment of LBP rehabilitation. The quantitative analysis of dynamic sEMG would provide an objective tool of prognosis for LBP rehabilitation. To evaluate the prognostic value of quantitative sEMG topographic analysis and to verify the accuracy of the performance of proposed time-varying topographic parameters for identifying the patients who have better response toward the rehabilitation program. A retrospective study of consecutive patients. Thirty-eight patients with chronic nonspecific LBP and 43 healthy subjects. The accuracy of the time-varying quantitative sEMG topographic analysis for monitoring LBP rehabilitation progress was determined by calculating the corresponding receiver-operating characteristic (ROC) curves. Physiologic measure was the sEMG during lumbar flexion and extension. Patients who suffered from chronic nonspecific LBP without the history of back surgery and any medical conditions causing acute exacerbation of LBP during the clinical test were enlisted to perform the clinical test during the 12-week physiotherapy (PT) treatment. Low back pain patients were classified into two groups: "responding" and "nonresponding" based on the clinical assessment. The responding group referred to the LBP patients who began to recover after the PT treatment, whereas the nonresponding group referred to some LBP patients who did not recover or got worse after the treatment. The results of the time-varying analysis in the responding group were compared with those in the nonresponding group. In addition, the accuracy of the analysis was analyzed through ROC curves. The time-varying analysis showed discrepancies in the root-mean-square difference (RMSD) parameters between the
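
    The ROC evaluation described can be sketched with scikit-learn: given a time-varying RMSD parameter per patient and the responding/nonresponding label, compute the ROC curve, its area, and a Youden-optimal cutoff; the data below are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)

# Synthetic RMSD parameter values: lower values in the responding group (illustrative)
responding    = rng.normal(0.8, 0.3, 20)
nonresponding = rng.normal(1.4, 0.3, 18)

labels = np.r_[np.ones_like(nonresponding), np.zeros_like(responding)]   # 1 = nonresponding
scores = np.r_[nonresponding, responding]

fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)                      # Youden's J gives an optimal cutoff
print(f"AUC = {auc(fpr, tpr):.2f}, optimal RMSD cutoff ~ {thresholds[best]:.2f}")
```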

  13. Statistical methods for astronomical data analysis

    CERN Document Server

    Chattopadhyay, Asis Kumar

    2014-01-01

    This book introduces “Astrostatistics” as a subject in its own right with rewarding examples, including work by the authors with galaxy and Gamma Ray Burst data to engage the reader. This includes a comprehensive blending of Astrophysics and Statistics. The first chapter’s coverage of preliminary concepts and terminologies for astronomical phenomenon will appeal to both Statistics and Astrophysics readers as helpful context. Statistics concepts covered in the book provide a methodological framework. A unique feature is the inclusion of different possible sources of astronomical data, as well as software packages for converting the raw data into appropriate forms for data analysis. Readers can then use the appropriate statistical packages for their particular data analysis needs. The ideas of statistical inference discussed in the book help readers determine how to apply statistical tests. The authors cover different applications of statistical techniques already developed or specifically introduced for ...

  14. Statistical application of groundwater monitoring data at the Hanford Site

    International Nuclear Information System (INIS)

    Chou, C.J.; Johnson, V.G.; Hodges, F.N.

    1993-09-01

    Effective use of groundwater monitoring data requires both statistical and geohydrologic interpretations. At the Hanford Site in south-central Washington state such interpretations are used for (1) detection monitoring, assessment monitoring, and/or corrective action at Resource Conservation and Recovery Act sites; (2) compliance testing for operational groundwater surveillance; (3) impact assessments at active liquid-waste disposal sites; and (4) cleanup decisions at Comprehensive Environmental Response Compensation and Liability Act sites. Statistical tests such as the Kolmogorov-Smirnov two-sample test are used to test the hypothesis that chemical concentrations from spatially distinct subsets or populations are identical within the uppermost unconfined aquifer. Experience at the Hanford Site in applying groundwater background data indicates that background must be considered as a statistical distribution of concentrations, rather than a single value or threshold. The use of a single numerical value as a background-based standard ignores important information and may result in excessive or unnecessary remediation. Appropriate statistical evaluation techniques include Wilcoxon rank sum test, Quantile test, ''hot spot'' comparisons, and Kolmogorov-Smirnov types of tests. Application of such tests is illustrated with several case studies derived from Hanford groundwater monitoring programs. To avoid possible misuse of such data, an understanding of the limitations is needed. In addition to statistical test procedures, geochemical, and hydrologic considerations are integral parts of the decision process. For this purpose a phased approach is recommended that proceeds from simple to the more complex, and from an overview to detailed analysis

  15. Testing for statistical discrimination in health care.

    Science.gov (United States)

    Balsa, Ana I; McGuire, Thomas G; Meredith, Lisa S

    2005-02-01

    To examine the extent to which doctors' rational reactions to clinical uncertainty ("statistical discrimination") can explain racial differences in the diagnosis of depression, hypertension, and diabetes. Main data are from the Medical Outcomes Study (MOS), a 1986 study conducted by RAND Corporation in three U.S. cities. The study compares the processes and outcomes of care for patients in different health care systems. Complementary data from National Health And Examination Survey III (NHANES III) and National Comorbidity Survey (NCS) are also used. Across three systems of care (staff health maintenance organizations, multispecialty groups, and solo practices), the MOS selected 523 health care clinicians. A representative cross-section (21,480) of patients was then chosen from a pool of adults who visited any of these providers during a 9-day period. We analyzed a subsample of the MOS data consisting of patients of white family physicians or internists (11,664 patients). We obtain variables reflecting patients' health conditions and severity, demographics, socioeconomic status, and insurance from the patients' screener interview (administered by MOS staff prior to the patient's encounter with the clinician). We used the reports made by the clinician after the visit to construct indicators of doctors' diagnoses. We obtained prevalence rates from NHANES III and NCS. We find evidence consistent with statistical discrimination for diagnoses of hypertension, diabetes, and depression. In particular, we find that if clinicians act like Bayesians, plausible priors held by the physician about the prevalence of the disease across racial groups could account for racial differences in the diagnosis of hypertension and diabetes. In the case of depression, we find evidence that race affects decisions through differences in communication patterns between doctors and white and minority patients. To contend effectively with inequities in health care, it is necessary to understand
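
    The "clinicians as Bayesians" argument can be made concrete with a worked Bayes-rule calculation: the same clinical signal combined with different prior prevalences yields different posterior probabilities, and hence different diagnosis rates, across groups. The sensitivity, specificity and priors below are illustrative, not estimates from the MOS data.

```python
def posterior(prior, sensitivity=0.8, specificity=0.8):
    """P(disease | suggestive signal) by Bayes' rule for a given prior prevalence."""
    evidence = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / evidence

# Identical clinical signal, different priors held about prevalence in two groups
for group, prior in [("group A", 0.10), ("group B", 0.25)]:
    print(f"{group}: prior {prior:.2f} -> posterior {posterior(prior):.2f}")
```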

  16. Improved air ventilation rate estimation based on a statistical model

    International Nuclear Information System (INIS)

    Brabec, M.; Jilek, K.

    2004-01-01

    A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle time-varying ventilation rate. This is a major advantage, compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements
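
    The state-space idea can be sketched with a scalar Kalman filter: during a decay period, successive differences of log CO concentration give noisy observations of the air exchange rate, which is modelled as a random walk. This is a simplified stand-in for the paper's model, which also handles source terms and concurrent radon measurements.

```python
import numpy as np

def kalman_ventilation(co, dt, q=1e-4, r=1e-2):
    """Scalar Kalman filter for a time-varying air exchange rate lambda_t.
    Observation: y_t = -(ln C_{t+1} - ln C_t) / dt  ~  lambda_t + noise (pure decay).
    State model: lambda_{t+1} = lambda_t + process noise (random walk)."""
    y = -np.diff(np.log(co)) / dt
    lam, P = y[0], 1.0                  # initial state estimate and its variance
    estimates = []
    for obs in y:
        P += q                          # predict: random-walk state
        K = P / (P + r)                 # Kalman gain
        lam += K * (obs - lam)          # update with the new observation
        P *= 1 - K
        estimates.append(lam)
    return np.array(estimates)

# Synthetic decay sampled every minute, with the true rate drifting from 0.5 to 1.0 per hour
rng = np.random.default_rng(0)
dt = 1 / 60
true_lam = np.linspace(0.5, 1.0, 120)
co = 1000 * np.exp(-np.cumsum(true_lam) * dt) * rng.lognormal(0, 0.001, 120)
print(kalman_ventilation(co, dt)[-5:])   # should hover near 1.0
```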

  17. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 2. Assessment of MCNP Statistical Analysis of keff Eigenvalue Convergence with an Analytical Criticality Verification Test Set

    International Nuclear Information System (INIS)

    Sood, Avnet; Forster, R. Arthur; Parsons, D. Kent

    2001-01-01

    Monte Carlo simulations of nuclear criticality eigenvalue problems are often performed by general purpose radiation transport codes such as MCNP. MCNP performs detailed statistical analysis of the criticality calculation and provides feedback to the user with warning messages, tables, and graphs. The purpose of the analysis is to provide the user with sufficient information to assess spatial convergence of the eigenfunction and thus the validity of the criticality calculation. As a test of this statistical analysis package in MCNP, analytic criticality verification benchmark problems have been used for the first time to assess the performance of the criticality convergence tests in MCNP. The MCNP statistical analysis capability has been recently assessed using the 75 multigroup criticality verification analytic problem test set. MCNP was verified with these problems at the 10^-4 to 10^-5 statistical error level using 40 000 histories per cycle and 2000 active cycles. In all cases, the final boxed combined keff answer was given with the standard deviation and three confidence intervals that contained the analytic keff. To test the effectiveness of the statistical analysis checks in identifying poor eigenfunction convergence, ten problems from the test set were deliberately run incorrectly using 1000 histories per cycle, 200 active cycles, and 10 inactive cycles. Six problems with large dominance ratios were chosen from the test set because they do not achieve the normal spatial mode in the beginning of the calculation. To further stress the convergence tests, these problems were also started with an initial fission source point 1 cm from the boundary thus increasing the likelihood of a poorly converged initial fission source distribution. The final combined keff confidence intervals for these deliberately ill-posed problems did not include the analytic keff value. In no case did a bad confidence interval go undetected. Warning messages were given signaling that
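
    For illustration only, the snippet below computes a plain mean and t-based confidence intervals from hypothetical active-cycle keff estimates; MCNP's actual combined estimators and convergence diagnostics are considerably more elaborate.

```python
# Simple mean and confidence intervals over synthetic active-cycle keff values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
keff_cycles = rng.normal(loc=0.997, scale=0.003, size=2000)   # hypothetical cycle estimates

mean = keff_cycles.mean()
sem = keff_cycles.std(ddof=1) / np.sqrt(keff_cycles.size)     # standard error of the mean
for conf in (0.68, 0.95, 0.99):
    lo, hi = stats.t.interval(conf, df=keff_cycles.size - 1, loc=mean, scale=sem)
    print(f"{conf:.0%} confidence interval: [{lo:.5f}, {hi:.5f}]")
print(f"combined keff = {mean:.5f} +/- {sem:.5f}")
```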

  18. Data-driven inference for the spatial scan statistic.

    Science.gov (United States)

    Almeida, Alexandre C L; Duarte, Anderson R; Duczmal, Luiz H; Oliveira, Fernando L P; Takahashi, Ricardo H C

    2011-08-02

    Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is proposed, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under the null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, regarding the correctness of the decision based on this inference. A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.
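
    The size-conditional inference idea can be mimicked on a toy problem. The sketch below uses a 1-D sequence of regions with contiguous windows as candidate clusters (a simplification of Kulldorff's scan on maps) and compares the usual Monte Carlo p-value with one computed only against null replications whose most likely cluster has the same size k.

```python
# Toy 1-D scan: Poisson counts per region, contiguous windows as candidate clusters.
import numpy as np

rng = np.random.default_rng(7)
n_regions, expected = 20, 5.0                         # same expected count per region

def scan(cases):
    """Return (max LLR-like score, window size) over contiguous excess-risk windows."""
    best, best_k = -np.inf, 0
    for k in range(1, n_regions + 1):
        for start in range(n_regions - k + 1):
            obs, exp = cases[start:start + k].sum(), expected * k
            if obs > exp:
                score = obs * np.log(obs / exp) - (obs - exp)   # Poisson LLR (inside term only)
                if score > best:
                    best, best_k = score, k
    return best, best_k

observed = rng.poisson(expected, n_regions)
observed[8:11] += rng.poisson(4, 3)                   # plant a hypothetical 3-region cluster
t_obs, k_obs = scan(observed)

null = [scan(rng.poisson(expected, n_regions)) for _ in range(999)]
usual_p = np.mean([t >= t_obs for t, _ in null])
same_k = [t for t, k in null if k == k_obs]
cond_p = np.mean([t >= t_obs for t in same_k]) if same_k else float("nan")
print(f"most likely cluster size k = {k_obs}; usual p = {usual_p:.3f}; size-conditional p = {cond_p:.3f}")
```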

  19. Basic statistical tools in research and data analysis

    Directory of Open Access Journals (Sweden)

    Zulfiqar Ali

    2016-01-01

    Full Text Available Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting of the research findings. The statistical analysis gives meaning to otherwise meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of the sample size estimation, power analysis and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.

  20. STATLIB, Interactive Statistics Program Library of Tutorial System

    International Nuclear Information System (INIS)

    Anderson, H.E.

    1986-01-01

    1 - Description of program or function: STATLIB is a conversational statistical program library developed in conjunction with a Sandia National Laboratories applied statistics course intended for practicing engineers and scientists. STATLIB is a group of 15 interactive, argument-free, statistical routines. Included are analysis of sensitivity tests; sample statistics for the normal, exponential, hypergeometric, Weibull, and extreme value distributions; three models of multiple regression analysis; x-y data plots; exact probabilities for RxC tables; n sets of m permuted integers in the range 1 to m; simple linear regression and correlation; K different random integers in the range m to n; and Fisher's exact test of independence for a 2 by 2 contingency table. Forty-five other subroutines in the library support the basic 15

  1. Accelerated testing statistical models, test plans, and data analysis

    CERN Document Server

    Nelson, Wayne B

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. ". . . a goldmine of knowledge on accelerated life testing principles and practices . . . one of the very few capable of advancing the science of reliability. It definitely belongs in every bookshelf on engineering." -Dev G.

  2. Hemophilia Data and Statistics

    Science.gov (United States)

    ... Data & Statistics ... genetic testing is done to diagnose hemophilia before birth. For the one-third ... rates and hospitalization rates for bleeding complications from hemophilia ...

  3. The Bayesian Score Statistic

    NARCIS (Netherlands)

    Kleibergen, F.R.; Kleijn, R.; Paap, R.

    2000-01-01

    We propose a novel Bayesian test under a (noninformative) Jeffreys' prior specification. We check whether the fixed scalar value of the so-called Bayesian Score Statistic (BSS) under the null hypothesis is a plausible realization from its known and standardized distribution under the alternative. Unlike

  4. Statistical Analysis for Test Papers with Software SPSS

    Institute of Scientific and Technical Information of China (English)

    张燕君

    2012-01-01

    Test paper evaluation is an important part of test management, and its results provide a significant basis for scientifically summarizing teaching and learning. Taking an English test paper from a high school students' monthly examination as its object, this paper focuses on interpreting SPSS output for item-level and whole-paper quantitative analysis. The analysis and evaluation of the paper provide feedback that teachers can use to check students' progress and adjust their teaching process.

  5. Using statistics to understand the environment

    CERN Document Server

    Cook, Penny A

    2000-01-01

    Using Statistics to Understand the Environment covers all the basic tests required for environmental practicals and projects and points the way to the more advanced techniques that may be needed in more complex research designs. Following an introduction to project design, the book covers methods to describe data, to examine differences between samples, and to identify relationships and associations between variables.Featuring: worked examples covering a wide range of environmental topics, drawings and icons, chapter summaries, a glossary of statistical terms and a further reading section, this book focuses on the needs of the researcher rather than on the mathematics behind the tests.

  6. A statistical approach to instrument calibration

    Science.gov (United States)

    Robert R. Ziemer; David Strauss

    1978-01-01

    Summary - It has been found that two instruments will yield different numerical values when used to measure identical points. A statistical approach is presented that can be used to approximate the error associated with the calibration of instruments. Included are standard statistical tests that can be used to determine if a number of successive calibrations of the...

  7. Análise de itens de uma prova de raciocínio estatístico Analysis of items of a statistical reasoning test

    Directory of Open Access Journals (Sweden)

    Claudette Maria Medeiros Vendramini

    2004-12-01

    Full Text Available This study analyzed the 18 multiple-choice questions of a test on basic concepts of statistics using classical test theory and modern (item response) theory. The test was taken by 325 undergraduate students, randomly selected from the areas of Humanities, Exact Sciences, and Health Sciences. The analysis indicated that the test is predominantly unidimensional and that the items are better fitted by the three-parameter model. The indexes of difficulty, discrimination, and biserial correlation show acceptable values. It is suggested that new items be added to the test in order to establish reliability and validity for the educational context and to reveal undergraduate students' statistical reasoning when reading representations of statistical data.

  8. Test of statistical models of the β-delayed neutron emission by application of the Monte Carlo method

    International Nuclear Information System (INIS)

    Ohm, H.

    1982-01-01

    Using the example of the delayed neutron spectrum of the 24 s precursor 137I, the statistical model is tested in view of its applicability. A computer code was developed which simulates delayed neutron spectra by the Monte Carlo method under the assumption that the transition probabilities of the β and the neutron decays obey the Porter-Thomas distribution, while the spacings of the neutron-emitting levels follow a Wigner distribution. Gamow-Teller β-transitions and first-forbidden β-transitions from the precursor nucleus to the emitting nucleus were considered. (orig./HSI) [de

  9. Perceived Statistical Knowledge Level and Self-Reported Statistical Practice Among Academic Psychologists

    Directory of Open Access Journals (Sweden)

    Laura Badenes-Ribera

    2018-06-01

    Full Text Available Introduction: Publications arguing against the null hypothesis significance testing (NHST) procedure and in favor of good statistical practices have increased. The most frequently mentioned alternatives to NHST are effect size statistics (ES), confidence intervals (CIs), and meta-analyses. A recent survey conducted in Spain found that academic psychologists have poor knowledge about effect size statistics, confidence intervals, and graphic displays for meta-analyses, which might lead to a misinterpretation of the results. In addition, it also found that, although the use of ES is becoming generalized, the same thing is not true for CIs. Finally, academics with greater knowledge about ES statistics presented a profile closer to good statistical practice and research design. Our main purpose was to analyze the extension of these results to a different geographical area through a replication study. Methods: For this purpose, we elaborated an on-line survey that included the same items as the original research, and we asked academic psychologists to indicate their level of knowledge about ES, CIs, and meta-analyses, and how they use them. The sample consisted of 159 Italian academic psychologists (54.09% women, mean age of 47.65 years). The mean number of years in the position of professor was 12.90 (SD = 10.21). Results: As in the original research, the results showed that, although the use of effect size estimates is becoming generalized, an under-reporting of CIs for ES persists. The most frequent ES statistics mentioned were Cohen's d and R²/η², which can be affected by outliers, non-normality, or violations of statistical assumptions. In addition, academics showed poor knowledge about meta-analytic displays (e.g., forest plot and funnel plot) and quality checklists for studies. Finally, academics with higher-level knowledge about ES statistics seem to have a profile closer to good statistical practices. Conclusions: Changing statistical practice is not

  10. Statistical Methods for Environmental Pollution Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, Richard O. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    1987-01-01

    The application of statistics to environmental pollution monitoring studies requires a knowledge of statistical analysis methods particularly well suited to pollution data. This book fills that need by providing sampling plans, statistical tests, parameter estimation procedures, and references to pertinent publications. Most of the statistical techniques are relatively simple, and examples, exercises, and case studies are provided to illustrate procedures. The book is logically divided into three parts. Chapters 1, 2, and 3 are introductory chapters. Chapters 4 through 10 discuss field sampling designs and Chapters 11 through 18 deal with a broad range of statistical analysis procedures. Some statistical techniques given here are not commonly seen in statistics books. For example, see methods for handling correlated data (Sections 4.5 and 11.12), for detecting hot spots (Chapter 10), and for estimating a confidence interval for the mean of a lognormal distribution (Section 13.2). Also, Appendix B lists a computer code that estimates and tests for trends over time at one or more monitoring stations using nonparametric methods (Chapters 16 and 17). Unfortunately, some important topics could not be included because of their complexity and the need to limit the length of the book. For example, only brief mention could be made of time series analysis using Box-Jenkins methods and of kriging techniques for estimating spatial and spatial-time patterns of pollution, although multiple references on these topics are provided. Also, no discussion of methods for assessing risks from environmental pollution could be included.

  11. A goodness of fit statistic for the geometric distribution

    NARCIS (Netherlands)

    J.A. Ferreira

    2003-01-01

    We propose a goodness of fit statistic for the geometric distribution and compare it in terms of power, via simulation, with the chi-square statistic. The statistic is based on the Lau-Rao theorem and can be seen as a discrete analogue of the total time on test statistic. The results
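
    The chi-square baseline mentioned above can be illustrated with a short sketch; the Lau-Rao-based statistic itself is not reproduced here, and the sample and cell pooling below are arbitrary choices.

```python
# Chi-square goodness-of-fit test for a geometric distribution (support 1, 2, ...).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.geometric(p=0.3, size=300)          # hypothetical sample

p_hat = 1.0 / x.mean()                      # ML estimate of p for support starting at 1
k_max = 8                                   # pool the right tail into one cell
edges = list(range(1, k_max + 1))
obs = np.array([(x == k).sum() for k in edges] + [(x > k_max).sum()])
probs = np.array([stats.geom.pmf(k, p_hat) for k in edges] + [stats.geom.sf(k_max, p_hat)])
exp = probs * x.size

chi2 = ((obs - exp) ** 2 / exp).sum()
df = len(obs) - 1 - 1                       # cells - 1 - number of estimated parameters
p_value = stats.chi2.sf(chi2, df)
print(f"chi-square = {chi2:.2f}, df = {df}, p = {p_value:.3f}")
```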

  12. Statistics of the Hubble diagram. I. Determination of q0 and luminosity evolution with application to quasars

    International Nuclear Information System (INIS)

    Turner, E.L.

    1979-01-01

    A rank statistic version of the magnitude-redshift q0 test is developed. It may be applied to the Hubble diagram of objects with an arbitrary and unknown luminosity function; in particular, the objects need not be "standard candles." Only the single restriction that the objects' luminosity function does not vary in functional form is placed on the sources' intrinsic properties. Density and/or luminosity evolution are taken into account. Corrections for sample selection biases are incorporated into the analysis. Tests for the presence of luminosity evolution are given. Methods for determining either q0 or the luminosity evolution when the other is a priori known are described. Application of these techniques to a sample of 119 3CR and 4C quasars leads to the following results: The radio Hubble diagram is consistent with all values of q0, suggesting that the quasar radio luminosity function is a featureless power law. The optical Hubble diagram indicates one of these possibilities: (1) the value of q0 is in the range 2-32, probably near 5; (2) the value of q0 is more reasonable and there is strong optical luminosity evolution [e.g., if q0 ≈ 0.05, then the characteristic optical luminosity scales like ≈ (1 + Z)^(7/3)]; or (3) the data are a low-probability (≤ 0.05) statistical fluctuation. The second interpretation is probably the most sensible one. Generalizations of the rank statistic magnitude-redshift test are suggested for application to a variety of extragalactic and stellar problems. Specific examples of applications to unorthodox cosmologies are given. Even for the unfavorable (very broad luminosity function) case of the optical quasar data, the rank statistic analysis is sensitive to relative variations in the distance-modulus-redshift relation as small as ≈ 0.4 mag for 1/2 ≤ Z ≤ 2

  13. International Conference on Robust Statistics

    CERN Document Server

    Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter

    2003-01-01

    Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.

  14. Propensity Score Analysis: An Alternative Statistical Approach for HRD Researchers

    Science.gov (United States)

    Keiffer, Greggory L.; Lane, Forrest C.

    2016-01-01

    Purpose: This paper aims to introduce matching in propensity score analysis (PSA) as an alternative statistical approach for researchers looking to make causal inferences using intact groups. Design/methodology/approach: An illustrative example demonstrated the varying results of analysis of variance, analysis of covariance and PSA on a heuristic…

  15. Application of nonparametric statistics to material strength/reliability assessment

    International Nuclear Information System (INIS)

    Arai, Taketoshi

    1992-01-01

    Advanced material technology requires a data base on a wide variety of material behavior, which needs to be established experimentally. Experiments are often limited in practice in terms of reproducibility or the range of test parameters. Statistical methods can be applied to quantify such uncertainties in the manner required from the reliability point of view. Statistical assessment involves determination of a most probable value and of maximum and/or minimum values as one-sided or two-sided confidence limits. A scatter of test data can be approximated by a theoretical distribution only if the goodness of fit satisfies a test criterion. Alternatively, nonparametric statistics (NPS), or distribution-free statistics, can be applied. Mathematical procedures in NPS are well established for dealing with most reliability problems; they handle only the order statistics of a sample. Mathematical formulas and some applications to engineering assessments are described. They include confidence limits of the median, population coverage of a sample, the required minimum sample size, and confidence limits of fracture probability. These applications demonstrate that nonparametric statistical estimation is useful for logical decision making when large uncertainty exists. (author)
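
    One of the listed items, distribution-free confidence limits for the median, can be sketched with order statistics and the binomial distribution; the data below are hypothetical.

```python
# Distribution-free confidence interval for the median from order statistics.
import numpy as np
from scipy import stats

x = np.sort(np.array([412., 389., 455., 478., 401., 430., 466., 395., 442., 509., 421., 437.]))
n = x.size

# Find symmetric order-statistic ranks (r, n+1-r) whose coverage is at least 95%.
target = 0.95
for r in range(1, n // 2 + 1):
    coverage = stats.binom.cdf(n - r, n, 0.5) - stats.binom.cdf(r - 1, n, 0.5)
    if coverage >= target:
        best_r, best_cov = r, coverage   # the largest such r gives the tightest interval

print(f"median estimate: {np.median(x):.1f}")
print(f"~{best_cov:.3f} distribution-free CI: [{x[best_r - 1]:.1f}, {x[n - best_r]:.1f}]")
```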

  16. Introduction to statistics

    CERN Multimedia

    CERN. Geneva

    2005-01-01

    The three lectures will present an introduction to statistical methods as used in High Energy Physics. As the time will be very limited, the course will seek mainly to define the important issues and to introduce the most widely used tools. Topics will include the interpretation and use of probability, estimation of parameters and testing of hypotheses.

  17. Introduction to statistics

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The three lectures will present an introduction to statistical methods as used in High Energy Physics. As the time will be very limited, the course will seek mainly to define the important issues and to introduce the most widely used tools. Topics will include the interpretation and use of probability, estimation of parameters and testing of hypotheses.

  18. Statistical problems in medical research

    African Journals Online (AJOL)

    STORAGESEVER

    2008-12-29

    Dec 29, 2008 ... medical research, there are some common problems in using statistical methodology which may result ... optimal combination of diagnostic tests for osteoporosis .... randomization used include stratification and minimization.

  19. Infants generalize representations of statistically segmented words

    Directory of Open Access Journals (Sweden)

    Katharine eGraf Estes

    2012-10-01

    Full Text Available The acoustic variation in language presents learners with a substantial challenge. To learn by tracking statistical regularities in speech, infants must recognize words across tokens that differ based on characteristics such as the speaker’s voice, affect, or the sentence context. Previous statistical learning studies have not investigated how these types of surface form variation affect learning. The present experiments used tasks tailored to two distinct developmental levels to investigate the robustness of statistical learning to variation. Experiment 1 examined statistical word segmentation in 11-month-olds and found that infants can recognize statistically segmented words across a change in the speaker’s voice from segmentation to testing. The direction of infants’ preferences suggests that recognizing words across a voice change is more difficult than recognizing them in a consistent voice. Experiment 2 tested whether 17-month-olds can generalize the output of statistical learning across variation to support word learning. The infants were successful in their generalization; they associated referents with statistically defined words despite a change in voice from segmentation to label learning. Infants’ learning patterns also indicate that they formed representations of across-word syllable sequences during segmentation. Thus, low probability sequences can act as object labels in some conditions. The findings of these experiments suggest that the units that emerge during statistical learning are not perceptually constrained, but rather are robust to naturalistic acoustic variation.

  20. A simple stochastic rainstorm generator for simulating spatially and temporally varying rainfall

    Science.gov (United States)

    Singer, M. B.; Michaelides, K.; Nichols, M.; Nearing, M. A.

    2016-12-01

    In semi-arid to arid drainage basins, rainstorms often control both water supply and flood risk to marginal communities of people. They also govern the availability of water to vegetation and other ecological communities, as well as spatial patterns of sediment, nutrient, and contaminant transport and deposition on local to basin scales. All of these landscape responses are sensitive to changes in climate that are projected to occur throughout western North America. Thus, it is important to improve characterization of rainstorms in a manner that enables statistical assessment of rainfall at spatial scales below that of existing gauging networks and the prediction of plausible manifestations of climate change. Here we present a simple, stochastic rainstorm generator that was created using data from a rich and dense network of rain gauges at the Walnut Gulch Experimental Watershed (WGEW) in SE Arizona, but which is applicable anywhere. We describe our methods for assembling pdfs of relevant rainstorm characteristics including total annual rainfall, storm area, storm center location, and storm duration. We also generate five fitted intensity-duration curves and apply a spatial rainfall gradient to generate precipitation at spatial scales below gauge spacing. The model then runs by Monte Carlo simulation in which a total annual rainfall is selected before we generate rainstorms until the annual precipitation total is reached. The procedure continues for decadal simulations. Thus, we keep track of the hydrologic impact of individual storms and the integral of precipitation over multiple decades. We first test the model using ensemble predictions until we reach statistical similarity to the input data from WGEW. We then employ the model to assess decadal precipitation under simulations of climate change in which we separately vary the distribution of total annual rainfall (trend in moisture) and the intensity-duration curves used for simulation (trends in storminess). We
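
    A minimal sketch of the storm-generation loop described above (not the authors' code): an annual total is drawn first, then storms are drawn from assumed distributions until that total is met. Every distribution and parameter here is a hypothetical placeholder rather than a value fitted to the WGEW gauges.

```python
# Stochastic rainstorm generator: draw an annual total, then storms until it is reached.
import numpy as np

rng = np.random.default_rng(11)

def simulate_year():
    annual_total = rng.gamma(shape=6.0, scale=50.0)        # target annual rainfall, mm
    storms, accumulated = [], 0.0
    while accumulated < annual_total:
        duration_h = rng.lognormal(mean=0.0, sigma=0.8)     # storm duration, hours
        intensity = rng.gamma(shape=2.0, scale=4.0)         # mean intensity, mm/h
        center = rng.uniform(0.0, 1.0, size=2)              # storm-center location (unit basin)
        radius = rng.uniform(0.05, 0.3)                     # storm areal extent
        depth = min(intensity * duration_h, annual_total - accumulated)
        storms.append({"depth_mm": depth, "duration_h": duration_h,
                       "center": center, "radius": radius})
        accumulated += depth
    return annual_total, storms

for year in range(3):                                       # decadal runs would loop longer
    total, storms = simulate_year()
    print(f"year {year}: target {total:6.1f} mm, {len(storms):3d} storms simulated")
```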

  1. Statistical analysis of questionnaires a unified approach based on R and Stata

    CERN Document Server

    Bartolucci, Francesco; Gnaldi, Michela

    2015-01-01

    Statistical Analysis of Questionnaires: A Unified Approach Based on R and Stata presents special statistical methods for analyzing data collected by questionnaires. The book takes an applied approach to testing and measurement tasks, mirroring the growing use of statistical methods and software in education, psychology, sociology, and other fields. It is suitable for graduate students in applied statistics and psychometrics and practitioners in education, health, and marketing.The book covers the foundations of classical test theory (CTT), test reliability, va

  2. Variability in source sediment contributions by applying different statistic test for a Pyrenean catchment.

    Science.gov (United States)

    Palazón, L; Navas, A

    2017-06-01

    Information on sediment contribution and transport dynamics from the contributing catchments is needed to develop management plans that tackle environmental problems related to the effects of fine sediment, such as reservoir siltation. In this respect, the fingerprinting technique is an indirect technique known to be valuable and effective for sediment source identification in river catchments. Large variability in sediment delivery was found in previous studies in the Barasona catchment (1509 km², Central Spanish Pyrenees). Simulation results with SWAT and fingerprinting approaches identified badlands and agricultural uses as the main contributors to sediment supply in the reservoir. In this study three statistical procedures for selecting the optimum composite fingerprint were compared: (1) the Kruskal-Wallis H-test alone, (2) the Kruskal-Wallis H-test followed by discriminant function analysis, and (3) principal components analysis followed by discriminant function analysis. Source contribution results differed between the assessed options, with the greatest differences observed for option #3, the two-step process of principal components analysis and discriminant function analysis. The characteristics of the solutions given by the applied mixing model and the conceptual understanding of the catchment showed that the most reliable solution was achieved using #2, the two-step process of the Kruskal-Wallis H-test and discriminant function analysis. The assessment showed the importance of the statistical procedure used to define the optimum composite fingerprint for sediment fingerprinting applications. Copyright © 2016 Elsevier Ltd. All rights reserved.
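
    The first stage of option #2, screening tracers with the Kruskal-Wallis H-test before discriminant function analysis, can be sketched as follows; tracer names and data are hypothetical and the discriminant step is omitted.

```python
# Kruskal-Wallis screening of candidate fingerprint tracers across source groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sources = {
    "badlands":    {"Cs137": rng.normal(2, 0.5, 15), "K": rng.normal(18, 3, 15)},
    "agriculture": {"Cs137": rng.normal(6, 1.0, 15), "K": rng.normal(19, 3, 15)},
    "forest":      {"Cs137": rng.normal(9, 1.5, 15), "K": rng.normal(18, 3, 15)},
}

selected = []
for tracer in ["Cs137", "K"]:
    groups = [sources[s][tracer] for s in sources]
    h_stat, p = stats.kruskal(*groups)
    print(f"{tracer}: H = {h_stat:.2f}, p = {p:.4f}")
    if p < 0.05:
        selected.append(tracer)
print("tracers passing the Kruskal-Wallis screen:", selected)
```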

  3. Efficient kinetic Monte Carlo method for reaction-diffusion problems with spatially varying annihilation rates

    Science.gov (United States)

    Schwarz, Karsten; Rieger, Heiko

    2013-03-01

    We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
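
    For contrast with the accelerated algorithm described above, a naive reference simulation is easy to write: a Brownian particle takes small diffusive hops and is annihilated with probability 1 - exp(-k(x)Δt) per step for a spatially varying rate k(x). All parameters below are illustrative.

```python
# Naive small-hop simulation of diffusion with a position-dependent annihilation rate.
import numpy as np

rng = np.random.default_rng(2)
D, dt, L = 1.0, 1e-3, 10.0                  # diffusion constant, time step, domain size

def k(x):
    """Spatially varying annihilation rate, higher near the domain center."""
    return 5.0 * np.exp(-((x - L / 2) ** 2) / 2.0)

def lifetime(x0, t_max=50.0):
    x, t = x0, 0.0
    while t < t_max:
        if rng.random() < 1.0 - np.exp(-k(x) * dt):   # annihilation in this step
            return t
        x += np.sqrt(2 * D * dt) * rng.normal()       # diffusive hop
        x = min(max(x, 0.0), L)                       # reflecting walls
        t += dt
    return t_max

lifetimes = [lifetime(x0=1.0) for _ in range(200)]
print(f"mean survival time from x0=1.0: {np.mean(lifetimes):.2f} (200 walkers)")
```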

  4. Increased Statistical Efficiency in a Lognormal Mean Model

    Directory of Open Access Journals (Sweden)

    Grant H. Skrepnek

    2014-01-01

    Full Text Available Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample's mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample's coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
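
    The efficiency question can be illustrated by comparing two simple estimators of a lognormal mean in simulation, the plain sample mean and the maximum-likelihood plug-in exp(μ̂ + σ̂²/2); the improved coefficient-of-variation-based estimator of the article is not reproduced here.

```python
# Bias and mean squared error of two lognormal-mean estimators under repeated sampling.
import numpy as np

rng = np.random.default_rng(9)
mu, sigma, n, reps = 1.0, 1.2, 30, 20000
true_mean = np.exp(mu + sigma ** 2 / 2)

naive, mle = [], []
for _ in range(reps):
    x = rng.lognormal(mu, sigma, n)
    naive.append(x.mean())
    logs = np.log(x)
    mle.append(np.exp(logs.mean() + logs.var(ddof=0) / 2))   # ML plug-in estimator

naive, mle = np.array(naive), np.array(mle)
for name, est in [("sample mean", naive), ("lognormal ML plug-in", mle)]:
    mse = np.mean((est - true_mean) ** 2)
    print(f"{name:22s} bias = {est.mean() - true_mean:7.3f}  MSE = {mse:8.3f}")
```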

  5. Taylor-series and Monte-Carlo-method uncertainty estimation of the width of a probability distribution based on varying bias and random error

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Uncertainties are typically assumed to be constant or a linear function of the measured value; however, this is generally not true. Particle image velocimetry (PIV) is one example of a measurement technique that has highly nonlinear, time varying local uncertainties. Traditional uncertainty methods are not adequate for the estimation of the uncertainty of measurement statistics (mean and variance) in the presence of nonlinear, time varying errors. Propagation of instantaneous uncertainty estimates into measured statistics is performed allowing accurate uncertainty quantification of time-mean and statistics of measurements such as PIV. It is shown that random errors will always elevate the measured variance, and thus turbulent statistics such as u'u'-bar. Within this paper, nonlinear, time varying errors are propagated from instantaneous measurements into the measured mean and variance using the Taylor-series method. With these results and knowledge of the systematic and random uncertainty of each measurement, the uncertainty of the time-mean, the variance and covariance can be found. Applicability of the Taylor-series uncertainty equations to time varying systematic and random errors and asymmetric error distributions are demonstrated with Monte-Carlo simulations. The Taylor-series uncertainty estimates are always accurate for uncertainties on the mean quantity. The Taylor-series variance uncertainty is similar to the Monte-Carlo results for cases in which asymmetric random errors exist or the magnitude of the instantaneous variations in the random and systematic errors is near the ‘true’ variance. However, the Taylor-series method overpredicts the uncertainty in the variance as the instantaneous variations of systematic errors are large or are on the same order of magnitude as the ‘true’ variance. (paper)
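
    The basic effect, random error inflating the measured variance while both error types propagate into the measured statistics, can be checked with a small Monte Carlo sketch; the error magnitudes below are arbitrary.

```python
# Monte Carlo propagation of systematic and random measurement errors into mean and variance.
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_trials = 500, 5000
true_mean, true_var = 10.0, 4.0

mean_est, var_est = [], []
for _ in range(n_trials):
    truth = rng.normal(true_mean, np.sqrt(true_var), n_samples)   # "true" fluctuating signal
    systematic = rng.normal(0.0, 0.05)                            # one bias per realization
    random_err = rng.normal(0.0, 0.5, n_samples)                  # instantaneous random error
    measured = truth + systematic + random_err
    mean_est.append(measured.mean())
    var_est.append(measured.var(ddof=1))

print(f"mean:     average estimate {np.mean(mean_est):.3f}, spread {np.std(mean_est):.3f}")
print(f"variance: average estimate {np.mean(var_est):.3f}  (true {true_var}, "
      f"inflated by random-error variance ~{0.5**2:.2f})")
```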

  6. Confiabilidade teste-reteste de aspectos da rede social no Estudo Pró-Saúde Test-retest reliability of measures of social network in the "Pró-Saúde" Study

    Directory of Open Access Journals (Sweden)

    Rosane Harter Griep

    2003-06-01

    Full Text Available OBJECTIVE: To evaluate the test-retest reliability of social network-related information in the Pró-Saúde Study. METHODS: Reliability was estimated in a test-retest study using a multidimensional questionnaire applied to a cohort of university employees. The same questionnaire was completed twice, two weeks apart, by 192 non-permanent university employees. Agreement was estimated using the kappa statistic (categorical variables), the weighted kappa statistic and log-linear models (ordinal variables), and the intraclass correlation coefficient (discrete variables). RESULTS: Agreement measures were above 0.70 for most variables. When the information was stratified by gender, age, and schooling, reliability showed no consistent pattern of variability. Log-linear modeling indicated that, for the ordinal variables in the study, the best-fitting model was "diagonal agreement plus linear-by-linear association". CONCLUSIONS: The high estimated reliability levels indicate that the measurement of the social network items was adequate for the characteristics investigated. Ongoing validation studies will complement the assessment of the quality of this information.

  7. Time-varying analysis of CO_2 emissions, energy consumption, and economic growth nexus: Statistical experience in next 11 countries

    International Nuclear Information System (INIS)

    Shahbaz, Muhammad; Mahalik, Mantu Kumar; Shah, Syed Hasanat; Sato, João Ricardo

    2016-01-01

    This paper detects the direction of causality among carbon dioxide (CO_2) emissions, energy consumption, and economic growth in Next 11 countries for the period 1972–2013. Changes in economic, energy, and environmental policies as well as regulatory and technological advancement over time, cause changes in the relationship among the variables. We use a novel approach i.e. time-varying Granger causality and find that economic growth is the cause of CO_2 emissions in Bangladesh and Egypt. Economic growth causes energy consumption in the Philippines, Turkey, and Vietnam but the feedback effect exists between energy consumption and economic growth in South Korea. In the cases of Indonesia and Turkey, we find the unidirectional time-varying Granger causality running from economic growth to CO_2 emissions thus validates the existence of the Environmental Kuznets Curve hypothesis, which indicates that economic growth is achievable at the minimal cost of environment. The paper gives new insights for policy makers to attain sustainable economic growth while maintaining long-run environmental quality.
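
    A rolling-window Granger-causality check gives a simple, illustrative stand-in for the time-varying analysis described above; the sketch below uses synthetic series and the standard statsmodels test rather than the authors' procedure.

```python
# Rolling-window Granger causality: does "growth" help predict "emissions" in each window?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(8)
T = 120
growth = rng.normal(0, 1, T)
emissions = np.zeros(T)
for t in range(2, T):                       # emissions respond to lagged growth
    emissions[t] = 0.4 * emissions[t - 1] + 0.5 * growth[t - 1] + rng.normal(0, 1)

window, lag = 40, 1
for start in range(0, T - window + 1, 20):
    # column 0 = dependent variable, column 1 = candidate cause
    data = np.column_stack([emissions[start:start + window], growth[start:start + window]])
    res = grangercausalitytests(data, maxlag=[lag], verbose=False)
    p = res[lag][0]["ssr_ftest"][1]
    print(f"window {start:3d}-{start + window:3d}: p(growth -> emissions) = {p:.3f}")
```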

  8. Statistics in the pharmacy literature.

    Science.gov (United States)

    Lee, Charlene M; Soin, Herpreet K; Einarson, Thomas R

    2004-09-01

    Research in statistical methods is essential for maintenance of high quality of the published literature. To update previous reports of the types and frequencies of statistical terms and procedures in research studies of selected professional pharmacy journals. We obtained all research articles published in 2001 in 6 journals: American Journal of Health-System Pharmacy, The Annals of Pharmacotherapy, Canadian Journal of Hospital Pharmacy, Formulary, Hospital Pharmacy, and Journal of the American Pharmaceutical Association. Two independent reviewers identified and recorded descriptive and inferential statistical terms/procedures found in the methods, results, and discussion sections of each article. Results were determined by tallying the total number of times, as well as the percentage, that each statistical term or procedure appeared in the articles. One hundred forty-four articles were included. Ninety-eight percent employed descriptive statistics; of these, 28% used only descriptive statistics. The most common descriptive statistical terms were percentage (90%), mean (74%), standard deviation (58%), and range (46%). Sixty-nine percent of the articles used inferential statistics, the most frequent being χ² (33%), Student's t-test (26%), Pearson's correlation coefficient r (18%), ANOVA (14%), and logistic regression (11%). Statistical terms and procedures were found in nearly all of the research articles published in pharmacy journals. Thus, pharmacy education should aim to provide current and future pharmacists with an understanding of the common statistical terms and procedures identified to facilitate the appropriate appraisal and consequential utilization of the information available in research articles.

  9. Application of Statistics in Engineering Technology Programs

    Science.gov (United States)

    Zhan, Wei; Fink, Rainer; Fang, Alex

    2010-01-01

    Statistics is a critical tool for robustness analysis, measurement system error analysis, test data analysis, probabilistic risk assessment, and many other fields in the engineering world. Traditionally, however, statistics is not extensively used in undergraduate engineering technology (ET) programs, resulting in a major disconnect from industry…

  10. F0 Characteristics of Newsreaders on Varied Emotional Texts in Tamil Language.

    Science.gov (United States)

    Gunasekaran, Nishanthi; Boominathan, Prakash; Seethapathy, Jayashree

    2017-12-26

    The objective of this study was to profile speaking F0 and its variations in newsreaders on varied emotional texts. This study has a prospective, case-control study design. Fifteen professional newsreaders and 15 non-newsreaders were the participants. The participants read the news bulletin that conveyed different emotions (shock, neutral, happy, and sad) in a habitual and "newsreading" voice. Speaking fundamental frequency (SFF) and F0 variations were extracted from 1620 tokens using Praat software (version 5.2.32) on the opening lines, headlines, news stories, and closing lines of each news item. Paired t test, independent t test, and Friedman test were used for statistical analysis. Both male and female newsreaders had significantly (P ≤ 0.05) higher SFFs and standard deviations (SDs) of SFF in newsreading voice than speaking voice. Female non-newsreaders demonstrated significantly higher SFF and SD of SFF in newsreading voice, whereas no significant differences were noticed in the frequency parameters for male non-newsreaders. No significant difference was noted in the frequency parameters of speaking and newsreading voice between male newsreaders and male non-newsreaders. A significant difference in the SD of SFF was noticed between female newsreaders and female non-newsreaders in newsreading voice. Female newsreaders had a higher frequency range in both speaking voice and newsreading voice when compared with non-newsreaders. F0 characteristics and frequency range determine the amount of frequency changes exercised by newsreaders while reading bulletins. This information is highly pedagogic for training voices in this profession. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  11. Statistics for High Energy Physics

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    The lectures emphasize the frequentist approach used for the Dark Matter search and the Higgs search, discovery, and measurements of its properties. An emphasis is put on hypothesis testing using the asymptotic formulae formalism and its derivation, and on the derivation of the trial factor formulae in one and two dimensions. Various test statistics and their applications are discussed. Some keywords: Profile Likelihood, Neyman-Pearson, Feldman-Cousins, Coverage, CLs, Nuisance Parameters Impact, Look Elsewhere Effect... Selected Bibliography: G. J. Feldman and R. D. Cousins, "A Unified approach to the classical statistical analysis of small signals," Phys. Rev. D 57, 3873 (1998). A. L. Read, "Presentation of search results: The CL(s) technique," J. Phys. G 28, 2693 (2002). G. Cowan, K. Cranmer, E. Gross and O. Vitells, "Asymptotic formulae for likelihood-based tests of new physics," Eur. Phys. J. C 71, 1554 (2011); Erratum: Eur. Phys. J. C 73...
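
    As a small numerical illustration of the asymptotic-formulae approach, the sketch below evaluates the discovery significance Z = sqrt(q0) for a single-bin counting experiment with known background, following the asymptotic result of Cowan, Cranmer, Gross, and Vitells (2011); the event counts are invented.

```python
# Asymptotic discovery significance for a counting experiment with known background b.
import numpy as np
from scipy import stats

def discovery_significance(n_obs, b):
    """Z = sqrt(q0), with q0 the profile-likelihood discovery test statistic."""
    if n_obs <= b:
        return 0.0
    q0 = 2.0 * (n_obs * np.log(n_obs / b) - (n_obs - b))   # -2 ln(profile likelihood ratio)
    return np.sqrt(q0)

for n_obs, b in [(12, 5.0), (25, 5.0), (110, 100.0)]:
    z = discovery_significance(n_obs, b)
    p = stats.norm.sf(z)                                    # one-sided p-value
    print(f"n = {n_obs:3d}, b = {b:5.1f}: Z = {z:.2f} sigma, p = {p:.2e}")
```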

  12. Statistical Characterization of 18650-Format Lithium-Ion Cell Thermal Runaway Energy Distributions

    Science.gov (United States)

    Walker, William Q.; Rickman, Steven; Darst, John; Finegan, Donal; Bayles, Gary; Darcy, Eric

    2017-01-01

    Effective thermal management systems, designed to handle the impacts of thermal runaway (TR) and to prevent cell-to-cell propagation, are key to safe operation of lithium-ion (Li-ion) battery assemblies. Critical factors for optimizing these systems include the total energy released during a single cell TR event and the fraction of the total energy that is released through the cell casing vs. through the ejecta material. A unique calorimeter was utilized to examine the TR behavior of a statistically significant number of 18650-format Li-ion cells with varying manufacturers, chemistries, and capacities. The calorimeter was designed to contain the TR energy in a format conducive to discerning the fractions of energy released through the cell casing vs. through the ejecta material. Other benefits of this calorimeter included the ability to rapidly test large quantities of cells and the intentional minimization of secondary combustion effects. High energy (270 Wh/kg) and moderate energy (200 Wh/kg) 18650 cells were tested. Some of the cells had an internal short circuit (ISC) device installed to aid in the examination of TR mechanisms under more realistic conditions. Other variations included cells with bottom vent (BV) features and cells with thin casings (0.22 mm). After combining the data gathered with the calorimeter, a statistical approach was used to examine the probability of certain TR behavior, and the associated energy distributions, as a function of capacity, venting features, cell casing thickness and temperature.

  13. Experimental statistics

    CERN Document Server

    Natrella, Mary Gibbons

    1963-01-01

    Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations

  14. Selection and reporting of statistical methods to assess reliability of a diagnostic test: Conformity to recommended methods in a peer-reviewed journal

    International Nuclear Information System (INIS)

    Park, Ji Eun; Sung, Yu Sub; Han, Kyung Hwa

    2017-01-01

    To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology is necessary

  15. Selection and reporting of statistical methods to assess reliability of a diagnostic test: Conformity to recommended methods in a peer-reviewed journal

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ji Eun; Sung, Yu Sub [Dept. of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Han, Kyung Hwa [Dept. of Radiology, Research Institute of Radiological Science, Yonsei University College of Medicine, Seoul (Korea, Republic of); and others

    2017-11-15

    To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology is necessary.

  16. A reanalysis of Lord's statistical treatment of football numbers

    NARCIS (Netherlands)

    Zand Scholten, A.; Borsboom, D.

    2009-01-01

    Stevens’ theory of admissible statistics [Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677-680] states that measurement levels should guide the choice of statistical test, such that the truth value of statements based on a statistical analysis remains invariant under

  17. Statistics of software vulnerability detection in certification testing

    Science.gov (United States)

    Barabanov, A. V.; Markov, A. S.; Tsirlov, V. L.

    2018-05-01

    The paper discusses practical aspects of introduction of the methods to detect software vulnerability in the day-to-day activities of the accredited testing laboratory. It presents the approval results of the vulnerability detection methods as part of the study of the open source software and the software that is a test object of the certification tests under information security requirements, including software for communication networks. Results of the study showing the allocation of identified vulnerabilities by types of attacks, country of origin, programming languages used in the development, methods for detecting vulnerability, etc. are given. The experience of foreign information security certification systems related to the detection of certified software vulnerabilities is analyzed. The main conclusion based on the study is the need to implement practices for developing secure software in the development life cycle processes. The conclusions and recommendations for the testing laboratories on the implementation of the vulnerability analysis methods are laid down.

  18. Statistics applied to the testing of cladding tubes

    International Nuclear Information System (INIS)

    Perdijon, J.

    1987-01-01

    Cladding tubes, either steel or zircaloy, are generally given a 100 % inspection through ultrasonic non-destructive testing. This inspection may be usefully complemented with an eddy current test, as the latter is not sensitive to the same defects as those typically detected by ultrasonic testing. Unfortunately, the two methods (as with other non-destructive tests) exhibit poor precision; this means that a flaw whose size is close to the rejection limit may be accepted or rejected. Currently, the rejection limit, i.e. the measurement value above which a tube is rejected, is generally determined by measuring a calibration tube at regular time intervals, and the signal of a given tube is compared to that of the most recently completed calibration. This measurement is thus subject to variations which can be attributed to an actual shift of the instrument adjustments as well as to poor precision. For this reason, monitoring the instrument adjustments using the so-called control chart method is proposed
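
    The control chart idea can be sketched in a few lines: establish 3-sigma limits from an initial in-control set of calibration measurements and flag later calibrations that fall outside them; all numbers below are synthetic.

```python
# Shewhart-style control chart for successive calibration measurements.
import numpy as np

rng = np.random.default_rng(6)
baseline = rng.normal(loc=50.0, scale=1.5, size=25)        # initial in-control calibration signals
center = baseline.mean()
sigma = baseline.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma           # 3-sigma control limits

new_calibrations = rng.normal(50.0, 1.5, 15).tolist() + [56.0, 57.5]   # simulated drift at the end
for i, value in enumerate(new_calibrations, start=1):
    flag = "OUT OF CONTROL" if not (lcl <= value <= ucl) else "ok"
    print(f"calibration {i:2d}: {value:5.1f}  limits [{lcl:.1f}, {ucl:.1f}]  {flag}")
```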

  19. Data-driven inference for the spatial scan statistic

    Directory of Open Access Journals (Sweden)

    Duczmal Luiz H

    2011-08-01

    Full Text Available Abstract Background Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. Results A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is proposed, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under the null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, regarding the correctness of the decision based on this inference. Conclusions A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.

  20. Principles of thermodynamics and statistical mechanics

    CERN Document Server

    Lawden, D F

    2005-01-01

    A thorough exploration of the universal principles of thermodynamics and statistical mechanics, this volume explains the applications of these essential rules to a multitude of situations arising in physics and engineering. It develops their use in a variety of circumstances-including those involving gases, crystals, and magnets-in order to illustrate general methods of analysis and to provide readers with all the necessary background to continue in greater depth with specific topics.Author D. F. Lawden has considerable experience in teaching this subject to university students of varied abili