WorldWideScience

Sample records for underlying test statistics

  1. Robustness of Two-Level Testing Procedures under Distortions of First Level Statistics

    OpenAIRE

    Kostevich, A. L.; Nikitina, I. S.

    2007-01-01

    We investigate robustness of some two-level testing procedures under distortions induced by using an asymptotic distribution of first level statistics instead of an exact one. We demonstrate that ignoring the distortions results in unreliable conclusions and we propose robustness conditions for the two-level procedures.

  2. Empirical Statistical Power for Testing Multilocus Genotypic Effects under Unbalanced Designs Using a Gibbs Sampler

    Directory of Open Access Journals (Sweden)

    Chaeyoung Lee

    2012-11-01

    Epistasis, which may explain a large portion of the phenotypic variation for complex economic traits of animals, has been ignored in many genetic association studies. A Bayesian method was introduced to draw inferences about multilocus genotypic effects based on their marginal posterior distributions obtained by a Gibbs sampler. A simulation study was conducted to provide statistical powers under various unbalanced designs using this method. Data were simulated by combined designs of number of loci, within-genotype variance, and sample size in unbalanced designs with or without null combined-genotype cells. Mean empirical statistical power was estimated for testing the posterior mean estimate of the combined genotype effect. A practical example of obtaining empirical statistical power estimates with a given sample size was provided under unbalanced designs. The empirical statistical powers would be useful for determining an optimal design when interactive associations of multiple loci with complex phenotypes are examined.

  3. 100 statistical tests

    CERN Document Server

    Kanji, Gopal K

    2006-01-01

    This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.

  4. Testing statistical hypotheses

    CERN Document Server

    Lehmann, E L

    2005-01-01

    The third edition of Testing Statistical Hypotheses updates and expands upon the classic graduate text, emphasizing optimality theory for hypothesis testing and confidence sets. The principal additions include a rigorous treatment of large sample optimality, together with the requisite tools. In addition, an introduction to the theory of resampling methods such as the bootstrap is developed. The sections on multiple testing and goodness of fit testing are expanded. The text is suitable for Ph.D. students in statistics and includes over 300 new problems out of a total of more than 760. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands and the University of Chicago. He is the author of Elements of Large-Sample Theory and (with George Casella) he is also the author of Theory of Point Estimat...

  5. Statistical test of anarchy

    Energy Technology Data Exchange (ETDEWEB)

    Gouvea, Andre de; Murayama, Hitoshi

    2003-10-30

    'Anarchy' is the hypothesis that there is no fundamental distinction among the three flavors of neutrinos. It describes the mixing angles as random variables, drawn from well-defined probability distributions dictated by the group Haar measure. We perform a Kolmogorov-Smirnov (KS) statistical test to verify whether anarchy is consistent with all neutrino data, including the new result presented by KamLAND. We find a KS probability for Nature's choice of mixing angles equal to 64%, quite consistent with the anarchical hypothesis. In turn, assuming that anarchy is indeed correct, we compute lower bounds on |Ue3|², the remaining unknown 'angle' of the leptonic mixing matrix.
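
    As a rough illustration of the approach, the sketch below (Python) applies a one-sample Kolmogorov-Smirnov test to mixing angles via the probability-integral transform, using the Haar-measure result quoted in the anarchy literature that sin²θ12, sin²θ23 and cos⁴θ13 are each uniform on [0, 1]; the angle values are hypothetical placeholders, not the KamLAND-era data.

        import numpy as np
        from scipy import stats

        # Assumed Haar-measure result from the anarchy literature: for
        # "anarchical" mixing angles, sin^2(theta12), sin^2(theta23) and
        # cos^4(theta13) are each uniform on [0, 1]. The probability-integral
        # transform then maps each observed angle to a U(0, 1) variate, and a
        # one-sample KS test compares the transformed values with U(0, 1).
        theta12, theta23, theta13 = 0.59, 0.79, 0.15   # radians (hypothetical)

        u = np.array([
            np.sin(theta12) ** 2,   # uniform under anarchy, so its own CDF value
            np.sin(theta23) ** 2,
            np.cos(theta13) ** 4,
        ])

        ks_stat, p_value = stats.kstest(u, "uniform")
        print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.2f}")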

  6. Statistical test of anarchy

    Science.gov (United States)

    de Gouvêa, André; Murayama, Hitoshi

    2003-10-01

    “Anarchy” is the hypothesis that there is no fundamental distinction among the three flavors of neutrinos. It describes the mixing angles as random variables, drawn from well-defined probability distributions dictated by the group Haar measure. We perform a Kolmogorov-Smirnov (KS) statistical test to verify whether anarchy is consistent with all neutrino data, including the new result presented by KamLAND. We find a KS probability for Nature's choice of mixing angles equal to 64%, quite consistent with the anarchical hypothesis. In turn, assuming that anarchy is indeed correct, we compute lower bounds on |Ue3|², the remaining unknown “angle” of the leptonic mixing matrix.

  7. Statistical test of anarchy

    International Nuclear Information System (INIS)

    Gouvea, Andre de; Murayama, Hitoshi

    2003-01-01

    'Anarchy' is the hypothesis that there is no fundamental distinction among the three flavors of neutrinos. It describes the mixing angles as random variables, drawn from well-defined probability distributions dictated by the group Haar measure. We perform a Kolmogorov-Smirnov (KS) statistical test to verify whether anarchy is consistent with all neutrino data, including the new result presented by KamLAND. We find a KS probability for Nature's choice of mixing angles equal to 64%, quite consistent with the anarchical hypothesis. In turn, assuming that anarchy is indeed correct, we compute lower bounds on |Ue3|², the remaining unknown 'angle' of the leptonic mixing matrix.

  8. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment. With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the ...

  9. Statistical properties of microcracking in polyurethane foams under tensile and creep tests: influence of temperature and density.

    Science.gov (United States)

    Deschanel, Stephanie; Vigier, Gerard; Godin, Nathalie; Vanel, Loic; Ciliberto, Sergio

    2007-03-01

    For some heterogeneous materials, fracture can be described as a clustering of microcracks, global rupture not being controlled by a single event. We focus on polyurethane foams, whose heterogeneities (pores) constitute the termination points where microcracks can stop. We record both the spatial and time distributions of acoustic emission from a sample during mechanical tests: each microcrack nucleation corresponds to a burst of energy that can be localized on the widest face of the specimen. The probability distribution of the released energy follows a power law, independently of the material density, the loading mode, or the mechanical behavior. On the other hand, agreement with a power law for the time intervals between two damaging events seems to require a quasi-constant stress during damage. Moreover, for the cumulative number of events and the cumulative energy of the localized events, we observe a dependence on temperature in tensile tests that is no longer present in creep tests. The occurrence of a single behavior, and of a power law over a restricted time interval, for the cumulative number of events and the cumulative energy in creep opens the way to later studies of material lifetime prediction.

  10. Statistical hypothesis testing with SAS and R

    CERN Document Server

    Taeger, Dirk

    2014-01-01

    A comprehensive guide to statistical hypothesis testing with examples in SAS and R. When analyzing datasets the following questions often arise: Is there a shorthand procedure for a statistical test available in SAS or R? If so, how do I use it? If not, how do I program the test myself? This book answers these questions and provides an overview of the most common statistical test problems in a comprehensive way, making it easy to find and perform an appropriate statistical test. A general summary of statistical test theory is presented, along with a basic description for each test, including the ...

  11. Improvement of Statistical Decisions under Parametric Uncertainty

    Science.gov (United States)

    Nechval, Nicholas A.; Nechval, Konstantin N.; Purgailis, Maris; Berzins, Gundars; Rozevskis, Uldis

    2011-10-01

    A large number of problems in production planning and scheduling, location, transportation, finance, and engineering design require that decisions be made in the presence of uncertainty. Decision-making under uncertainty is a central problem in statistical inference, and has been formally studied in virtually all approaches to inference. The aim of the present paper is to show how the invariant embedding technique, an idea due to the authors, may be employed in the particular case of finding improved statistical decisions under parametric uncertainty. This technique represents a simple and computationally attractive statistical method based on the constructive use of the invariance principle in mathematical statistics. Unlike the Bayesian approach, the invariant embedding technique is independent of the choice of priors. It allows one to eliminate unknown parameters from the problem and to find the best invariant decision rule, which has smaller risk than any of the well-known decision rules. To illustrate the proposed technique, application examples are given.

  12. Statistical decisions under nonparametric a priori information

    International Nuclear Information System (INIS)

    Chilingaryan, A.A.

    1985-01-01

    The basic module of an applied program package for statistical analysis of the ANI experiment data is described. By means of this module, the tasks of choosing the theoretical model that most adequately fits the experimental data, selecting events of a definite type, and identifying elementary particles are carried out. To solve these problems, Bayesian rules, the leave-one-out test, and KNN (K Nearest Neighbour) adaptive density estimation are utilized.

  13. The insignificance of statistical significance testing

    Science.gov (United States)

    Johnson, Douglas H.

    1999-01-01

    Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.

  14. Similar tests and the standardized log likelihood ratio statistic

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1986-01-01

    When testing an affine hypothesis in an exponential family, the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. In contrast to this there is a 'primitive' approach, in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n^-3 ...

  15. Polarimetric Segmentation Using Wishart Test Statistic

    DEFF Research Database (Denmark)

    Skriver, Henning; Schou, Jesper; Nielsen, Allan Aasbjerg

    2002-01-01

    A newly developed test statistic for equality of two complex covariance matrices following the complex Wishart distribution, and an associated asymptotic probability for the test statistic, has been used in a segmentation algorithm. The segmentation algorithm is based on the MUM (merge using moments) approach, which is a merging algorithm for single-channel SAR images. The polarimetric version described in this paper uses the above-mentioned test statistic for merging. The segmentation algorithm has been applied to polarimetric SAR data from the Danish dual-frequency, airborne polarimetric SAR, EMISAR. The results clearly show an improved segmentation performance for the full polarimetric algorithm compared to single-channel approaches.

  16. A simplification of the likelihood ratio test statistic for testing ...

    African Journals Online (AJOL)

    The traditional likelihood ratio test statistic for testing hypotheses about goodness of fit of multinomial probabilities in one-, two- and multi-dimensional contingency tables was simplified. Advantageously, using the simplified version of the statistic to test the null hypothesis is easier and faster because calculating the expected ...
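
    For reference, the likelihood ratio statistic for a contingency table is G² = 2 Σ O ln(O/E); a minimal sketch on hypothetical counts follows, using SciPy's Cressie-Read power-divergence family, in which the log-likelihood case is exactly this statistic.

        import numpy as np
        from scipy.stats import chi2_contingency

        # Illustrative 2x3 contingency table (hypothetical counts).
        table = np.array([[22, 31, 17],
                          [14, 26, 30]])

        # lambda_="log-likelihood" selects the likelihood-ratio (G^2) statistic
        # G^2 = 2 * sum(O * ln(O / E)) instead of Pearson's chi-square.
        g2, p, dof, expected = chi2_contingency(table, lambda_="log-likelihood")
        print(f"G^2 = {g2:.3f}, df = {dof}, p = {p:.4f}")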

  17. SPSS for applied sciences basic statistical testing

    CERN Document Server

    Davis, Cole

    2013-01-01

    This book offers a quick and basic guide to using SPSS and provides a general approach to solving problems using statistical tests. It is both comprehensive in terms of the tests covered and the applied settings it refers to, and yet is short and easy to understand. Whether you are a beginner or an intermediate-level test user, this book will help you to analyse different types of data in applied settings. It will also give you the confidence to use other statistical software and to extend your expertise to more specific scientific settings as required. The author does not use mathematical form ...

  18. A Statistical Perspective on Highly Accelerated Testing

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Highly accelerated life testing has been heavily promoted at Sandia (and elsewhere) as a means to rapidly identify product weaknesses caused by flaws in the product's design or manufacturing process. During product development, a small number of units are forced to fail at high stress. The failed units are then examined to determine the root causes of failure. The identification of the root causes of product failures exposed by highly accelerated life testing can instigate changes to the product's design and/or manufacturing process that result in a product with increased reliability. It is widely viewed that this qualitative use of highly accelerated life testing (often associated with the acronym HALT) can be useful. However, highly accelerated life testing has also been proposed as a quantitative means for "demonstrating" the reliability of a product where unreliability is associated with loss of margin via an identified and dominating failure mechanism. It is assumed that the dominant failure mechanism can be accelerated by changing the level of a stress factor that is assumed to be related to the dominant failure mode. In extreme cases, a minimal number of units (often from a pre-production lot) are subjected to a single highly accelerated stress relative to normal use. If no (or, sufficiently few) units fail at this high stress level, some might claim that a certain level of reliability has been demonstrated (relative to normal use conditions). Underlying this claim are assumptions regarding the level of knowledge associated with the relationship between the stress level and the probability of failure. The primary purpose of this document is to discuss (from a statistical perspective) the efficacy of using accelerated life testing protocols (and, in particular, "highly accelerated" protocols) to make quantitative inferences concerning the performance of a product (e.g., reliability) when in fact there is lack-of-knowledge and uncertainty concerning ...

  19. Statistical treatment of fatigue test data

    International Nuclear Information System (INIS)

    Raske, D.T.

    1980-01-01

    This report discusses several aspects of fatigue data analysis in order to provide a basis for the development of statistically sound design curves. Included is a discussion of the choice of the dependent variable, the assumptions associated with least squares regression models, the variability of fatigue data, the treatment of data from suspended tests and outlying observations, and various strain-life relations.

  20. Statistical test theory for the behavioral sciences

    CERN Document Server

    de Gruijter, Dato N M

    2007-01-01

    Since the development of the first intelligence test in the early 20th century, educational and psychological tests have become important measurement techniques to quantify human behavior. Focusing on this ubiquitous yet fruitful area of research, Statistical Test Theory for the Behavioral Sciences provides both a broad overview and a critical survey of assorted testing theories and models used in psychology, education, and other behavioral science fields. Following a logical progression from basic concepts to more advanced topics, the book first explains classical test theory, covering true score, measurement error, and reliability. It then presents generalizability theory, which provides a framework to deal with various aspects of test scores. In addition, the authors discuss the concept of validity in testing, offering a strategy for evidence-based validity. In the two chapters devoted to item response theory (IRT), the book explores item response models, such as the Rasch model, and applications, incl...

  1. Simplified Freeman-Tukey test statistics for testing probabilities in ...

    African Journals Online (AJOL)

    This paper presents the simplified version of the Freeman-Tukey test statistic for testing hypotheses about multinomial probabilities in one-, two- and multi-dimensional contingency tables that does not require calculating the expected cell frequencies before the test of significance. The simplified method established new criteria of ...
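
    A minimal sketch of the (unsimplified) Freeman-Tukey statistic on hypothetical multinomial counts, via the Cressie-Read power-divergence family in SciPy:

        import numpy as np
        from scipy.stats import power_divergence

        # Hypothetical multinomial counts and null probabilities.
        observed = np.array([18, 55, 27])
        p0 = np.array([0.2, 0.5, 0.3])
        expected = p0 * observed.sum()

        # lambda_="freeman-tukey" selects the Freeman-Tukey statistic from the
        # Cressie-Read power-divergence family (lambda = -1/2).
        stat, p = power_divergence(observed, expected, lambda_="freeman-tukey")
        print(f"Freeman-Tukey T = {stat:.3f}, p = {p:.4f}")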

  2. New Graphical Methods and Test Statistics for Testing Composite Normality

    Directory of Open Access Journals (Sweden)

    Marc S. Paolella

    2015-07-01

    Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.

  3. Comparing statistical tests for detecting soil contamination greater than background

    International Nuclear Information System (INIS)

    Hardin, J.W.; Gilbert, R.O.

    1993-12-01

    The Washington State Department of Ecology (WSDE) recently issued a report that provides guidance on statistical issues regarding investigation and cleanup of soil and groundwater contamination under the Model Toxics Control Act Cleanup Regulation. Included in the report are procedures for determining a background-based cleanup standard and for conducting a 3-step statistical test procedure to decide if a site is contaminated greater than the background standard. The guidance specifies that the State test should only be used if the background and site data are lognormally distributed. The guidance in WSDE allows for using alternative tests on a site-specific basis if prior approval is obtained from WSDE. This report presents the results of a Monte Carlo computer simulation study conducted to evaluate the performance of the State test and several alternative tests for various contamination scenarios (background and site data distributions). The primary test performance criteria are (1) the probability the test will indicate that a contaminated site is indeed contaminated, and (2) the probability that the test will indicate an uncontaminated site is contaminated. The simulation study was conducted assuming the background concentrations were from lognormal or Weibull distributions. The site data were drawn from distributions selected to represent various contamination scenarios. The statistical tests studied are the State test, t test, Satterthwaite's t test, five distribution-free tests, and several tandem tests (wherein two or more tests are conducted using the same data set)
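
    The sketch below illustrates the general shape of such a Monte Carlo comparison, assuming lognormal background and site data and using a two-sample t test and a rank-sum test as stand-ins; it is not the WSDE State test or the tandem tests studied in the report.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_sim, n_bg, n_site, alpha = 2000, 40, 20, 0.05

        def detection_rate(shift=0.0):
            """Fraction of simulations flagging the site as contaminated.
            shift > 0 adds contamination to the site on the log scale."""
            hits_t = hits_w = 0
            for _ in range(n_sim):
                bg = rng.lognormal(mean=0.0, sigma=1.0, size=n_bg)
                site = rng.lognormal(mean=shift, sigma=1.0, size=n_site)
                # One-sided two-sample t test (site > background)
                if stats.ttest_ind(site, bg, alternative="greater").pvalue < alpha:
                    hits_t += 1
                # Distribution-free alternative: Wilcoxon rank-sum
                if stats.mannwhitneyu(site, bg, alternative="greater").pvalue < alpha:
                    hits_w += 1
            return hits_t / n_sim, hits_w / n_sim

        print("type I error (t, rank-sum):", detection_rate(0.0))
        print("power at shift=0.75      :", detection_rate(0.75))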

  4. Computation of a test statistic in data quality control

    NARCIS (Netherlands)

    Chang, X.W.; Paige, C.C.; Tiberius, C.C.J.M.

    2005-01-01

    When processing observational data, statistical testing is an essential instrument for rendering harmless incidental anomalies and disturbances in the measurements. A commonly used test statistic based on the general linear model is the generalized likelihood ratio test statistic. The standard ...

  5. Kepler Planet Detection Metrics: Statistical Bootstrap Test

    Science.gov (United States)

    Jenkins, Jon M.; Burke, Christopher J.

    2016-01-01

    This document describes the data produced by the Statistical Bootstrap Test over the final three Threshold Crossing Event (TCE) deliveries to NExScI: SOC 9.1 (Q1-Q16) (Tenenbaum et al. 2014), SOC 9.2 (Q1-Q17), aka DR24 (Seader et al. 2015), and SOC 9.3 (Q1-Q17), aka DR25 (Twicken et al. 2016). The last few years have seen significant improvements in the SOC science data processing pipeline, leading to higher-quality light curves and more sensitive transit searches. The statistical bootstrap analysis results presented here and the numerical results archived at NASA's Exoplanet Science Institute (NExScI) bear witness to these software improvements. This document attempts to introduce and describe the main features and differences between these three data sets as a consequence of the software changes.

  6. Evaluation of Multi-parameter Test Statistics for Multiple Imputation.

    Science.gov (United States)

    Liu, Yu; Enders, Craig K

    2017-01-01

    In Ordinary Least Squares regression, researchers often are interested in knowing whether a set of parameters is different from zero. With complete data, this could be achieved using the gain in prediction test, hierarchical multiple regression, or an omnibus F test. However, in substantive research scenarios, missing data often exist. In the context of multiple imputation, one of the current state-of-the-art missing data strategies, there are several different analogous multi-parameter tests of the joint significance of a set of parameters, and these multi-parameter test statistics can be referenced to various distributions to make statistical inferences. However, little is known about the performance of these tests, and virtually no research study has compared the Type I error rates and statistical power of these tests in scenarios that are typical of behavioral science data (e.g., small to moderate samples, etc.). This paper uses Monte Carlo simulation techniques to examine the performance of these multi-parameter test statistics for multiple imputation under a variety of realistic conditions. We provide a number of practical recommendations for substantive researchers based on the simulation results, and illustrate the calculation of these test statistics with an empirical example.

  7. Statistical tests for person misfit in computerized adaptive testing

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Meijer, R.R.; van Krimpen-Stoop, Edith

    1998-01-01

    Recently, several person-fit statistics have been proposed to detect nonfitting response patterns. This study is designed to generalize an approach followed by Klauer (1995) to an adaptive testing system using the two-parameter logistic model (2PL) as a null model. The approach developed by Klauer ...

  8. On Consistent Nonparametric Statistical Tests of Symmetry Hypotheses

    Directory of Open Access Journals (Sweden)

    Jean-François Quessy

    2016-05-01

    Being able to formally test for symmetry hypotheses is an important topic in many fields, including environmental and physical sciences. In this paper, one concentrates on a large family of nonparametric tests of symmetry based on Cramér–von Mises statistics computed from empirical distribution and characteristic functions. These tests possess the highly desirable property of being universally consistent in the sense that they detect any kind of departure from symmetry as the sample size becomes large. The asymptotic behaviour of these test statistics under symmetry is deduced from the theory of first-order degenerate V-statistics. The issue of computing valid p-values is tackled using the multiplier bootstrap method suitably adapted to V-statistics, yielding elegant, easy-to-compute and quick procedures for testing symmetry. A special focus is put on tests of univariate symmetry, bivariate exchangeability and reflected symmetry; a simulation study indicates the good sampling properties of these tests. Finally, a framework for testing general symmetry hypotheses is introduced.
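
    A minimal sketch of one member of this family, assuming a univariate test of symmetry about zero built from the empirical CDFs of X and -X, with a sign-flip resampling scheme standing in for the authors' multiplier bootstrap:

        import numpy as np

        def cvm_symmetry(x):
            """Cramer-von Mises-type distance between the empirical CDFs of
            x and -x; small if x is (approximately) symmetric about 0."""
            x = np.asarray(x, float)
            grid = np.sort(x)
            Fx = np.searchsorted(np.sort(x), grid, side="right") / x.size
            Fmx = np.searchsorted(np.sort(-x), grid, side="right") / x.size
            return np.sum((Fx - Fmx) ** 2)

        def symmetry_test(x, n_boot=2000, seed=0):
            """Sign-flip resampling: under symmetry about 0, randomly flipping
            signs leaves the distribution unchanged."""
            rng = np.random.default_rng(seed)
            t0 = cvm_symmetry(x)
            null = [cvm_symmetry(x * rng.choice([-1, 1], x.size))
                    for _ in range(n_boot)]
            return t0, np.mean([t >= t0 for t in null])

        rng = np.random.default_rng(42)
        stat, p = symmetry_test(rng.standard_exponential(100) - 1.0)  # skewed
        print(f"statistic = {stat:.3f}, p = {p:.3f}")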

  9. ROTS: An R package for reproducibility-optimized statistical testing.

    Science.gov (United States)

    Suomi, Tomi; Seyednasrollah, Fatemeh; Jaakkola, Maria K; Faux, Thomas; Elo, Laura L

    2017-05-01

    Differential expression analysis is one of the most common types of analyses performed on various biological data (e.g. RNA-seq or mass spectrometry proteomics). It is the process that detects features, such as genes or proteins, showing statistically significant differences between the sample groups under comparison. A major challenge in the analysis is the choice of an appropriate test statistic, as different statistics have been shown to perform well in different datasets. To this end, the reproducibility-optimized test statistic (ROTS) adjusts a modified t-statistic according to the inherent properties of the data and provides a ranking of the features based on their statistical evidence for differential expression between two groups. ROTS has already been successfully applied in a range of different studies from transcriptomics to proteomics, showing competitive performance against other state-of-the-art methods. To promote its widespread use, we introduce here a Bioconductor R package for performing ROTS analysis conveniently on different types of omics data. To illustrate the benefits of ROTS in various applications, we present three case studies, involving proteomics and RNA-seq data from public repositories, including both bulk and single cell data. The package is freely available from Bioconductor (https://www.bioconductor.org/packages/ROTS).

  10. Statistical tests of simple earthquake cycle models

    Science.gov (United States)

    Devries, Phoebe M. R.; Evans, Eileen

    2016-01-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM ~ 4.6 × 10²⁰ Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.

  11. Testing for Statistical Discrimination based on Gender

    DEFF Research Database (Denmark)

    Lesner, Rune Vammen

    This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure but increases in job transitions, and that the fraction of women in high-ranking positions within a firm does not affect the level of statistical discrimination by gender.

  12. Fully Bayesian tests of neutrality using genealogical summary statistics

    Directory of Open Access Journals (Sweden)

    Drummond Alexei J

    2008-10-01

    Background: Many data summary statistics have been developed to detect departures from neutral expectations of evolutionary models. However, questions about the neutrality of the evolution of genetic loci within natural populations remain difficult to assess. One critical cause of this difficulty is that most methods for testing neutrality make simplifying assumptions simultaneously about the mutational model and the population size model. Consequently, rejecting the null hypothesis of neutrality under these methods could result from violations of either or both assumptions, making interpretation troublesome. Results: Here we harness posterior predictive simulation to exploit summary statistics of both the data and model parameters to test the goodness-of-fit of standard models of evolution. We apply the method to test the selective neutrality of molecular evolution in non-recombining gene genealogies and we demonstrate the utility of our method on four real data sets, identifying significant departures from neutrality in human influenza A virus, even after controlling for variation in population size. Conclusion: Importantly, by employing a full model-based Bayesian analysis, our method separates the effects of demography from the effects of selection. The method also allows multiple summary statistics to be used in concert, thus potentially increasing sensitivity. Furthermore, our method remains useful in situations where analytical expectations and variances of summary statistics are not available. This aspect has great potential for the analysis of temporally spaced data, an expanding area previously ignored for limited availability of theory and methods.

  13. A critique of statistical hypothesis testing in clinical research

    Directory of Open Access Journals (Sweden)

    Somik Raha

    2011-01-01

    Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are those of the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview, requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability to an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not of any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. Because a major reason for the prevalence of RCTs in academia is legislation requiring their use, the ethics of legislating the use of statistical methods for clinical research is also examined.

  14. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models ...

  15. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

    This article raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly ... are important or not. On the contrary their use may be harmful. Like many other critics, we generally believe that statistical significance tests are over- and misused in the empirical sciences including scientometrics and we encourage a reform on these matters.

  16. HOW TO SELECT APPROPRIATE STATISTICAL TEST IN SCIENTIFIC ARTICLES

    Directory of Open Access Journals (Sweden)

    Vladimir TRAJKOVSKI

    2016-09-01

    Statistics is the mathematical science dealing with the collection, analysis, interpretation, and presentation of masses of numerical data in order to draw relevant conclusions. Statistics is a form of mathematical analysis that uses quantified models, representations, and synopses for a given set of experimental data or real-life studies. Students and young researchers in the biomedical sciences and in special education and rehabilitation often declare that they chose their study program because they lack knowledge of, or interest in, mathematics. This is a sad statement, but there is much truth in it. The aim of this editorial is to help young researchers select the statistics, statistical techniques, and statistical software appropriate for the purposes and conditions of a particular analysis. The most important statistical tests are reviewed in the article. Knowing how to choose the right statistical test is an important asset and decision in research data processing and in the writing of scientific papers. Young researchers and authors should know how to choose and how to use statistical methods. The competent researcher will need knowledge of statistical procedures; that might include an introductory statistics course, and it most certainly includes using a good statistics textbook. For this purpose, there is a need to restore Statistics as a mandatory subject in the curriculum of the Institute of Special Education and Rehabilitation at the Faculty of Philosophy in Skopje. Young researchers need additional courses in statistics; they need to train themselves to use statistical software in an appropriate way.

  17. New Statistical Randomness Tests Based on Length of Runs

    Directory of Open Access Journals (Sweden)

    Ali Doğanaksoy

    2015-01-01

    Random sequences and random numbers constitute a necessary part of cryptography. Many cryptographic protocols depend on random values. Randomness is measured by statistical tests, and hence the security evaluation of a cryptographic algorithm deeply depends on statistical randomness tests. In this work we focus on the statistical distributions of runs of lengths one, two, and three. Using these distributions we state three new statistical randomness tests. The new tests use the χ² distribution and, therefore, exact values of the probabilities are needed. Probabilities associated with runs of lengths one, two, and three are stated. The corresponding probabilities are divided into five subintervals of equal probability. Accordingly, three new statistical tests are defined, and pseudocodes for these new statistical tests are given. The new statistical tests are designed to detect deviations in the number of runs of various lengths from a random sequence. Together with some other statistical tests, we analyse our tests' results on the outputs of well-known encryption algorithms and on the binary expansions of e, π, and √2. Experimental results show the performance and sensitivity of our tests.
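
    A generic runs-of-length test in the same spirit (not the authors' five-subinterval construction) can be sketched as follows, comparing observed run-length counts against the geometric(1/2) law expected for unbiased random bits:

        import numpy as np
        from itertools import groupby
        from scipy.stats import chisquare

        def run_length_test(bits, max_len=3):
            """Compare observed counts of runs of length 1..max_len (longer
            runs pooled) with the law P(length = k) = 2^-k expected for
            unbiased random bits, via a chi-square test."""
            lengths = [len(list(g)) for _, g in groupby(bits)]
            obs = np.array([sum(l == k for l in lengths)
                            for k in range(1, max_len + 1)]
                           + [sum(l > max_len for l in lengths)], float)
            probs = np.array([2.0 ** -k for k in range(1, max_len + 1)]
                             + [2.0 ** -max_len])   # tail: P(length > max_len)
            return chisquare(obs, f_exp=probs * len(lengths))

        rng = np.random.default_rng(7)
        bits = rng.integers(0, 2, size=100_000)
        print(run_length_test(bits))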

  18. Statistical Processes Under Change: Enhancing Data Quality with Pretests

    Science.gov (United States)

    Radermacher, Walter; Sattelberger, Sabine

    Statistical offices in Europe, in particular the Federal Statistical Office in Germany, are meeting users’ ever more demanding requirements with innovative and appropriate responses, such as the multiple sources mixed-mode design model. This combines various objectives: reducing survey costs and the burden on interviewees, and maximising data quality. The same improvements are also being sought by way of the systematic use of pretests to optimise survey documents. This paper provides a first impression of the many procedures available. An ideal pretest combines both quantitative and qualitative test methods. Quantitative test procedures can be used to determine how often particular input errors arise. The questionnaire is tested in the field in the corresponding survey mode. Qualitative test procedures can find the reasons for input errors. Potential interviewees are included in the questionnaire tests, and their feedback on the survey documentation is systematically analysed and used to upgrade the questionnaire. This was illustrated in our paper by an example from business statistics (“Umstellung auf die Wirtschaftszweigklassifikation 2008” - Change-over to the 2008 economic sector classification). This pretest not only gave important clues about how to improve the contents, but also helped to realistically estimate the organisational cost of the main survey.

  19. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text then ...

  20. Radio resource allocation over fading channels under statistical delay constraints

    CERN Document Server

    Le-Ngoc, Tho

    2017-01-01

    This SpringerBrief presents radio resource allocation schemes for buffer-aided communications systems over fading channels under statistical delay constraints, in terms of upper-bounded average delay or delay-outage probability. The Brief starts by considering a source-destination communications link with data arriving at the source transmission buffer. In the first scenario, the joint optimal data admission control and power allocation problem for throughput maximization is considered, where the source is assumed to have maximum power and average delay constraints. In the second scenario, optimal power allocation problems for energy harvesting (EH) communications systems under average delay or delay-outage constraints are explored, where the EH source harvests random amounts of energy from renewable energy sources and stores the harvested energy in a battery during data transmission. Online resource allocation algorithms are developed when the statistical knowledge of the random channel fading, data arrivals...

  1. Multi-sample Rényi test statistics

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Molina, I.; Morales, D.

    2009-01-01

    Roč. 23, č. 2 (2009), s. 196-215 ISSN 0103-0752 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Rényi divergence * divergence statistics * testing composite hypotheses * homogeneity of variances Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/hobza-multi-sample renyi test statistic s.pdf

  2. Corrections of the NIST Statistical Test Suite for Randomness

    OpenAIRE

    Kim, Song-Ju; Umeno, Ken; Hasegawa, Akio

    2004-01-01

    It is well known that the NIST statistical test suite was used for the evaluation of AES candidate algorithms. We have found that the test settings of the Discrete Fourier Transform test and the Lempel-Ziv test in this suite are wrong. We give four corrections of mistakes in the test settings. This suggests that re-evaluation of the test results is needed.

  3. Two independent pivotal statistics that test location and misspecification and add-up to the Anderson-Rubin statistic

    NARCIS (Netherlands)

    Kleibergen, F.R.

    2002-01-01

    We extend the novel pivotal statistics for testing the parameters in the instrumental variables regression model. We show that these statistics result from a decomposition of the Anderson-Rubin statistic into two independent pivotal statistics. The first statistic is a score statistic that tests ...

  4. The Use of Meta-Analytic Statistical Significance Testing

    Science.gov (United States)

    Polanin, Joshua R.; Pigott, Terri D.

    2015-01-01

    Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…

  5. Kolmogorov complexity, pseudorandom generators and statistical models testing

    Czech Academy of Sciences Publication Activity Database

    Šindelář, Jan; Boček, Pavel

    2002-01-01

    Roč. 38, č. 6 (2002), s. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords : Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002

  6. A Statistical Test for Differential Item Pair Functioning

    NARCIS (Netherlands)

    Bechger, T.M.; Maris, G.

    This paper presents an IRT-based statistical test for differential item functioning (DIF). The test is developed for items conforming to the Rasch (Probabilistic models for some intelligence and attainment tests, The Danish Institute of Educational Research, Copenhagen, 1960) model, but we will ...

  7. Comparison of Statistical Methods for Detector Testing Programs

    Energy Technology Data Exchange (ETDEWEB)

    Rennie, John Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Abhold, Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-14

    A typical goal for any detector testing program is to ascertain not only the performance of the detector systems under test, but also the confidence that systems accepted using that testing program’s acceptance criteria will exceed a minimum acceptable performance (which is usually expressed as the minimum acceptable success probability, p). A similar problem often arises in statistics, where we would like to ascertain the fraction, p, of a population of items that possess a property that may take one of two possible values. Typically, the problem is approached by drawing a fixed sample of size n, with the number of items out of n that possess the desired property, x, being termed successes. The sample mean gives an estimate of the population mean p ≈ x/n, although usually it is desirable to accompany such an estimate with a statement concerning the range within which p may fall and the confidence associated with that range. Procedures for establishing such ranges and confidence limits are described in detail by Clopper, Brown, and Agresti for two-sided symmetric confidence intervals.
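
    A minimal sketch of the exact (Clopper-Pearson) two-sided confidence interval for p from x successes in n trials, using the standard beta-quantile form:

        from scipy.stats import beta

        def clopper_pearson(x, n, conf=0.95):
            """Exact (Clopper-Pearson) two-sided confidence interval for a
            binomial success probability, from x successes in n trials."""
            a = (1.0 - conf) / 2.0
            lo = beta.ppf(a, x, n - x + 1) if x > 0 else 0.0
            hi = beta.ppf(1.0 - a, x + 1, n - x) if x < n else 1.0
            return lo, hi

        # e.g. 58 of 60 detectors pass an acceptance test (hypothetical counts)
        print(clopper_pearson(58, 60))   # roughly (0.885, 0.996)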

  8. Statistical inferences for bearings life using sudden death test

    Directory of Open Access Journals (Sweden)

    Morariu Cristin-Olimpiu

    2017-01-01

    In this paper we propose a calculation method for estimating reliability indicators, and complete statistical inference, for the three-parameter Weibull distribution of bearing life. Using experimental values for the durability of bearings tested on stands by sudden death tests involves a series of particularities in estimation by the maximum likelihood method and in the accomplishment of statistical inference. The paper details these features and also provides an example calculation.

  9. Comparison of Statistical Data Models for Identifying Differentially Expressed Genes Using a Generalized Likelihood Ratio Test

    Directory of Open Access Journals (Sweden)

    Kok-Yong Seng

    2008-01-01

    Currently, statistical techniques for the analysis of microarray-generated data sets have deficiencies due to limited understanding of the errors inherent in the data. A generalized likelihood ratio (GLR) test based on an error model has recently been proposed to identify differentially expressed genes from microarray experiments. However, the use of different error structures under the GLR test has not been evaluated, nor has this method been compared to commonly used statistical tests such as the parametric t-test. The concomitant effects of varying data signal-to-noise ratio and replication number on the performance of statistical tests also remain largely unexplored. In this study, we compared the effects of different underlying statistical error structures on the GLR test's power in identifying differentially expressed genes in microarray data. We evaluated such variants of the GLR test as well as the one-sample t-test on simulated data by means of receiver operating characteristic (ROC) curves. Further, we used bootstrapping of ROC curves to assess the statistical significance of differences between the areas under the curves. Our results showed that (i) the GLR tests outperformed the t-test for detecting differential gene expression, (ii) the identity of the underlying error structure was important in determining the GLR tests' performance, and (iii) signal-to-noise ratio was a more important contributor than sample replication in identifying statistically significant differential gene expression.

  10. Timed Testing under Partial Observability

    DEFF Research Database (Denmark)

    David, Alexandre; Larsen, Kim Guldstrand; Li, Shuhao

    2009-01-01

    ... observability of the SUT using a set of predicates over the TGA state space, and specify the test purposes in Computation Tree Logic (CTL) formulas. A recently developed partially observable timed game solver is used to generate winning strategies, which are used as test cases. We propose a conformance testing ...

  11. Log-concave Probability Distributions: Theory and Statistical Testing

    DEFF Research Database (Denmark)

    An, Mark Yuing

    1996-01-01

    This paper studies the broad class of log-concave probability distributions that arise in the economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics...
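
    As a rough illustration, the sketch below computes normalized spacings and applies a simple trend check; the trend statistic (Kendall's tau) is an illustrative stand-in, not the authors' test.

        import numpy as np
        from scipy.stats import kendalltau

        def normalized_spacings(sample):
            """D_i = (n - i + 1) * (X_(i) - X_(i-1)), with X_(0) = 0.
            Under a constant hazard (exponential) these are i.i.d.; under an
            increasing hazard rate they tend to shrink with i."""
            x = np.sort(np.asarray(sample, float))
            n = x.size
            gaps = np.diff(np.concatenate(([0.0], x)))
            return (n - np.arange(n)) * gaps

        def ifr_trend_test(sample):
            """Crude trend check on the spacings (illustrative only):
            a clearly negative Kendall tau suggests an increasing hazard."""
            d = normalized_spacings(sample)
            return kendalltau(np.arange(d.size), d)

        rng = np.random.default_rng(3)
        print(ifr_trend_test(rng.weibull(2.0, 200)))   # Weibull(k=2) is IFR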

  12. Accelerated testing statistical models, test plans, and data analysis

    CERN Document Server

    Nelson, Wayne B

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "". . . a goldmine of knowledge on accelerated life testing principles and practices . . . one of the very few capable of advancing the science of reliability. It definitely belongs in every bookshelf on engineering.""-Dev G.

  13. CUSUM-based person-fit statistics for adaptive testing

    NARCIS (Netherlands)

    van Krimpen-Stoop, Edith; Meijer, R.R.

    2001-01-01

    Item scores that do not fit an assumed item response theory model may cause the latent trait value to be inaccurately estimated. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT), ...

  14. Statistical test for the distribution of galaxies on plates

    International Nuclear Information System (INIS)

    Garcia Lambas, D.

    1985-01-01

    A statistical test for the distribution of galaxies on plates is presented. We apply the test to synthetic astronomical plates obtained by means of numerical simulation (Garcia Lambas and Sersic 1983) with three different models for the 3-dimensional distribution; comparison with an observational plate suggests the presence of filamentary structure. (author)

  15. CUSUM-based person-fit statistics for adaptive testing

    NARCIS (Netherlands)

    van Krimpen-Stoop, Edith; Meijer, R.R.

    1999-01-01

    Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT), ...

  16. Statistical power of likelihood ratio and Wald tests in latent class models with covariates

    NARCIS (Netherlands)

    Gudicha, D.W.; Schmittmann, V.D.; Vermunt, J.K.

    2017-01-01

    This paper discusses power and sample-size computation for likelihood ratio and Wald testing of the significance of covariate effects in latent class models. For both tests, asymptotic distributions can be used; that is, the test statistic can be assumed to follow a central chi-square distribution under the null hypothesis ...
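
    Under the alternative, such a statistic is asymptotically noncentral chi-square, which is what makes power computation straightforward; a minimal sketch with a hypothetical noncentrality value:

        from scipy.stats import chi2, ncx2

        def chi2_test_power(noncentrality, df, alpha=0.05):
            """Asymptotic power of a likelihood-ratio or Wald test: central
            chi-square(df) under the null, noncentral chi-square(df, lambda)
            under the alternative."""
            crit = chi2.ppf(1.0 - alpha, df)
            return ncx2.sf(crit, df, noncentrality)

        # e.g. a covariate effect with df = 2 and noncentrality 10 (hypothetical)
        print(f"power = {chi2_test_power(10.0, 2):.3f}")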

  17. Statistical sampling and hypothesis testing in orthopaedic research.

    Science.gov (United States)

    Bernstein, Joseph; McGuire, Kevin; Freedman, Kevin B

    2003-08-01

    The purpose of the current article was to review the process of hypothesis testing and statistical sampling and empower readers to critically appraise the literature. When the p value of a study lies above the alpha threshold, the results are said to be not statistically significant. It is possible, however, that real differences do exist, but the study was insufficiently powerful to detect them. In that case, the conclusion that two groups are equivalent is wrong. The probability of this mistake, the Type II error, is given by the beta statistic. The complement of beta, or 1-beta, representing the chance of avoiding a Type II error, is termed the statistical power of the study. We previously examined the statistical power and sample size in all of the studies published in 1997 in the American and British volumes of the Journal of Bone and Joint Surgery, and in Clinical Orthopaedics and Related Research. In the journals examined, only 3% of studies had adequate statistical power to detect a small effect size in this sample. In addition, a study examining only randomized control trials in these journals showed that none of 25 randomized control trials had adequate statistical power to detect a small effect size. However, beta, or power, is less well understood. Because of this, researchers and readers should be aware of the need to address issues of statistical power before a study begins and be cautious of studies that conclude that no difference exists between groups.
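
    For illustration, a power and sample-size calculation of the kind discussed can be sketched as follows (the effect size and group size are hypothetical):

        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()

        # Sample size per group needed to detect a small effect (d = 0.2)
        # with 80% power at alpha = 0.05, two-sided two-sample t test.
        n = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
        print(f"n per group = {n:.0f}")   # roughly 394

        # Conversely, the power actually achieved with 30 subjects per group.
        power = analysis.solve_power(effect_size=0.2, alpha=0.05, nobs1=30)
        print(f"power at n = 30: {power:.2f}")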

  18. 688,112 statistical results: Content mining psychology articles for statistical test results

    NARCIS (Netherlands)

    Hartgerink, C.H.J.

    2016-01-01

    In this data deposit, I describe a dataset that is the result of content mining 167,318 published articles for statistical test results reported according to the standards prescribed by the American Psychological Association (APA). Articles published by the APA, Springer, Sage, and Taylor & Francis ...
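
    A toy version of such content mining might look like the following; the regular expression is a simplified stand-in, not the pattern used for the actual dataset:

        import re

        # Minimal sketch of APA-style test extraction (simplified pattern,
        # not the pipeline behind the deposited dataset).
        APA = re.compile(
            r"(?P<test>[tFr]|chi2)\s*\((?P<df>[^)]+)\)\s*=\s*(?P<value>-?\d*\.?\d+),\s*"
            r"p\s*(?P<rel>[<=>])\s*(?P<p>\d*\.\d+)")

        text = ("The effect was significant, t(58) = 2.20, p < .05, "
                "but the interaction was not, F(2, 55) = 1.23, p = .30.")

        for m in APA.finditer(text):
            print(m.groupdict())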

  19. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    Science.gov (United States)

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority between a new treatment and a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used, and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed, where the estimation of the variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of the deviation from the nominal level and the power.
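
    The sketch below shows the shifted-null idea in its simplest Wald-type form, with hypothetical numbers; the paper's statistics use a U-statistic variance estimate for ordinal data rather than this simplified standard error.

        import numpy as np
        from scipy.stats import norm

        def noninferiority_z(effect_hat, se_hat, margin):
            """Shifted-null Z test: H0: effect <= -margin vs H1: effect > -margin,
            where positive effect_hat favours the new treatment. A simplified
            Wald-type version, not the paper's U-statistic construction."""
            z = (effect_hat + margin) / se_hat
            p = norm.sf(z)                        # one-sided p-value
            lo = effect_hat - norm.ppf(0.975) * se_hat
            return z, p, lo                       # noninferior if lo > -margin

        z, p, lo = noninferiority_z(effect_hat=0.02, se_hat=0.04, margin=0.10)
        print(f"z = {z:.2f}, p = {p:.4f}, 95% lower bound = {lo:.3f}")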

  20. Testing the statistical compatibility of independent data sets

    International Nuclear Information System (INIS)

    Maltoni, M.; Schwetz, T.

    2003-01-01

    We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ² minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistics is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.
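
    A minimal numerical sketch of the idea: the statistic compares the χ² at the joint best fit with the sum of the minima each data set can reach on its own, so data points insensitive to the shared parameters cancel out. The numbers and the parameter counting below are invented for illustration:

```python
# Sketch of a "compatibility" chi-square for two independent data sets.
# Values are hypothetical, not from the paper.
from scipy.stats import chi2

chi2_joint_min = 12.4             # min over theta of chi2_A(theta) + chi2_B(theta)
chi2_min_individual = [3.1, 4.8]  # separate minima of chi2_A and chi2_B

chi2_pg = chi2_joint_min - sum(chi2_min_individual)

# Assumed degrees of freedom for this toy setup: each set constrains 2
# parameters and the joint fit has 2, giving 2 + 2 - 2 = 2.
dof = 2
p_value = chi2.sf(chi2_pg, dof)
print(f"compatibility chi2 = {chi2_pg:.1f}, p = {p_value:.3f}")
```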

  1. Interactive comparison of hypothesis tests for statistical model checking

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; Reijsbergen, D.P.; Scheinhardt, Willem R.W.

    2015-01-01

    We present a web-based interactive comparison of hypothesis tests as are used in statistical model checking, providing users and tool developers with more insight into their characteristics. Parameters can be modified easily and their influence is visualized in real time; an integrated simulation
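
    The abstract does not list the tests compared; as one representative example of a hypothesis test commonly used in statistical model checking (an assumption on our part), here is a minimal sketch of Wald's sequential probability ratio test with assumed indifference-region bounds p0 and p1:

```python
# Hedged illustration: Wald's SPRT deciding whether the probability p of a
# property holding is below p0 or above p1, sampling until a boundary is hit.
import math
import random

def sprt(sample, p0=0.45, p1=0.55, alpha=0.01, beta=0.01):
    a = math.log(beta / (1 - alpha))   # lower (accept H0) boundary
    b = math.log((1 - beta) / alpha)   # upper (accept H1) boundary
    llr = 0.0
    for n, x in enumerate(sample, 1):
        # Update the log-likelihood ratio with each Bernoulli observation.
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= a:
            return "accept H0 (p <= p0)", n
        if llr >= b:
            return "accept H1 (p >= p1)", n
    return "undecided", n

random.seed(1)
verdict, n_used = sprt(random.random() < 0.6 for _ in range(100000))
print(verdict, "after", n_used, "samples")
```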

  2. The performance of robust test statistics with categorical data

    NARCIS (Netherlands)

    Savalei, V.; Rhemtulla, M.

    2013-01-01

    This paper reports on a simulation study that evaluated the performance of five structural equation model test statistics appropriate for categorical data. Both Type I error rate and power were investigated. Different model sizes, sample sizes, numbers of categories, and threshold distributions were

  3. Statistical approach for collaborative tests, reference material certification procedures

    International Nuclear Information System (INIS)

    Fangmeyer, H.; Haemers, L.; Larisse, J.

    1977-01-01

    The first part introduces the different aspects of organizing and executing intercomparison tests of chemical or physical quantities. This is followed by a description of a statistical procedure for handling the data collected in a circular (round-robin) analysis. Finally, an example demonstrates how the tool can be applied and which conclusions can be drawn from the results obtained.

  4. 1980 Summer Study on Statistical Techniques in Army Testing.

    Science.gov (United States)

    1980-07-01

    Army Science Board, 1980 Summer Study on Statistical Techniques in Army Testing, July 1980. ...statisticians is adequate, and in some cases, excellent. In the areas of education and the dissemination of information, the Study Group found that the

  5. Conducting tests for statistically significant differences using forest inventory data

    Science.gov (United States)

    James A. Westfall; Scott A. Pugh; John W. Coulston

    2013-01-01

    Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...

  6. A Note on Three Statistical Tests in the Logistic Regression DIF Procedure

    Science.gov (United States)

    Paek, Insu

    2012-01-01

    Although logistic regression has become one of the well-known methods for detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under maximum likelihood estimation, do not seem to be consistently distinguished in the DIF literature. This paper provides a clarifying…
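
    As a hedged sketch of two of these three tests (not the paper's code or data), the snippet below fits a DIF-style logistic regression to simulated data and forms the Wald and likelihood ratio statistics for the grouping coefficient:

```python
# Simulated example: Wald and LR tests for a group (DIF) coefficient in a
# logistic regression with an ability covariate. All values are made up.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)               # focal vs reference group
ability = rng.normal(size=n)                # matching variable (e.g., total score)
logit = -0.3 + 1.2 * ability + 0.4 * group  # 0.4 plays the role of uniform DIF
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_full = sm.add_constant(np.column_stack([ability, group]))
X_null = sm.add_constant(ability)
full = sm.Logit(y, X_full).fit(disp=0)
null = sm.Logit(y, X_null).fit(disp=0)

wald = (full.params[2] / full.bse[2]) ** 2   # Wald chi-square, 1 df
lr = 2 * (full.llf - null.llf)               # likelihood ratio, 1 df
print(f"Wald: {wald:.2f} (p={chi2.sf(wald, 1):.3f}), "
      f"LR: {lr:.2f} (p={chi2.sf(lr, 1):.3f})")
```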

  7. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
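
    A condensed sketch of the procedure follows, under simplifying assumptions: Euclidean distance between normalized summary histograms and a permutation-style resampling null, with toy Poisson histograms standing in for the cloud-object data:

```python
# Resampling significance test for the difference between two summary
# histograms (a simplified variant of the bootstrap procedure described).
import numpy as np

rng = np.random.default_rng(42)

def euclid(h1, h2):
    h1, h2 = h1 / h1.sum(), h2 / h2.sum()   # compare normalized shapes
    return np.sqrt(((h1 - h2) ** 2).sum())

def resample_pvalue(hists_a, hists_b, n_rep=2000):
    """hists_a/b: individual histograms, shape (n_objects, n_bins)."""
    observed = euclid(hists_a.sum(0), hists_b.sum(0))
    pooled = np.vstack([hists_a, hists_b])
    na = len(hists_a)
    count = 0
    for _ in range(n_rep):
        idx = rng.permutation(len(pooled))   # reshuffle group membership
        d = euclid(pooled[idx[:na]].sum(0), pooled[idx[na:]].sum(0))
        count += d >= observed
    return count / n_rep

a = rng.poisson(10, size=(40, 12))   # toy "cloud object" histograms
b = rng.poisson(12, size=(60, 12))
print("p =", resample_pvalue(a, b))
```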

  8. Testing statistical isotropy in cosmic microwave background polarization maps

    Science.gov (United States)

    Rath, Pranati K.; Samal, Pramoda Kumar; Panda, Srikanta; Mishra, Debesh D.; Aluri, Pavan K.

    2018-04-01

    We apply our symmetry-based power tensor technique to test the conformity of PLANCK polarization maps with statistical isotropy. On a wide range of angular scales (l = 40-150), our preliminary analysis detects many statistically anisotropic multipoles in the foreground-cleaned full-sky PLANCK polarization maps, viz., COMMANDER and NILC. We also study the effect of residual foregrounds that may still be present in the Galactic plane, using both the common UPB77 polarization mask and the individual component-separation-method-specific polarization masks. However, some of the statistically anisotropic modes still persist, significantly so in the NILC map. We further probed the data for any coherent alignments across multipoles in several bins from the chosen multipole range.

  9. Statistical Control Paradigm for Aerospace Structures Under Impulsive Disturbances

    National Research Council Canada - National Science Library

    Pham, Khanh D; Robertson, Lawrence M

    2006-01-01

    In this paper, the newly developed statistical control theory is revisited to autonomously control the satellite attitude as well as to provide a means of actively attenuating impulsive disturbances...

  10. Use of statistical tests and statistical software choice in 2014: tale from three Medline indexed Pakistani journals.

    Science.gov (United States)

    Shaikh, Masood Ali

    2016-04-01

    Statistical tests help infer meaningful conclusions from studies conducted and data collected. This descriptive study analyzed the types of statistical tests used and the statistical software utilized for analysis reported in the original articles published in 2014 by the three Medline-indexed journals of Pakistan. Cumulatively, 466 original articles were published in 2014. The most frequently reported statistical tests in all three journals were bivariate parametric and non-parametric tests, i.e. tests involving comparisons between two groups, e.g. the Chi-square test, the t-test, and various types of correlations. Cumulatively, 201 (43.1%) articles used these tests. SPSS was the primary choice for statistical analysis, as it was exclusively used in 374 (80.3%) original articles. There has been a substantial increase in the number of articles published, and in the sophistication of the statistical tests used, in the Pakistani Medline-indexed journals in 2014 compared to 2007.

  11. Xylitol production by Candida tropicalis under different statistically ...

    African Journals Online (AJOL)

    Nutritional and environmental conditions of the xylose utilizing yeast Candida tropicalis were optimized on a shake-flask scale using a statistical factorial design to maximize the production of xylitol. Effects of the three growth medium components (rice bran, ammonium sulfate and xylose) on the xylitol production were ...

  12. Statistical Literacy Among Academic Pathologists: A Survey Study to Gauge Knowledge of Frequently Used Statistical Tests Among Trainees and Faculty.

    Science.gov (United States)

    Schmidt, Robert L; Chute, Deborah J; Colbert-Getz, Jorie M; Firpo-Betancourt, Adolfo; James, Daniel S; Karp, Julie K; Miller, Douglas C; Milner, Danny A; Smock, Kristi J; Sutton, Ann T; Walker, Brandon S; White, Kristie L; Wilson, Andrew R; Wojcik, Eva M; Yared, Marwan A; Factor, Rachel E

    2017-02-01

    Statistical literacy can be defined as understanding the statistical tests and terminology needed for the design, analysis, and conclusions of original research or laboratory testing. Little is known about the statistical literacy of clinical or anatomic pathologists. To determine the statistical methods most commonly used in pathology studies from the literature and to assess familiarity and knowledge level of these statistical tests by pathology residents and practicing pathologists. The most frequently used statistical methods were determined by a review of 1100 research articles published in 11 pathology journals during 2015. Familiarity with statistical methods was determined by a survey of pathology trainees and practicing pathologists at 9 academic institutions in which pathologists were asked to rate their knowledge of the methods identified by the focused review of the literature. We identified 18 statistical tests that appear frequently in published pathology studies. On average, pathologists reported a knowledge level between "no knowledge" and "basic knowledge" of most statistical tests. Knowledge of tests was higher for more frequently used tests. Greater statistical knowledge was associated with a focus on clinical pathology versus anatomic pathology, having had a statistics course, having an advanced degree other than an MD degree, and publishing research. Statistical knowledge was not associated with length of pathology practice. An audit of pathology literature reveals that knowledge of about 12 statistical tests would be sufficient to provide statistical literacy for pathologists. On average, most pathologists report they can interpret commonly used tests but are unable to perform them. Most pathologists indicated that they would benefit from additional statistical training.

  13. Development of modelling algorithm of technological systems by statistical tests

    Science.gov (United States)

    Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.

    2018-03-01

    The paper tackles the problem of the economic assessment of design efficiency for various technological systems at the stage of their operation. The modelling algorithm for a technological system was implemented using statistical tests and, taking account of the reliability index, allows the level of machinery technical excellence to be estimated and the efficiency of design reliability to be assessed against performance. The economic feasibility of its application is to be determined on the basis of the service quality of a technological system, with further forecasting of the volumes and range of spare parts supply.

  14. Reliability assessment for safety critical systems by statistical random testing

    International Nuclear Information System (INIS)

    Mills, S.E.

    1995-11-01

    In this report we present an overview of reliability assessment for software and focus on some basic aspects of assessing reliability for safety critical systems by statistical random testing. We also discuss possible deviations from some essential assumptions on which the general methodology is based. These deviations appear quite likely in practical applications. We present and discuss possible remedies and adjustments and then undertake applying this methodology to a portion of the SDS1 software. We also indicate shortcomings of the methodology and possible avenues to follow to address these problems. (author). 128 refs., 11 tabs., 31 figs
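
    One standard calculation underlying statistical random testing (a textbook result, not quoted from the report): if N tests drawn from the operational profile all pass, an upper confidence bound on the per-demand failure probability p follows from solving (1 - p)^N = alpha:

```python
# Zero-failure reliability bound from statistical random testing.
import math

def failure_bound(n_tests, alpha=0.05):
    """(1 - alpha) upper confidence bound on p after n_tests passing tests."""
    return 1 - alpha ** (1 / n_tests)

for n in (100, 1000, 10000):
    print(f"N = {n:5d} passing tests -> p < {failure_bound(n):.2e} (95%)")
```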

  15. Evaluation of the Wishart test statistics for polarimetric SAR data

    DEFF Research Database (Denmark)

    Skriver, Henning; Nielsen, Allan Aasbjerg; Conradsen, Knut

    2003-01-01

    A test statistic for equality of two covariance matrices following the complex Wishart distribution has previously been used in new algorithms for change detection, edge detection and segmentation in polarimetric SAR images. Previously, the results for change detection and edge detection have been quantitatively evaluated. This paper deals with the evaluation of segmentation. A segmentation performance measure originally developed for single-channel SAR images has been extended to polarimetric SAR images, and used to evaluate segmentation for a merge-using-moment algorithm for polarimetric SAR data.

  16. Quantum Statistical Testing of a Quantum Random Number Generator

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL

    2014-01-01

    The unobservable elements in a quantum technology, e.g., the quantum state, complicate system verification against promised behavior. Using model-based system engineering, we present methods for verifying the operation of a prototypical quantum random number generator (QRNG). We begin with the algorithmic design of the QRNG followed by the synthesis of its physical design requirements. We next discuss how quantum statistical testing can be used to verify device behavior as well as detect device bias. We conclude by highlighting how system design and verification methods must influence efforts to certify future quantum technologies.
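
    Quantum statistical testing extends classical randomness checks; as a hedged, purely classical illustration of the kind of baseline test a QRNG's output stream must pass, here is the monobit frequency test in the style of NIST SP 800-22:

```python
# Monobit frequency test: is the proportion of ones consistent with 1/2?
import math
import random

def monobit_pvalue(bits):
    """NIST-style monobit test: p = erfc(|S_n| / sqrt(2n))."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # +1/-1 partial sum
    return math.erfc(abs(s) / math.sqrt(2 * n))

random.seed(7)
bits = [random.getrandbits(1) for _ in range(10000)]
print("p =", round(monobit_pvalue(bits), 3))  # a healthy stream gives p >> 0.01
```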

  17. Statistical tests for power-law cross-correlated processes

    Science.gov (United States)

    Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene

    2011-12-01

    For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlation analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically verified the Cauchy inequality -1 ≤ ρDCCA(T,n) ≤ 1. Here we derive -1 ≤ ρDCCA(T,n) ≤ 1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine, and for nonoverlapping windows we derive, that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
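
    A compact sketch of the detrended cross-correlation coefficient under simplifying choices (linear detrending in non-overlapping windows); it follows the general DCCA recipe rather than the exact implementation used in the paper:

```python
# rho_DCCA(T, n): detrended covariance of two integrated series, normalized
# by the corresponding detrended variances (DFA fluctuations).
import numpy as np

def rho_dcca(x, y, n):
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())  # profiles
    t = np.arange(n)
    f_xy = f_xx = f_yy = 0.0
    for k in range(len(X) // n):
        xs, ys = X[k*n:(k+1)*n], Y[k*n:(k+1)*n]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # detrended residuals
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f_xy += (rx * ry).mean()
        f_xx += (rx * rx).mean()
        f_yy += (ry * ry).mean()
    return f_xy / np.sqrt(f_xx * f_yy)

rng = np.random.default_rng(3)
common = rng.normal(size=5000)              # shared component
x = common + rng.normal(size=5000)
y = common + rng.normal(size=5000)
print(f"rho_DCCA = {rho_dcca(x, y, n=100):.2f}")  # positive, well below 1
```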

  18. A study of statistical tests for near-real-time materials accountancy using field test data of Tokai reprocessing plant

    International Nuclear Information System (INIS)

    Ihara, Hitoshi; Nishimura, Hideo; Ikawa, Koji; Miura, Nobuyuki; Iwanaga, Masayuki; Kusano, Toshitsugu.

    1988-03-01

    A Near-Real-Time Materials Accountancy (NRTA) system had been developed as an advanced safeguards measure for the PNC Tokai Reprocessing Plant; a minicomputer system for NRTA data processing was designed and constructed. A full-scale field test was carried out as a JASPAS (Japan Support Program for Agency Safeguards) project with the Agency's participation, and the NRTA data processing system was used. Using the field test data, the detection power under real circumstances was investigated for five statistical tests, i.e., a significance test of MUF, the CUMUF test, the average loss test, the MUF residual test and Page's test on MUF residuals. The results show that the CUMUF test, the average loss test, the MUF residual test and Page's test on MUF residuals are useful for detecting a significant loss or diversion. An unmeasured inventory estimation model for the PNC reprocessing plant was developed in this study. Using this model, the field test data from the C-1 to 85-2 campaigns were re-analyzed. (author)
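
    Among the five tests, Page's test on MUF residuals is a one-sided CUSUM; the sketch below shows its generic form with made-up residuals and illustrative drift and threshold values (the field study's actual parameters and data are not reproduced here):

```python
# Page (CUSUM) test on a series of standardized MUF residuals: alarm when
# the cumulative excess over a reference drift crosses a threshold.
def page_test(residuals, drift=0.5, threshold=3.0):
    """Return the first balance period raising an alarm, or None."""
    s = 0.0
    for period, r in enumerate(residuals, 1):
        s = max(0.0, s + r - drift)   # reset at zero, accumulate excess
        if s > threshold:
            return period
    return None

muf_residuals = [0.1, -0.3, 0.4, 1.2, 1.5, 1.1, 1.8, 0.9]  # made-up data
print("alarm at period:", page_test(muf_residuals))
```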

  19. Wind turbine blade testing under combined loading

    DEFF Research Database (Denmark)

    Roczek-Sieradzan, Agnieszka; Nielsen, Magda; Branner, Kim

    2011-01-01

    The paper presents full-scale blade tests under a combined flap- and edgewise loading. The main aim of this paper is to present the results from testing a wind turbine blade under such conditions and to study the structural behavior of the blade subjected to combined loading. A loading method using anchor plates was applied, allowing transverse shear distortion. The global and local deformation of the blade as well as the reproducibility of the test was studied and the results from the investigations are presented.

  20. Analysis of Preference Data Using Intermediate Test Statistic

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-06-01


  1. Testing Punctuated Equilibrium Theory Using Evolutionary Activity Statistics

    Science.gov (United States)

    Woodberry, O. G.; Korb, K. B.; Nicholson, A. E.

    The Punctuated Equilibrium hypothesis (Eldredge and Gould, 1972) asserts that most evolutionary change occurs during geologically rapid speciation events, with species exhibiting stasis most of the time. Punctuated Equilibrium is a natural extension of Mayr's theories on peripatric speciation via the founder effect (Mayr, 1963; Eldredge and Gould, 1972), which associate changes in diversity with a population bottleneck. That is, while the formation of a founder bottleneck brings an initial loss of genetic variation, it may subsequently result in the emergence of a child species distinctly different from its parent species. In this paper we adapt Bedau's evolutionary activity statistics (Bedau and Packard, 1991) to test these effects in an ALife simulation of speciation. We find a relative increase in evolutionary activity during speciation events, indicating that punctuation is occurring.

  2. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.

    Science.gov (United States)

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-11-14

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.

  3. Tauberian conditions under which statistical convergence follows from statistical summability $(EC_{n}^{1})$

    Directory of Open Access Journals (Sweden)

    Naim L. Braha

    2019-10-01

    Let $(x_k)$, for $k\in \mathbb{N}\cup \{0\}$, be a sequence of real or complex numbers and set $(EC_{n}^{1}) = \frac{1}{2^n}\sum_{j=0}^{n}{\binom{n}{j}\frac{1}{j+1}\sum_{v=0}^{j}{x_v}},$ $n\in \mathbb{N}\cup \{0\}.$ We present necessary and sufficient conditions under which $st\text{-}\lim x_k = L$ follows from $st\text{-}\lim (EC_{n}^{1}) = L,$ where $L$ is a finite number. If $(x_k)$ is a sequence of real numbers, then these are one-sided Tauberian conditions. If $(x_k)$ is a sequence of complex numbers, then these are two-sided Tauberian conditions.

  4. Development and testing of improved statistical wind power forecasting methods.

    Energy Technology Data Exchange (ETDEWEB)

    Mendes, J.; Bessa, R.J.; Keko, H.; Sumaili, J.; Miranda, V.; Ferreira, C.; Gama, J.; Botterud, A.; Zhou, Z.; Wang, J. (Decision and Information Sciences); (INESC Porto)

    2011-12-06

    Wind power forecasting (WPF) provides important inputs to power system operators and electricity market participants. It is therefore not surprising that WPF has attracted increasing interest within the electric power industry. In this report, we document our research on improving statistical WPF algorithms for point, uncertainty, and ramp forecasting. Below, we provide a brief introduction to the research presented in the following chapters. For a detailed overview of the state-of-the-art in wind power forecasting, we refer to [1]. Our related work on the application of WPF in operational decisions is documented in [2]. Point forecasts of wind power are highly dependent on the training criteria used in the statistical algorithms that are used to convert weather forecasts and observational data to a power forecast. In Chapter 2, we explore the application of information theoretic learning (ITL) as opposed to the classical minimum square error (MSE) criterion for point forecasting. In contrast to the MSE criterion, ITL criteria do not assume a Gaussian distribution of the forecasting errors. We investigate to what extent ITL criteria yield better results. In addition, we analyze time-adaptive training algorithms and how they enable WPF algorithms to cope with non-stationary data and, thus, to adapt to new situations without requiring additional offline training of the model. We test the new point forecasting algorithms on two wind farms located in the U.S. Midwest. Although there have been advancements in deterministic WPF, a single-valued forecast cannot provide information on the dispersion of observations around the predicted value. We argue that it is essential to generate, together with (or as an alternative to) point forecasts, a representation of the wind power uncertainty. Wind power uncertainty representation can take the form of probabilistic forecasts (e.g., probability density function, quantiles), risk indices (e.g., prediction risk index) or scenarios

  5. Statistical Tests for One-way/Two-way Translation in Translational Medicine

    Directory of Open Access Journals (Sweden)

    Siu-Keung Tse

    2008-12-01

    Translational medicine has been defined as bench-to-bedside research, where a basic laboratory discovery becomes applicable to the diagnosis, treatment or prevention of a specific disease, and is brought forth by either a physician/scientist who works at the interface between the research laboratory and patient care, or by a team of basic and clinical science investigators. Statistics plays an important role in translational medicine to ensure that the translational process is accurate and reliable, with statistical assurance. For this purpose, statistical criteria for the assessment of one-way and two-way translation are proposed. Under a well-established and validated translational model, statistical tests for one-way and two-way translation are discussed. Some discussion of "lost in translation" is also given.

  6. Identification of Statistically Homogeneous Pixels Based on One-Sample Test

    Directory of Open Access Journals (Sweden)

    Keng-Fan Lin

    2017-01-01

    Statistically homogeneous pixels (SHP) play a crucial role in synthetic aperture radar (SAR) analysis. In past studies, various two-sample tests were applied to multitemporal SAR data stacks under the assumption of stationary backscattering properties over time. In this letter, we propose the Robust T-test (TR) to improve the effectiveness of the test operation. The TR test reduces the impact of temporal variabilities and outliers, thus helping to identify SHP with assurance of similar temporal behaviors. The method comprises three steps: (1) signal suppression; (2) outlier removal; and (3) a one-sample test. In the experiments, we apply the TR test to both simulated and real data. Different stack sizes, types of distributions, and hypothesis tests are compared. The results of both experiments show that the TR test outperforms conventional approaches and provides reliable SHP for SAR image analysis.

  7. Co-integration Rank Testing under Conditional Heteroskedasticity

    DEFF Research Database (Denmark)

    Cavaliere, Guiseppe; Rahbæk, Anders; Taylor, A.M. Robert

    We analyse the properties of the conventional Gaussian-based co-integrating rank tests of Johansen (1996) in the case where the vector of series under test is driven by globally stationary, conditionally heteroskedastic (martingale difference) innovations. We first demonstrate that the limiting null distributions of the rank statistics coincide with those derived by previous authors who assume either i.i.d. or (strict and covariance) stationary martingale difference innovations. We then propose wild bootstrap implementations of the co-integrating rank tests and demonstrate that the associated

  8. Effect of non-normality on test statistics for one-way independent groups designs.

    Science.gov (United States)

    Cribbie, Robert A; Fiksenbaum, Lisa; Keselman, H J; Wilcox, Rand R

    2012-02-01

    The data obtained from one-way independent groups designs is typically non-normal in form and rarely equally variable across treatment populations (i.e., population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e., the analysis of variance F test) typically provides invalid results (e.g., too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non-normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied either to the usual least squares estimators of central tendency and variability, or the Welch test with robust estimators (i.e., trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non-normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non-normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non-normal. © 2011 The British Psychological Society.
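
    The robust building block behind the recommended procedures is the Welch-type test on trimmed means and Winsorized variances (Yuen's test); a minimal two-group sketch follows, with the caveat that the article's designs use multi-group extensions such as the Welch and James tests:

```python
# Yuen-Welch test with 20% trimmed means and Winsorized variances.
import numpy as np
from scipy import stats

def yuen(x, y, trim=0.2):
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    gx, gy = int(trim * len(x)), int(trim * len(y))
    hx, hy = len(x) - 2 * gx, len(y) - 2 * gy           # effective sizes
    # Winsorize: pull the g smallest/largest values in to the trim boundaries.
    wx = np.concatenate([[x[gx]] * gx, x[gx:len(x)-gx], [x[-gx-1]] * gx])
    wy = np.concatenate([[y[gy]] * gy, y[gy:len(y)-gy], [y[-gy-1]] * gy])
    dx = (len(x) - 1) * wx.var(ddof=1) / (hx * (hx - 1))
    dy = (len(y) - 1) * wy.var(ddof=1) / (hy * (hy - 1))
    t = (stats.trim_mean(x, trim) - stats.trim_mean(y, trim)) / np.sqrt(dx + dy)
    df = (dx + dy) ** 2 / (dx ** 2 / (hx - 1) + dy ** 2 / (hy - 1))
    return t, 2 * stats.t.sf(abs(t), df)

rng = np.random.default_rng(1)
a = rng.lognormal(size=30)             # skewed data
b = rng.lognormal(mean=0.8, size=20)   # shifted, skewed, unequal n
t, p = yuen(a, b)
print(f"t = {t:.2f}, p = {p:.4f}")
```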

  9. Using historical vital statistics to predict the distribution of under-five mortality by cause.

    Science.gov (United States)

    Rao, Chalapati; Adair, Timothy; Kinfu, Yohannes

    2011-06-01

    Cause-specific mortality data is essential for planning intervention programs to reduce mortality in the under age five years population (under-five). However, there is a critical paucity of such information for most of the developing world, particularly where progress towards the United Nations Millennium Development Goal 4 (MDG 4) has been slow. This paper presents a predictive cause of death model for under-five mortality based on historical vital statistics and discusses the utility of the model in generating information that could accelerate progress towards MDG 4. Over 1400 country years of vital statistics from 34 countries collected over a period of nearly a century were analyzed to develop relationships between levels of under-five mortality, related mortality ratios, and proportionate mortality from four cause groups: perinatal conditions; diarrhea and lower respiratory infections; congenital anomalies; and all other causes of death. A system of multiple equations with cross-equation parameter restrictions and correlated error terms was developed to predict proportionate mortality by cause based on given measures of under-five mortality. The strength of the predictive model was tested through internal and external cross-validation techniques. Modeled cause-specific mortality estimates for major regions in Africa, Asia, Central America, and South America are presented to illustrate its application across a range of under-five mortality rates. Consistent and plausible trends and relationships are observed from historical data. High mortality rates are associated with increased proportions of deaths from diarrhea and lower respiratory infections. Perinatal conditions assume importance as a proportionate cause at under-five mortality rates below 60 per 1000 live births. Internal and external validation confirms strength and consistency of the predictive model. Model application at regional level demonstrates heterogeneity and non-linearity in cause

  10. Nuclides migration tests under deep geological conditions

    International Nuclear Information System (INIS)

    Kumata, M.; Vandergraaf, T.T.

    1991-01-01

    The migration behaviour of technetium and iodine under deep geological conditions was investigated by performing column tests under in-situ conditions at the 240 m level of the Underground Research Laboratory (URL) constructed in a granitic batholith near Pinawa, Manitoba, Canada. 131I was injected with tritiated water into the column. Tritium and 131I were eluted simultaneously. Almost 100% of the injected 131I was recovered in the tritium breakthrough region, indicating that iodine moved through the column almost without retardation under the experimental conditions. On the other hand, technetium injected with tritium was strongly retarded in the column even though the groundwater was mildly reducing. Only about 7% of the injected 95mTc was recovered in the tritium breakthrough region and the remaining fraction was strongly sorbed on the dark mafic minerals of the column materials. This strong sorption of technetium on the column materials had not been expected from the results obtained from batch experiments carried out under anaerobic conditions. (author)

  11. A statistical test for the habitable zone concept

    Science.gov (United States)

    Checlair, J.; Abbot, D. S.

    2017-12-01

    Traditional habitable zone theory assumes that the silicate-weathering feedback regulates the atmospheric CO2 of planets within the habitable zone to maintain surface temperatures that allow for liquid water. There is some non-definitive evidence that this feedback has worked in Earth history, but it is untested in an exoplanet context. A critical prediction of the silicate-weathering feedback is that, on average, within the habitable zone planets that receive a higher stellar flux should have a lower CO2 in order to maintain liquid water at their surface. We can test this prediction directly by using a statistical approach involving low-precision CO2 measurements on many planets with future instruments such as JWST, LUVOIR, or HabEx. The purpose of this work is to carefully outline the requirements for such a test. First, we use a radiative-transfer model to compute the amount of CO2 necessary to maintain surface liquid water on planets for different values of insolation and planetary parameters. We run a large ensemble of Earth-like planets with different masses, atmospheric masses, inert atmospheric composition, cloud composition and level, and other greenhouse gases. Second, we post-process this data to determine the precision with which future instruments such as JWST, LUVOIR, and HabEx could measure the CO2. We then combine the variation due to planetary parameters and observational error to determine the number of planet measurements that would be needed to effectively marginalize over uncertainties and resolve the predicted trend in CO2 vs. stellar flux. The results of this work may influence the usage of JWST and will enhance mission planning for LUVOIR and HabEx.

  12. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  13. Appropriate statistical methods are required to assess diagnostic tests for replacement, add-on, and triage

    NARCIS (Netherlands)

    Hayen, Andrew; Macaskill, Petra; Irwig, Les; Bossuyt, Patrick

    2010-01-01

    To explain which measures of accuracy and which statistical methods should be used in studies to assess the value of a new binary test as a replacement test, an add-on test, or a triage test. Selection and explanation of statistical methods, illustrated with examples. Statistical methods for

  14. Statistics

    International Nuclear Information System (INIS)

    2005-01-01

    For the years 2004 and 2005, the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time-series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity, GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees

  15. Examining publication bias—a simulation-based evaluation of statistical tests on publication bias

    Directory of Open Access Journals (Sweden)

    Andreas Schneck

    2017-11-01

    Background Publication bias is a form of scientific misconduct. It threatens the validity of research results and the credibility of science. Although several tests on publication bias exist, no in-depth evaluations are available that examine which test performs best for different research settings. Methods Four tests on publication bias, Egger's test (FAT), p-uniform, the test of excess significance (TES), as well as the caliper test, were evaluated in a Monte Carlo simulation. Two different types of publication bias and its degree (0%, 50%, 100%) were simulated. The type of publication bias was defined either as file-drawer, meaning the repeated analysis of new datasets, or p-hacking, meaning the inclusion of covariates in order to obtain a significant result. In addition, the underlying effect (β = 0, 0.5, 1, 1.5), effect heterogeneity, the number of observations in the simulated primary studies (N = 100, 500), and the number of observations for the publication bias tests (K = 100, 1,000) were varied. Results All tests evaluated were able to identify publication bias both in the file-drawer and p-hacking condition. The false positive rates were, with the exception of the 15%- and 20%-caliper test, unbiased. The FAT had the largest statistical power in the file-drawer conditions, whereas under p-hacking the TES was, except under effect heterogeneity, slightly better. The CTs were, however, inferior to the other tests under effect homogeneity and had a decent statistical power only in conditions with 1,000 primary studies. Discussion The FAT is recommended as a test for publication bias in standard meta-analyses with no or only small effect heterogeneity. If two-sided publication bias is suspected as well as under p-hacking the TES is the first alternative to the FAT. The 5%-caliper test is recommended under conditions of effect heterogeneity and a large number of primary studies, which may be found if publication bias is examined in a
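
    As a hedged sketch of the first of these tests, Egger's regression (the FAT) regresses study z-values on precision and inspects the intercept; the data below are simulated without bias, not drawn from the article's Monte Carlo design:

```python
# Egger's regression test (FAT): a non-zero intercept in the regression of
# z-values on precision signals small-study effects / funnel asymmetry.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
k = 100                                   # number of primary studies
n = rng.integers(20, 500, k)              # study sizes
se = 1 / np.sqrt(n)                       # standard errors of the estimates
effect = 0.2 + rng.normal(scale=se)       # true effect 0.2, no bias injected

X = sm.add_constant(1 / se)               # regress z = effect/se on precision
fat = sm.OLS(effect / se, X).fit()
intercept, p = fat.params[0], fat.pvalues[0]
print(f"FAT intercept = {intercept:.2f} (p = {p:.3f})")  # ~0 without bias
```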

  16. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g., Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  17. Statistics

    International Nuclear Information System (INIS)

    1999-01-01

    For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  18. Statistics

    International Nuclear Information System (INIS)

    2001-01-01

    For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  19. Statistics

    International Nuclear Information System (INIS)

    2003-01-01

    For the year 2002, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity, GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees on energy products

  20. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g., Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  1. Statistics

    International Nuclear Information System (INIS)

    2004-01-01

    For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity, GWh, Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees

  2. Testing for changes using permutations of U-statistics

    Czech Academy of Sciences Publication Activity Database

    Horvath, L.; Hušková, Marie

    2005-01-01

    Vol. 2005, No. 128 (2005), pp. 351-371 ISSN 0378-3758 R&D Projects: GA ČR GA201/00/0769 Institutional research plan: CEZ:AV0Z10750506 Keywords: U-statistics * permutations * change-point * weighted approximation * Brownian bridge Subject RIV: BD - Theory of Information Impact factor: 0.481, year: 2005

  3. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Science.gov (United States)

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  4. Statistics of sampling for microbiological testing of foodborne pathogens

    Science.gov (United States)

    Despite the many recent advances in protocols for testing for pathogens in foods, a number of challenges still exist. For example, the microbiological safety of food cannot be completely ensured by testing because microorganisms are not evenly distributed throughout the food. Therefore, since it i...

  5. Distinguish Dynamic Basic Blocks by Structural Statistical Testing

    DEFF Research Database (Denmark)

    Petit, Matthieu; Gotlieb, Arnaud

    ... selection of test data over the subdomain associated with these paths. Baudry et al. present a testing-for-diagnosis method where the essential notion of Dynamic Basic Block was identified as strongly correlated with the effectiveness of fault-localization techniques. We show that generating a sequence

  6. Statistical tests for equal predictive ability across multiple forecasting methods

    DEFF Research Database (Denmark)

    Borup, Daniel; Thyrsgaard, Martin

    We develop a multivariate generalization of the Giacomini-White tests for equal conditional predictive ability. The tests are applicable to a mixture of nested and non-nested models, incorporate estimation uncertainty explicitly, and allow for misspecification of the forecasting model as well as ...

  7. [Mastered with statistics: perfect eye drops and ideal screening test : Possibilities and limits of statistical methods for glaucoma].

    Science.gov (United States)

    Kotliar, K E; Lanzl, I M

    2016-10-01

    The use and understanding of statistics are very important for biomedical research and for clinical practice. This is particularly true for estimating the possibilities of different diagnostic and therapy options in the field of glaucoma. The apparent complexity and counterintuitiveness of statistics, along with their cautious acceptance by many physicians, might be the cause of conscious and unconscious manipulation in data representation and interpretation. The aim is a comprehensible clarification of some typical errors in the handling of medical statistical data. Using two hypothetical examples from glaucoma diagnostics, the presentation of the effect of a hypotensive drug and the interpretation of the results of a diagnostic test, typical statistical applications and sources of error are analyzed in detail and discussed. Mechanisms of data manipulation and incorrect data interpretation are elucidated, and typical sources of error in statistical analysis and data presentation are explained. The practical examples analyzed demonstrate the need to understand the basics of statistics and to be able to apply them correctly. A lack of basic knowledge or half-knowledge in medical statistics can lead to misunderstandings, confusion and wrong decisions in medical research and also in clinical practice.

  8. Evaluating clinical significance: incorporating robust statistics with normative comparison tests.

    Science.gov (United States)

    van Wieringen, Katrina; Cribbie, Robert A

    2014-05-01

    The purpose of this study was to evaluate a modified test of equivalence for conducting normative comparisons when distribution shapes are non-normal and variances are unequal. A Monte Carlo study was used to compare the empirical Type I error rates and power of the proposed Schuirmann-Yuen test of equivalence, which utilizes trimmed means, with that of the previously recommended Schuirmann and Schuirmann-Welch tests of equivalence when the assumptions of normality and variance homogeneity are satisfied, as well as when they are not satisfied. The empirical Type I error rates of the Schuirmann-Yuen were much closer to the nominal α level than those of the Schuirmann or Schuirmann-Welch tests, and the power of the Schuirmann-Yuen was substantially greater than that of the Schuirmann or Schuirmann-Welch tests when distributions were skewed or outliers were present. The Schuirmann-Yuen test is recommended for assessing clinical significance with normative comparisons. © 2013 The British Psychological Society.
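
    For reference, here is a brief sketch of the classical Schuirmann TOST that the Schuirmann-Yuen procedure robustifies (the Yuen variant substitutes trimmed means and Winsorized variances, not shown here); all data and bounds are hypothetical:

```python
# Schuirmann's two one-sided tests (TOST) for a normative comparison:
# conclude equivalence only if both one-sided nulls are rejected.
import numpy as np
from scipy import stats

def schuirmann_tost(x, norm_mean, low, high):
    """Test whether mean(x) - norm_mean lies within (low, high)."""
    n = len(x)
    se = x.std(ddof=1) / np.sqrt(n)
    d = x.mean() - norm_mean
    p_lower = stats.t.sf((d - low) / se, n - 1)    # H0: d <= low
    p_upper = stats.t.cdf((d - high) / se, n - 1)  # H0: d >= high
    return max(p_lower, p_upper)                   # overall TOST p-value

rng = np.random.default_rng(5)
clients = rng.normal(50.5, 10, 80)    # hypothetical post-treatment scores
p = schuirmann_tost(clients, norm_mean=50, low=-5, high=5)
print(f"TOST p = {p:.4f}")            # small p -> within the normative range
```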

  9. Statistical power of likelihood ratio and Wald tests in latent class models with covariates.

    Science.gov (United States)

    Gudicha, Dereje W; Schmittmann, Verena D; Vermunt, Jeroen K

    2017-10-01

    This paper discusses power and sample-size computation for likelihood ratio and Wald testing of the significance of covariate effects in latent class models. For both tests, asymptotic distributions can be used; that is, the test statistic can be assumed to follow a central Chi-square under the null hypothesis and a non-central Chi-square under the alternative hypothesis. Power or sample-size computation using these asymptotic distributions requires specification of the non-centrality parameter, which in practice is rarely known. We show how to calculate this non-centrality parameter using a large simulated data set from the model under the alternative hypothesis. A simulation study is conducted evaluating the adequacy of the proposed power analysis methods, determining the key study design factor affecting the power level, and comparing the performance of the likelihood ratio and Wald test. The proposed power analysis methods turn out to perform very well for a broad range of conditions. Moreover, apart from effect size and sample size, an important factor affecting the power is the class separation, implying that when class separation is low, rather large sample sizes are needed to achieve a reasonable power level.
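
    The core power calculation can be sketched in a few lines: with a central chi-square null and a non-central chi-square alternative, power follows directly once the non-centrality parameter is specified (the value below is an assumption; the paper estimates it from a large simulated data set):

```python
# Power of a chi-square test given an assumed non-centrality parameter.
from scipy.stats import chi2, ncx2

df, alpha, lam = 2, 0.05, 10.0
crit = chi2.ppf(1 - alpha, df)   # critical value under the central null
power = ncx2.sf(crit, df, lam)   # P(reject | non-central alternative)
print(f"critical value = {crit:.2f}, power = {power:.2f}")
```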

  10. Statistical Analysis of Geo-electric Imaging and Geotechnical Test ...

    Indian Academy of Sciences (India)

    assessment of influencing factors is important. Here, we present multiple regression analyses of both geoelectric (Electrical Resistivity Tomography, ERT; Induced Polarization Imaging, IPI) and geotechnical (Standard Penetration Test, SPT) site investigations for two profiles at a construction site for the CGEWHO Complex.

  11. Statistical Tests for Frequency Distribution of Mean Gravity Anomalies

    African Journals Online (AJOL)

    The hypothesis that a very large number of 1° x 1° mean gravity anomalies are normally distributed has been rejected at the 5% significance level based on the χ² and the unit normal deviate tests. However, the 50 equal-area mean anomalies derived from the 1° x 1° data have been found to be normally distributed at the same ...

  12. statistical tests for frequency distribution of mean gravity anomalies

    African Journals Online (AJOL)

    ES Obe

    1980-03-01

    The hypothesis that a very large number of 1° x 1° mean gravity anomalies are normally distributed has been rejected at the 5% significance level based on the χ² and the unit normal deviate tests. However, the 50 equal-area mean anomalies derived from the 1° x 1° data have been found to be ...

  13. Statistical Analysis of Geo-electric Imaging and Geotechnical Test ...

    Indian Academy of Sciences (India)

    ... undrained shear strength, there are other soil parameters (plasticity index, pore pressure coefficient, over-consolidation ratio) that have an impact on Static Cone Penetration Test (SCPT) measurements (Remai, 2013). The quality of groundwater influences resistivity, while it has no effect on the SPT 'N' value. Pidlisecky et al. 2006 have ...

  14. Approximations to the distribution of a test statistic in covariance structure analysis: A comprehensive study.

    Science.gov (United States)

    Wu, Hao

    2018-05-01

    In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ² distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ² distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.
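
    For readers unfamiliar with the scaling corrections being compared, a minimal sketch of a mean-scaled (Satorra-Bentler-type) adjustment is given below; it assumes the residual weight matrix U and the asymptotic covariance matrix Gamma of the sample moments are already available from the SEM fit, and is illustrative only.

```python
# Mean-scaled (Satorra-Bentler-type) correction to a chi-square test
# statistic T; U and Gamma are assumed to come from the fitted SEM.
import numpy as np
from scipy.stats import chi2

def scaled_chi2_pvalue(T, U, Gamma, df):
    c = np.trace(U @ Gamma) / df       # estimated scaling factor
    return T / c, chi2.sf(T / c, df)   # scaled statistic and its p-value
```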

  15. Testing the performance of a blind burst statistic

    Energy Technology Data Exchange (ETDEWEB)

    Vicere, A [Istituto di Fisica, Universita di Urbino (Italy); Calamai, G [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Campagna, E [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Conforto, G [Istituto di Fisica, Universita di Urbino (Italy); Cuoco, E [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Dominici, P [Istituto di Fisica, Universita di Urbino (Italy); Fiori, I [Istituto di Fisica, Universita di Urbino (Italy); Guidi, G M [Istituto di Fisica, Universita di Urbino (Italy); Losurdo, G [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Martelli, F [Istituto di Fisica, Universita di Urbino (Italy); Mazzoni, M [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Perniola, B [Istituto di Fisica, Universita di Urbino (Italy); Stanga, R [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Vetrano, F [Istituto di Fisica, Universita di Urbino (Italy)

    2003-09-07

    In this work, we estimate the performance of a method for the detection of burst events in the data produced by interferometric gravitational wave detectors. We compute the receiver operating characteristics in the specific case of a simulated noise having the spectral density expected for Virgo, using test signals taken from a library of possible waveforms emitted during the collapse of the core of type II supernovae.

  16. Development and performances of a high statistics PMT test facility

    Directory of Open Access Journals (Sweden)

    Mollo Carlos Maximiliano

    2016-01-01

    Full Text Available For almost a century, photomultipliers have been the main sensors for photon detection in nuclear and astroparticle physics experiments. In recent years, the search for cosmic neutrinos gave birth to enormous experiments (Antares, Kamiokande, Super-Kamiokande, etc.) and even kilometre-scale experiments such as IceCube and the future KM3NeT. A very large volume neutrino telescope like KM3NeT requires several hundred thousand photomultipliers, and the performance of the telescope strictly depends on the performance of each PMT, so it is mandatory to measure the characteristics of each single sensor. The characterization of a PMT normally requires more than 8 hours, mostly due to the darkening step, which means that it is not feasible to measure the parameters of every PMT of a neutrino telescope without a system able to test more than one PMT simultaneously. For this application, we have designed, developed and realized a system able to measure the main characteristics of 62 3-inch photomultipliers simultaneously, making two measurement sessions per day possible. In this work, we describe the design constraints and how they have been satisfied. Finally, we show the performance of the system and the first results from the testing of a few thousand PMTs.

  17. A simple and robust statistical test for detecting the presence of recombination.

    Science.gov (United States)

    Bruen, Trevor C; Philippe, Hervé; Bryant, David

    2006-04-01

    Recombination is a powerful evolutionary force that merges historically distinct genotypes. But the extent of recombination within many organisms is unknown, and even determining its presence within a set of homologous sequences is a difficult question. Here we develop a new statistic, phi(w), that can be used to test for recombination. We show through simulation that our test can discriminate effectively between the presence and absence of recombination, even in diverse situations such as exponential growth (star-like topologies) and patterns of substitution rate correlation. A number of other tests, Max χ², NSS, a coalescent-based likelihood permutation test (from LDHat), and correlation of linkage disequilibrium (both r² and |D'|) with distance, all tend to underestimate the presence of recombination under strong population growth. Moreover, both Max χ² and NSS falsely infer the presence of recombination under a simple model of mutation rate correlation. Results on empirical data show that our test can be used to detect recombination between closely as well as distantly related samples, regardless of the suspected rate of recombination. The results suggest that phi(w) is one of the best approaches to distinguish recurrent mutation from recombination in a wide variety of circumstances.
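
    The published statistic uses a refined incompatibility score; the sketch below substitutes the plain four-gamete test and a permutation null, so it is an illustration of the idea rather than the authors' phi(w).

```python
# Simplified stand-in for the phi(w) idea: score incompatibility between
# nearby pairs of binary sites and compare against permuted site orders.
import itertools
import numpy as np

def incompatible(a, b):
    """Four-gamete test for two binary site columns."""
    return len(set(zip(a, b))) == 4

def phi_like(sites, w=5):
    """Mean incompatibility over all site pairs at distance <= w."""
    pairs = [(i, j) for i, j in itertools.combinations(range(len(sites)), 2)
             if j - i <= w]
    return np.mean([incompatible(sites[i], sites[j]) for i, j in pairs])

def permutation_pvalue(sites, w=5, n_perm=999, seed=0):
    """Recombination decays with distance, so the observed statistic should
    be *small* relative to random site orders when recombination is present."""
    rng = np.random.default_rng(seed)
    obs = phi_like(sites, w)
    perms = [phi_like([sites[k] for k in rng.permutation(len(sites))], w)
             for _ in range(n_perm)]
    return obs, (1 + sum(s <= obs for s in perms)) / (n_perm + 1)
```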

  18. Bootstrap testing for cross-correlation under low firing activity.

    Science.gov (United States)

    González-Montoro, Aldana M; Cao, Ricardo; Espinosa, Nelson; Cudeiro, Javier; Mariño, Jorge

    2015-06-01

    A new cross-correlation synchrony index for neural activity is proposed. The index is based on the integration of the kernel estimation of the cross-correlation function. It is used to test for the dynamic synchronization levels of spontaneous neural activity under two induced brain states: sleep-like and awake-like. Two bootstrap resampling plans are proposed to approximate the distribution of the test statistics. The results of the first bootstrap method indicate that it is useful to discern significant differences in the synchronization dynamics of brain states characterized by a neural activity with low firing rate. The second bootstrap method is useful to unveil subtle differences in the synchronization levels of the awake-like state, depending on the activation pathway.
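
    A rough sketch of such an index: integrate a kernel-smoothed cross-correlogram of two binned spike trains, with a circular-shift surrogate standing in for the paper's bootstrap resampling plans. Bin width, kernel bandwidth and the lag window are guesses, not the authors' choices.

```python
# Cross-correlation synchrony index for two binned spike trains, tested
# against circular-shift surrogates (a stand-in for the paper's bootstrap).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def synchrony_index(x, y, max_lag=20, sigma=2.0):
    """Integrate a kernel-smoothed cross-correlogram over +/- max_lag bins."""
    x = x - x.mean(); y = y - y.mean()
    lags = range(-max_lag, max_lag + 1)
    cc = np.array([np.dot(np.roll(x, k), y) for k in lags]) / (len(x) * x.std() * y.std())
    return gaussian_filter1d(cc, sigma).sum()

def shift_test(x, y, n_surr=500, seed=0):
    rng = np.random.default_rng(seed)
    obs = synchrony_index(x, y)
    surr = [synchrony_index(np.roll(x, rng.integers(1, len(x))), y)
            for _ in range(n_surr)]
    p = (1 + sum(s >= obs for s in surr)) / (n_surr + 1)
    return obs, p
```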

  19. Caracterização estatística de variáveis usadas para ensaiar uma semeadora-adubadora em semeadura direta e convencional = Statistical characterization of variables used to test a planter under direct and conventional sowing systems

    Directory of Open Access Journals (Sweden)

    Geraldo do Amaral Gravina

    2009-10-01

    Full Text Available The objective of this work was to statistically characterize the variables wheel slippage, distance travelled per plot, worked plot area, and theoretical and effective field capacity of a planter in direct sowing (DS) and conventional sowing (CS) systems, based on verifying the fit of a series of data to a statistical distribution, in order to indicate the best form of representation and the values to be adopted so that these variables can be used in agricultural practice operations. The CS experiment was carried out at a speed of 1.5 m s-1, with 190 repetitions, during maize sowing on a soil classified as a Cambisol; the DS experiment at 1.8 m s-1, with 58 repetitions, during sorghum sowing on a soil classified as a Red-Yellow Latosol. It was concluded that no outlying values were detected and that the variables under study can be represented by the normal probability density function (Gaussian distribution), whose parameters can be used for their representation.

  20. Monte Carlo testing in spatial statistics, with applications to spatial residuals

    DEFF Research Database (Denmark)

    Mrkvička, Tomáš; Soubeyrand, Samuel; Myllymäki, Mari

    2016-01-01

    This paper reviews recent advances in testing in spatial statistics, discussed at the Spatial Statistics conference in Avignon 2015. The rank and directional quantile envelope tests are discussed and practical rules for their use are provided. These tests are global envelope tests with an appropriate type I error probability. Two novel examples are given of their usage. First, in addition to the test based on a classical one-dimensional summary function, the goodness-of-fit of a point process model is evaluated by means of a test based on a higher dimensional functional statistic, namely ...
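
    The extreme-rank variant of a global envelope test can be stated compactly: rank the observed summary function pointwise against the simulated ones and take the most extreme pointwise rank as each curve's global rank. A sketch (ties are broken arbitrarily here, which the published tests handle more carefully):

```python
# Extreme-rank global envelope test: how extreme is the observed summary
# function relative to functions simulated under the null model?
import numpy as np

def extreme_rank_test(obs, sims):
    """obs: (m,) observed summary function; sims: (s, m) simulated ones."""
    funcs = np.vstack([obs, sims])                   # row 0 is the observed curve
    n = len(funcs)
    lo = funcs.argsort(axis=0).argsort(axis=0) + 1   # pointwise rank from below
    hi = n + 1 - lo                                  # pointwise rank from above
    depth = np.minimum(lo, hi).min(axis=1)           # extreme rank of each curve
    return np.mean(depth <= depth[0])                # Monte Carlo p-value
```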

  1. Cointegration rank testing under conditional heteroskedasticity

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Rahbek, Anders Christian; Taylor, Robert M.

    2010-01-01

    (martingale difference) innovations. We first demonstrate that the limiting null distributions of the rank statistics coincide with those derived by previous authors who assume either independent and identically distributed (i.i.d.) or (strict and covariance) stationary martingale difference innovations. We...

  2. STATISTIC TESTS AIDED MULTI-SOURCE DEM FUSION

    Directory of Open Access Journals (Sweden)

    C. Y. Fu

    2016-06-01

    Full Text Available Since the land surface changes, naturally or through human activity, DEMs have to be updated continually so that applications can use the latest DEM. However, the cost of wide-area DEM production is high. DEMs that cover the same area but differ in quality, grid size, generation time or production method are called multi-source DEMs, and fusing them offers a low-cost route to DEM updating. The coverage of the DEM first has to be classified according to slope and visibility, because the precision of DEM grid points differs across areas with different slopes and visibilities. Next, a difference DEM (dDEM) is computed by subtracting the two DEMs. It is assumed that the dDEM, which should contain only random error, follows a normal distribution, so a Student's t-test is applied for blunder detection, yielding three kinds of rejected grid points. The first kind consists of blunders, which have to be eliminated. The second kind lies in change areas, where the latest data are taken as the fusion result. The third kind, grid points flagged by type I error, are correct data and have to be retained for fusion. The experimental results show that applying terrain classification to the DEMs yields better blunder detection, and a proper setting of the significance level (α) can detect real blunders without creating too many type I errors. Weighted averaging is chosen as the DEM fusion algorithm, with weights defined from the a priori precisions estimated by our national DEM production guideline. Fisher's test is implemented to verify that the a priori precisions correspond to the RMSEs of the blunder detection result.
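
    The pipeline described above condenses to a few lines: screen the difference DEM with a Student-type test, then fuse the surviving grid points by precision-weighted averaging. In this sketch the latest DEM (dem2) stands in for rejected points, and the a priori standard errors are assumed known per terrain class; none of this is the authors' code.

```python
# Condensed DEM fusion: blunder screening on the difference DEM, then
# precision-weighted averaging; sigma1, sigma2 are assumed a priori errors.
import numpy as np
from scipy import stats

def fuse_dems(dem1, dem2, sigma1, sigma2, alpha=0.05):
    d = dem1 - dem2
    z = (d - np.nanmean(d)) / np.nanstd(d, ddof=1)
    crit = stats.t.ppf(1 - alpha / 2, df=d.size - 1)
    ok = np.abs(z) <= crit                    # screen blunders / change areas
    w1, w2 = 1 / sigma1**2, 1 / sigma2**2     # weights from a priori precisions
    fused = np.where(ok, (w1 * dem1 + w2 * dem2) / (w1 + w2), dem2)
    return fused, ok
```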

  3. A new efficient statistical test for detecting variability in the gene expression data.

    Science.gov (United States)

    Mathur, Sunil; Dolo, Samuel

    2008-08-01

    DNA microarray technology allows researchers to monitor the expression of thousands of genes under different conditions. The detection of differential gene expression under two different conditions is very important in microarray studies. Microarray experiments are multi-step procedures, and each step is a potential source of variance. This makes the measurement of variability difficult, because an approach based on gene-by-gene estimation of variance has few degrees of freedom. It is highly possible that the assumption of equal variance for all expression levels may not hold, and the assumption of normality of gene expressions may not hold either. Thus it is essential to have a statistical procedure that is not based on the normality assumption and that can detect genes with differential variance efficiently. The detection of differential gene expression variance will allow us to identify experimental variables that affect different biological processes and the accuracy of DNA microarray measurements. In this article, a new nonparametric test for scale is developed based on the arctangent of the ratio of two expression levels. Most of the tests available in the literature require the assumption of a normal distribution, which makes them inapplicable in many situations, and it is also hard to verify the suitability of the normal distribution assumption for a given data set. The proposed test does not require a distributional assumption for the underlying population, which makes it more practical and widely applicable. The asymptotic relative efficiency is calculated under different distributions, showing that the proposed test is very powerful when the assumption of normality breaks down. Monte Carlo simulation studies are performed to compare the power of the proposed test with some existing procedures, and the proposed test is found to be more powerful than commonly used tests under almost all the distributions considered in the study. A
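
    The abstract does not give the closed form of the arctangent statistic, so the sketch below runs a generic nonparametric scale comparison instead: scipy's Ansari-Bradley test plus a permutation test on a robust dispersion ratio. Both are stand-ins, not the proposed test.

```python
# Generic nonparametric scale comparisons standing in for the proposed
# arctangent-based test (whose exact form is not given in the abstract).
import numpy as np
from scipy import stats

def permutation_scale_test(x, y, n_perm=2000, seed=0):
    """Permutation test of equal scale using the log ratio of MADs."""
    rng = np.random.default_rng(seed)
    def stat(a, b):
        return abs(np.log(stats.median_abs_deviation(a) /
                          stats.median_abs_deviation(b)))
    obs = stat(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        count += stat(pooled[:len(x)], pooled[len(x):]) >= obs
    return obs, (count + 1) / (n_perm + 1)

x = np.random.default_rng(2).normal(0, 1, 50)
y = np.random.default_rng(3).normal(0, 2, 50)
print(stats.ansari(x, y))            # classical rank-based scale test
print(permutation_scale_test(x, y))  # permutation dispersion-ratio test
```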

  4. A Statistical Testing Approach for Quantifying Software Reliability; Application to an Example System

    Energy Technology Data Exchange (ETDEWEB)

    Chu, Tsong-Lun [Brookhaven National Lab. (BNL), Upton, NY (United States); Varuttamaseni, Athi [Brookhaven National Lab. (BNL), Upton, NY (United States); Baek, Joo-Seok [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2016-11-01

    The U.S. Nuclear Regulatory Commission (NRC) encourages the use of probabilistic risk assessment (PRA) technology in all regulatory matters, to the extent supported by the state-of-the-art in PRA methods and data. Although much has been accomplished in the area of risk-informed regulation, risk assessment for digital systems has not been fully developed. The NRC established a plan for research on digital systems to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems in the PRAs of nuclear power plants (NPPs), and (2) incorporating digital systems in the NRC's risk-informed licensing and oversight activities. Under NRC's sponsorship, Brookhaven National Laboratory (BNL) explored approaches for addressing the failures of digital instrumentation and control (I and C) systems in the current NPP PRA framework. Specific areas investigated included PRA modeling digital hardware, development of a philosophical basis for defining software failure, and identification of desirable attributes of quantitative software reliability methods. Based on the earlier research, statistical testing is considered a promising method for quantifying software reliability. This paper describes a statistical software testing approach for quantifying software reliability and applies it to the loop-operating control system (LOCS) of an experimental loop of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL).
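
    The statistics underlying statistical software testing can be illustrated with the classical zero-failure bound: if the software survives n tests drawn from its operational profile, (1 - p)^n = alpha yields an upper confidence bound on the per-demand failure probability p. This is textbook reasoning, not the report's actual procedure.

```python
# Zero-failure reliability bounds from n failure-free statistical tests.
import math

def failure_prob_ucb(n, alpha=0.05):
    """(1 - alpha) upper confidence bound after n failure-free demands."""
    return 1 - alpha ** (1 / n)

def tests_needed(p_target, alpha=0.05):
    """Failure-free demands needed to demonstrate p < p_target."""
    return math.ceil(math.log(alpha) / math.log(1 - p_target))

print(failure_prob_ucb(3000))   # ~1e-3 per-demand failure probability
print(tests_needed(1e-4))       # ~3e4 failure-free demands required
```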

  5. A Statistical Testing Approach for Quantifying Software Reliability; Application to an Example System

    International Nuclear Information System (INIS)

    Chu, Tsong-Lun; Varuttamaseni, Athi; Baek, Joo-Seok

    2016-01-01

    The U.S. Nuclear Regulatory Commission (NRC) encourages the use of probabilistic risk assessment (PRA) technology in all regulatory matters, to the extent supported by the state-of-the-art in PRA methods and data. Although much has been accomplished in the area of risk-informed regulation, risk assessment for digital systems has not been fully developed. The NRC established a plan for research on digital systems to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems in the PRAs of nuclear power plants (NPPs), and (2) incorporating digital systems in the NRC's risk-informed licensing and oversight activities. Under NRC's sponsorship, Brookhaven National Laboratory (BNL) explored approaches for addressing the failures of digital instrumentation and control (I and C) systems in the current NPP PRA framework. Specific areas investigated included PRA modeling digital hardware, development of a philosophical basis for defining software failure, and identification of desirable attributes of quantitative software reliability methods. Based on the earlier research, statistical testing is considered a promising method for quantifying software reliability. This paper describes a statistical software testing approach for quantifying software reliability and applies it to the loop-operating control system (LOCS) of an experimental loop of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL).

  6. Statistical modeling of urban air temperature distributions under different synoptic conditions

    Science.gov (United States)

    Beck, Christoph; Breitner, Susanne; Cyrys, Josef; Hald, Cornelius; Hartz, Uwe; Jacobeit, Jucundus; Richter, Katja; Schneider, Alexandra; Wolf, Kathrin

    2015-04-01

    Within urban areas, air temperature may vary distinctly between different locations. These intra-urban air temperature variations partly reach magnitudes that are relevant with respect to human thermal comfort. Therefore, and furthermore taking into account potential interrelations with other health-related environmental factors (e.g. air quality), it is important to estimate spatial patterns of intra-urban air temperature distributions that may be incorporated into urban planning processes. In this contribution we present an approach to estimate spatial temperature distributions in the urban area of Augsburg (Germany) by means of statistical modeling. At 36 locations in the urban area of Augsburg, air temperatures have been measured with high temporal resolution (4 min.) since December 2012. These 36 locations represent different typical urban land use characteristics in terms of varying percentage coverages of different land cover categories (e.g. impervious, built-up, vegetated). Percentage coverages of these land cover categories have been extracted from different sources (Open Street Map, European Urban Atlas, Urban Morphological Zones) for regular grids of varying size (50, 100, 200 meter horizontal resolution) for the urban area of Augsburg. It is well known from numerous studies that land use characteristics have a distinct influence on air temperature, as well as on other climatic variables, at a given location. Therefore, air temperatures at the 36 locations are modeled utilizing land use characteristics (percentage coverages of land cover categories) as predictor variables in stepwise multiple regression models and in Random Forest based model approaches. After model evaluation via cross-validation, appropriate statistical models are applied to gridded land use data to derive spatial urban air temperature distributions. Varying models are tested and applied for different seasons and times of the day, and also for different synoptic conditions (e.g. clear and calm
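
    A minimal sketch of the modelling step: land-cover percentage coverages as predictors of station air temperature, here with a random forest and cross-validation; the columns and data are placeholders, not the study's.

```python
# Land-cover fractions -> station air temperature, fitted with a random
# forest and scored by cross-validation; data are invented placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(36, 3))    # % impervious, built-up, vegetated
temp = 20 + 0.03 * X[:, 0] - 0.02 * X[:, 2] + rng.normal(0, 0.3, 36)

model = RandomForestRegressor(n_estimators=500, random_state=0)
print(cross_val_score(model, X, temp, cv=6, scoring="r2").mean())
model.fit(X, temp)                       # then predict on gridded land use
```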

  7. Exact p-values of Savage testing statistics. | Odiase | Journal of ...

    African Journals Online (AJOL)

    In recent years, the use of software for the calculation of statistical tests has become widespread. For many nonparametric tests, a number of statistical programs calculate significance levels based on algorithms appropriate for large samples only. In scientific experiments, small samples are common. This requires the use of ...

  8. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  9. Testing boron carbide under triaxial compression

    Science.gov (United States)

    Anderson, Charles; Chocron, Sidney; Dannemann, Kathryn A.; Nicholls, Arthur E.

    2012-03-01

    This article focuses on the pressure dependence and summarizes the characterization work conducted on intact and predamaged specimens of boron carbide under confinement in a pressure vessel and in a thick steel sleeve. The failure curves obtained are presented, and the data compared to experimental data from the literature.

  10. Stress-testing banks under deep uncertainty

    NARCIS (Netherlands)

    Islam, T.; Vasilopoulos, C.; Pruyt, E.

    2013-01-01

    Years of turmoil in the banking sector have revealed the need to assess bank performance under deep uncertainty and identify vulnerabilities to different types of risks. Banks are not the safe houses of old. Today, banks are highly uncertain dynamically complex systems that are permanently at risk

  11. Standard errors and confidence intervals of norm statistics for educational and psychological tests

    NARCIS (Netherlands)

    Oosterhuis, H.E.M.; van der Ark, L.A.; Sijtsma, K.

    2017-01-01

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one

  12. Standard errors and confidence intervals of norm statistics for educational and psychological tests

    NARCIS (Netherlands)

    Oosterhuis, H.E.M.; van der Ark, L.A.; Sijtsma, K.

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one

  13. Kerosene-Fuel Engine Testing Under Way

    Science.gov (United States)

    2003-01-01

    NASA Stennis Space Center engineers conducted a successful cold-flow test of an RS-84 engine component Sept. 24. The RS-84 is a reusable engine fueled by rocket propellant - a special blend of kerosene - designed to power future flight vehicles. Liquid oxygen was blown through the RS-84 subscale preburner to characterize the test facility's performance and the hardware's resistance. Engineers are now moving into the next phase, hot-fire testing, which is expected to continue into February 2004. The RS-84 engine prototype, developed by the Rocketdyne Propulsion and Power division of The Boeing Co. of Canoga Park, Calif., is one of two competing Rocket Engine Prototype technologies - a key element of NASA's Next Generation Launch Technology program.

  14. Statistical Redundancy Testing for Improved Gene Selection in Cancer Classification Using Microarray Data

    Directory of Open Access Journals (Sweden)

    J. Sunil Rao

    2007-01-01

    Full Text Available In gene selection for cancer classification using microarray data, we define an eigenvalue-ratio statistic to measure a gene's contribution to the joint discriminability when this gene is included in a set of genes. Based on this eigenvalue-ratio statistic, we define a novel hypothesis test for gene statistical redundancy and propose two gene selection methods. Simulation studies illustrate the agreement between statistical redundancy testing and the gene selection methods. Real data examples show that the proposed gene selection methods can select a compact gene subset which can not only be used to build high-quality cancer classifiers but also shows biological relevance.

  15. Transit Timing Observations from Kepler. VI. Potentially Interesting Candidate Systems from Fourier-based Statistical Tests

    OpenAIRE

    Steffen, Jason H.; Ford, Eric B.; Rowe, Jason F.; Fabrycky, Daniel C.; Holman, Matthew J.; Welsh, WIlliam F.; Batalha, Natalie M.; Borucki, William J.; Bryson, Steve; Caldwell, Douglas A.; Ciardi, David R.; Jenkins, Jon M.; Kjeldsen, Hans; Koch, David G.; Prša, Andrej

    2012-01-01

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through Quarter six (Q6) of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the sy...

  16. Surprise responses in the human brain demonstrate statistical learning under high concurrent cognitive demand

    Science.gov (United States)

    Garrido, Marta Isabel; Teng, Chee Leong James; Taylor, Jeremy Alexander; Rowe, Elise Genevieve; Mattingley, Jason Brett

    2016-06-01

    The ability to learn about regularities in the environment and to make predictions about future events is fundamental for adaptive behaviour. We have previously shown that people can implicitly encode statistical regularities and detect violations therein, as reflected in neuronal responses to unpredictable events that carry a unique prediction error signature. In the real world, however, learning about regularities will often occur in the context of competing cognitive demands. Here we asked whether learning of statistical regularities is modulated by concurrent cognitive load. We compared electroencephalographic metrics associated with responses to pure-tone sounds with frequencies sampled from narrow or wide Gaussian distributions. We showed that outliers evoked a larger response than those in the centre of the stimulus distribution (i.e., an effect of surprise) and that this difference was greater for physically identical outliers in the narrow than in the broad distribution. These results demonstrate an early neurophysiological marker of the brain's ability to implicitly encode complex statistical structure in the environment. Moreover, we manipulated concurrent cognitive load by having participants perform a visual working memory task while listening to these streams of sounds. We again observed greater prediction error responses in the narrower distribution under both low and high cognitive load. Furthermore, there was no reliable reduction in prediction error magnitude under high relative to low cognitive load. Our findings suggest that statistical learning is not a capacity-limited process, and that it proceeds automatically even when cognitive resources are taxed by concurrent demands.

  17. Distributional fold change test – a statistical approach for detecting differential expression in microarray experiments

    Directory of Open Access Journals (Sweden)

    Farztdinov Vadim

    2012-11-01

    Full Text Available Abstract Background: Because of the large volume of data and the intrinsic variation of data intensity observed in microarray experiments, different statistical methods have been used to systematically extract biological information and to quantify the associated uncertainty. The simplest method to identify differentially expressed genes is to evaluate the ratio of average intensities in two different conditions and consider all genes that differ by more than an arbitrary cut-off value to be differentially expressed. This filtering approach is not a statistical test, and there is no associated value that can indicate the level of confidence in the designation of genes as differentially expressed or not. At the same time, the fold change by itself provides valuable information, and it is important to find unambiguous ways of using this information in expression data treatment. Results: A new method of finding differentially expressed genes, called the distributional fold change (DFC) test, is introduced. The method is based on an analysis of the intensity distribution of all microarray probe sets mapped to a three-dimensional feature space composed of average expression level, average difference of gene expression and total variance. The proposed method allows one to rank each feature based on the signal-to-noise ratio and to ascertain for each feature the confidence level and power for being differentially expressed. The performance of the new method was evaluated using the total and partial area under receiver operating characteristic (ROC) curves and tested on 11 data sets from the Gene Omnibus Database with independently verified differentially expressed genes, and compared with the t-test and shrinkage t-test. Overall, the DFC test performed the best: on average it had higher sensitivity and partial AUC, and its advantage was most prominent in the low range of differentially expressed features, typical for formalin-fixed paraffin-embedded sample sets

  18. An integrated statistical and data-driven framework for supporting flood risk analysis under climate change

    Science.gov (United States)

    Lu, Y.; Qin, X. S.; Xie, Y. J.

    2016-02-01

    An integrated statistical and data-driven (ISD) framework was proposed for analyzing river flows and flood frequencies in the Duhe River Basin, China, under climate change. The proposed framework involved four major components: (i) a hybrid model based on ASD (Automated regression-based Statistical Downscaling tool) and KNN (K-nearest neighbor) was used for downscaling rainfall and CDEN (Conditional Density Estimate Network) was applied for downscaling minimum temperature and relative humidity from global circulation models (GCMs) to local weather stations; (ii) Bayesian neural network (BNN) was used for simulating monthly river flows based on projected weather information; (iii) KNN was applied for converting monthly flow to daily time series; (iv) Generalized Extreme Value (GEV) distribution was adopted for flood frequency analysis. In this study, the variables from CGCM3 A2 and HadCM3 A2 scenarios were employed as the large-scale predictors. The results indicated that the maximum monthly and annual runoffs would both increase under CGCM3 and HadCM3 A2 emission scenarios at the middle and end of this century. The flood risk in the study area would generally increase with a widening uncertainty range. Compared with traditional approaches, the proposed framework takes the full advantages of a series of statistical and data-driven methods and offers a parsimonious way of projecting flood risks under climatic change conditions.
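
    The final step of the framework, GEV-based flood frequency analysis, is easy to sketch with scipy; the annual maxima below are invented for illustration and are not the Duhe River data.

```python
# Fit a GEV distribution to annual maximum flows and read off return levels.
import numpy as np
from scipy.stats import genextreme

annual_max = np.array([820, 1130, 950, 1410, 760, 1680, 990, 1250,
                       880, 1520, 1040, 1370, 910, 1190, 1450])  # m^3/s, made up
shape, loc, scale = genextreme.fit(annual_max)

for T in (10, 50, 100):                      # return periods in years
    q = genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T}-year flood: {q:.0f} m^3/s")
```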

  19. Statistical inferences under the Null hypothesis: Common mistakes and pitfalls in neuroimaging studies.

    Directory of Open Access Journals (Sweden)

    Jean-Michel eHupé

    2015-02-01

    Full Text Available Published studies using functional and structural MRI include many errors in the way data are analyzed and conclusions are reported. This was observed while working on a comprehensive review of the neural bases of synesthesia, but these errors are probably endemic to neuroimaging studies. All studies reviewed had based their conclusions on Null Hypothesis Significance Tests (NHST). NHST has been criticized since its inception because it is more appropriate for making decisions about a null hypothesis (as in manufacturing) than for making inferences about behavioural and neuronal processes. Here I focus on a few key problems of NHST related to brain imaging techniques, and explain why or when we should not rely on significance tests. I also observed that the ill-posed logic of NHST was often not even correctly applied, and I describe what I identified as common mistakes, or at least problematic practices, in published papers, in light of what could be considered the very basics of statistical inference. MRI statistics also involve much more complex issues than standard statistical inference. Analysis pipelines vary greatly between studies, even among those using the same software, and there is no consensus on which pipeline is best. I propose a synthetic view of the logic behind the possible methodological choices, and warn against the usage and interpretation of two statistical methods popular in brain imaging studies, the false discovery rate (FDR) procedure and permutation tests. I suggest that current models for the analysis of brain imaging data suffer from serious limitations and call for a revision taking into account the 'new statistics' (confidence intervals) logic.
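
    For concreteness, the FDR procedure referred to above is, mechanically, usually the Benjamini-Hochberg step-up rule; a minimal implementation (assuming independent or positively dependent tests):

```python
# Benjamini-Hochberg step-up rule for controlling the false discovery rate.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True          # reject the k smallest p-values
    return rejected
```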

  20. Mnemonic Aids during Tests: Worthless Frivolity or Effective Tool in Statistics Education?

    Science.gov (United States)

    Larwin, Karen H.; Larwin, David A.; Gorman, Jennifer

    2012-01-01

    Researchers have explored many pedagogical approaches in an effort to assist students in finding understanding and comfort in required statistics courses. This study investigates the impact of mnemonic aids used during tests on students' statistics course performance in particular. In addition, the present study explores several hypotheses that…

  1. Statistical Characterization of the Mechanical Parameters of Intact Rock Under Triaxial Compression: An Experimental Proof of the Jinping Marble

    Science.gov (United States)

    Jiang, Quan; Zhong, Shan; Cui, Jie; Feng, Xia-Ting; Song, Leibo

    2016-12-01

    We investigated the statistical characteristics and probability distributions of the mechanical parameters of natural rock using triaxial compression tests. Twenty cores of Jinping marble were tested at each of five levels of confining stress (5, 10, 20, 30, and 40 MPa). From these full stress-strain data, we summarized the numerical characteristics and determined the probability distribution form of several important mechanical parameters, including deformational parameters, characteristic strength, characteristic strains, and failure angle. The statistical results presented new information about the marble's probabilistic distribution characteristics: the normal and log-normal distributions were appropriate for describing random strengths of rock; the coefficients of variation of the peak strengths had no relationship to the confining stress; the only acceptable random distribution for both Young's elastic modulus and Poisson's ratio was the log-normal function; and the cohesive strength had a different probability distribution pattern than the frictional angle. The triaxial tests and statistical analysis also provided experimental evidence for deciding the minimum reliable number of experimental samples and for picking appropriate parameter distributions to use in reliability calculations for rock engineering.
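
    The kind of distribution check reported above can be sketched as follows: fit candidate normal and log-normal models to repeated strength measurements and compare goodness of fit. The strengths below are made up, not the Jinping data.

```python
# Fit candidate distributions to repeated strength measurements and compare
# Kolmogorov-Smirnov goodness of fit; also report the coefficient of variation.
import numpy as np
from scipy import stats

peak_strength = np.array([118, 124, 131, 127, 122, 135, 129, 120,
                          126, 133, 125, 130, 123, 128, 132])  # MPa, invented

for name, dist in [("normal", stats.norm), ("log-normal", stats.lognorm)]:
    params = dist.fit(peak_strength)
    ks = stats.kstest(peak_strength, dist.cdf, args=params)
    print(f"{name}: KS statistic {ks.statistic:.3f}, p = {ks.pvalue:.2f}")
print("CV:", np.std(peak_strength, ddof=1) / np.mean(peak_strength))
```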

  2. Selecting the most appropriate inferential statistical test for your quantitative research study.

    Science.gov (United States)

    Bettany-Saltikov, Josette; Whittaker, Victoria Jane

    2014-06-01

    To discuss the issues and processes relating to the selection of the most appropriate statistical test. A review of the basic research concepts together with a number of clinical scenarios is used to illustrate this. Quantitative nursing research generally features the use of empirical data which necessitates the selection of both descriptive and statistical tests. Different types of research questions can be answered by different types of research designs, which in turn need to be matched to a specific statistical test(s). Discursive paper. This paper discusses the issues relating to the selection of the most appropriate statistical test and makes some recommendations as to how these might be dealt with. When conducting empirical quantitative studies, a number of key issues need to be considered. Considerations for selecting the most appropriate statistical tests are discussed and flow charts provided to facilitate this process. When nursing clinicians and researchers conduct quantitative research studies, it is crucial that the most appropriate statistical test is selected to enable valid conclusions to be made. © 2013 John Wiley & Sons Ltd.

  3. A testing procedure for wind turbine generators based on the power grid statistical model

    DEFF Research Database (Denmark)

    Farajzadehbibalan, Saber; Ramezani, Mohammad Hossein; Nielsen, Peter

    2017-01-01

    In this study, a comprehensive test procedure is developed to test wind turbine generators with a hardware-in-loop setup. The procedure employs the statistical model of the power grid considering the restrictions of the test facility and system dynamics. Given the model in the latent space, the j...

  4. A General Class of Test Statistics for Van Valen's Red Queen Hypothesis.

    Science.gov (United States)

    Wiltshire, Jelani; Huffer, Fred W; Parker, William C

    2014-09-01

    Van Valen's Red Queen hypothesis states that within a homogeneous taxonomic group the age is statistically independent of the rate of extinction. The case of the Red Queen hypothesis being addressed here is when the homogeneous taxonomic group is a group of similar species. Since Van Valen's work, various statistical approaches have been used to address the relationship between taxon age and the rate of extinction. We propose a general class of test statistics that can be used to test for the effect of age on the rate of extinction. These test statistics allow for a varying background rate of extinction and attempt to remove the effects of other covariates when assessing the effect of age on extinction. No model is assumed for the covariate effects. Instead we control for covariate effects by pairing or grouping together similar species. Simulations are used to compare the power of the statistics. We apply the test statistics to data on Foram extinctions and find that age has a positive effect on the rate of extinction. A derivation of the null distribution of one of the test statistics is provided in the supplementary material.

  5. A Modified Jonckheere Test Statistic for Ordered Alternatives in Repeated Measures Design

    Directory of Open Access Journals (Sweden)

    Hatice Tül Kübra AKDUR

    2016-09-01

    Full Text Available In this article, a new test based on the Jonckheere test [1] for randomized blocks with dependent observations within blocks is presented. A weighted sum for each block statistic, rather than the unweighted sum proposed by Jonckheere, is included. For Jonckheere-type statistics, the main assumption is independence of observations within blocks; in a repeated measures design, this assumption is violated. The weighted Jonckheere-type statistic is used under within-block dependence, for different variance-covariance structures and for ordered alternative hypotheses across the blocks of the design. The proposed statistic is also compared to the existing Jonckheere-based test in terms of Type I error rates by Monte Carlo simulation. For strong correlations, the circular bootstrap version of the proposed Jonckheere test provides lower Type I error rates.
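
    For reference, the classical (unweighted) Jonckheere-Terpstra statistic that the modified test builds on counts, over all ordered group pairs, how often a value from a later group exceeds one from an earlier group; a sketch with the usual no-ties normal approximation:

```python
# Classical Jonckheere-Terpstra test for an ordered alternative.
import itertools
import numpy as np

def jonckheere(groups):
    """groups: list of 1-D arrays, given in the hypothesized increasing order."""
    J = sum(np.sum(np.subtract.outer(b, a) > 0)
            for a, b in itertools.combinations(groups, 2))
    # null mean and variance of J under the no-ties normal approximation
    n = [len(g) for g in groups]; N = sum(n)
    mean = (N * N - sum(k * k for k in n)) / 4
    var = (N * N * (2 * N + 3) - sum(k * k * (2 * k + 3) for k in n)) / 72
    z = (J - mean) / np.sqrt(var)
    return J, z
```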

  6. The Relationship between Test Anxiety and Academic Performance of Students in Vital Statistics Course

    Directory of Open Access Journals (Sweden)

    Shirin Iranfar

    2013-12-01

    Full Text Available Introduction: Test anxiety is a common phenomenon among students and is one of the problems of the educational system. The present study was conducted to investigate test anxiety in the vital statistics course and its association with the academic performance of students at Kermanshah University of Medical Sciences. This descriptive-analytical study included the students of the nursing and midwifery, paramedicine and health faculties who had taken the vital statistics course; they were selected through the census method. The Sarason questionnaire was used to measure test anxiety, and data were analyzed by descriptive and inferential statistics. The findings indicated no significant correlation between test anxiety and the vital statistics course score.

  7. Testing the Mean of a Normal Population Under Dependence

    NARCIS (Netherlands)

    Albers, Willem/Wim

    1978-01-01

    Modifications of the $t$-test are considered which are robust under certain violations of the independence assumption. The additional number of observations these modified tests require in order to obtain under independence the same power as the $t$-test, is obtained asymptotically.

  8. Multiple statistical tests: Lessons from a d20 [version 2; referees: 3 approved

    Directory of Open Access Journals (Sweden)

    Christopher R. Madan

    2016-09-01

    Full Text Available Statistical analyses are often conducted with α = .05. When multiple statistical tests are conducted, this procedure needs to be adjusted to compensate for the otherwise inflated Type I error. In tabletop gaming, it is sometimes desired to roll a 20-sided die (or 'd20') twice and take the greater outcome. Here I draw from probability theory and the case of a d20, where the probability of obtaining any specific outcome is 1/20, to determine the probability of obtaining a specific outcome (a Type I error) at least once across repeated, independent statistical tests.
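
    The arithmetic behind the d20 analogy is one line: with k independent tests at level alpha, the chance of at least one Type I error is 1 - (1 - alpha)^k.

```python
# Probability of at least one Type I error across k independent tests at
# level alpha: 1 - (1 - alpha)^k. With a d20, "roll twice, take the greater"
# hits a given face with the same inflated probability.
alpha, k = 1 / 20, 2
print(1 - (1 - alpha) ** k)   # 0.0975 for two d20 rolls
print(1 - (1 - 0.05) ** 5)    # ~0.226 across five tests at alpha = .05
```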

  9. Statistical methodology for predicting the life of lithium-ion cells via accelerated degradation testing

    Science.gov (United States)

    Thomas, E. V.; Bloom, I.; Christophersen, J. P.; Battaglia, V. S.

    Statistical models based on data from accelerated aging experiments are used to predict cell life. In this article, we discuss a methodology for estimating the mean cell life with uncertainty bounds that uses both a degradation model (reflecting average cell performance) and an error model (reflecting the measured cell-to-cell variability in performance). Specific forms for the degradation and error models are presented and illustrated with experimental data that were acquired from calendar-life testing of high-power lithium-ion cells as part of the U.S. Department of Energy's (DOE's) Advanced Technology Development program. Monte Carlo simulations, based on the developed models, are used to assess lack-of-fit and develop uncertainty limits for the average cell life. In addition, we discuss the issue of assessing the applicability of degradation models (based on data acquired from cells aged under static conditions) to the degradation of cells aged under more realistic dynamic conditions (e.g., varying temperature).
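
    An illustrative sketch of the methodology: a square-root-of-time degradation model for average performance fade plus a cell-to-cell error term, with Monte Carlo used to bound the time at which fade crosses an end-of-life threshold. The model form and all numbers are assumptions, not the program's models.

```python
# Monte Carlo life prediction from a sqrt-time fade model with
# cell-to-cell variability; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
a_hat, a_sd = 1.8, 0.15        # fade rate (% per sqrt-week): mean and cell-to-cell sd
eol_fade = 20.0                # end-of-life threshold (% power fade)

# life (weeks) solves a * sqrt(t) = eol_fade  =>  t = (eol_fade / a)^2
lives = (eol_fade / rng.normal(a_hat, a_sd, 10_000)) ** 2
print(np.percentile(lives, [5, 50, 95]))   # uncertainty bounds on cell life
```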

  10. Stereo under Sequential Optimal Sampling: A Statistical Analysis Framework for Search Space Reduction (Open Access)

    Science.gov (United States)

    2014-09-24

    Stereo under Sequential Optimal Sampling: A Statistical Analysis Framework for Search Space Reduction. Yilin Wang, Ke Wang, Enrique Dunn, Jan-Michael... [Figure residue removed; axis labels: patch size, redundancy, sampling ratio.]

  11. Beam wandering statistics of twin thin laser beam propagation under generalized atmospheric conditions.

    Science.gov (United States)

    Pérez, Darío G; Funes, Gustavo

    2012-12-03

    Under the geometric optics approximation it is possible to estimate the covariance between the displacements of two thin beams after they have propagated through a turbulent medium. Previous works have concentrated on long propagation distances to provide models for the wandering statistics. These models are useful when the separation between beams is smaller than the propagation path, regardless of the characteristic scales of the turbulence. In this work we give a complete model for this covariance behavior, introducing absolute limits to the validity of the former approximations. Moreover, these generalizations are established for non-Kolmogorov atmospheric models.

  12. The use of test scores from large-scale assessment surveys: psychometric and statistical considerations

    Directory of Open Access Journals (Sweden)

    Henry Braun

    2017-11-01

    Full Text Available Abstract Background Economists are making increasing use of measures of student achievement obtained through large-scale survey assessments such as NAEP, TIMSS, and PISA. The construction of these measures, employing plausible value (PV methodology, is quite different from that of the more familiar test scores associated with assessments such as the SAT or ACT. These differences have important implications both for utilization and interpretation. Although much has been written about PVs, it appears that there are still misconceptions about whether and how to employ them in secondary analyses. Methods We address a range of technical issues, including those raised in a recent article that was written to inform economists using these databases. First, an extensive review of the relevant literature was conducted, with particular attention to key publications that describe the derivation and psychometric characteristics of such achievement measures. Second, a simulation study was carried out to compare the statistical properties of estimates based on the use of PVs with those based on other, commonly used methods. Results It is shown, through both theoretical analysis and simulation, that under fairly general conditions appropriate use of PV yields approximately unbiased estimates of model parameters in regression analyses of large scale survey data. The superiority of the PV methodology is particularly evident when measures of student achievement are employed as explanatory variables. Conclusions The PV methodology used to report student test performance in large scale surveys remains the state-of-the-art for secondary analyses of these databases.
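
    The intended use of plausible values in secondary analysis is mechanical: run the analysis once per PV and pool with Rubin's rules. A sketch, where `estimates` and `variances` would come from M repeated regressions, one per plausible value:

```python
# Rubin's rules for pooling analyses run once per plausible value.
import numpy as np

def pool_plausible_values(estimates, variances):
    """Return the pooled point estimate and total standard error."""
    est = np.asarray(estimates); var = np.asarray(variances)
    M = len(est)
    qbar = est.mean()              # pooled point estimate
    u = var.mean()                 # within-imputation variance
    b = est.var(ddof=1)            # between-imputation variance
    total = u + (1 + 1 / M) * b
    return qbar, np.sqrt(total)

print(pool_plausible_values([0.52, 0.55, 0.49, 0.57, 0.51],
                            [0.004, 0.004, 0.005, 0.004, 0.004]))
```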

  13. A simulation study for comparing testing statistics in response-adaptive randomization.

    Science.gov (United States)

    Gu, Xuemin; Lee, J Jack

    2010-06-05

    Response-adaptive randomizations are able to assign more patients in a comparative clinical trial to the tentatively better treatment. However, due to the adaptation in patient allocation, the samples to be compared are no longer independent. At large sample sizes, many asymptotic properties of test statistics derived for independent sample comparison are still applicable in adaptive randomization, provided that the patient allocation ratio converges to an appropriate target asymptotically. However, the small sample properties of commonly used test statistics in response-adaptive randomization are not fully studied. Simulations are systematically conducted to characterize the statistical properties of eight test statistics in six response-adaptive randomization methods at six allocation targets with sample sizes ranging from 20 to 200. Since adaptive randomization is usually not recommended for sample sizes less than 30, the present paper focuses on the case with a sample of 30 to give general recommendations with regard to test statistics for contingency tables in response-adaptive randomization at small sample sizes. Among all asymptotic test statistics, Cook's correction to the chi-square test (TMC) is the best in attaining the nominal size of the hypothesis test. The Williams correction to the log-likelihood ratio test (TML) gives a slightly inflated type I error and higher power compared with TMC, but it is more robust against imbalance in patient allocation. TMC and TML are usually the two test statistics with the highest power in different simulation scenarios. When focusing on TMC and TML, the generalized drop-the-loser urn (GDL) and the sequential estimation-adjusted urn (SEU) have the best ability to attain the correct size of the hypothesis test, respectively. Among all sequential methods that can target different allocation ratios, GDL has the lowest variation and the highest overall power at all allocation ratios. The performance of different adaptive randomization
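
    The skeleton of such a simulation is short; the sketch below omits the adaptive allocation machinery and simply estimates the empirical Type I error of a 2 x 2 chi-square test at n = 30 under equal allocation (scipy's chi2_contingency applies the Yates correction by default).

```python
# Empirical Type I error of a 2x2 chi-square test at n = 30 under the null
# of equal response rates; the adaptive-randomization step is omitted.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
alpha, reps, n, p = 0.05, 20_000, 30, 0.4
rejections = 0
for _ in range(reps):
    x = rng.binomial(1, p, n)                 # responses under the null
    arm = rng.integers(0, 2, n)               # equal (non-adaptive) allocation
    table = np.array([[np.sum((arm == a) & (x == s)) for s in (0, 1)]
                      for a in (0, 1)])
    if table.sum(axis=1).min() > 0 and table.sum(axis=0).min() > 0:
        rejections += chi2_contingency(table)[1] < alpha
print(rejections / reps)                      # compare against the nominal 0.05
```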

  14. Structural damage detection based on stochastic subspace identification and statistical pattern recognition: II. Experimental validation under varying temperature

    Science.gov (United States)

    Lin, Y. Q.; Ren, W. X.; Fang, S. E.

    2011-11-01

    Although most vibration-based damage detection methods can acquire satisfactory verification on analytical or numerical structures, most of them may encounter problems when applied to real-world structures under varying environments. The damage detection methods that directly extract damage features from the periodically sampled dynamic time history response measurements are desirable but relevant research and field application verification are still lacking. In this second part of a two-part paper, the robustness and performance of the statistics-based damage index using the forward innovation model by stochastic subspace identification of a vibrating structure proposed in the first part have been investigated against two prestressed reinforced concrete (RC) beams tested in the laboratory and a full-scale RC arch bridge tested in the field under varying environments. Experimental verification is focused on temperature effects. It is demonstrated that the proposed statistics-based damage index is insensitive to temperature variations but sensitive to the structural deterioration or state alteration. This makes it possible to detect the structural damage for the real-scale structures experiencing ambient excitations and varying environmental conditions.

  15. What Are Null Hypotheses? The Reasoning Linking Scientific and Statistical Hypothesis Testing

    Science.gov (United States)

    Lawson, Anton E.

    2008-01-01

    We should dispense with use of the confusing term "null hypothesis" in educational research reports. To explain why the term should be dropped, the nature of, and relationship between, scientific and statistical hypothesis testing is clarified by explication of (a) the scientific reasoning used by Gregor Mendel in testing specific…

  16. P-Value, a true test of statistical significance? a cautionary note ...

    African Journals Online (AJOL)

    While it was not the intention of the founders of significance testing and hypothesis testing to have the two ideas intertwined as if they were complementary, the inconvenient marriage of the two practices into one coherent, convenient, incontrovertible and misinterpreted practice has dotted our standard statistics textbooks and ...

  17. Transit timing observations from Kepler. VI. Potentially interesting candidate systems from fourier-based statistical tests

    DEFF Research Database (Denmark)

    Steffen, J.H.; Ford, E.B.; Rowe, J.F.

    2012-01-01

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.

  18. TRANSIT TIMING OBSERVATIONS FROM KEPLER. VI. POTENTIALLY INTERESTING CANDIDATE SYSTEMS FROM FOURIER-BASED STATISTICAL TESTS

    Energy Technology Data Exchange (ETDEWEB)

    Steffen, Jason H. [Fermilab Center for Particle Astrophysics, P.O. Box 500, MS 127, Batavia, IL 60510 (United States); Ford, Eric B. [Astronomy Department, University of Florida, 211 Bryant Space Sciences Center, Gainesville, FL 32111 (United States); Rowe, Jason F.; Borucki, William J.; Bryson, Steve; Caldwell, Douglas A.; Jenkins, Jon M.; Koch, David G.; Sanderfer, Dwight T.; Seader, Shawn; Twicken, Joseph D. [NASA Ames Research Center, Moffett Field, CA 94035 (United States); Fabrycky, Daniel C. [UCO/Lick Observatory, University of California, Santa Cruz, CA 95064 (United States); Holman, Matthew J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Welsh, William F. [Astronomy Department, San Diego State University, San Diego, CA 92182-1221 (United States); Batalha, Natalie M. [Department of Physics and Astronomy, San Jose State University, San Jose, CA 95192 (United States); Ciardi, David R. [NASA Exoplanet Science Institute/California Institute of Technology, Pasadena, CA 91125 (United States); Kjeldsen, Hans [Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C (Denmark); Prsa, Andrej, E-mail: jsteffen@fnal.gov [Department of Astronomy and Astrophysics, Villanova University, 800 East Lancaster Avenue, Villanova, PA 19085 (United States)

    2012-09-10

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.

  19. Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods

    Science.gov (United States)

    Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan

    2016-01-01

    The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
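
    The core of the distributed approach, fitting a response surface to one point per test condition, can be sketched with an ordinary least-squares quadratic fit; this toy is not NASA's DOE/RSM pipeline.

```python
# Toy distributed-testing analysis: a quadratic response surface fitted by
# least squares to one observation per test condition. Data are invented.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 12)                        # 12 distributed conditions
y = 2.0 + 0.5 * x - 1.2 * x**2 + rng.normal(0, 0.05, 12)

A = np.column_stack([np.ones_like(x), x, x**2])   # intercept, linear, quadratic
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)                                       # recovered surface coefficients
```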

  20. Assessing the impact of vaccination programmes on burden of disease: Underlying complexities and statistical methods.

    Science.gov (United States)

    Mealing, Nicole; Hayen, Andrew; Newall, Anthony T

    2016-06-08

    It is important to assess the impact a vaccination programme has on the burden of disease after it is implemented. For example, this may reveal herd immunity effects or vaccine-induced shifts in the incidence of disease or in circulating strains or serotypes of the pathogen. In this article we summarise the key features of infectious diseases that need to be considered when trying to detect any changes in the burden of diseases at a population level as a result of vaccination efforts. We outline the challenges of using routine surveillance databases to monitor infectious diseases, such as the identification of diseased cases and the availability of vaccination status for cases. We highlight the complexities in modelling the underlying patterns in infectious disease rates (e.g. presence of autocorrelation) and discuss the main statistical methods that can be used to control for periodicity (e.g. seasonality) and autocorrelation when assessing the impact of vaccination programmes on burden of disease (e.g. cosinor terms, generalised additive models, autoregressive processes and moving averages). For some analyses, there may be multiple methods that can be used, but it is important for authors to justify the method chosen and discuss any limitations. We present a case study review of the statistical methods used in the literature to assess the rotavirus vaccination programme impact in Australia. The methods used varied and included generalised linear models and descriptive statistics. Not all studies accounted for autocorrelation and seasonality, which can have a major influence on results. We recommend that future analyses consider the strength and weakness of alternative statistical methods and justify their choice. Copyright © 2016 Elsevier Ltd. All rights reserved.
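
    The cosinor terms mentioned above are simply sine/cosine regressors at the seasonal frequency; a small illustration with a Poisson GLM and a post-programme step change, all data and names invented:

```python
# Cosinor-style seasonal adjustment in a Poisson GLM on monthly counts,
# with a step term for the vaccination programme; data are simulated.
import numpy as np
import statsmodels.api as sm

months = np.arange(48)                                   # four years, monthly
rng = np.random.default_rng(0)
counts = rng.poisson(50 + 20 * np.cos(2 * np.pi * months / 12))

X = sm.add_constant(np.column_stack([
    np.sin(2 * np.pi * months / 12),                     # cosinor pair
    np.cos(2 * np.pi * months / 12),
    (months >= 24).astype(float),                        # post-programme step
]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.summary())
```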

  1. The use of carrier solvents in regulatory aquatic toxicology testing: practical, statistical and regulatory considerations.

    Science.gov (United States)

    Green, John; Wheeler, James R

    2013-11-15

    Solvents are often used to aid test item preparation in aquatic ecotoxicity experiments. This paper discusses the practical, statistical and regulatory considerations. The selection of the appropriate control (if a solvent is used) for statistical analysis is investigated using a database of 141 responses (endpoints) from 71 experiments. The advantages and disadvantages of basing the statistical analysis of treatment effects on the water control alone, the solvent control alone, the combined controls, or a conditional strategy of combining controls when they are not statistically significantly different, are tested. The latter two approaches are shown to have distinct advantages. It is recommended that this approach continue to be the standard used for regulatory and research aquatic ecotoxicology studies. However, wherever technically feasible, a solvent should not be employed, or at least its concentration should be minimized. Copyright © 2013 Elsevier B.V. All rights reserved.
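    A minimal sketch of the conditional strategy described above, assuming illustrative endpoint data: the water and solvent controls are compared first, and pooled only when they do not differ significantly.

    import numpy as np
    from scipy import stats

    water   = np.array([101.0, 98.5, 103.2, 99.1, 100.4])  # endpoint values
    solvent = np.array([97.8, 99.9, 102.5, 98.0, 101.1])
    treated = np.array([88.2, 91.5, 87.9, 90.3, 89.6])

    # Step 1: compare the two controls
    t, p = stats.ttest_ind(water, solvent)

    # Step 2: pool controls only if they do not differ significantly;
    # otherwise fall back on the solvent control alone (practice varies)
    control = np.concatenate([water, solvent]) if p > 0.05 else solvent
    print(stats.ttest_ind(control, treated))   # treatment vs chosen control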

  2. Testing University Rankings Statistically: Why this Perhaps is not such a Good Idea after All. Some Reflections on Statistical Power, Effect Size, Random Sampling and Imaginary Populations

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2012-01-01

    In this paper we discuss and question the use of statistical significance tests in relation to university rankings as recently suggested. We outline the assumptions behind and interpretations of statistical significance tests and relate this to examples from the recent SCImago Institutions Rankin...

  3. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

    2005-01-01

    The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

  4. Price limits and stock market efficiency: Evidence from rolling bicorrelation test statistic

    International Nuclear Information System (INIS)

    Lim, Kian-Ping; Brooks, Robert D.

    2009-01-01

    Using the rolling bicorrelation test statistic, the present paper compares the efficiency of stock markets from China, Korea and Taiwan in selected sub-periods with different price-limit regimes. The statistical results do not support the claims that restrictive price limits and price limits per se are jeopardizing market efficiency. However, the evidence does not imply that price limits have no effect on the price discovery process, but rather suggests that market efficiency is not merely determined by price limits.

  5. A NEW TEST OF THE STATISTICAL NATURE OF THE BRIGHTEST CLUSTER GALAXIES

    International Nuclear Information System (INIS)

    Lin, Yen-Ting; Ostriker, Jeremiah P.; Miller, Christopher J.

    2010-01-01

    A novel statistic is proposed to examine the hypothesis that all cluster galaxies are drawn from the same luminosity distribution (LD). In such a 'statistical model' of galaxy LD, the brightest cluster galaxies (BCGs) are simply the statistical extreme of the galaxy population. Using a large sample of nearby clusters, we show that BCGs in high luminosity clusters (e.g., L_tot ≳ 4 × 10^11 h_70^-2 L_sun) are unlikely (probability ≤ 3 × 10^-4) to be drawn from the LD defined by all red cluster galaxies more luminous than M_r = -20. On the other hand, BCGs in less luminous clusters are consistent with being the statistical extreme. Applying our method to the second brightest galaxies, we show that they are consistent with being the statistical extreme, which implies that the BCGs are also distinct from non-BCG luminous, red, cluster galaxies. We point out some issues with the interpretation of the classical tests proposed by Tremaine and Richstone (TR) that are designed to examine the statistical nature of BCGs, investigate the robustness of both our statistical test and those of TR against difficulties in photometry of galaxies of large angular size, and discuss the implication of our findings on surveys that use the luminous red galaxies to measure the baryon acoustic oscillation features in the galaxy power spectrum.

  6. Evaluation of eye irritation potential: statistical analysis and tier testing strategies.

    Science.gov (United States)

    de Silva, O; Cottin, M; Dami, N; Roguet, R; Catroux, P; Toufic, A; Sicard, C; Dossou, K G; Gerner, I; Schlede, E; Spielmann, H; Gupta, K C; Hills, R N

    1997-01-01

    Eye irritation testing, specifically the Draize test, has been the centre of controversy for many reasons. Several alternatives, based on the principles of reduction, refinement and replacement, have been proposed and are being used by the industry and government authorities. However, no universally applicable, validated non-animal alternative(s) is currently available. This report presents a statistical analysis and two testing approaches: the partial least squares multivariate statistical analysis of de Silva and colleagues from France, the tier-testing approach for regulatory purposes described by Gerner and colleagues from Germany, and the three-step tier-testing approach of the US Interagency Regulatory Alternatives Group described by Gupta and Hill. These approaches were presented as three separate papers at the November 1993 Interagency Regulatory Alternatives Group (IRAG) Workshop on Eye Irritation Testing; they have been summarized and combined into the following three-part report. The first part (de Silva et al.) presents statistical techniques for establishing test batteries of in vitro alternatives to the eye irritation test. The second (Gerner et al.) and third (Gupta and Hill) parts are similar in that they stage assessment of information by using a combination of screening information and animal testing to effect reductions in animal use and distress.

  7. Evaluating statistical tests on OLAP cubes to compare degree of disease.

    Science.gov (United States)

    Ordonez, Carlos; Chen, Zhibo

    2009-09-01

    Statistical tests represent an important technique used to formulate and validate hypotheses on a dataset. They are particularly useful in the medical domain, where hypotheses link disease with medical measurements, risk factors, and treatment. In this paper, we propose to compute parametric statistical tests treating patient records as elements in a multidimensional cube. We introduce a technique that combines dimension lattice traversal and statistical tests to discover significant differences in the degree of disease within pairs of patient groups. In order to understand a cause-effect relationship, we focus on patient group pairs differing in one dimension. We introduce several optimizations to prune the search space, to discover significant group pairs, and to summarize results. We present experiments showing important medical findings and evaluating scalability with medical datasets.
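    The sketch below illustrates the core comparison in a simplified form, without the paper's lattice-traversal optimizations: groups defined over two illustrative binary dimensions are compared pairwise, restricted to pairs that differ in exactly one dimension.

    import numpy as np
    import pandas as pd
    from itertools import combinations
    from scipy import stats

    rng = np.random.default_rng(2)
    df = pd.DataFrame({
        "smoker": rng.integers(0, 2, 200),
        "diabetic": rng.integers(0, 2, 200),
    })
    df["score"] = 1.0 + 0.8 * df["smoker"] + rng.normal(0, 1, 200)

    dims = ["smoker", "diabetic"]
    groups = {key: g["score"].to_numpy() for key, g in df.groupby(dims)}

    # Compare only group pairs differing in exactly one dimension
    for a, b in combinations(groups, 2):
        if sum(x != y for x, y in zip(a, b)) == 1:
            t, p = stats.ttest_ind(groups[a], groups[b])
            print(a, "vs", b, f"p={p:.4f}")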

  8. A new model test in high energy physics in frequentist and bayesian statistical formalisms

    International Nuclear Information System (INIS)

    Kamenshchikov, A.

    2017-01-01

    The problem of testing a new physical model against observed experimental data is typical of modern experiments in high energy physics (HEP). A solution may be provided by two alternative statistical formalisms, frequentist and Bayesian, both widespread in contemporary HEP searches. A characteristic experimental situation is modeled from general considerations, and both approaches are applied in order to test a new model. The results are compared and shown to be consistent in this work. The effect of the treatment of a systematic uncertainty in the statistical analysis is also considered.

  9. Operational statistical analysis of the results of computer-based testing of students

    Directory of Open Access Journals (Sweden)

    Виктор Иванович Нардюжев

    2018-12-01

    The article is devoted to the statistical analysis of the results of computer-based testing for the evaluation of the educational achievements of students. The issue is relevant because computer-based testing in Russian universities has become an important method for evaluating students' educational achievements and the quality of the modern educational process. Usage of modern methods and programs for statistical analysis of computer-based testing results and assessment of the quality of developed tests is a pressing problem for every university teacher. The article shows how the authors solve this problem using their own program, “StatInfo”. For several years the program has been successfully applied in a credit system of education at such technological stages as loading computer-based testing protocols into a database, formation of queries, and generation of reports, lists, and matrices of answers for statistical analysis of the quality of test items. The methodology, experience, and some results of its usage by university teachers are described in the article. Related topics, such as test development, models, algorithms, technologies, and software for large-scale computer-based testing, have been discussed by the authors in their previous publications, which are presented in the reference list.

  10. Classification of Underlying Causes of Power Quality Disturbances: Deterministic versus Statistical Methods

    Directory of Open Access Journals (Sweden)

    Emmanouil Styvaktakis

    2007-01-01

    This paper presents the two main types of classification methods for power quality disturbances based on underlying causes: deterministic classification, giving an expert system as an example, and statistical classification, with support vector machines (a novel method) as an example. An expert system is suitable when one has a limited amount of data and sufficient power system expert knowledge; however, its application requires a set of threshold values. Statistical methods are suitable when a large amount of data is available for training. Two issues important to the effectiveness of a classifier, data segmentation and feature extraction, are discussed. Segmentation of a recorded data sequence is a preprocessing step that partitions the data into segments, each representing a duration containing either an event or a transition between two events. Extraction of features is applied to each segment individually. Some useful features and their effectiveness are then discussed. Some experimental results are included to demonstrate the effectiveness of both systems. Finally, conclusions are given together with a discussion of some future research directions.

  11. Statistical characteristics of suction pressure signals for a centrifugal pump under cavitating conditions

    Science.gov (United States)

    Li, Xiaojun; Yu, Benxu; Ji, Yucheng; Lu, Jiaxin; Yuan, Shouqi

    2017-02-01

    Centrifugal pumps are often used in operating conditions where they can be susceptible to premature failure. The cavitation phenomenon is a common fault in centrifugal pumps and is associated with undesired effects. Among the numerous cavitation detection methods, the measurement of suction pressure fluctuation is one of the most used methods to detect or diagnose the degree of cavitation in a centrifugal pump. In this paper, a closed loop was established to investigate the pump cavitation phenomenon, and the statistical parameters PDF (probability density function), variance and RMS (root mean square) were used to analyze the relationship between the cavitation performance and the suction pressure signals during the development of cavitation. It is found that the statistical parameters used in this research are able to capture the critical cavitation condition and the cavitation breakdown condition, but have difficulty detecting incipient cavitation in the pump. At part-load conditions, the pressure fluctuations at the impeller inlet show more complexity than at the best efficiency point (BEP). The amplitude of the PDF values of suction pressure increased steeply when the flow rate dropped to 40 m³/h (the design flow rate was 60 m³/h). One possible reason is that the flow structure in the impeller channel promotes an increase of the cavitation intensity when the flow rate is reduced to a certain degree. This shows that it is necessary to find the relationship between cavitation instabilities and flow instabilities when centrifugal pumps operate under part-load flow rates.
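    A minimal sketch of the signal statistics named above, computed on a synthetic pressure trace standing in for measured suction pressure data:

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 10_000)
    # Synthetic suction pressure signal: tone plus broadband noise (kPa)
    p = 0.2 * np.sin(2 * np.pi * 150 * t) + rng.normal(0, 0.05, t.size)

    fluct = p - p.mean()                  # fluctuating component
    variance = fluct.var()
    rms = np.sqrt(np.mean(fluct**2))
    pdf, edges = np.histogram(fluct, bins=50, density=True)  # empirical PDF
    print(f"variance={variance:.5f}  RMS={rms:.5f}  peak PDF={pdf.max():.2f}")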

  12. The extended statistical analysis of toxicity tests using standardised effect sizes (SESs): a comparison of nine published papers.

    Directory of Open Access Journals (Sweden)

    Michael F W Festing

    The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretations of the results, as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data pose problems due to the large number of statistical tests involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. The original authors' conclusions appear to be reached somewhat subjectively from the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than p = 0.05. However, by using standardised effect sizes (SESs), a range of graphical methods and an overall assessment of the mean absolute response can be made. The approach is an extension, not a replacement, of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare the findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.
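    A minimal sketch of the standardised effect size computation (difference in group means divided by a pooled within-group standard deviation); the biomarker names and data are illustrative.

    import numpy as np

    rng = np.random.default_rng(12)
    biomarkers = {"ALT": (50, 10), "AST": (40, 8), "BUN": (15, 3)}
    for name, (mu, sd) in biomarkers.items():
        control = rng.normal(mu, sd, 10)
        treated = rng.normal(mu * 1.15, sd, 10)        # simulated 15% increase
        sp = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
        ses = (treated.mean() - control.mean()) / sp   # standardised effect size
        print(f"{name}: SES = {ses:+.2f}")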

  13. Efficient statistical tests to compare Youden index: accounting for contingency correlation.

    Science.gov (United States)

    Chen, Fangyao; Xue, Yuqiang; Tan, Ming T; Chen, Pingyan

    2015-04-30

    The Youden index is widely utilized in studies evaluating the accuracy of diagnostic tests and the performance of predictive, prognostic, or risk models. However, both one- and two-independent-sample tests on the Youden index have been derived ignoring the dependence (association) between sensitivity and specificity, resulting in potentially misleading findings. Moreover, a paired-sample test on the Youden index has been unavailable. This article develops efficient statistical inference procedures for one-sample, independent-sample, and paired-sample tests on the Youden index by accounting for contingency correlation, namely the associations between sensitivity and specificity and between paired samples typically represented in contingency tables. For the one- and two-independent-sample tests, the variances are estimated by the Delta method, and the statistical inference is based on the central limit theorem; both are then verified by bootstrap estimates. For the paired-sample test, we show that the estimated covariance of the two sensitivities and specificities can be represented as a function of the kappa statistic, so the test can be readily carried out. We then show the remarkable accuracy of the estimated variance using a constrained optimization approach. Simulation is performed to evaluate the statistical properties of the derived tests. The proposed approaches yield more stable type I errors at the nominal level and substantially higher power (efficiency) than the original Youden approach. Therefore, the simple explicit large-sample solution performs very well. Because the asymptotic and exact bootstrap computations can readily be implemented with common software like R, the method is broadly applicable to the evaluation of diagnostic tests and model performance. Copyright © 2015 John Wiley & Sons, Ltd.
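    For orientation, the sketch below carries out a two-independent-sample z-test on the Youden index using the naive delta-method variance that treats sensitivity and specificity as independent binomials, which is precisely the simplification the paper improves upon; all counts are illustrative.

    import numpy as np
    from scipy.stats import norm

    def youden(tp, fn, tn, fp):
        se, sp = tp / (tp + fn), tn / (tn + fp)
        j = se + sp - 1
        # Naive delta-method variance: independent binomials for se and sp
        var = se * (1 - se) / (tp + fn) + sp * (1 - sp) / (tn + fp)
        return j, var

    j1, v1 = youden(tp=80, fn=20, tn=90, fp=10)   # diagnostic test A
    j2, v2 = youden(tp=70, fn=30, tn=85, fp=15)   # diagnostic test B
    z = (j1 - j2) / np.sqrt(v1 + v2)
    print(f"J1={j1:.3f}  J2={j2:.3f}  z={z:.3f}  p={2 * norm.sf(abs(z)):.4f}")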

  14. Beyond P Values and Hypothesis Testing: Using the Minimum Bayes Factor to Teach Statistical Inference in Undergraduate Introductory Statistics Courses

    Science.gov (United States)

    Page, Robert; Satake, Eiki

    2017-01-01

    While interest in Bayesian statistics has been growing in statistics education, the treatment of the topic is still inadequate in both textbooks and the classroom. Because so many fields of study lead to careers that involve a decision-making process requiring an understanding of Bayesian methods, it is becoming increasingly clear that Bayesian…

  15. Test Marketing Exemption (TME) for New Chemical Review under TSCA

    Science.gov (United States)

    Under section 5 of TSCA, EPA established an exemption for certain chemicals that are manufactured (including imported) for test marketing. You can learn more here about the requirements of this exemption, along with the review and submission process.

  16. The testing-effect under investigation. Experiences in Kiel

    NARCIS (Netherlands)

    Dirkx, Kim; Kester, Liesbeth; Kirschner, Paul A.

    2013-01-01

    Dirkx, K. J. H., Kester, L., & Kirschner, P. A. (2013, 22 January). The testing-effect under investigation. Experiences in Kiel. Presentation held at the Learning & Cognition meeting, Heerlen, The Netherlands.

  17. Statistical tests for the Gaussian nature of primordial fluctuations through CBR experiments

    International Nuclear Information System (INIS)

    Luo, X.

    1994-01-01

    Information about the physical processes that generate the primordial fluctuations in the early Universe can be gained by testing the Gaussian nature of the fluctuations through cosmic microwave background radiation (CBR) temperature anisotropy experiments. One of the crucial aspects of density perturbations produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects left over from an early cosmic phase transition tend to be non-Gaussian. To carry out this test, sophisticated statistical tools are required. In this paper, we discuss several such statistical tools, including multivariate skewness and kurtosis, Euler-Poincaré characteristics, the three-point temperature correlation function, and Hotelling's T² statistic defined through bispectral estimates of a one-dimensional data set. The effect of noise present in the current data is discussed in detail and the COBE 53 GHz data set is analyzed. Our analysis shows that, on the large angular scales to which COBE is sensitive, the statistics are probably Gaussian. On small angular scales, the importance of Hotelling's T² statistic is stressed, and the minimum sample size required to test Gaussianity is estimated. Although the current data set available from various experiments at half-degree scales is still too small, improvement of the data set by roughly a factor of 2 will be enough to test Gaussianity statistically. On the arcminute scale, we analyze the recent RING data through bispectral analysis, and the result indicates a possible deviation from Gaussianity. Effects of point sources are also discussed. It is pointed out that the Gaussianity problem can be resolved in the near future by ground-based or balloon-borne experiments.
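    As a toy illustration of the simplest of these tools, the sketch below applies one-dimensional skewness, kurtosis, and omnibus normality tests to a simulated, mildly non-Gaussian sample (not CBR data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    dT = rng.standard_t(df=8, size=2000)    # mildly non-Gaussian sample

    print(stats.skewtest(dT))               # z-test on sample skewness
    print(stats.kurtosistest(dT))           # z-test on sample excess kurtosis
    print(stats.normaltest(dT))             # D'Agostino-Pearson omnibus test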

  18. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power-analysis in estimating...

  19. A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.

    Science.gov (United States)

    Liu, Tung; Stone, Courtenay C.

    1999-01-01

    Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…

  20. A Comparison of Several Statistical Tests of Reciprocity of Self-Disclosure.

    Science.gov (United States)

    Dindia, Kathryn

    1988-01-01

    Reports the results of a study that used several statistical tests of reciprocity of self-disclosure. Finds little evidence for reciprocity of self-disclosure, and concludes that either reciprocity is an illusion, or that different or more sophisticated methods are needed to detect it. (MS)

  1. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample-size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…

  2. Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.

    Science.gov (United States)

    Deegear, James

    This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and on literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…

  3. A test statistic in the complex Wishart distribution and its application to change detection in polarimetric SAR data

    DEFF Research Database (Denmark)

    Conradsen, Knut; Nielsen, Allan Aasbjerg; Schou, Jesper

    2003-01-01

    Based on this distribution, a test statistic for equality of two such matrices and an associated asymptotic probability for obtaining a smaller value of the test statistic are derived and applied successfully to change detection in polarimetric SAR data. In a case study, EMISAR L-band data from April 17… When applied to HH, VV, or HV data alone, the derived test statistic reduces to the well-known gamma likelihood-ratio test statistic. The derived test statistic and the associated significance value can be applied as a line or edge detector in fully polarimetric SAR data also.
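    A hedged sketch of the test on illustrative matrices follows. It uses the likelihood-ratio statistic for equality of two complex Wishart matrices with the simple chi-square approximation, omitting the finite-sample correction factors of the full derivation, and the input matrices are synthetic stand-ins rather than real SAR covariance sums.

    import numpy as np
    from scipy.stats import chi2

    def slogdet_h(a):
        # Log-determinant of a Hermitian positive definite matrix
        sign, ld = np.linalg.slogdet(a)
        return ld

    def wishart_change_test(X, Y, n, m, p):
        # -2 ln Q for H0: Sigma_X = Sigma_Y, with X and Y n- and m-look sums
        lnq = (p * (n + m) * np.log(n + m) - p * n * np.log(n)
               - p * m * np.log(m) + n * slogdet_h(X) + m * slogdet_h(Y)
               - (n + m) * slogdet_h(X + Y))
        stat = -2 * lnq                    # asymptotically chi-square, p^2 dof
        return stat, chi2.sf(stat, p**2)

    p, n, m = 3, 13, 13                    # 3x3 polarimetric data, 13 looks
    rng = np.random.default_rng(5)
    A = rng.normal(size=(p, 200)) + 1j * rng.normal(size=(p, 200))
    X = n * (A[:, :100] @ A[:, :100].conj().T) / 100   # "no change" example
    Y = m * (A[:, 100:] @ A[:, 100:].conj().T) / 100
    print(wishart_change_test(X, Y, n, m, p))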

  4. Superconductors for fusion magnets tested under pulsed field in SULTAN

    International Nuclear Information System (INIS)

    Bruzzone, P.; Bottura, L.; Katheder, H.; Blau, B.; Rohleder, I.; Vecsey, G.

    1995-01-01

    The SULTAN III test facility has been upgraded with a pair of pulsed field coils to carry out AC losses and stability experiments under full operating loads on large-size fusion conductors for ITER. A fast data acquisition system records the conductor behaviour under fast field transients. The commissioning results of the pulsed coils and instrumentation are critically discussed and the test capability of the setup is assessed. (orig.)

  5. An investigation of the statistical power of neutrality tests based on comparative and population genetic data

    DEFF Research Database (Denmark)

    Zhai, Weiwei; Nielsen, Rasmus; Slatkin, Montgomery

    2009-01-01

    In this report, we investigate the statistical power of several tests of selective neutrality based on patterns of genetic diversity within and between species. The goal is to compare tests based solely on population genetic data with tests using comparative data or a combination of comparative and population genetic data. We show that in the presence of repeated selective sweeps on a relatively neutral background, tests based on the dN/dS ratios in comparative data almost always have more power to detect selection than tests based on population genetic data, even if the overall level of divergence is low. Tests based solely on the distribution of allele frequencies or the site frequency spectrum, such as the Ewens-Watterson test or Tajima's D, have less power in detecting both positive and negative selection because of the transient nature of positive selection and the weak signal left by negative selection…

  6. A SIMPLE BUT EFFICIENT SCHEME FOR COLOUR IMAGE RETRIEVAL BASED ON STATISTICAL TESTS OF HYPOTHESIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2011-02-01

    This paper proposes a simple but efficient scheme for colour image retrieval based on statistical tests of hypothesis, namely the test for equality of variance and the test for equality of means. The test for equality of variance is performed to assess the similarity of the query and target images. If the images pass the test, the test for equality of means is performed on the same images to examine whether the two images have the same attributes/characteristics. If the query and target images pass both tests, it is inferred that the two images belong to the same class, i.e. both images are the same; otherwise, the images are assumed to belong to different classes, i.e. the images are different. The obtained test statistic values are indexed in ascending order, and the image corresponding to the smallest value is identified as the same/most similar image. The proposed system is invariant to translation, scaling, and rotation, since it adjusts itself by treating either the query image or the target image as a sample of the other. The proposed scheme provides 100% accuracy if the query and target images are the same, with only a slight variation for similar or transformed images.
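    A minimal sketch of the two-stage test on illustrative intensity samples from a query and a target image; the significance thresholds and data are assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    query = rng.normal(120, 25, 500)     # sampled pixel intensities
    target = rng.normal(122, 25, 500)

    # Stage 1: F-test for equality of variances (two-sided)
    f = query.var(ddof=1) / target.var(ddof=1)
    p_var = 2 * min(stats.f.cdf(f, 499, 499), stats.f.sf(f, 499, 499))

    same_class = False
    if p_var > 0.05:
        # Stage 2: t-test for equality of means
        p_mean = stats.ttest_ind(query, target, equal_var=True).pvalue
        same_class = p_mean > 0.05
    print(f"p_var={p_var:.3f}  same class: {same_class}")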

  7. Testing statistical self-similarity in the topology of river networks

    Science.gov (United States)

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2010-01-01

    Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN) that combines self-similarity and randomness has been introduced to replicate important topological features observed in real river networks. We investigate if the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distribution of interior and exterior generators are tested to be statistically different and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.

  8. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.

    Science.gov (United States)

    Yokoyama, Jun'ichi

    2014-01-01

    After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has heavier tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter works well in the highly non-Gaussian case.

  9. Statistical efficiency and optimal design for stepped cluster studies under linear mixed effects models.

    Science.gov (United States)

    Girling, Alan J; Hemming, Karla

    2016-06-15

    In stepped cluster designs the intervention is introduced into some (or all) clusters at different times and persists until the end of the study. Instances include traditional parallel cluster designs and the more recent stepped-wedge designs. We consider the precision offered by such designs under mixed-effects models with fixed time and random subject and cluster effects (including interactions with time), and explore the optimal choice of uptake times. The results apply both to cross-sectional studies where new subjects are observed at each time-point, and longitudinal studies with repeat observations on the same subjects. The efficiency of the design is expressed in terms of a 'cluster-mean correlation' which carries information about the dependency-structure of the data, and two design coefficients which reflect the pattern of uptake-times. In cross-sectional studies the cluster-mean correlation combines information about the cluster-size and the intra-cluster correlation coefficient. A formula is given for the 'design effect' in both cross-sectional and longitudinal studies. An algorithm for optimising the choice of uptake times is described and specific results obtained for the best balanced stepped designs. In large studies we show that the best design is a hybrid mixture of parallel and stepped-wedge components, with the proportion of stepped wedge clusters equal to the cluster-mean correlation. The impact of prior uncertainty in the cluster-mean correlation is considered by simulation. Some specific hybrid designs are proposed for consideration when the cluster-mean correlation cannot be reliably estimated, using a minimax principle to ensure acceptable performance across the whole range of unknown values. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  10. Probabilistic risk assessment framework for structural systems under multiple hazards using Bayesian statistics

    Energy Technology Data Exchange (ETDEWEB)

    Kwag, Shinyoung [North Carolina State University, Raleigh, NC 27695 (United States); Korea Atomic Energy Research Institute, Daejeon 305-353 (Korea, Republic of); Gupta, Abhinav, E-mail: agupta1@ncsu.edu [North Carolina State University, Raleigh, NC 27695 (United States)

    2017-04-15

    Highlights: • This study presents the development of a Bayesian framework for probabilistic risk assessment (PRA) of structural systems under multiple hazards. • The concepts of Bayesian network and Bayesian inference are combined by mapping the traditionally used fault trees into a Bayesian network. • The proposed mapping allows for consideration of dependencies as well as correlations between events. • Incorporation of Bayesian inference permits a novel way for exploration of a scenario that is likely to result in a system level “vulnerability.” - Abstract: Conventional probabilistic risk assessment (PRA) methodologies (USNRC, 1983; IAEA, 1992; EPRI, 1994; Ellingwood, 2001) conduct risk assessment for different external hazards by considering each hazard separately and independently of the others. The risk metric for a specific hazard is evaluated by a convolution of the fragility and the hazard curves. The fragility curve for a basic event is obtained by using empirical, experimental, and/or numerical simulation data for a particular hazard. Treating each hazard independently can be inappropriate in some cases, as certain hazards are statistically correlated or dependent. Examples of such correlated events include but are not limited to flooding-induced fire, seismically induced internal or external flooding, or even seismically induced fire. In current practice, system level risk and consequence sequences are typically calculated using logic trees to express the causative relationship between events. In this paper, we present the results from a study on multi-hazard risk assessment that is conducted using a Bayesian network (BN) with Bayesian inference. The framework can consider statistical dependencies among risks from multiple hazards, allows updating by considering newly available data/information at any level, and provides a novel way to explore alternative failure scenarios that may exist due to vulnerabilities.

  12. Hotspot detection using space-time scan statistics on children under five years of age in Depok

    Science.gov (United States)

    Verdiana, Miranti; Widyaningsih, Yekti

    2017-03-01

    Some problems that affect the health level in Depok are the high malnutrition rates from year to year and the increasing spread of infectious and non-communicable diseases in some areas. Children under five years old are a part of the population vulnerable to malnutrition and disease. For this reason, it is important to observe where and when malnutrition in Depok occurred with high intensity. To obtain the location and time of the hotspots of malnutrition and the diseases that attack children under five years old, the space-time scan statistics method can be used. The space-time scan statistic is a hotspot detection method in which area and time information are taken into account simultaneously. This method detects a hotspot with a cylindrical scanning window: the base of the cylinder describes the area, and the height of the cylinder describes the time. Each cylinder formed is a hotspot candidate that requires hypothesis testing to decide whether it can be declared a hotspot. Hotspot detection in this study was carried out by forming combinations of several variables. Some combinations of variables provide hotspot detection results that tend to be the same, and so form groups (clusters). In the case of the health level of children under five in Depok city, the Beji health care center region in 2011-2012 is a hotspot. Across the combinations of variables used in the detection, Beji health care center appears most frequently as a hotspot. Hopefully the local government can adopt the right policies to improve the health level of children under five in the city of Depok.

  13. Statistical power analysis a simple and general model for traditional and modern hypothesis tests

    CERN Document Server

    Murphy, Kevin R; Wolach, Allen

    2014-01-01

    Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and general model…

  14. Precipitation projections under GCMs perspective and Turkish Water Foundation (TWF) statistical downscaling model procedures

    Science.gov (United States)

    Dabanlı, İsmail; Şen, Zekai

    2018-04-01

    The statistical climate downscaling model by the Turkish Water Foundation (TWF) is further developed and applied to a set of monthly precipitation records. The model is structured in two phases, spatial (regional) and temporal downscaling of global circulation model (GCM) scenarios. The TWF model takes into consideration the regional dependence function (RDF) for the spatial structure and the Markov whitening process (MWP) for the temporal characteristics of the records to set projections. The impact of climate change on monthly precipitation is studied by downscaling Intergovernmental Panel on Climate Change-Special Report on Emission Scenarios (IPCC-SRES) A2 and B2 emission scenarios from the Max Planck Institute (EH40PYC) and the Hadley Center (HadCM3). The main purposes are to explain the TWF statistical climate downscaling model procedures and to present the validation tests, which are rated as "very good" for all stations except one (Suhut) in the Akarcay basin, in the west central part of Turkey. Even though the validation score is slightly lower at the Suhut station, the results are still "satisfactory." It is therefore possible to say that the TWF model has reasonably acceptable skill for highly accurate estimation with respect to the standard deviation ratio (SDR), Nash-Sutcliffe efficiency (NSE), and percent bias (PBIAS) criteria. Based on the validated model, precipitation predictions are generated from 2011 to 2100 by using the 30-year reference observation period (1981-2010). Precipitation arithmetic averages and standard deviations have less than 5% error for the EH40PYC and HadCM3 SRES (A2 and B2) scenarios.
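    For reference, the sketch below computes the three validation criteria named above on illustrative observed/simulated series; sign conventions for PBIAS vary between authors.

    import numpy as np

    def sdr(obs, sim):    # standard deviation ratio
        return np.std(sim, ddof=1) / np.std(obs, ddof=1)

    def nse(obs, sim):    # Nash-Sutcliffe efficiency
        return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):  # percent bias (sign convention varies by author)
        return 100 * np.sum(obs - sim) / np.sum(obs)

    rng = np.random.default_rng(7)
    obs = rng.gamma(2.0, 30.0, 360)            # monthly precipitation, mm
    sim = obs + rng.normal(0, 10, obs.size)    # a downscaled series
    print(f"SDR={sdr(obs, sim):.2f}  NSE={nse(obs, sim):.2f}  "
          f"PBIAS={pbias(obs, sim):.1f}%")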

  15. Statistical Modeling for Quality Assurance of Human Papillomavirus DNA Batch Testing.

    Science.gov (United States)

    Beylerian, Emily N; Slavkovsky, Rose C; Holme, Francesca M; Jeronimo, Jose A

    2018-03-22

    Our objective was to simulate the distribution of human papillomavirus (HPV) DNA test results from a 96-well microplate assay to identify results that may be consistent with well-to-well contamination, enabling programs to apply specific quality assurance parameters. For this modeling study, we designed an algorithm that generated the analysis population of 900,000 to simulate the results of 10,000 microplate assays, assuming discrete HPV prevalences of 12%, 13%, 14%, 15%, and 16%. Using binomial draws, the algorithm created a vector of results for each prevalence and reassembled them into 96-well matrices for results distribution analysis of the number of positive cells and the number and size of cell clusters (≥2 positive cells horizontally or vertically adjacent) per matrix. For simulation conditions of 12% and 16% HPV prevalence, 95% of the matrices displayed the following characteristics: 5 to 17 and 8 to 22 total positive cells, 0 to 4 and 0 to 5 positive cell clusters, and largest cluster sizes of up to 5 and up to 6 positive cells, respectively. Our results suggest that screening programs in regions with an oncogenic HPV prevalence of 12% to 16% can expect 5 to 22 positive results per microplate in approximately 95% of assays and 0 to 5 positive result clusters with no cluster larger than 6 positive results. Results consistently outside of these ranges deviate from what is statistically expected and could be the result of well-to-well contamination. Our results provide guidance that laboratories can use to identify microplates suspicious for well-to-well contamination, enabling improved quality assurance. This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.
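    A hedged re-implementation sketch of the simulation idea: binomial draws fill 8x12 well matrices, and clusters of horizontally or vertically adjacent positives are counted. The prevalence and plate count follow the abstract, while the implementation details are assumptions.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(8)
    prev, n_plates = 0.14, 10_000
    plates = rng.random((n_plates, 8, 12)) < prev   # binomial draws per well

    structure = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])  # 4-connectivity
    totals, n_clusters, max_cluster = [], [], []
    for plate in plates:
        labels, n = ndimage.label(plate, structure=structure)
        sizes = np.bincount(labels.ravel())[1:]     # drop background label 0
        clusters = sizes[sizes >= 2]                # >=2 adjacent positives
        totals.append(plate.sum())
        n_clusters.append(clusters.size)
        max_cluster.append(clusters.max() if clusters.size else 0)

    print(np.percentile(totals, [2.5, 97.5]))       # central 95% of positives
    print(np.percentile(n_clusters, [2.5, 97.5]), max(max_cluster))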

  16. Spatial statistical analysis of basal stem root disease under natural field epidemic of oil palm

    Science.gov (United States)

    Kamu, Assis; Phin, Chong Khim; Seman, Idris Abu; Wan, Hoong Hak; Mun, Ho Chong

    2015-02-01

    Oil palm, known scientifically as Elaeis guineensis Jacq., is the most important commodity crop in Malaysia and has greatly contributed to the economic growth of the country. As far as disease is concerned in the industry, Basal Stem Rot (BSR) caused by Ganoderma boninense remains the most important disease. BSR is the most widely studied oil palm disease in Malaysia, with ample information available. However, there is still limited study of the spatial and temporal pattern or distribution of the disease, especially under natural field epidemic conditions in oil palm plantations. The objective of this study is to spatially identify the pattern of BSR disease under a natural field epidemic using two geospatial analytical techniques: quadrat analysis for the first-order properties of point pattern analysis and nearest-neighbour analysis (NNA) for the second-order properties of point pattern analysis. Two study sites with trees of different ages were selected. Both sites are located in Tawau, Sabah, and managed by the same company. The results showed that at least one of the point pattern analyses used, NNA (i.e., the second-order analysis), confirmed that the disease exhibits complete spatial randomness. This suggests the spread of the disease is not from tree to tree and that the age of the palm does not play a significant role in determining the spatial pattern of the disease. Knowledge of the spatial pattern of the disease should help in disease management programs and benefit the industry in the future. Statistical modelling is expected to help in identifying the right model to estimate the yield loss of oil palm due to BSR disease in the future.
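    As an illustration of the second-order analysis, the sketch below runs a Clark-Evans nearest-neighbour test for complete spatial randomness on simulated tree coordinates standing in for mapped BSR-positive palms.

    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.stats import norm

    rng = np.random.default_rng(9)
    pts = rng.uniform(0, 100, (150, 2))   # tree positions in a 100x100 plot

    tree = cKDTree(pts)
    d, _ = tree.query(pts, k=2)           # k=1 is each point itself
    r_obs = d[:, 1].mean()                # mean nearest-neighbour distance

    density = len(pts) / (100 * 100)
    r_exp = 0.5 / np.sqrt(density)        # expectation under CSR
    se = 0.26136 / np.sqrt(len(pts) * density)
    z = (r_obs - r_exp) / se              # boundary effects ignored here
    print(f"R={r_obs / r_exp:.3f}  z={z:.2f}  p={2 * norm.sf(abs(z)):.3f}")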

  17. Precipitation Cluster Distributions: Current Climate Storm Statistics and Projected Changes Under Global Warming

    Science.gov (United States)

    Quinn, Kevin Martin

    The total amount of precipitation integrated across a precipitation cluster (contiguous precipitating grid cells exceeding a minimum rain rate) is a useful measure of the aggregate size of the disturbance, expressed as the rate of water mass lost or latent heat released, i.e. the power of the disturbance. Probability distributions of cluster power are examined during boreal summer (May-September) and winter (January-March) using satellite-retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) 3B42 and Special Sensor Microwave Imager and Sounder (SSM/I and SSMIS) programs, model output from the High Resolution Atmospheric Model (HIRAM, roughly 0.25-0.5° resolution), seven 1-2° resolution members of the Coupled Model Intercomparison Project Phase 5 (CMIP5) experiment, and National Center for Atmospheric Research Large Ensemble (NCAR LENS). Spatial distributions of precipitation-weighted centroids are also investigated in observations (TRMM-3B42) and climate models during winter as a metric for changes in mid-latitude storm tracks. Observed probability distributions for both seasons are scale-free from the smallest clusters up to a cutoff scale at high cluster power, after which the probability density drops rapidly. When low rain rates are excluded by choosing a minimum rain rate threshold in defining clusters, the models accurately reproduce observed cluster power statistics and winter storm tracks. Changes in behavior in the tail of the distribution, above the cutoff, are important for impacts since these quantify the frequency of the most powerful storms. End-of-century cluster power distributions and storm track locations are investigated in these models under a "business as usual" global warming scenario. The probability of high cluster power events increases by end-of-century across all models, by up to an order of magnitude for the highest-power events for which statistics can be computed. For the three models in the suite with continuous…

  18. Vital Statistics of Panstrongylus geniculatus (Latreille 1811) (Hemiptera: Reduviidae) under Experimental Conditions

    Directory of Open Access Journals (Sweden)

    Cabello Daniel R

    1998-01-01

    A statistical evaluation of the population dynamics of Panstrongylus geniculatus is based on a cohort experiment conducted under controlled laboratory conditions. Animals were fed on hen every 15 days. Egg incubation took 21 days; the mean duration of 1st, 2nd, 3rd, 4th, and 5th instar nymphs was 25, 30, 58, 62, and 67 days, respectively; mean nymphal development time was 39 weeks and adult longevity was 72 weeks. Females reproduced during 30 weeks, producing an average of 61.6 eggs per female over a lifetime; the average number of eggs/female/week was 2.1. The total number of eggs produced by the cohort was 1379. Average hatch for the cohort was 88.9%; it was not affected by the age of the mother. Age-specific survival and reproduction tables were constructed. The following population parameters were evaluated: generation time was 36.1 weeks; net reproduction rate was 89.4; intrinsic rate of natural increase was 0.125; instantaneous birth and death rates were 0.163 and 0.039, respectively; finite rate of increase was 1.13; total reproductive value was 1196; and the stable age distribution was 31.2% eggs, 64.7% nymphs and 4.1% adults. Finally, the population characteristics of P. geniculatus lead to the conclusion that this species is a K-strategist.
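    A minimal sketch of the standard life-table calculations behind such parameters. The survivorship and fecundity schedules below are illustrative, not the cohort's actual data, and r is obtained from the usual ln(R0)/T approximation rather than the exact Euler-Lotka solution.

    import numpy as np

    x = np.arange(0, 80.0)                          # age in weeks
    lx = np.exp(-0.02 * x)                          # survivorship to age x
    mx = np.where((x >= 40) & (x < 70), 2.1, 0.0)   # eggs/female/week

    R0 = np.sum(lx * mx)          # net reproduction rate
    T = np.sum(x * lx * mx) / R0  # mean generation time (weeks)
    r = np.log(R0) / T            # approximate intrinsic rate of increase
    lam = np.exp(r)               # finite rate of increase
    print(f"R0={R0:.1f}  T={T:.1f} wk  r={r:.3f}/wk  lambda={lam:.3f}")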

  19. Failure-censored accelerated life test sampling plans for Weibull distribution under expected test time constraint

    International Nuclear Information System (INIS)

    Bai, D.S.; Chun, Y.R.; Kim, J.G.

    1995-01-01

    This paper considers the design of life-test sampling plans based on failure-censored accelerated life tests. The lifetime distribution of products is assumed to be Weibull with a scale parameter that is a log linear function of a (possibly transformed) stress. Two levels of stress higher than the use condition stress, high and low, are used. Sampling plans with equal expected test times at high and low test stresses which satisfy the producer's and consumer's risk requirements and minimize the asymptotic variance of the test statistic used to decide lot acceptability are obtained. The properties of the proposed life-test sampling plans are investigated

  20. Modeling the Test-Retest Statistics of a Localization Experiment in the Full Horizontal Plane.

    Science.gov (United States)

    Morsnowski, André; Maune, Steffen

    2016-10-01

    Two approaches to modeling the test-retest statistics of a localization experiment, based on a Gaussian distribution and on surrogate data, are introduced. Their efficiency is investigated using different measures describing directional hearing ability. A localization experiment in the full horizontal plane is a challenging task for hearing-impaired patients. In clinical routine, we use this experiment to evaluate the progress of our cochlear implant (CI) recipients. Listening and time effort limit the reproducibility. The localization experiment consists of a circle of 12 loudspeakers placed in an anechoic room, a "camera silens". In darkness, HSM sentences are presented at 65 dB pseudo-randomly from all 12 directions with five repetitions. This experiment is modeled by a set of Gaussian distributions with different standard deviations added to a perfect estimator, as well as by surrogate data. Five repetitions per direction are used to produce surrogate data distributions for the sensation directions. To investigate the statistics, we retrospectively use the data of 33 CI patients with 92 pairs of test-retest measurements from the same day. The first model does not take inversions into account (i.e., permutations of the direction from back to front and vice versa), although these are common for hearing-impaired persons, particularly in the rear hemisphere. The second model considers these inversions but does not work with all measures. The introduced models successfully describe the test-retest statistics of directional hearing. However, since the two approaches perform differently on the investigated measures, no general recommendation can be provided. The presented test-retest statistics enable paired test comparisons for localization experiments.

  1. Assessing Regional Scale Variability in Extreme Value Statistics Under Altered Climate Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Brunsell, Nathaniel [Univ. of Kansas, Lawrence, KS (United States); Mechem, David [Univ. of Kansas, Lawrence, KS (United States); Ma, Chunsheng [Wichita State Univ., KS (United States)

    2015-02-20

    Recent studies have suggested that low-frequency modes of climate variability can significantly influence regional climate. The climatology associated with extreme events has been shown to be particularly sensitive. This has profound implications for droughts, heat waves, and food production. We propose to examine regional climate simulations conducted over the continental United States by applying a recently developed technique which combines wavelet multi-resolution analysis with information theory metrics. This research is motivated by two fundamental questions concerning the spatial and temporal structure of extreme events: 1) which temporal scales of the extreme value distributions are most sensitive to alteration by low-frequency climate forcings, and 2) what is the nature of the spatial structure of variation in these timescales? The primary objective is to assess to what extent information theory metrics can be useful in characterizing the nature of extreme weather phenomena. Specifically, we hypothesize that (1) changes in the nature of extreme events will impact the temporal probability density functions and that information theory metrics will be sensitive to these changes, and (2) via a wavelet multi-resolution analysis, we will be able to characterize the relative contribution of different timescales to the stochastic nature of extreme events. In order to address these hypotheses, we propose a unique combination of an established regional climate modeling approach and advanced statistical techniques to assess the effects of low-frequency modes on climate extremes over North America. The behavior of climate extremes in RCM simulations for the 20th century will be compared with statistics calculated from the United States Historical Climatology Network (USHCN) and simulations from the North American Regional Climate Change Assessment Program (NARCCAP). This effort will serve to establish the baseline behavior of climate extremes, the…

  2. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies

    Science.gov (United States)

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-01-01

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008–2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. PMID:27892471
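    In the spirit of (but not identical to) the published method, the sketch below screens SNPs with a linear SVM and then tests only the top-ranked candidates, with a Bonferroni correction over the screened subset. The simulated genotypes, SVM settings, and threshold rule are assumptions; COMBI itself calibrates its thresholds more carefully.

    import numpy as np
    from scipy.stats import chi2_contingency
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(10)
    n, p, k = 400, 1000, 20
    X = rng.integers(0, 3, (n, p)).astype(float)   # genotypes coded 0/1/2
    beta = np.zeros(p)
    beta[:3] = 0.8                                 # three causal SNPs
    signal = X @ beta
    y = (signal + rng.normal(0, 1, n) > np.median(signal)).astype(int)

    # Step 1: screening by SVM weight magnitude
    svm = LinearSVC(C=0.1, dual=False, max_iter=5000).fit(X, y)
    top = np.argsort(np.abs(svm.coef_[0]))[::-1][:k]

    # Step 2: association tests only on the screened SNPs
    for j in sorted(top):
        table = np.array([[np.sum((X[:, j] == g) & (y == c)) for g in (0, 1, 2)]
                          for c in (0, 1)])
        p_val = chi2_contingency(table)[1]
        if p_val < 0.05 / k:                       # Bonferroni over k tests
            print(f"SNP {j}: p={p_val:.2e}")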

  3. Using a Statistical Approach to Anticipate Leaf Wetness Duration Under Climate Change in France

    Science.gov (United States)

    Huard, F.; Imig, A. F.; Perrin, P.

    2014-12-01

    Leaf wetness plays a major role in the development of fungal plant diseases. Leaf wetness duration (LWD) above a threshold value is determinant for infection and can be seen as a good indicator of the impact of climate on infection occurrence and risk. As LWD is not widely measured, several methods, based on physical and empirical approaches, have been developed to estimate it from weather data. Many LWD statistical models do exist, but the lack of a measurement standard requires reassessment. A new empirical LWD model, called MEDHI (Modèle d'Estimation de la Durée d'Humectation à l'Inra), was developed for the French configuration of wetness sensors (angle: 90°, height: 50 cm). This deployment differs from what is usually recommended by manufacturers or authors in other countries (angle from 10 to 60°, height from 10 to 150 cm…). MEDHI is a decision support system based on hourly climatic conditions at time steps n and n-1, taking into account relative humidity, rainfall and previously simulated LWD. Air temperature, relative humidity, wind speed, rain and LWD data from several sensors in 2 configurations were measured during 6 months in Toulouse and Avignon (South West and South East of France) to calibrate MEDHI. A comparison of the empirical models NHRH (RH threshold), DPD (dew point depression), CART (classification and regression tree analysis depending on RH, wind speed and dew point depression) and MEDHI, using meteorological and LWD measurements obtained during 5 months in Toulouse, showed that this new model, MEDHI, was definitely better adapted to French conditions. In the context of climate change, MEDHI was used to map the evolution of leaf wetness duration in France from 1950 to 2100 with the French regional climate model ALADIN under different Representative Concentration Pathways (RCPs), using a QM (quantile-mapping) statistical downscaling method. Results give information on the spatial distribution of infection risks.

  4. Statistical hypothesis testing and common misinterpretations: Should we abandon p-value in forensic science applications?

    Science.gov (United States)

    Taroni, F; Biedermann, A; Bozza, S

    2016-02-01

    Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel extensive debate in the literature. More recently, controversial discussion was initiated by an editorial decision of a scientific journal [1] to refuse any paper submitted for publication containing null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Detecting Randomness: the Sensitivity of Statistical Tests to Deviations from a Constant Rate Poisson Process

    Science.gov (United States)

    Michael, A. J.

    2012-12-01

    Detecting trends in the rate of sporadic events is a problem for earthquakes and other natural hazards such as storms, floods, or landslides. I use synthetic events to judge the tests used to address this problem in seismology and consider their application to other hazards. Recent papers have analyzed the record of magnitude ≥7 earthquakes since 1900 and concluded that the events are consistent with a constant rate Poisson process plus localized aftershocks (Michael, GRL, 2011; Shearer and Stark, PNAS, 2012; Daub et al., GRL, 2012; Parsons and Geist, BSSA, 2012). Each paper removed localized aftershocks and then used a different suite of statistical tests to test the null hypothesis that the remaining data could be drawn from a constant rate Poisson process. The methods include KS tests between event times or inter-event times and predictions from a Poisson process, the autocorrelation function on inter-event times, and two tests on the number of events in time bins: the Poisson dispersion test and the multinomial chi-square test. The range of statistical tests gives us confidence in the conclusions, which are robust with respect to the choice of tests and parameters. But which tests are optimal and how sensitive are they to deviations from the null hypothesis? The latter point was raised by Dimer (arXiv, 2012), who suggested that the lack of consideration of Type 2 errors prevents these papers from being able to place limits on the degree of clustering and rate changes that could be present in the global seismogenic process. I produce synthetic sets of events that deviate from a constant rate Poisson process using a variety of statistical simulation methods including Gamma distributed inter-event times and random walks. The sets of synthetic events are examined with the statistical tests described above. Preliminary results suggest that with 100 to 1000 events, a data set that does not reject the Poisson null hypothesis could have a variability that is 30% to
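
    Two of the cited checks are easy to sketch on a synthetic catalog (illustrative only; note that estimating the rate from the data makes the KS test conservative, a Lilliefors-type caveat):

        # Gamma inter-event times with shape != 1 deviate from the Poisson null.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        dt = rng.gamma(shape=0.7, scale=1.0 / 0.7, size=500)   # mean rate ~ 1
        times = np.cumsum(dt)

        # KS test of inter-event times against an exponential with the observed mean
        ks = stats.kstest(dt, "expon", args=(0, dt.mean()))
        print("KS p-value:", ks.pvalue)

        # Poisson dispersion test: under the null, (nbins - 1) * variance / mean of
        # the binned counts is approximately chi-square with nbins - 1 df
        counts, _ = np.histogram(times, bins=50)
        disp = (len(counts) - 1) * counts.var(ddof=1) / counts.mean()
        print("dispersion p-value:", stats.chi2.sf(disp, df=len(counts) - 1))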

  6. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
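
    G*Power itself is a standalone program; as a rough cross-check of the kind of computation it performs, the power of the two-sided Pearson correlation test can be estimated by simulation (hypothetical settings n = 50, rho = 0.3; this sketch is not part of G*Power):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n, rho, alpha, reps = 50, 0.3, 0.05, 5000
        cov = np.array([[1.0, rho], [rho, 1.0]])
        hits = 0
        for _ in range(reps):
            xy = rng.multivariate_normal([0, 0], cov, size=n)
            _, p = stats.pearsonr(xy[:, 0], xy[:, 1])
            hits += p < alpha
        print("estimated power:", hits / reps)   # roughly 0.56 for these settings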

  7. Monitoring and analysis of bovine spongiform encephalopathy (BSE) testing in Denmark using statistical models

    DEFF Research Database (Denmark)

    Paisley, Larry

    2002-01-01

    The evolution of monitoring and surveillance for bovine spongiform encephalopathy (BSE) from the phase of passive surveillance that began in the United Kingdom in 1988 until the present is described. Currently, surveillance for BSE in Europe consists of mass testing of cattle slaughtered for human consumption and cattle from certain groups considered to be at higher risk of having clinical or detectable BSE. The results of the ongoing BSE testing in Denmark have been analyzed using two statistical approaches: the "classical" frequentist approach and the Bayesian approach that is widely used in quantitative risk analysis...

  8. Case Studies for the Statistical Design of Experiments Applied to Powered Rotor Wind Tunnel Tests

    Science.gov (United States)

    Overmeyer, Austin D.; Tanner, Philip E.; Martin, Preston B.; Commo, Sean A.

    2015-01-01

    The application of statistical Design of Experiments (DOE) to helicopter wind tunnel testing was explored during two powered rotor wind tunnel entries during the summers of 2012 and 2013. These tests were performed jointly by the U.S. Army Aviation Development Directorate Joint Research Program Office and NASA Rotary Wing Project Office, currently the Revolutionary Vertical Lift Project, at NASA Langley Research Center located in Hampton, Virginia. Both entries were conducted in the 14- by 22-Foot Subsonic Tunnel with a small portion of the overall tests devoted to developing case studies of the DOE approach as it applies to powered rotor testing. A 16- to 47-fold reduction in the number of data points required was estimated by comparing the DOE approach to conventional testing methods. The average error for the DOE surface response model for the OH-58F test was 0.95 percent and 4.06 percent for drag and download, respectively. The DOE surface response model of the Active Flow Control test captured the drag within 4.1 percent of measured data. The operational differences between the two testing approaches were identified but did not prevent the safe operation of the powered rotor model throughout the DOE test matrices.

  9. Statistical inference on censored data for targeted clinical trials under enrichment design.

    Science.gov (United States)

    Chen, Chen-Fang; Lin, Jr-Rung; Liu, Jen-Pei

    2013-01-01

    For traditional clinical trials, inclusion and exclusion criteria are usually based on clinical endpoints; the genetic or genomic variability of the trial participants is not fully utilized in the criteria. After completion of the Human Genome Project, disease targets at the molecular level can be identified and utilized for the treatment of diseases. However, the accuracy of diagnostic devices for the identification of such molecular targets is usually not perfect. Some of the patients enrolled in targeted clinical trials with a positive result for the molecular target might not actually have the specific molecular target. As a result, the treatment effect may be underestimated in the patient population truly having the molecular target. To resolve this issue, under the exponential distribution, we develop inferential procedures for the treatment effects of the targeted drug based on censored endpoints in the patients truly having the molecular targets. Under an enrichment design, we propose using the expectation-maximization algorithm in conjunction with the bootstrap technique to incorporate the inaccuracy of the diagnostic device for detection of the molecular targets into the inference on the treatment effects. A simulation study was conducted to empirically investigate the performance of the proposed methods. Simulation results demonstrate that under the exponential distribution, the proposed estimator is nearly unbiased with adequate precision, and the confidence interval can provide adequate coverage probability. In addition, the proposed testing procedure can adequately control the size with sufficient power. On the other hand, when the proportional hazards assumption is violated, additional simulation studies show that the type I error rate is not controlled at the nominal level and is an increasing function of the positive predictive value. A numerical example illustrates the proposed procedures. Copyright © 2013 John Wiley & Sons, Ltd.
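
    The EM idea can be illustrated with a deliberately simplified sketch (our own code with synthetic data; the bootstrap step and the paper's full inferential machinery are omitted): patients who test positive are treated as a mixture of true target-positives (rate lam1) and false positives (rate lam0), mixed by a known positive predictive value.

        import numpy as np

        def em_exponential_mixture(t, d, ppv, n_iter=200):
            """t: observed times; d: 1=event, 0=censored; ppv: known P(true target)."""
            lam1, lam0 = 0.5 / t.mean(), 2.0 / t.mean()  # crude starts; use restarts
            for _ in range(n_iter):
                # E-step: censored exponential likelihood f(t)^d * S(t)^(1-d)
                l1 = ppv * lam1**d * np.exp(-lam1 * t)
                l0 = (1 - ppv) * lam0**d * np.exp(-lam0 * t)
                w = l1 / (l1 + l0)               # P(truly target-positive | data)
                # M-step: weighted exponential MLEs (events over exposure)
                lam1 = np.sum(w * d) / np.sum(w * t)
                lam0 = np.sum((1 - w) * d) / np.sum((1 - w) * t)
            return lam1, lam0

        rng = np.random.default_rng(4)
        n, ppv = 400, 0.8
        true = rng.random(n) < ppv                       # who really has the target
        t_event = np.where(true, rng.exponential(2.0, n), rng.exponential(0.5, n))
        c = rng.exponential(2.0, n)                      # censoring times
        t, d = np.minimum(t_event, c), (t_event <= c).astype(float)
        print(em_exponential_mixture(t, d, ppv))         # approx (0.5, 2.0)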

  10. AI-based (ANN and SVM) statistical downscaling methods for precipitation estimation under climate change scenarios

    Science.gov (United States)

    Mehrvand, Masoud; Baghanam, Aida Hosseini; Razzaghzadeh, Zahra; Nourani, Vahid

    2017-04-01

    Since statistical downscaling methods are the most widely used tools for hydrologic impact studies under climate change scenarios, nonlinear regression models known as Artificial Intelligence (AI)-based models, such as the Artificial Neural Network (ANN) and Support Vector Machine (SVM), have been used to spatially downscale the precipitation outputs of Global Climate Models (GCMs). The study has been carried out using GCM and station data over GCM grid points located around the Peace-Tampa Bay watershed weather stations. Before downscaling with the AI-based model, correlation coefficients were computed between a few selected large-scale predictor variables and local-scale predictands to select the most effective predictors. The selected predictors were then assessed considering the grid location for the site in question. In order to increase the accuracy of the AI-based downscaling model, pre-processing was applied to the precipitation time series. In this way, the precipitation data derived from the various GCMs were analyzed thoroughly to find the highest correlation coefficient between GCM-based historical data and station precipitation data. Both GCM and station precipitation time series were assessed by comparing means and variances over specific intervals. Results indicated that there is a similar trend between GCM and station precipitation data; however, the station data form a non-stationary time series while the GCM data do not. Finally, the AI-based downscaling model was applied to several GCMs with the selected predictors, targeting the local precipitation time series as predictand. The outputs of this last step were used to produce multiple ensembles of downscaled AI-based models.
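
    A minimal sketch of the SVM branch of such a workflow (an assumed setup with synthetic data, not the study's code): standardized large-scale predictors are regressed onto local station precipitation with support vector regression.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(5)
        n, n_pred = 1000, 6                     # days, selected predictor variables
        X = rng.normal(size=(n, n_pred))        # synthetic GCM predictor fields
        y = np.maximum(0, X @ rng.normal(size=n_pred) + rng.normal(scale=0.5, size=n))

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
        model.fit(X[:800], y[:800])             # calibrate on the historical period
        print("test R^2:", model.score(X[800:], y[800:]))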

  11. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

    Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform also requires that researchers interpret CIs without recourse to NHST.

  12. Does Instruction Affect the Underlying Dimensionality of a Kinesiology Test?

    Science.gov (United States)

    Bezruczko, Nikolaus; Frank, Eva; Perkins, Kyle

    Does effective instruction, which changes students' knowledge and possibly alters their cognitive functions, also affect the dimensionality of an achievement test? This question was examined by the parameterization of kinesiology test items (n = 42) with a Rasch dichotomous model, followed by an investigation of dimensionality in a pre- and post-test quasi-experimental study design. College students (n = 108) provided responses to kinesiology achievement test items. Then the stability of item difficulties, gender differences, and the interaction of item content categories with dimensionality were examined. In addition, a PCA/t-test protocol was implemented to examine dimensionality threats from the item residuals. Internal construct validity was investigated by regressing item content components on calibrated item difficulties. Measurement model item residuals were also investigated with statistical decomposition methods. In general, the results showed significant gains in student achievement between pre- and post-testing, and dimensionality disturbances were relatively minor. The amount of unexpected item "shift" in an un-equated measurement dimension between pre- and post-testing was less than ten percent of the total items and was largely concentrated among several unrelated items. An unexpected finding was a residual cluster consisting of several items testing related technical content. Complicating interpretation, these items tended to appear near the end of the test, which implicates test position as a threat to measurement equivalence. In general, the results across several methods did not tend to identify common threats and instead pointed to multiple sources of threats with varying degrees of prominence. These results suggest conventional approaches to measurement equivalence that emphasize expedient overall procedures such as DIF, IRT, and factor analysis are probably capturing isolated sources of variability. Their implementation probably improves measurement

  13. Power performance under constant speed test with palm oil ...

    African Journals Online (AJOL)

    The torque and power performance tests were carried out with a single-cylinder Techno four-stroke diesel engine under constant speeds of 2000, 1500 and 1100 rpm. Five fuels, the Dura Palm Oil biodiesel, B1100; Tenera Palm oil biodiesel, B2100; Dura Palm Oil biodiesel/diesel blend at 10/90 vol/vol, B110; Tenera Palm oil ...

  14. Rorschach test: Italian calibration update about statistical frequencies of responses and location sheets

    Directory of Open Access Journals (Sweden)

    Stefano Caruson

    2015-12-01

    Full Text Available The remarkable importance of the calibration of a test lies in the formalization of useful statistical norms. The determination of these norms is of key importance for the Rorschach Test because it allows objectifying the estimates of the interpretations' formal qualities and helps characterize responses consistent with common perception. The aim of this work is to communicate the new results of a study conducted on Rorschach protocols from a sample of "non-clinical" subjects. The expert team in psychodiagnostics of CIFRIC (Italian Center for training, research and clinic in medicine and psychology) carried out this work, identifying the rate at which the details of each card are interpreted by the normative sample. The data obtained are systematized in new Location sheets, which refer to the next edition of the "Updated Manual of Locations and Coding of Responses to Rorschach Test". Since the Rorschach Test is one of the more effective means of assessing personality, it is fundamental to provide the professionals who use it with access to updated statistical data that reflect the reference population, in order to derive reliable and objectively valid indications.

  15. Filtering a statistically exactly solvable test model for turbulent tracers from partial observations

    International Nuclear Information System (INIS)

    Gershgorin, B.; Majda, A.J.

    2011-01-01

    A statistically exactly solvable model for passive tracers is introduced as a test model for the authors' Nonlinear Extended Kalman Filter (NEKF) as well as other filtering algorithms. The model involves a Gaussian velocity field and a passive tracer governed by the advection-diffusion equation with an imposed mean gradient. The model has direct relevance to engineering problems such as the spread of pollutants in the air or contaminants in the water as well as climate change problems concerning the transport of greenhouse gases such as carbon dioxide with strongly intermittent probability distributions consistent with the actual observations of the atmosphere. One of the attractive properties of the model is the existence of the exact statistical solution. In particular, this unique feature of the model provides an opportunity to design and test fast and efficient algorithms for real-time data assimilation based on rigorous mathematical theory for a turbulence model problem with many active spatiotemporal scales. Here, we extensively study the performance of the NEKF which uses the exact first and second order nonlinear statistics without any approximations due to linearization. The role of partial and sparse observations, the frequency of observations and the observation noise strength in recovering the true signal, its spectrum, and fat tail probability distribution are the central issues discussed here. The results of our study provide useful guidelines for filtering realistic turbulent systems with passive tracers through partial observations.

  16. Analysis of North Korea's Nuclear Tests under Prospect Theory

    International Nuclear Information System (INIS)

    Lee, Han Myung; Ryu, Jae Soo; Lee, Kwang Seok; Lee, Dong Hoon; Jun, Eunju; Kim, Mi Jin

    2013-01-01

    North Korea has chosen nuclear weapons as the means to protect its sovereignty. Despite international society's endeavors and sanctions to encourage North Korea to abandon its nuclear ambition, North Korea has repeatedly conducted nuclear testing. In this paper, the reason for North Korea's addiction to a nuclear arsenal is addressed within the framework of cognitive psychology. Prospect theory takes an epistemological approach usually overlooked in rational choice theories. It offers useful insight into why North Korea, under a crisis situation, has thrown out a stable choice and taken on a risky one such as nuclear testing. From the viewpoint of prospect theory, the nuclear tests by North Korea can be understood as follows: the first nuclear test in 2006 is seen as a trial to escape from loss areas such as financial sanctions and regime threats; the second test in 2009 is interpreted as a consequence of the strategy to recover losses by making a direct confrontation with the United States; and the third test in 2013 is understood as an attempt to strengthen internal solidarity after Kim Jong-eun inherited the dynasty, as well as to enhance bargaining power against the United States. Thus, it can be summarized that Pyongyang repeated its nuclear tests to escape from a negative domain and to settle into a positive one. Moreover, in the future, North Korea may not be willing to readily give up its nuclear capabilities to ensure the survival of its own regime

  17. Filter Media Tests Under Simulated Martian Atmospheric Conditions

    Science.gov (United States)

    Agui, Juan H.

    2016-01-01

    Human exploration of Mars will require the optimal utilization of planetary resources. One of its abundant resources is the Martian atmosphere that can be harvested through filtration and chemical processes that purify and separate it into its gaseous and elemental constituents. Effective filtration needs to be part of the suite of resource utilization technologies. A unique testing platform is being used which provides the relevant operational and instrumental capabilities to test articles under the proper simulated Martian conditions. A series of tests were conducted to assess the performance of filter media. Light sheet imaging of the particle flow provided a means of detecting and quantifying particle concentrations to determine capturing efficiencies. The media's efficiency was also evaluated by gravimetric means through a by-layer filter media configuration. These tests will help to establish techniques and methods for measuring capturing efficiency and arrestance of conventional fibrous filter media. This paper will describe initial test results on different filter media.

  18. Association testing for next-generation sequencing data using score statistics

    DEFF Research Database (Denmark)

    Skotte, Line; Korneliussen, Thorfinn Sand; Albrechtsen, Anders

    2012-01-01

    ...of genotype calls into account have been proposed; most require numerical optimization which for large-scale data is not always computationally feasible. We show that using a score statistic for the joint likelihood of observed phenotypes and observed sequencing data provides an attractive approach to association testing for next-generation sequencing data. The joint model accounts for the genotype classification uncertainty via the posterior probabilities of the genotypes given the observed sequencing data, which gives the approach higher power than methods based on called genotypes. This strategy remains computationally feasible due to the use of score statistics. As part of the joint likelihood, we model the distribution of the phenotypes using a generalized linear model framework, which works for both quantitative and discrete phenotypes. Thus, the method presented here is applicable to case-control studies...
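
    A toy rendering of the dosage-based score-test idea (our own illustrative code, not the authors' software): the called genotype is replaced by its posterior mean given the sequencing reads, and association is tested under an intercept-only null.

        import numpy as np
        from scipy import stats

        def dosage_score_test(y, dosage):
            """y: 0/1 phenotype; dosage: E[genotype | sequencing data] per person."""
            mu = y.mean()                              # null fitted probability
            u = np.sum((y - mu) * dosage)              # score for the genetic effect
            v = mu * (1 - mu) * np.sum((dosage - dosage.mean()) ** 2)
            x2 = u * u / v                             # ~ chi-square(1) under H0
            return x2, stats.chi2.sf(x2, df=1)

        rng = np.random.default_rng(6)
        g = rng.binomial(2, 0.3, size=1000)                    # true genotypes
        dosage = np.clip(g + rng.normal(0, 0.4, 1000), 0, 2)   # noisy posterior means
        y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.4 * g))))
        print(dosage_score_test(y, dosage))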

  19. Dripper testing: Application of statistical quality control for measurement system analysis

    Directory of Open Access Journals (Sweden)

    Hermes S. da Rocha

    Full Text Available Laboratory tests for technical evaluation or irrigation material testing involve the measurement of many variables, as well as monitoring and control of test conditions. This study, carried out in 2016, aimed at using statistical quality control techniques to evaluate the results of dripper tests. Exponentially weighted moving average control charts were elaborated, together with capability indices for the measurement of test pressure and water temperature, and a repeatability and reproducibility (Gage RR) study of the flow measurement system using 10 replicates, in three work shifts (morning, afternoon and evening), with 25 emitters. Both the test pressure and the water temperature remained stable, with "excellent" performance for the pressure adjustment process by a proportional-integral-derivative controller. The variability between emitters was the component with the highest contribution to the total variance of the flow measurements, with 96.77% of the total variance due to the variability between parts. The measurement system was classified as "acceptable" or "approved" by the Gage RR study, and non-random causes of significant variability were not identified in the test routine.
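
    The EWMA chart itself is textbook material; a generic sketch with hypothetical pressure readings (not the study's data) is:

        import numpy as np

        def ewma_chart(x, mu0, sigma0, lam=0.2, L=3.0):
            """Return EWMA values with upper/lower control limits per observation."""
            z = np.empty_like(x, dtype=float)
            z_prev = mu0
            for i, xt in enumerate(x):
                z_prev = lam * xt + (1 - lam) * z_prev     # EWMA recursion
                z[i] = z_prev
            t = np.arange(1, len(x) + 1)
            half = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
            return z, mu0 + half, mu0 - half

        rng = np.random.default_rng(7)
        pressure = rng.normal(100.0, 1.0, 60)              # synthetic kPa readings
        z, ucl, lcl = ewma_chart(pressure, mu0=100.0, sigma0=1.0)
        print("out-of-control points:", np.where((z > ucl) | (z < lcl))[0])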

  20. Mulcom: a multiple comparison statistical test for microarray data in Bioconductor.

    Science.gov (United States)

    Isella, Claudio; Renzulli, Tommaso; Corà, Davide; Medico, Enzo

    2011-09-28

    Many microarray experiments search for genes with differential expression between a common "reference" group and multiple "test" groups. In such cases currently employed statistical approaches based on t-tests or close derivatives have limited efficacy, mainly because estimation of the standard error is done on only two groups at a time. Alternative approaches based on ANOVA correctly capture within-group variance from all the groups, but then do not confront single test groups with the reference. Ideally, a t-test better suited for this type of data would compare each test group with the reference, but use within-group variance calculated from all the groups. We implemented an R-Bioconductor package named Mulcom, with a statistical test derived from the Dunnett's t-test, designed to compare multiple test groups individually against a common reference. Interestingly, the Dunnett's test uses for the denominator of each comparison a within-group standard error aggregated from all the experimental groups. In addition to the basic Dunnett's t value, the package includes an optional minimal fold-change threshold, m. Due to the automated, permutation-based estimation of False Discovery Rate (FDR), the package also permits fast optimization of the test, to obtain the maximum number of significant genes at a given FDR value. When applied to a time-course experiment profiled in parallel on two microarray platforms, and compared with two commonly used tests, Mulcom displayed better concordance of significant genes in the two array platforms (39% vs. 26% or 15%), and higher enrichment in functional annotation to categories related to the biology of the experiment (p value < 0.001 in 4 categories vs. 3). The Mulcom package provides a powerful tool for the identification of differentially expressed genes when several experimental conditions are compared against a common reference. The results of the practical example presented here show that lists of differentially expressed
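
    The core of the statistic, as described above, can be sketched in a few lines (an illustrative re-implementation, not the Bioconductor source): each test group is compared with the reference, the standard error pools within-group variance from all groups, and an optional fold-change threshold m is subtracted from the numerator.

        import numpy as np

        def mulcom_like_t(groups, ref, m=0.0):
            """groups: list of 1-D arrays (test groups); ref: reference array."""
            all_groups = groups + [ref]
            ss = sum(((g - g.mean()) ** 2).sum() for g in all_groups)
            df = sum(len(g) - 1 for g in all_groups)
            s2 = ss / df                              # pooled within-group variance
            return [(abs(g.mean() - ref.mean()) - m) /
                    np.sqrt(s2 * (1 / len(g) + 1 / len(ref))) for g in groups]

        rng = np.random.default_rng(8)
        ref = rng.normal(0.0, 1.0, 5)
        groups = [rng.normal(d, 1.0, 5) for d in (0.0, 1.5, 3.0)]
        print(mulcom_like_t(groups, ref, m=0.5))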

  1. Mulcom: a multiple comparison statistical test for microarray data in Bioconductor

    Directory of Open Access Journals (Sweden)

    Renzulli Tommaso

    2011-09-01

    Full Text Available Background: Many microarray experiments search for genes with differential expression between a common "reference" group and multiple "test" groups. In such cases currently employed statistical approaches based on t-tests or close derivatives have limited efficacy, mainly because estimation of the standard error is done on only two groups at a time. Alternative approaches based on ANOVA correctly capture within-group variance from all the groups, but then do not confront single test groups with the reference. Ideally, a t-test better suited for this type of data would compare each test group with the reference, but use within-group variance calculated from all the groups. Results: We implemented an R-Bioconductor package named Mulcom, with a statistical test derived from the Dunnett's t-test, designed to compare multiple test groups individually against a common reference. Interestingly, the Dunnett's test uses for the denominator of each comparison a within-group standard error aggregated from all the experimental groups. In addition to the basic Dunnett's t value, the package includes an optional minimal fold-change threshold, m. Due to the automated, permutation-based estimation of False Discovery Rate (FDR), the package also permits fast optimization of the test, to obtain the maximum number of significant genes at a given FDR value. When applied to a time-course experiment profiled in parallel on two microarray platforms, and compared with two commonly used tests, Mulcom displayed better concordance of significant genes in the two array platforms (39% vs. 26% or 15%) and higher enrichment in functional annotation to categories related to the biology of the experiment (p value < 0.001 in 4 categories vs. 3). Conclusions: The Mulcom package provides a powerful tool for the identification of differentially expressed genes when several experimental conditions are compared against a common reference. The results of the practical example presented here show that lists of

  2. Drop Test Results of CRDM under Seismic Loads

    International Nuclear Information System (INIS)

    Choi, Myoung-Hwan; Cho, Yeong-Garp; Kim, Gyeong-Ho; Sun, Jong-Oh; Huh, Hyung

    2016-01-01

    This paper describes the test results demonstrating the drop performance of the CRDM under seismic loads. The top-mounted CRDM driven by a stepping motor for the Jordan Research and Training Reactor (JRTR) has been developed at KAERI. The CRDM for JRTR has been optimized by design improvements based on that of HANARO. It is necessary to verify the drop performance under seismic loads such as the operating basis earthquake (OBE) and safe shutdown earthquake (SSE); in particular, the CAR drop times are important data for the safety analysis. Drop tests were performed to confirm the drop performance under seismic loads. The delay of drop time at Rig no. 2 due to seismic loads is greater than that at Rig no. 3. The total pure drop times under seismic loads are estimated as 1.169 and 1.855, respectively

  3. IEEE Std 101-1972: IEEE guide for the statistical analysis of thermal life test data

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Procedures for estimating the thermal life of electrical insulation systems and materials call for life tests at several temperatures, usually well above the expected normal operating temperature. By the selection of high temperatures for the tests, the life of the insulation samples will be terminated, according to some selected failure criterion or criteria, within relatively short times -- typically one week to one year. The result of these thermally accelerated life tests will be a set of life values for a corresponding set of temperatures. Usually the data consist of a set of life values for each of two to four (occasionally more) test temperatures, 10 °C to 25 °C apart. The objective then is to establish from these data the mean life values at each temperature and the functional dependence of life on temperature, as well as the statistical consistency and the confidence to be attributed to the mean life values and the functional life-temperature dependence. The purpose of this guide is to assist in this objective and to give guidance for comparing the results of tests on different materials and of different tests on the same materials
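
    The regression underlying such thermal-endurance data is commonly an Arrhenius-type fit of log life against inverse absolute temperature; a sketch with made-up numbers follows (the guide's own confidence-interval and consistency procedures are not reproduced here):

        import numpy as np

        temps_C = np.array([180.0, 200.0, 220.0])       # accelerated test temperatures
        lives_h = np.array([8000.0, 2500.0, 900.0])     # mean observed lives (made up)

        inv_T = 1.0 / (temps_C + 273.15)                # 1/K
        b, a = np.polyfit(inv_T, np.log10(lives_h), 1)  # log10(life) = a + b/T

        T_op = 140.0 + 273.15                           # operating temperature, K
        print("extrapolated life (h):", 10 ** (a + b / T_op))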

  4. Statistical refinements for data analysis of mollusc reproduction tests: an example with Lymnaea stagnalis

    DEFF Research Database (Denmark)

    Holbech, Henrik

    ...was twofold. First, we refined the statistical analyses of reproduction data accounting for mortality all along the test period. The variable "number of clutches/eggs produced per individual-day" was used for ECx modelling, as classically done in epidemiology, in order to account for the time-contribution of each individual to the measured response. Furthermore, the combination of a Gamma-Poisson stochastic part with a Weibull concentration-response model allowed accounting for the inter-replicate variability. Second, we checked for the possibility of optimizing the initial experimental design through...

  5. Testing for cubic smoothing splines under dependent data.

    Science.gov (United States)

    Nummi, Tapio; Pan, Jianxin; Siren, Tarja; Liu, Kun

    2011-09-01

    In most research on smoothing splines the focus has been on estimation, while inference, especially hypothesis testing, has received less attention. By defining design matrices for fixed and random effects and the structure of the covariance matrices of random errors in an appropriate way, the cubic smoothing spline admits a mixed model formulation, which places this nonparametric smoother firmly in a parametric setting. Thus nonlinear curves can be included with random effects and random coefficients. The smoothing parameter is the ratio of the random-coefficient and error variances and tests for linear regression reduce to tests for zero random-coefficient variances. We propose an exact F-test for the situation and investigate its performance in a real pine stem data set and by simulation experiments. Under certain conditions the suggested methods can also be applied when the data are dependent. © 2010, The International Biometric Society.

  6. Type I error and the power of the s-test: old lessons from a new, analytically justified statistical test for phylogenies.

    Science.gov (United States)

    Antezana, M A; Hudson, R R

    1999-06-01

    We present a new procedure for assessing the statistical significance of the most likely unrooted dichotomous topology inferrable from four DNA sequences. The procedure calculates directly a P-value for the support given to this topology by the informative sites congruent with it, assuming the most likely star topology as the null hypothesis. Informative sites are crucial in the determination of the maximum likelihood dichotomous topology and are therefore an obvious target for a statistical test of phylogenies. Our P-value is the probability of producing through parallel substitutions on the branches of the star topology at least as much support as that given to the maximum likelihood dichotomous topology by the aforementioned informative sites, for any of the three possible dichotomous topologies. The degree of statistical significance is simply the complement of this P-value. Ours is therefore an a posteriori testing approach, in which no dichotomous topology is specified in advance. We implement the test for the case in which all sites behave identically and the substitution model has a single parameter. Under these conditions, the P-value can be easily calculated on the basis of the probabilities of change on the branches of the most likely star topology, because under these assumptions, each site can become informative independently from every other site; accordingly, the total number of informative sites of each kind is binomially distributed. We explore the test's type I error by applying it to data produced in star topologies having all branches equally long, or having two short and two long branches, and various degrees of homoplasy. The test is conservative but we demonstrate, by means of a discreteness correction and progressively assumption-free calculations of the P-values, that (1) the conservativeness is mostly due to the discrete nature of informative sites and (2) the P-values calculated empirically are moreover mostly quite accurate in absolute
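
    The binomial logic of the test can be mimicked with a rough Monte Carlo sketch (ours, with hypothetical parameters; the paper derives the P-value analytically): under the star null each site becomes informative-congruent with one of the three topologies with some small probability q, so the per-topology counts are multinomial, and the P-value is the chance that any topology gathers at least the observed support s.

        import numpy as np

        def s_test_pvalue(n_sites, q, s_obs, reps=100_000, seed=9):
            rng = np.random.default_rng(seed)
            # categories: three topologies (prob q each) plus "uninformative"
            counts = rng.multinomial(n_sites, [q, q, q, 1 - 3 * q], size=reps)
            return np.mean(counts[:, :3].max(axis=1) >= s_obs)

        print(s_test_pvalue(n_sites=500, q=0.01, s_obs=14))   # small -> significant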

  7. An omnibus likelihood test statistic and its factorization for change detection in time series of polarimetric SAR data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Skriver, Henning

    2016-01-01

    Based on an omnibus likelihood ratio test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution with an associated p-value and a factorization of this test statistic, change analysis in a short sequence of multilook, polarimetric SAR data...

  8. A review of statistical methods for testing genetic anticipation: looking for an answer in Lynch syndrome

    DEFF Research Database (Denmark)

    Boonstra, Philip S; Gruber, Stephen B; Raymond, Victoria M

    2010-01-01

    Anticipation, manifested through decreasing age of onset or increased severity in successive generations, has been noted in several genetic diseases. Statistical methods for genetic anticipation range from a simple use of the paired t-test for age of onset restricted to affected parent-child pairs...... to a recently proposed random effects model which includes extended pedigree data and unaffected family members [Larsen et al., 2009]. A naive use of the paired t-test is biased for the simple reason that age of onset has to be less than the age at ascertainment (interview) for both affected parent and child...... the issue of multiplex ascertainment and its effect on the different methods. We then focus on exploring genetic anticipation in Lynch syndrome and analyze new data on the age of onset in affected parent-child pairs from families seen at the University of Michigan Cancer Genetics clinic with a mutation...

  9. Diagnosis of Misalignment in Overhung Rotor using the K-S Statistic and A2 Test

    Science.gov (United States)

    Garikapati, Diwakar; Pacharu, RaviKumar; Munukurthi, Rama Satya Satyanarayana

    2018-02-01

    Vibration measurement at the bearings of rotating machinery has become a useful technique for diagnosing incipient fault conditions. In particular, vibration measurement can be used to detect unbalance in the rotor, bearing failure, gear problems or misalignment between a motor shaft and a coupled shaft. This is a particular problem encountered in turbines, ID fans and FD fans used for power generation. For successful fault diagnosis, it is important to adopt motor current signature analysis (MCSA) techniques capable of identifying the faults. It is also useful to develop techniques for inferring information such as the severity of a fault. It is proposed that modeling the cumulative distribution function of motor current signals with respect to appropriate theoretical distributions, and quantifying the goodness of fit with the Kolmogorov-Smirnov (K-S) statistic and the A2 test, offers a suitable signal feature for diagnosis. This paper demonstrates the successful comparison of the K-S feature and the A2 test for discriminating the misalignment fault from normal operation.
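
    A sketch of how the two goodness-of-fit features could be extracted with SciPy (an assumed pipeline with synthetic signals, not the authors' code): the K-S distance of the signal's empirical CDF from a fitted normal, plus the Anderson-Darling A2 statistic.

        import numpy as np
        from scipy import stats

        def gof_features(signal):
            mu, sd = signal.mean(), signal.std(ddof=1)
            ks_stat, _ = stats.kstest(signal, "norm", args=(mu, sd))
            a2 = stats.anderson(signal, dist="norm").statistic
            return ks_stat, a2

        rng = np.random.default_rng(10)
        healthy = rng.normal(0, 1, 2000)               # synthetic baseline samples
        faulty = np.sin(np.linspace(0, 40 * np.pi, 2000)) + rng.normal(0, 0.3, 2000)
        print("healthy:", gof_features(healthy))
        print("faulty:", gof_features(faulty))         # larger distances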

  10. How allele frequency and study design affect association test statistics with misrepresentation errors.

    Science.gov (United States)

    Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael

    2014-04-01

    We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central $\chi^2$ distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and great robustness against error in the case of a large minor allele frequency. We also show how these results can be used to correct $p$-values.

  11. A review of statistical methods for testing genetic anticipation: looking for an answer in Lynch syndrome

    DEFF Research Database (Denmark)

    Boonstra, Philip S; Gruber, Stephen B; Raymond, Victoria M

    2010-01-01

    ...to a recently proposed random effects model which includes extended pedigree data and unaffected family members [Larsen et al., 2009]. A naive use of the paired t-test is biased for the simple reason that age of onset has to be less than the age at ascertainment (interview) for both affected parent and child, and this right truncation effect is more pronounced in children than in parents. In this study, we first review different statistical methods for testing genetic anticipation in affected parent-child pairs that address the issue of bias due to right truncation. Using affected parent-child pair data, we compare... the issue of multiplex ascertainment and its effect on the different methods. We then focus on exploring genetic anticipation in Lynch syndrome and analyze new data on the age of onset in affected parent-child pairs from families seen at the University of Michigan Cancer Genetics clinic with a mutation...

  12. Experimental and statistical study on fracture boundary of non-irradiated Zircaloy-4 cladding tube under LOCA conditions

    Science.gov (United States)

    Narukawa, Takafumi; Yamaguchi, Akira; Jang, Sunghyon; Amaya, Masaki

    2018-02-01

    For estimating the fracture probability of fuel cladding tubes under loss-of-coolant accident conditions in light-water reactors, laboratory-scale integral thermal shock tests were conducted on non-irradiated Zircaloy-4 cladding tube specimens. The obtained binary data on fracture or non-fracture of the cladding tube specimens were then analyzed statistically. A method to obtain the fracture probability curve as a function of equivalent cladding reacted (ECR) was proposed using Bayesian inference for generalized linear models: probit, logit, and log-probit models. Model selection was then performed in terms of physical characteristics and information criteria, a widely applicable information criterion (WAIC) and a widely applicable Bayesian information criterion (WBIC). As a result, it was clarified that the log-probit model was the best of the three models for estimating the fracture probability, in terms of predictive accuracy both for new data and with respect to the true model. Using the log-probit model, it was shown that 20% ECR corresponded to a 5% fracture probability level with 95% confidence for the cladding tube specimens.
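
    A simplified frequentist stand-in for the log-probit fit (the paper uses Bayesian inference; the synthetic data and starting values here are ours): y is 1 for fracture, x is ECR in percent, and P(fracture) is modeled as a normal CDF of log(ECR).

        import numpy as np
        from scipy import optimize, stats

        def fit_log_probit(x, y):
            def nll(theta):
                a, b = theta
                p = stats.norm.cdf(a + b * np.log(x))    # P(fracture | ECR)
                p = np.clip(p, 1e-12, 1 - 1e-12)
                return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
            return optimize.minimize(nll, x0=[-10.0, 3.0], method="Nelder-Mead").x

        rng = np.random.default_rng(11)
        ecr = rng.uniform(5, 60, 120)                    # synthetic test conditions
        p_true = stats.norm.cdf(-11 + 3.3 * np.log(ecr))
        y = (rng.random(120) < p_true).astype(float)
        a, b = fit_log_probit(ecr, y)
        # invert the fitted curve: ECR where P(fracture) = 0.05
        print("ECR at 5% fracture probability:",
              np.exp((stats.norm.ppf(0.05) - a) / b))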

  13. Estimation of In Situ Stresses with Hydro-Fracturing Tests and a Statistical Method

    Science.gov (United States)

    Lee, Hikweon; Ong, See Hong

    2018-03-01

    At great depths, where borehole-based field stress measurements such as hydraulic fracturing are challenging due to difficult downhole conditions or prohibitive costs, in situ stresses can be indirectly estimated using wellbore failures such as borehole breakouts and/or drilling-induced tensile failures detected by an image log. As part of such efforts, a statistical method has been developed in which borehole breakouts detected on an image log are used for this purpose (Song et al. in Proceedings on the 7th international symposium on in situ rock stress, 2016; Song and Chang in J Geophys Res Solid Earth 122:4033-4052, 2017). The method employs a grid-searching algorithm in which the least and maximum horizontal principal stresses (Sh and SH) are varied, and the corresponding simulated depth-related breakout width distribution as a function of the breakout angle (θB = 90° − half of breakout width) is compared to that observed along the borehole to determine the set of Sh and SH having the lowest misfit between them. An important advantage of the method is that Sh and SH can be estimated simultaneously in vertical wells. To validate the statistical approach, the method is applied to a vertical hole where a set of field hydraulic fracturing tests had been carried out. The stress estimates from the proposed method were found to be in good agreement with the results interpreted from the hydraulic fracturing test measurements.
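
    A toy grid search in the spirit of the method (the actual forward model and misfit are in the cited papers). A simplified Kirsch-type hoop-stress relation ignoring pore pressure and mud weight is assumed here, and note that a single breakout angle does not constrain (Sh, SH) uniquely; the actual method fits the depth distribution of many breakouts.

        import numpy as np

        C = 80.0                                        # assumed rock strength, MPa

        def predicted_theta_b(sh, sH):
            # hoop stress sH + sh - 2*(sH - sh)*cos(2*theta) exceeds C inside the
            # breakout; the edge angle gives theta_B = 90 deg - half breakout width
            arg = (sH + sh - C) / (2.0 * (sH - sh))
            if not -1.0 < arg < 1.0:
                return np.nan                           # no breakout / full collapse
            return np.degrees(np.arccos(arg)) / 2.0

        obs_theta_b = 62.0                              # synthetic observed angle
        best, best_misfit = None, np.inf
        for sh in np.arange(20.0, 60.0, 0.5):
            for sH in np.arange(sh + 1.0, 100.0, 0.5):
                pred = predicted_theta_b(sh, sH)
                misfit = (pred - obs_theta_b) ** 2 if np.isfinite(pred) else np.inf
                if misfit < best_misfit:
                    best, best_misfit = (sh, sH), misfit
        print("best (Sh, SH):", best)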

  14. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    Full Text Available For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles demonstrated that this method was problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect and probabilistic undetermination). The demonstration includes a complete example.

  15. Partial discharge testing: a progress report. Statistical evaluation of PD data

    International Nuclear Information System (INIS)

    Warren, V.; Allan, J.

    2005-01-01

    It has long been known that comparing the partial discharge results obtained from a single machine is a valuable tool enabling companies to observe the gradual deterioration of a machine stator winding and thus plan appropriate maintenance for the machine. In 1998, at the annual Iris Rotating Machines Conference (IRMC), a paper was presented that compared thousands of PD test results to establish the criteria for comparing results from different machines and the expected PD levels. At subsequent annual Iris conferences, using similar analytical procedures, papers were presented that supported the previous criteria and: in 1999, established sensor location as an additional criterion; in 2000, evaluated the effect of insulation type and age on PD activity; in 2001, evaluated the effect of manufacturer on PD activity; in 2002, evaluated the effect of operating pressure for hydrogen-cooled machines; in 2003, evaluated the effect of insulation type and setting Trac alarms; in 2004, re-evaluated the effect of manufacturer on PD activity. Before going further in database analysis procedures, it would be prudent to statistically evaluate the anecdotal evidence observed to date. The goal was to determine which variables of machine conditions greatly influenced the PD results and which didn't. Therefore, this year's paper looks at the impact of operating voltage, machine type and winding type on the test results for air-cooled machines. Because of resource constraints, only data collected through 2003 was used; however, as before, it is still standardized for frequency bandwidth and pruned to include only full-load-hot (FLH) results collected for one sensor on operating machines. All questionable data, or data from off-line testing or unusual machine conditions was excluded, leaving 6824 results. Calibration of on-line PD test results is impractical; therefore, only results obtained using the same method of data collection and noise separation techniques are compared. For

  16. Testing of newly developed functional surfaces under pure sliding conditions

    DEFF Research Database (Denmark)

    Godi, Alessandro; Mohaghegh, Kamran; Grønbæk, J.

    2013-01-01

    ...the surfaces in an industrial context. In this paper, a number of experimental tests were performed using a novel test rig, called the axial sliding test, simulating the contact of surfaces under pure sliding conditions. The aim of the experiments is to evaluate the frictional behavior of a new typology...-polished counterpart. A number of experiments were carried out at different normal pressures, employing for all specimens the same reciprocating movement and the same lubrication. The measured friction forces were plotted against the incremental normal pressure, and the friction coefficients were calculated. The results comparison showed clearly how employing multifunctional surfaces can reduce friction forces up to 50% at high normal loads compared to regularly ground or turned surfaces. Friction coefficients approximately equal to 0.12 were found for classically machined surfaces, whereas the values were 0...

  17. Accounting providing of statistical analysis of intangible assets renewal under marketing strategy

    Directory of Open Access Journals (Sweden)

    I.R. Polishchuk

    2016-12-01

    Full Text Available The article analyzes the content of the Regulations on accounting policies of the surveyed enterprises with respect to operations concerning the amortization of intangible assets, on the following criteria: assessment on admission, determination of useful life, the period of depreciation, residual value, depreciation method, reflection in the financial statements, unit of account, revaluation, and formation of fair value. The factors affecting accounting policy and determining the mechanism for evaluating the completeness and timeliness of intangible asset renewal are characterized. An algorithm for selecting the method of intangible asset amortization is proposed. The knowledge base for statistical analysis of the timeliness and completeness of intangible asset renewal is expanded in terms of the developed internal reporting. Statistical indicators to assess the effectiveness of the amortization policy for intangible assets are proposed. Marketing strategies depending on the condition and amount of intangible assets, aimed at increasing marketing potential for the continuity of economic activity, are described.

  18. Failure Modes in Capacitors When Tested Under a Time-Varying Stress

    Science.gov (United States)

    Liu, David (Donhang)

    2011-01-01

    Steady step surge testing (SSST) is widely applied to screen out potential power-on failures in solid tantalum capacitors. The test simulates the power supply's on and off characteristics. Power-on failure has been the prevalent failure mechanism for solid tantalum capacitors in decoupling applications. On the other hand, the SSST can also be viewed as an electrically destructive test under a time-varying stress. It consists of rapidly charging the capacitor with incremental voltage increases, through a low resistance in series, until the capacitor under test is electrically shorted. Highly accelerated life testing (HALT) is usually a time-efficient method for determining the failure mechanism in capacitors; however, a destructive test under a time-varying stress like SSST is even more effective. It normally takes days to complete a HALT test, but only minutes for a time-varying stress test to produce failures. The advantage of incorporating a specific time-varying stress into a statistical model is significant in providing an alternative life test method for quickly revealing the failure modes in capacitors. In this paper, a time-varying stress has been incorporated into the Weibull model to characterize the failure modes. The SSST circuit and the transient conditions for correctly testing the capacitors are discussed. Finally, the SSST was applied to polymer aluminum (PA) capacitors, Ta capacitors, and multi-layer ceramic capacitors with both precious metal electrodes (PME) and base metal electrodes (BME). It appears that the test results are directly associated with dielectric layer breakdown in PA and Ta capacitors and are independent of the capacitance values, the way the capacitors are built, and the manufacturers. The test results also reveal that ceramic capacitors exhibit breakdown voltages more than 20 times the rated voltage, and that the breakdown voltages are inversely proportional to the dielectric layer thickness. The possibility of
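
    A minimal Weibull fit of breakdown voltages with SciPy (synthetic values; the paper's extended Weibull model with an explicit time-varying stress term is not reproduced here):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(12)
        breakdown_v = stats.weibull_min.rvs(c=8.0, scale=40.0, size=60,
                                            random_state=rng)   # synthetic volts

        # two-parameter MLE with the location fixed at zero
        shape, loc, scale = stats.weibull_min.fit(breakdown_v, floc=0)
        print(f"Weibull shape (beta) = {shape:.2f}, scale (eta) = {scale:.2f} V")
        # A high shape parameter indicates a narrow breakdown distribution; outliers
        # at low voltage would flag infant-mortality-type defects.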

  19. Statistical and Conceptual Model Testing Geomorphic Principles through Quantification in the Middle Rio Grande River, NM.

    Science.gov (United States)

    Posner, A. J.

    2017-12-01

    The Middle Rio Grande River (MRG) traverses New Mexico from Cochiti to Elephant Butte reservoirs. Since the 1100s, cultivating and inhabiting the valley of this alluvial river has required various river training works. The mid-20th century saw a concerted effort to tame the river through channelization, jetty jacks, and dam construction. A challenge for river managers is to better understand the interactions between river training works, dam construction, and the geomorphic adjustments of a desert river driven by spring snowmelt and summer thunderstorms carrying water and large sediment inputs from upstream and ephemeral tributaries. Due to its importance to the region, a vast wealth of data exists on conditions along the MRG. The investigation presented herein builds upon previous efforts by combining hydraulic model results, digitized planforms, and stream gage records in various statistical and conceptual models in order to test our understanding of this complex system. Spatially continuous variables were clipped by a set of river cross-section data collected at decadal intervals since the early 1960s, creating a spatially homogeneous database upon which various statistical tests were implemented. Conceptual models relate forcing variables and response variables to estimate river planform changes. The developed database represents a unique opportunity to quantify and test geomorphic conceptual models under the unique characteristics of the MRG. The results of this investigation provide a spatially distributed characterization of planform variable changes, permitting managers to predict planform at a much higher resolution than previously available, and a better understanding of the relationship between flow regime and planform changes such as changes to longitudinal slope, sinuosity, and width. Lastly, data analysis and model interpretation led to the development of a new conceptual model for the impact of ephemeral tributaries in alluvial rivers.

  20. Field Test of Driven Pile Group under Lateral Loading

    Science.gov (United States)

    Gorska, Karolina; Rybak, Jaroslaw; Wyjadlowski, Marek

    2017-12-01

    All the geotechnical works need to be tested because the diversity of soil parameters is much higher than in other fields of construction. Horizontal load tests are necessary to determine the lateral capacity of driven piles subject to lateral load. Various load tests were carried out altogether on the test field in Kutno (Poland). While selecting the piles for load tests, different load combinations were taken into account. The piles with diverse length were chosen, on the basis of the previous tests of their length and integrity. The subsoil around the piles consisted of mineral soils: clays and medium compacted sands with the density index ID>0.50. The pile heads were free. The points of support of the “base” to which the dial gauges (displacement sensors) were fastened were located at the distance of 0.7 m from the side surface of the pile loaded laterally. In order to assure the independence of measurement, additional control (verifying) geodetic survey of the displacement of the piles subject to the load tests was carried out (by means of the alignment method). The trial load was imposed in stages by means of a hydraulic jack. The oil pressure in the actuator was corrected by means of a manual pump in order to ensure the constant value of the load in the on-going process of the displacement of the pile under test. On the basis of the obtained results it is possible to verify the numerical simulations of the behaviour of piles loaded by a lateral force.

  1. Testing of high-level waste forms under repository conditions

    International Nuclear Information System (INIS)

    Mc Menamin, T.

    1989-01-01

    The workshop on testing of high-level waste forms under repository conditions was held on 17 to 21 October 1988 in Cadarache, France, and sponsored by the Commission of the European Communities (CEC), the Commissariat a l'energie atomique (CEA) and the Savannah River Laboratory (US DOE). Participants included representatives from Australia, Belgium, Denmark, France, Germany, Italy, Japan, the Netherlands, Sweden, Switzerland, the United Kingdom and the United States. The first part of the conference featured a workshop on in situ testing of simulated nuclear waste forms and proposed package components, with an emphasis on the materials interface interactions tests (MIIT). MIIT is a seven-part programme that involves field testing of 15 glass and waste form systems supplied by seven countries, along with potential canister and overpack materials as well as geologic samples, in the salt geology at the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico, USA. This effort is still in progress and these proceedings document studies and findings obtained thus far. The second part of the meeting emphasized multinational experimental studies and results derived from repository systems simulation tests (RSST), which were performed in granite, clay and salt environments

  2. Quantile estimation to derive optimized test thresholds for random field statistics.

    Science.gov (United States)

    Hinrichs, H; Scholz, M; Noesselt, T; Heinze, H J

    2005-08-01

    We present a numerical method to estimate the true threshold values in random fields needed to determine the significance of apparent signals observed in noisy images. To accomplish this, a quantile estimation algorithm is applied to derive the threshold with a predefined confidence interval from a large number of simulated random fields. Also, a computationally efficient method for generating a random field simulation is presented using resampling techniques. Applying these techniques, thresholds have been determined for a large variety of parameter settings (smoothness, voxel size, brain shape, type of statistics). By means of interpolation techniques, thresholds for additional arbitrary settings can be quickly derived without the need to run individual simulations. Compared to the parametric approach of Worsley et al. (1996) (Worsley, K.J., Marrett, S., Neelin P., Vandal, A.C., Friston, K.J., Evans, A.C., 1996. A unified statistical approach for determining significant signals in images of cerebral activation. Hum. Brain Mapp. 4, 58-73) and Friston et al. (1991) (Friston, K.J., Frith, C.D., Liddle, P.F., Frackowiak, R.S. 1991. Comparing functional (PET) images: the assessment of significant change. J. Cereb. Blood Flow Metab. 11(4), 690-699), and to the Bonferroni approach, these optimized thresholds lead to higher levels of significance (i.e., lower p values) for a specific amount of activation, especially with fields of moderate smoothness (i.e., with a relative full width half maximum between 2 and 6). Alternatively, the threshold for a specified level of significance can be lowered. This improved statistical sensitivity is illustrated by the analysis of an actual event-related functional magnetic resonance data set, and its limitations are tested by determining the false positive rate with experimental MR noise data. The grid of estimated threshold values as well as the interpolation algorithm to derive thresholds for arbitrary parameter settings are made available.
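
    The simulation-plus-quantile idea generalizes readily. The sketch below (Python; not the authors' implementation, and with arbitrary field size, smoothness and simulation count) estimates a family-wise-error threshold as an empirical quantile of the maxima of simulated smoothed Gaussian noise fields:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def simulated_max_stats(n_sims=2000, shape=(64, 64), fwhm=4.0, seed=None):
            # Simulate smoothed Gaussian noise fields and record the maximum
            # statistic of each; their distribution approximates the null
            # distribution of the image-wide maximum.
            rng = np.random.default_rng(seed)
            sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
            maxima = np.empty(n_sims)
            for i in range(n_sims):
                field = gaussian_filter(rng.standard_normal(shape), sigma)
                field /= field.std()          # re-standardize after smoothing
                maxima[i] = field.max()
            return maxima

        maxima = simulated_max_stats(seed=0)
        threshold = np.quantile(maxima, 0.95)  # controls family-wise error at 5%
        print(f"estimated 5% FWE threshold: {threshold:.3f}")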

  3. Limit distributions for the terms of central order statistics under power normalization

    OpenAIRE

    El Sayed M. Nigm

    2007-01-01

    In this paper the limiting distributions for sequences of central terms under power nonrandom normalization are obtained. The classes of the limit types having domain of L-attraction are investigated.

  4. Limit distributions for the terms of central order statistics under power normalization

    Directory of Open Access Journals (Sweden)

    El Sayed M. Nigm

    2007-12-01

    In this paper the limiting distributions for sequences of central terms under power nonrandom normalization are obtained. The classes of the limit types having domain of L-attraction are investigated.

  5. FADTTSter: accelerating hypothesis testing with functional analysis of diffusion tensor tract statistics

    Science.gov (United States)

    Noel, Jean; Prieto, Juan C.; Styner, Martin

    2017-03-01

    Functional Analysis of Diffusion Tensor Tract Statistics (FADTTS) is a toolbox for analysis of white matter (WM) fiber tracts. It allows associating diffusion properties along major WM bundles with a set of covariates of interest, such as age, diagnostic status and gender, and with the structure of the variability of these WM tract properties. However, to use this toolbox, a user must have intermediate knowledge of a scripting language (MATLAB). FADTTSter was created to overcome this issue and make the statistical analysis accessible to any non-technical researcher. FADTTSter is actively being used by researchers at the University of North Carolina. FADTTSter guides non-technical users through a series of steps, including quality control of subjects and fibers, in order to set up the necessary parameters to run FADTTS. Additionally, FADTTSter implements interactive charts for FADTTS' outputs. These interactive charts enhance the researcher's experience and facilitate the analysis of the results. FADTTSter's motivation is to improve usability and provide a new analysis tool to the community that complements FADTTS. Ultimately, by opening FADTTS to a broader audience, FADTTSter seeks to accelerate hypothesis testing in neuroimaging studies involving heterogeneous clinical data and diffusion tensor imaging. This work is submitted to the Biomedical Applications in Molecular, Structural, and Functional Imaging conference. The source code of this application is available in NITRC.

  6. A statistical design for testing transgenerational genomic imprinting in natural human populations.

    Directory of Open Access Journals (Sweden)

    Yao Li

    Genomic imprinting is a phenomenon in which the same allele is expressed differently, depending on its parental origin. Such a phenomenon, also called the parent-of-origin effect, has been recognized to play a pivotal role in embryological development and pathogenesis in many species. Here we propose a statistical design for detecting imprinted loci that control quantitative traits based on a random set of three-generation families from a natural population in humans. This design provides a pathway for characterizing the effects of imprinted genes on a complex trait or disease at different generations and testing transgenerational changes of imprinted effects. The design is integrated with population and cytogenetic principles of gene segregation and transmission from a previous generation to the next. The implementation of the EM algorithm within the design framework leads to the estimation of genetic parameters that define imprinted effects. A simulation study is used to investigate the statistical properties of the model and validate its utilization. This new design, coupled with increasingly used genome-wide association studies, should have immediate implications for studying the genetic architecture of complex traits in humans.

  7. Using the Δ3 statistic to test for missed levels in mixed sequence neutron resonance data

    International Nuclear Information System (INIS)

    Mulhall, Declan

    2009-01-01

    The Δ3(L) statistic is studied as a tool to detect missing levels in neutron resonance data where two sequences are present. These systems are problematic because there is no level repulsion, and the resonances can be too close to resolve. Δ3(L) is a measure of the fluctuations in the number of levels in an interval of length L on the energy axis. The method used is tested on ensembles of mixed Gaussian orthogonal ensemble spectra, with a known fraction of levels (x%) randomly depleted, and can accurately return x. The accuracy of the method as a function of spectrum size is established. The method is used on neutron resonance data for 11 isotopes with either s-wave neutrons on odd-A isotopes, or p-wave neutrons on even-A isotopes. The method compares favorably with a maximum likelihood method applied to the level spacing distribution. Nuclear data ensembles were made from 20 isotopes in total, and their Δ3(L) statistics are discussed in the context of random matrix theory.
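
    For reference, Δ3(L) is the interval-averaged least-squares deviation of the cumulative level staircase from a straight line. A minimal numerical sketch (assuming the levels are already unfolded to unit mean spacing; the grid resolution and window count are arbitrary choices, not values from the paper):

        import numpy as np

        def delta3(levels, L, n_starts=200):
            # Dyson-Mehta Delta_3(L): least-squares deviation of the level
            # staircase N(E) from a fitted straight line A + B*E, averaged
            # over interval positions of length L.
            levels = np.sort(levels)
            starts = np.linspace(levels[0], levels[-1] - L, n_starts)
            grid = np.linspace(0.0, L, 400)
            design = np.column_stack([np.ones_like(grid), grid])
            vals = []
            for s in starts:
                staircase = np.searchsorted(levels, s + grid).astype(float)
                coef, *_ = np.linalg.lstsq(design, staircase, rcond=None)
                resid = staircase - design @ coef
                vals.append(np.mean(resid ** 2))
            return np.mean(vals)

        # Uncorrelated (Poisson) levels should give Delta_3(L) close to L/15,
        # well above the slow logarithmic growth of a rigid GOE spectrum.
        rng = np.random.default_rng(0)
        poisson_levels = np.cumsum(rng.exponential(1.0, 5000))
        print(delta3(poisson_levels, L=20.0), 20.0 / 15.0)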

  8. Statistical Investigation of the Mechanical and Geometrical Properties of Polysilicon Films through On-Chip Tests

    Directory of Open Access Journals (Sweden)

    Ramin Mirzazadeh

    2018-01-01

    In this work, we provide a numerical/experimental investigation of the micromechanics-induced scattered response of a polysilicon on-chip MEMS testing device, whose moving structure is constituted by a slender cantilever supporting a massive perforated plate. The geometry of the cantilever was specifically designed to emphasize the micromechanical effects, in compliance with the process constraints. To assess the effects of the variability of polysilicon morphology and of geometrical imperfections on the experimentally observed nonlinear sensor response, we adopt statistical Monte Carlo analyses resting on a coupled electromechanical finite element model of the device. For each analysis, the polysilicon morphology was digitally built through a Voronoi tessellation of the moving structure, whose geometry was in turn varied by sampling the value of the over-etch, considered the main source of geometrical imperfections, from a uniform probability density function. The comparison between the statistics of numerical and experimental results is adopted to assess the relative significance of the uncertainties linked to variations in the micro-fabrication process and to the mechanical film properties due to the polysilicon morphology.

  9. Multirods burst tests under loss-of-coolant conditions

    International Nuclear Information System (INIS)

    Kawasaki, S.; Uetsuka, H.; Furuta, T.

    1983-01-01

    In order to determine the upper limit of coolant flow area restriction in a fuel assembly under loss-of-coolant accidents in LWRs, burst tests of fuel bundles were performed. Each bundle consisted of 49 rods (7×7), and bursts were conducted in flowing steam. In some cases, 4 rods in a bundle were replaced by control rods with guide tubes. After the burst, the ballooning behavior of each rod and the degree of coolant flow area restriction in the bundle were measured. The ballooning behavior of rods and the degree of coolant flow channel restriction in bundles with control rods did not differ from those without control rods. The upper limit of coolant flow channel restriction under loss-of-coolant conditions was estimated to be about 80%. (author)

  10. Portfolio selection problem with liquidity constraints under non-extensive statistical mechanics

    International Nuclear Information System (INIS)

    Zhao, Pan; Xiao, Qingxian

    2016-01-01

    In this study, we consider the optimal portfolio selection problem with liquidity limits. A portfolio selection model is proposed in which the risky asset price is driven by a process based on non-extensive statistical mechanics instead of the classic Wiener process. Using dynamic programming and Lagrange multiplier methods, we obtain the optimal policy and value function. Moreover, the numerical results indicate that this model differs considerably from the model based on the classic Wiener process: the optimal strategy is affected by the non-extensive parameter q, the investment in the risky asset increases faster for larger q, and the growth in wealth behaves similarly.

  11. Computational Protein Design Under a Given Backbone Structure with the ABACUS Statistical Energy Function.

    Science.gov (United States)

    Xiong, Peng; Chen, Quan; Liu, Haiyan

    2017-01-01

    An important objective of computational protein design is to identify amino acid sequences that stably fold into a given backbone structure. A general approach to this problem is to minimize an energy function in the sequence space. We have previously reported a method to derive statistical energies for fixed-backbone protein design and showed that it led to de novo proteins that fold as expected. Here, we present the usage of the program that implements this method, which we now name ABACUS (A Backbone-based Amino aCid Usage Survey).

  12. Parameter estimation and statistical test of geographically weighted bivariate Poisson inverse Gaussian regression models

    Science.gov (United States)

    Amalia, Junita; Purhadi, Otok, Bambang Widjanarko

    2017-11-01

    The Poisson distribution is a discrete distribution for count data with a single parameter that defines both mean and variance. Poisson regression therefore assumes that mean and variance are equal (equidispersion). Nonetheless, some count data violate this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, in turn, to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution. Under over-dispersion, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling over-dispersed paired count data. The BPIGR model produces a global model for all locations. On the other hand, each location has different geographic, social, cultural and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimates of the GWBPIGR model are obtained by the Maximum Likelihood Estimation (MLE) method; hypothesis testing of the GWBPIGR model is carried out by the Maximum Likelihood Ratio Test (MLRT) method.
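
    Before committing to a mixed model, over-dispersion can be checked from a plain Poisson fit. A minimal sketch with simulated data (the covariate and the gamma mixing distribution are illustrative assumptions, not part of the paper's model; the statsmodels GLM interface is standard):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        x = rng.normal(size=500)
        # Negative-binomial-style data: variance exceeds the Poisson mean.
        mu = np.exp(0.5 + 0.8 * x)
        y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=500))

        X = sm.add_constant(x)
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        dispersion = fit.pearson_chi2 / fit.df_resid
        print(f"Pearson dispersion: {dispersion:.2f} (close to 1 under equidispersion)")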

  13. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing to generate such test cases. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  14. A rank-based statistical test for measuring synergistic effects between two gene sets.

    Science.gov (United States)

    Shiraishi, Yuichi; Okada-Hatakeyama, Mariko; Miyano, Satoru

    2011-09-01

    Due to recent advances in high-throughput technologies, data on various types of genomic annotation have accumulated. These data will be crucially helpful for elucidating the combinatorial logic of transcription. Although several approaches have been proposed for inferring cooperativity among multiple factors, most are hampered by issues of normalization and threshold values. In this article, we propose a rank-based non-parametric statistical test for measuring synergistic effects between two gene sets. This method is free from the issues of normalization and of threshold-value determination for gene expression values. Furthermore, we propose an efficient Markov chain Monte Carlo method for calculating an approximate significance value of synergy. We have applied this approach to detecting synergistic combinations of transcription factor binding motifs and histone modifications. A C implementation of the method is available from http://www.hgc.jp/~yshira/software/rankSynergy.zip. Contact: yshira@hgc.jp. Supplementary data are available at Bioinformatics online.

  15. Statistical Testing of Segment Homogeneity in Classification of Piecewise–Regular Objects

    Directory of Open Access Journals (Sweden)

    Savchenko Andrey V.

    2015-12-01

    The paper is focused on the problem of multi-class classification of composite (piecewise-regular) objects (e.g., speech signals, complex images, etc.). We propose a mathematical model of composite object representation as a sequence of independent segments. Each segment is represented as a random sample of independent identically distributed feature vectors. Based on this model and a statistical approach, we reduce the task to a problem of composite hypothesis testing of segment homogeneity. Several nearest-neighbor criteria are implemented, and for some of them well-known special cases (e.g., the Kullback–Leibler minimum information discrimination principle, the probabilistic neural network) are highlighted. It is experimentally shown that the proposed approach improves accuracy when compared with contemporary classifiers.
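
    As an illustration of the minimum-discrimination idea mentioned above, the hypothetical sketch below assigns a segment (summarized as a feature histogram) to the class whose model histogram minimizes the Kullback-Leibler divergence; the class models and features are invented for the example and are not the paper's data:

        import numpy as np

        def kl(p, q, eps=1e-9):
            # Kullback-Leibler divergence between two smoothed, normalized
            # feature histograms.
            p = (p + eps) / np.sum(p + eps)
            q = (q + eps) / np.sum(q + eps)
            return np.sum(p * np.log(p / q))

        rng = np.random.default_rng(5)
        # Invented class models: one feature histogram per class.
        models = {c: np.histogram(rng.normal(m, 1.0, 2000), bins=30, range=(-6, 6))[0]
                  for c, m in {"A": -1.0, "B": 1.0}.items()}
        # A query segment, summarized the same way, is assigned to the class
        # minimizing the information discrimination.
        segment = np.histogram(rng.normal(0.9, 1.0, 200), bins=30, range=(-6, 6))[0]
        print(min(models, key=lambda c: kl(segment, models[c])))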

  16. Investigating the Investigative Task: Testing for Skewness--An Investigation of Different Test Statistics and Their Power to Detect Skewness

    Science.gov (United States)

    Tabor, Josh

    2010-01-01

    On the 2009 AP® Statistics Exam, students were asked to create a statistic to measure skewness in a distribution. This paper explores several of the most popular student responses and evaluates which statistic performs best when sampling from various skewed populations.
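
    The kind of power comparison described here is straightforward to reproduce in outline. A sketch estimating, by simulation, the power of one candidate statistic (the standardized sample skewness) against a right-skewed alternative; the sample size, alternative and simulation count are arbitrary choices:

        import numpy as np
        from scipy import stats

        def power_of_skewness_test(sampler, n=30, n_sims=4000, alpha=0.05, seed=None):
            rng = np.random.default_rng(seed)
            # Null distribution of the sample skewness under normality,
            # approximated by simulation for this sample size.
            null = np.array([stats.skew(rng.standard_normal(n)) for _ in range(n_sims)])
            crit = np.quantile(null, 1 - alpha)       # one-sided (right-skew) test
            stat = np.array([stats.skew(sampler(rng, n)) for _ in range(n_sims)])
            return np.mean(stat > crit)

        # Power against an exponential (strongly right-skewed) population:
        print(power_of_skewness_test(lambda rng, n: rng.exponential(1.0, n), seed=0))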

  17. A new statistical tool to predict phenology under climate change scenarios

    NARCIS (Netherlands)

    Gienapp, P.; Hemerik, L.; Visser, M.E.

    2005-01-01

    Climate change will likely affect the phenology of trophic levels differently and thereby disrupt the phenological synchrony between predators and prey. To predict this disruption of the synchrony under different climate change scenarios, good descriptive models for the phenology of the different species are necessary.

  18. Strong field line shapes and photon statistics from a single molecule under anomalous noise.

    Science.gov (United States)

    Sanda, Frantisek

    2009-10-01

    We revisit the line-shape theory of a single molecule undergoing anomalous stochastic spectral diffusion. Waiting time profiles for bath-induced spectral jumps in the ground and excited states become different when a molecule, probed by a continuous-wave laser field, reaches the steady state. This effect is studied for the stationary dichotomic continuous-time-random-walk spectral diffusion of a single two-level chromophore with power-law distributions of waiting times. Correlated waiting time distributions, line shapes, the two-point fluorescence correlation function, and the Mandel Q parameter are calculated for arbitrary magnitudes of the laser field. We extend previous weak-field results and examine the breakdown of the central limit theorem in photon statistics, indicated by the asymptotic power-law growth of the Mandel Q parameter. The frequency profile of the Mandel Q parameter identifies the peaks of the spectrum, which are related to the anomalous spectral diffusion dynamics.

  19. Demography and the statistics of lifetime economic transfers under individual stochasticity

    Directory of Open Access Journals (Sweden)

    Hal Caswell

    2015-02-01

    Background: As individuals progress through the life cycle, they receive income and consume goods and services. The age schedules of labor income, consumption, and life cycle deficit reflect the economic roles played at different ages. Lifetime accumulation of economic variables has been less well studied, and our goal here is to rectify that. Objective: To derive and apply a method to compute the lifetime accumulated labor income, consumption, and life cycle deficit, and to go beyond the calculation of mean lifetime accumulation to calculate statistics of variability among individuals in lifetime accumulation. Methods: To quantify variation among individuals, we calculate the mean, standard deviation, coefficient of variation, and skewness of lifetime accumulated transfers, using the theory of Markov chains with rewards (Caswell 2011), applied to National Transfer Account data for Germany for 1978 and 2003. Results: The age patterns of lifetime accumulated labor income are relatively stable over time. Both the mean and the standard deviation of remaining lifetime labor income decline with age; the coefficient of variation, measuring variation relative to the mean, increases dramatically with age. The skewness becomes large and positive at older ages. Education level affects all the statistics. About 30% of the variance in lifetime income is due to variance in age-specific income, and about 70% is contributed by the mortality schedule. Lifetime consumption is less variable (as measured by the CV) than lifetime labor income. Conclusions: We conclude that demographic Markov chains with rewards can add a potentially valuable perspective to studies of the economic lifecycle. The variation among individuals in lifetime accumulations in our results reflects individual stochasticity, not heterogeneity among individuals. Incorporating heterogeneity remains an important problem.
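
    The mean of the remaining lifetime accumulation can be illustrated with a stripped-down special case of the Markov-chain-with-rewards calculation: a backward recursion over age classes. The survival probabilities and incomes below are invented numbers for illustration, not NTA data:

        import numpy as np

        # Hypothetical age-specific survival probabilities p[x] (to the next
        # age class) and mean labor incomes r[x]; rho[x] is the expected
        # remaining lifetime accumulated income at age class x. This ignores
        # within-age income variance, which the full method also handles.
        p = np.array([0.999, 0.998, 0.995, 0.99, 0.97, 0.90, 0.0])
        r = np.array([0.0, 10.0, 30.0, 35.0, 20.0, 5.0, 0.0])

        rho = np.zeros(len(r) + 1)
        for x in range(len(r) - 1, -1, -1):
            rho[x] = r[x] + p[x] * rho[x + 1]   # income now + survival-weighted future
        print(rho[:-1])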

  20. Statistical methods for the analysis of a screening test for chronic beryllium disease

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L.; Neubert, R.L. [Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Smith, M.H.; Littlefield, L.G.; Colyer, S.P. [Oak Ridge Inst. for Science and Education, TN (United States). Medical Sciences Div.

    1994-10-01

    The lymphocyte proliferation test (LPT) is a noninvasive screening procedure used to identify persons who may have chronic beryllium disease. A practical problem in the analysis of LPT well counts is the occurrence of outlying data values (approximately 7% of the time). A log-linear regression model is used to describe the expected well counts for each set of test conditions. The variance of the well counts is proportional to the square of the expected counts, and two resistant regression methods are used to estimate the parameters of interest. The first approach uses least absolute values (LAV) on the log of the well counts to estimate beryllium stimulation indices (SIs) and the coefficient of variation. The second approach uses a resistant regression version of maximum quasi-likelihood estimation. A major advantage of the resistant regression methods is that it is not necessary to identify and delete outliers. These two new methods for the statistical analysis of the LPT data and the outlier rejection method that is currently being used are applied to 173 LPT assays. The authors strongly recommend the LAV method for routine analysis of the LPT.
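
    As an illustration of the least-absolute-values idea, a minimal sketch fitting log counts by minimizing the sum of absolute residuals, which down-weights outlying wells without an explicit outlier identification and deletion step; the design, data and starting values are invented, not the LPT data:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        x = np.repeat([0.0, 1.0], 20)          # e.g., unstimulated vs. stimulated wells
        counts = rng.lognormal(mean=5.0 + 0.7 * x, sigma=0.3)
        counts[::13] *= 8.0                    # inject a few outlying wells

        def lav_loss(beta, x, y):
            # Sum of absolute residuals on the log scale: resistant to
            # outliers, unlike least squares.
            return np.sum(np.abs(np.log(y) - (beta[0] + beta[1] * x)))

        start = [np.log(counts).mean(), 0.0]
        fit = minimize(lav_loss, x0=start, args=(x, counts), method="Nelder-Mead")
        print(f"estimated log stimulation index: {fit.x[1]:.2f} (true value 0.7)")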

  1. Statistical testing of the full-range leadership theory in nursing.

    Science.gov (United States)

    Kanste, Outi; Kääriäinen, Maria; Kyngäs, Helvi

    2009-12-01

    The aim of this study is to test statistically the structure of the full-range leadership theory in nursing. The data were gathered by postal questionnaires from nurses and nurse leaders working in healthcare organizations in Finland. A follow-up study was performed 1 year later. The sample consisted of 601 nurses and nurse leaders, and the follow-up study had 78 respondents. The theory was tested through structural equation modelling, standard regression analysis and two-way ANOVA. Rewarding transformational leadership seems to promote, and passive laissez-faire leadership to reduce, willingness to exert extra effort, perceptions of leader effectiveness and satisfaction with the leader. Active management-by-exception seems to reduce willingness to exert extra effort and perception of leader effectiveness. Rewarding transformational leadership remained a strong explanatory factor of all outcome variables measured 1 year later. The data supported the main structure of the full-range leadership theory, lending support to the universal nature of the theory.

  2. Orthodontic brackets removal under shear and tensile bond strength resistance tests – a comparative test between light sources

    International Nuclear Information System (INIS)

    Silva, P C G; Porto-Neto, S T; Lizarelli, R F Z; Bagnato, V S

    2008-01-01

    We have investigated whether a new LED system has sufficient energy to promote efficient shear and tensile bond strength resistance under standardized tests. LEDs at 470 ± 10 nm can be used to photocure composite during bracket fixation. Advantages in resistance to tensile and shear bond strength when these systems are used are necessary to justify their clinical use. Forty-eight extracted human premolars and two light sources were selected: a halogen lamp and an LED system. Premolar brackets were bonded with composite resin. Samples were submitted to standardized tests. The two sources yielded similar results under the shear bond strength test; however, the tensile bond test showed distinct results: a statistically significant difference at the 1% level between exposure times (40 and 60 seconds) and also an interaction between light source and exposure time. The best result was obtained with the halogen lamp used for 60 seconds, even during re-bonding; however, the LED system can be used for bonding and re-bonding brackets if its power density can be increased.

  3. Possible Solution to Publication Bias Through Bayesian Statistics, Including Proper Null Hypothesis Testing

    NARCIS (Netherlands)

    Konijn, Elly A.; van de Schoot, Rens; Winter, Sonja D.; Ferguson, Christopher J.

    2015-01-01

    The present paper argues that an important cause of publication bias resides in traditional frequentist statistics forcing binary decisions. An alternative approach through Bayesian statistics provides various degrees of support for any hypothesis, allowing balanced decisions and proper null hypothesis testing.

  4. A new statistical tool to predict phenology under climate change scenarios

    OpenAIRE

    Gienapp, P.; Hemerik, L.; Visser, M.E.

    2005-01-01

    Climate change will likely affect the phenology of trophic levels differently and thereby disrupt the phenological synchrony between predators and prey. To predict this disruption of the synchrony under different climate change scenarios, good descriptive models for the phenology of the different species are necessary. Many phenological models are based on regressing the observed phenological event against temperatures measured over a fixed period. This is problematic, especially when used fo...

  5. Testing of a "smart-pebble" for measuring particle transport statistics

    Science.gov (United States)

    Kitsikoudis, Vasileios; Avgeris, Loukas; Valyrakis, Manousos

    2017-04-01

    This paper presents preliminary results from novel experiments aiming to assess coarse sediment transport statistics for a range of transport conditions, via the use of an innovative "smart-pebble" device. This device is a waterproof sphere, 7 cm in diameter, equipped with a number of sensors that provide information about the velocity, acceleration and positioning of the "smart-pebble" within the flow field. A series of specifically designed experiments are carried out to monitor the entrainment of a "smart-pebble" for fully developed, uniform, turbulent flow conditions over a hydraulically rough bed. Specifically, the bed surface is configured in three sections, each consisting of well-packed glass beads of slightly increasing size in the downstream direction. The first section has a streamwise length of L1=150 cm and a bead size of D1=15 mm, the second section has a length of L2=85 cm and a bead size of D2=22 mm, and the third bed section has a length of L3=55 cm and a bead size of D3=25.4 mm. Two cameras monitor the area of interest to provide additional information regarding the movement of the "smart-pebble". Three-dimensional flow measurements are obtained with the aid of an acoustic Doppler velocimeter along a measurement grid to assess the flow forcing field. A wide range of flow rates near and above the threshold of entrainment is tested, while four distinct densities are used for the "smart-pebble", which affect its transport speed and total momentum. The acquired data are analyzed to derive Lagrangian transport statistics, and the implications of such an experiment for the transport of particles by rolling are discussed. The flow conditions for the initiation of motion, particle accelerations and equilibrium particle velocities (translating into transport rates), and statistics of particle impact and its motion can be extracted from the acquired data, which can be further compared to develop meaningful insights for sediment transport

  6. The Bayesian Score Statistic

    NARCIS (Netherlands)

    Kleibergen, F.R.; Kleijn, R.; Paap, R.

    2000-01-01

    We propose a novel Bayesian test under a (noninformative) Jeffreys' prior specification. We check whether the fixed scalar value of the so-called Bayesian Score Statistic (BSS) under the null hypothesis is a plausible realization from its known and standardized distribution under the alternative. Unlike

  7. Methods for Determining the Statistical Significance of Enrichment or Depletion of Gene Ontology Classifications under Weighted Membership

    Directory of Open Access Journals (Sweden)

    Ernesto eIacucci

    2012-02-01

    High-throughput molecular biology studies, such as microarray assays of gene expression, two-hybrid experiments for detecting protein interactions, or ChIP-Seq experiments for transcription factor binding, often result in an interesting set of genes, say genes that are co-expressed or bound by the same factor. One way of understanding the biological meaning of such a set is to consider what processes or functions, as defined in an ontology, are over-represented (enriched) or under-represented (depleted) among genes in the set. Usually, the significance of enrichment or depletion scores is based on simple statistical models and on the membership of genes in different classifications. We consider the more general problem of computing p-values for arbitrary integer additive statistics, or weighted membership functions. Such membership functions can be used to represent, for example, prior knowledge on the role of certain genes or classifications, differential importance of different classifications or genes to the experimenter, hierarchical relationships between classifications, or different degrees of interestingness or evidence for specific genes. We describe a generic dynamic programming algorithm that can compute exact p-values for arbitrary integer additive statistics. We also describe several optimizations for important special cases, which can provide orders-of-magnitude speedup in the computations. We apply our methods to datasets describing oxidative phosphorylation and parturition and compare p-values based on computations of several different statistics for measuring enrichment. We find major differences between p-values resulting from these statistics, and that some statistics recover gold standard annotations of the data better than others. Our work establishes a theoretical and algorithmic basis for far richer notions of enrichment or depletion of gene sets with respect to gene ontologies than has previously been available.
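
    The core dynamic program is easy to state for the simplest case, where the statistic is the total integer weight of a uniformly drawn fixed-size gene set. A sketch with hypothetical weights (counts kept as exact integers, so no floating-point underflow in the tail sum):

        from math import comb

        def exact_pvalue(weights, set_size, observed):
            # Dynamic program over genes: T[k][s] = number of ways to pick k
            # genes with total integer weight s. The exact p-value is the
            # fraction of all size-`set_size` gene sets whose weighted
            # membership statistic is at least `observed`.
            W = sum(weights)
            T = [[0] * (W + 1) for _ in range(set_size + 1)]
            T[0][0] = 1
            for w in weights:
                for k in range(set_size, 0, -1):        # descending: use each gene once
                    for s in range(W, w - 1, -1):
                        T[k][s] += T[k - 1][s - w]
            tail = sum(T[set_size][s] for s in range(observed, W + 1))
            return tail / comb(len(weights), set_size)

        # Toy example: 12 genes with integer weights, gene sets of size 4.
        weights = [3, 1, 0, 2, 5, 1, 0, 0, 2, 1, 4, 1]
        print(exact_pvalue(weights, set_size=4, observed=10))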

  8. A comprehensive statistical classifier of foci in the cell transformation assay for carcinogenicity testing.

    Science.gov (United States)

    Callegaro, Giulia; Malkoc, Kasja; Corvi, Raffaella; Urani, Chiara; Stefanini, Federico M

    2017-12-01

    The identification of the carcinogenic risk of chemicals is currently mainly based on animal studies. The in vitro Cell Transformation Assays (CTAs) are a promising alternative to be considered in an integrated approach. CTAs measure the induction of foci of transformed cells. CTAs model key stages of the in vivo neoplastic process and are able to detect both genotoxic and some non-genotoxic compounds, being the only in vitro method able to deal with the latter. Despite their favorable features, CTAs can be further improved, especially reducing the possible subjectivity arising from the last phase of the protocol, namely visual scoring of foci using coded morphological features. By taking advantage of digital image analysis, the aim of our work is to translate morphological features into statistical descriptors of foci images, and to use them to mimic the classification performances of the visual scorer to discriminate between transformed and non-transformed foci. Here we present a classifier based on five descriptors trained on a dataset of 1364 foci, obtained with different compounds and concentrations. Our classifier showed accuracy, sensitivity and specificity equal to 0.77 and an area under the curve (AUC) of 0.84. The presented classifier outperforms a previously published model.

  9. Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.

    Science.gov (United States)

    François, Clément; Schön, Daniele

    2014-02-01

    There is increasing evidence that humans and other nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic to more complex rules such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations.

  10. A Lagrange multiplier-type test for idiosyncratic unit roots in the exact factor model under misspecification

    NARCIS (Netherlands)

    Zhou, X.; Solberger, M.

    2013-01-01

    We consider an exact factor model and derive a Lagrange multiplier-type test for unit roots in the idiosyncratic components. The asymptotic distribution of the statistic is derived under the misspecification that the differenced factors are white noise. We prove that the asymptotic distribution is

  11. Hyper-binding only apparent under fully implicit test conditions.

    Science.gov (United States)

    Campbell, Karen L; Hasher, Lynn

    2018-02-01

    We have previously shown that older adults hyper-bind, or form more extraneous associations than younger adults. For instance, when asked to perform a 1-back task on pictures superimposed with distracting words, older adults inadvertently form associations between target-distractor pairs and implicitly transfer these associations to a later paired associate learning task (showing a boost in relearning of preserved over disrupted pairs). We have argued that younger adults are better at suppressing the distracting words and thus, do not form these extraneous associations in the first place. However, an alternative explanation is that younger adults simply fail to access these associations during relearning, possibly because of their superior ability to form boundaries between episodes or shift mental contexts between tasks. In this study, we aimed to both replicate this original implicit transfer effect in older adults and to test whether younger adults show evidence of hyper-binding when informed about the relevance of past information. Our results suggest that regardless of the test conditions, younger adults do not hyper-bind. In contrast, older adults showed hyper-binding under (standard) implicit instructions, but not when made aware of a connection between tasks. These results replicate the original hyper-binding effect and reiterate its implicit nature.

  12. A practical model-based statistical approach for generating functional test cases: application in the automotive industry

    OpenAIRE

    Awédikian , Roy; Yannou , Bernard

    2012-01-01

    With the growing complexity of industrial software applications, industrial companies are looking for efficient and practical methods to validate their software. This paper develops a model-based statistical testing approach that automatically generates online and offline test cases for embedded software. It discusses an integrated framework that combines solutions for three major software testing research questions: (i) how to select test inputs; (ii) how to predict the expected...
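
    One common ingredient of such approaches is a usage model from which test inputs are sampled. A hedged sketch of that ingredient only, with an invented Markov-chain usage model; this is not the framework described in the paper:

        import random

        # Hypothetical usage model of an embedded component: states with
        # weighted transitions; one random walk yields one test case.
        usage_model = {
            "idle":  [("start", 0.8), ("idle", 0.2)],
            "start": [("run", 0.9), ("error", 0.1)],
            "run":   [("run", 0.6), ("stop", 0.3), ("error", 0.1)],
            "error": [("idle", 1.0)],
            "stop":  [],                       # terminal state
        }

        def generate_test_case(model, start="idle", max_len=50, rng=random):
            path = [start]
            while model[path[-1]] and len(path) < max_len:
                states, probs = zip(*model[path[-1]])
                path.append(rng.choices(states, probs)[0])
            return path

        print(generate_test_case(usage_model))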

  13. Spiked proteomic standard dataset for testing label-free quantitative software and statistical methods.

    Science.gov (United States)

    Ramus, Claire; Hovasse, Agnès; Marcellin, Marlène; Hesse, Anne-Marie; Mouton-Barbosa, Emmanuelle; Bouyssié, David; Vaca, Sebastian; Carapito, Christine; Chaoui, Karima; Bruley, Christophe; Garin, Jérôme; Cianférani, Sarah; Ferro, Myriam; Dorssaeler, Alain Van; Burlet-Schiltz, Odile; Schaeffer, Christine; Couté, Yohann; Gonzalez de Peredo, Anne

    2016-03-01

    This data article describes a controlled, spiked proteomic dataset for which the "ground truth" of variant proteins is known. It is based on the LC-MS analysis of samples composed of a fixed background of yeast lysate and different spiked amounts of the UPS1 mixture of 48 recombinant proteins. It can be used to objectively evaluate bioinformatic pipelines for label-free quantitative analysis, and their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. More specifically, it can be useful for tuning software tool parameters, but also for testing new algorithms for label-free quantitative analysis, or for evaluation of downstream statistical methods. The raw MS files can be downloaded from ProteomeXchange with identifier PXD001819. Starting from some raw files of this dataset, we also provide here some processed data obtained through various bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in different workflows, to exemplify the use of such data in the context of software benchmarking, as discussed in detail in the accompanying manuscript [1]. The experimental design used here for data processing takes advantage of the different spike levels introduced in the samples composing the dataset, and processed data are merged in a single file to facilitate the evaluation and illustration of software tool results for the detection of variant proteins with different absolute expression levels and fold change values.

  14. Statistical aspects of evolution under natural selection, with implications for the advantage of sexual reproduction.

    Science.gov (United States)

    Crouch, Daniel J M

    2017-10-27

    The prevalence of sexual reproduction remains mysterious, as it poses clear evolutionary drawbacks compared to reproducing asexually. Several possible explanations exist, with one of the most likely being that finite population size causes linkage disequilibria to arise at random and impede the progress of natural selection, and that these are eroded by recombination via sexual reproduction. Previous investigations have either analysed this phenomenon in detail for small numbers of loci, or performed population simulations for many loci. Here we present a quantitative genetic model for fitness, based on the Price Equation, in order to examine the theoretical consequences of randomly generated linkage disequilibria when there are many loci. In addition, most previous work has been concerned with the long-term consequences of deleterious linkage disequilibria for population fitness. The expected change in mean fitness between consecutive generations, a measure of short-term evolutionary success, is shown under random environmental influences to be related to the autocovariance in mean fitness between the generations, capturing the effects of stochastic forces such as genetic drift. Interaction between genetic drift and natural selection, due to randomly generated linkage disequilibria, is demonstrated to be one possible source of mean fitness autocovariance. This suggests a possible role for sexual reproduction in reducing the negative effects of genetic drift, thereby improving the short-term efficacy of natural selection.

  15. Statistical inference for the additive hazards model under outcome-dependent sampling.

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo

    2015-09-01

    Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against the simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the cancer risk associated with radon exposure.

  16. Testing normality using the summary statistics with application to meta-analysis

    OpenAIRE

    Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun

    2018-01-01

    As the most important tool to provide high-level evidence-based medicine, meta-analysis allows researchers to statistically summarize and combine data from multiple studies. In meta-analysis, mean-difference-based effect sizes, such as Cohen's d statistic and Hedges' g statistic, are frequently used to deal with continuous data. To calculate these mean-difference-based effect sizes, the sample mean and standard deviation are two essential summary measures. However, many...

  17. Post-test investigation result on the WWER-1000 fuel tested under severe accident conditions

    International Nuclear Information System (INIS)

    Goryachev, A.; Shtuckert, Yu.; Zwir, E.; Stupina, L.

    1996-01-01

    Model bundles of the WWER type were tested under SFD conditions in the out-of-pile CORA installation. The objective of the test was to provide information on the behaviour of WWER-type fuel bundles under severe fuel damage accident conditions. It was also intended to compare the WWER-type bundle damage mechanisms with those observed in PWR-type bundle tests, with the aim of confirming the possibility of applying the various code systems worked out for PWR to WWER. In order to ensure the possibility of comparing the calculated core degradation parameters with the real state of the tested bundle, some parameters have been measured on the bundle cross-sections under examination. Quantitative parameters of bundle degradation have been evaluated by digital image processing of the bundle cross-sections. The obtained results are shown together with corresponding results obtained by the other participants of this investigation. (author). 3 refs, 13 figs

  18. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
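
    Data of the kind used in such studies are simple to generate: under the dichotomous Rasch model, P(X=1) = 1/(1+exp(-(theta-b))). A minimal simulation sketch (the ability and difficulty distributions are conventional choices for illustration, not prescriptions from the paper):

        import numpy as np

        def simulate_rasch(n_persons, n_items=25, seed=None):
            # Dichotomous Rasch model: P(X=1) = 1 / (1 + exp(-(theta - b))).
            rng = np.random.default_rng(seed)
            theta = rng.normal(0.0, 1.0, size=(n_persons, 1))   # person abilities
            b = np.linspace(-2.0, 2.0, n_items)                 # item difficulties
            p = 1.0 / (1.0 + np.exp(-(theta - b)))
            return (rng.random((n_persons, n_items)) < p).astype(int)

        # Model-fitting data at the sample sizes examined in the paper; any
        # flagged misfit in a fit analysis of these data is a Type I error.
        for n in (50, 250, 500, 2500):
            data = simulate_rasch(n, seed=n)
            print(n, data.shape, data.mean())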

  19. Subspace-based damage detection under changes in the ambient excitation statistics

    Science.gov (United States)

    Döhler, Michael; Mevel, Laurent; Hille, Falk

    2014-03-01

    In the last ten years, monitoring the integrity of the civil infrastructure has been an active research topic, including in connected areas such as automatic control. It is common practice to perform damage detection by detecting changes in the modal parameters between a reference state and the current (possibly damaged) state from measured vibration data. Subspace methods enjoy some popularity in structural engineering, where large model orders have to be considered. In the context of detecting changes in the structural properties and the modal parameters linked to them, a subspace-based fault detection residual has been recently proposed and applied successfully, where the estimation of the modal parameters in the possibly damaged state is avoided. However, most works assume that the unmeasured ambient excitation properties during measurements of the structure in the reference and possibly damaged condition stay constant, which is hardly satisfied by any application. This paper addresses the problem of robustness of such fault detection methods. It is explained why current algorithms from the literature fail when the excitation covariance changes and how they can be modified. Then, an efficient and fast subspace-based damage detection test is derived that is robust to changes in the excitation covariance but also to numerical instabilities that can arise easily in the computations. Three numerical applications show the efficiency of the new approach to better detect and separate different levels of damage even using a relatively low sample length.

  20. The use of statistical tools in field testing of putative effects of genetically modified plants on nontarget organisms.

    Science.gov (United States)

    Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F

    2013-08-01

    To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTOs). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTOs. This review examines existing practices in GM plant field testing, such as randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTOs are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed to decide on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches - for example, analysis of variance (ANOVA) - are appropriate, for discontinuous data (counts) only generalized linear models (GLM) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role, GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and to assess whether such analyses were correctly applied. We offer generic advice to risk assessors and applicants that will help in setting up field trials and in interpreting their results.
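
    For count data, the GLM route mentioned above might look as follows in outline; the trial layout, counts and factor names are invented for illustration (statsmodels formula interface):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Hypothetical NTO field-trial counts: GM vs. comparator plots in blocks.
        rng = np.random.default_rng(3)
        df = pd.DataFrame({
            "block": np.repeat([f"b{i}" for i in range(6)], 8),
            "treatment": np.tile(["GM", "comparator"], 24),
        })
        mu = np.where(df["treatment"] == "GM", 9.0, 10.0)
        df["count"] = rng.poisson(mu)

        # Poisson GLM with block in the linear predictor; for overdispersed
        # counts a quasi-Poisson or negative-binomial family would be the
        # usual next step.
        fit = smf.glm("count ~ treatment + block", data=df,
                      family=sm.families.Poisson()).fit()
        print(fit.summary().tables[1])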

  1. The T(ea) Test: Scripted Stories Increase Statistical Method Selection Skills

    Science.gov (United States)

    Hackathorn, Jana; Ashdown, Brien

    2015-01-01

    To teach statistics, teachers must attempt to overcome pedagogical obstacles, such as dread, anxiety, and boredom. There are many options available to teachers that facilitate a pedagogically conducive environment in the classroom. The current study examined the effectiveness of incorporating scripted stories and humor into statistical method…

  2. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    Science.gov (United States)

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  3. The complete linkage disequilibrium test: a test that points to causative mutations underlying quantitative traits

    Directory of Open Access Journals (Sweden)

    Uleberg Eivind

    2011-05-01

    Background: Genetically, SNPs that are in complete linkage disequilibrium with the causative SNP cannot be distinguished from the causative SNP. The Complete Linkage Disequilibrium (CLD) test presented here tests whether a SNP is in complete LD with the causative mutation or not. The performance of the CLD test is evaluated in 1000 simulated datasets. Methods: The CLD test consists of two steps, i.e. analysis I and analysis II. Analysis I consists of an association analysis of the investigated region. The log-likelihood values from analysis I are next ranked in descending order, and in analysis II the CLD test evaluates differences in log-likelihood ratios between the best and second-best markers. Under the null hypothesis, the best SNP is in greater LD with the QTL than the second best, while under the alternative CLD hypothesis, the best SNP is alike-in-state with the QTL. To find a significance threshold, the test was also performed on data excluding the causative SNP. The 5th, 10th and 50th highest T_CLD values from 1000 replicated analyses were used to control the type-I error rate of the test at p = 0.005, p = 0.01 and p = 0.05, respectively. Results: In a situation where the QTL explained 48% of the phenotypic variance, analysis I detected a QTL in 994 replicates (p = 0.001), of which 972 were positioned in the correct QTL position. When the causative SNP was excluded from the analysis, 714 replicates detected evidence of a QTL (p = 0.001). In analysis II, the CLD test confirmed 280 causative SNPs from 1000 simulations (p = 0.05), i.e. power was 28%. When the effect of the QTL was reduced by doubling the error variance, the power of the test fell only slightly, to 23%. When sequence data were used, the power of the test fell to 16%. All SNPs confirmed by the CLD test were positioned in the correct QTL position. Conclusions: The CLD test can provide evidence for a causative SNP, but its power may be low in some situations.

  4. Mathematical statistics

    CERN Document Server

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  5. Characterization of Sensory-Motor Behavior Under Cognitive Load Using a New Statistical Platform for Studies of Embodied Cognition

    Directory of Open Access Journals (Sweden)

    Jihye Ryu

    2018-04-01

    The field of enacted/embodied cognition has emerged as a contemporary attempt to connect the mind and body in the study of cognition. However, there has been a paucity of methods that enable a multi-layered approach tapping into different levels of functionality within the nervous system (e.g., continuously capturing, in tandem, multi-modal biophysical signals in naturalistic settings). The present study introduces a new theoretical and statistical framework to characterize the influences of cognitive demands on biophysical rhythmic signals harnessed from deliberate, spontaneous and autonomic activities. In this study, nine participants performed a basic pointing task to communicate a decision while they were exposed to different levels of cognitive load. Within these decision-making contexts, we examined the moment-by-moment fluctuations in the peak amplitude and timing of the biophysical time series data (e.g., continuous waveforms extracted from hand kinematics and heart signals). These spike-train data offered high statistical power for personalized empirical statistical estimation and were well characterized by a Gamma process. Our approach enabled the identification of different empirically estimated families of probability distributions to facilitate inference regarding the continuous physiological phenomena underlying cognitively driven decision-making. We found that the same pointing task revealed shifts in the probability distribution functions (PDFs) of the hand kinematic signals under study, accompanied by shifts in the signatures of the heart inter-beat-interval timings. Within the time scale of an experimental session, marked changes in skewness and dispersion of the distributions were tracked on the Gamma parameter plane with 95% confidence. The results suggest that traditional theoretical assumptions of stationarity and normality in biophysical data from the nervous system are incongruent with the true statistical nature of such data.

  6. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    NARCIS (Netherlands)

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher's combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is obvious that Fisher's statistic is more sensitive to smaller p-values than to larger ones, and a single small p-value may overrule the other p-values.
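
    For reference, Fisher's method combines k independent p-values as X = -2 Σ log p_i, which follows a chi-square distribution with 2k degrees of freedom under the global null; the log transform is what makes the statistic far more responsive to one very small p-value than to several moderate ones:

        import numpy as np
        from scipy.stats import chi2

        def fisher_combined(pvals):
            # X = -2 * sum(log p_i) ~ chi-square with 2k degrees of freedom
            # under the global null of k independent tests.
            pvals = np.asarray(pvals)
            stat = -2.0 * np.sum(np.log(pvals))
            return stat, chi2.sf(stat, df=2 * len(pvals))

        # One very small p-value dominates several unremarkable ones:
        print(fisher_combined([0.001, 0.6, 0.7, 0.8]))
        print(fisher_combined([0.04, 0.05, 0.06, 0.07]))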

  7. Rényi statistics for testing composite hypotheses in general exponential models

    Czech Academy of Sciences Publication Activity Database

    Morales, D.; Pardo, L.; Pardo, M. C.; Vajda, Igor

    2004-01-01

    Roč. 38, č. 2 (2004), s. 133-147 ISSN 0233-1888 R&D Projects: GA ČR GA201/02/1391 Grant - others:BMF(ES) 2003-00892; BMF(ES) 2003-04820 Institutional research plan: CEZ:AV0Z1075907 Keywords : natural exponential models * Levy processes * generalized Wald statistics Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.323, year: 2004

  8. Comments on statistical issues in numerical modeling for underground nuclear test monitoring

    International Nuclear Information System (INIS)

    Nicholson, W.L.; Anderson, K.K.

    1993-01-01

    The Symposium concluded with prepared summaries by four experts in the involved disciplines. These experts made no mention of statistics and/or the statistical content of issues. The first author contributed an extemporaneous statement at the Symposium because there are important issues associated with conducting and evaluating numerical modeling that are familiar to statisticians and often treated successfully by them. This note expands upon these extemporaneous remarks

  9. Bayesian networks and statistical analysis application to analyze the diagnostic test accuracy

    Science.gov (United States)

    Orzechowski, P.; Makal, Jaroslaw; Onisko, A.

    2005-02-01

    The computer-aided BPH diagnosis system based on a Bayesian network is described in the paper. First results are compared to a given statistical method. Different statistical methods have been used successfully in medicine for years. However, the undoubted advantages of probabilistic methods make them useful in newly created systems, which are frequent in medicine but do not yet have full and competent knowledge bases. The article presents the advantages of the computer-aided BPH diagnosis system in clinical practice for urologists.

  10. Paired preference data with a no-preference option – Statistical tests for comparison with placebo data

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Ennis, John M.; Ennis, Daniel M.

    2014-01-01

    of such norms is valuable for more complete interpretation of 2-Alternative Choice (2-AC) data. For instance, these norms can be used to indicate consumer segmentation even with non-replicated data. In this paper, we show that the statistical test suggested by Ennis and Ennis (2012a) behaves poorly and has too...... high a type I error rate if the identicality norm is not estimated from a very large sample size. We then compare five χ2 tests of paired preference data with a no preference option in terms of type I error and power in a series of scenarios. In particular, we identify two tests that are well behaved...... for sample sizes typical of recent research and have high statistical power. One of these tests has the advantage that it can be decomposed for more insightful analyses in a fashion similar to that of ANOVA F-tests. The benefits are important because they enable more informed business decisions, particularly...

  11. Automated classification of Permanent Scatterers time-series based on statistical characterization tests

    Science.gov (United States)

    Berti, Matteo; Corsini, Alessandro; Franceschini, Silvia; Iannacone, Jean Pascal

    2013-04-01

    The application of space borne synthetic aperture radar interferometry has progressed, over the last two decades, from the pioneering use of single interferograms for analyzing changes on the earth's surface to the development of advanced multi-interferogram techniques to analyze any sort of natural phenomenon which involves movements of the ground. The success of multi-interferogram techniques in the analysis of natural hazards such as landslides and subsidence is widely documented in the scientific literature and demonstrated by the consensus among the end-users. Despite the great potential of this technique, radar interpretation of slope movements is generally based on the sole analysis of average displacement velocities, while the information contained in multi-interferogram time series is often overlooked if not completely neglected. The underuse of PS time series is probably due to the detrimental effect of residual atmospheric errors, which leave the PS time series characterized by erratic, irregular fluctuations that are often difficult to interpret, and also to the difficulty of performing a visual, supervised analysis of the time series for a large dataset. In this work we present a procedure for the automatic classification of PS time series based on a series of statistical characterization tests. The procedure makes it possible to classify the time series into six distinctive target trends (0 = uncorrelated; 1 = linear; 2 = quadratic; 3 = bilinear; 4 = discontinuous without constant velocity; 5 = discontinuous with change in velocity) and to retrieve for each trend a series of descriptive parameters which can be efficiently used to characterize the temporal changes of ground motion. The classification algorithms were developed and tested using an ENVISAT dataset available in the frame of the EPRS-E project (Extraordinary Plan of Environmental Remote Sensing) of the Italian Ministry of Environment (track "Modena", Northern Apennines). This dataset was generated using standard processing, then the

  12. Statistical mechanics far from equilibrium: prediction and test for a sheared system.

    Science.gov (United States)

    Evans, R M L; Simha, R A; Baule, A; Olmsted, P D

    2010-05-01

    We report the application of a far-from-equilibrium statistical-mechanical theory to a nontrivial system with Newtonian interactions in continuous boundary-driven flow. By numerically time stepping the force-balance equations of a one-dimensional model fluid we measure occupancies and transition rates in simulation. The high-shear-rate simulation data reproduce the predicted invariant quantities, thus supporting the theory that a class of nonequilibrium steady states of matter, namely, sheared complex fluids, is amenable to statistical treatment from first principles.

  13. Assessment of noise in a digital image using the join-count statistic and the Moran test

    International Nuclear Information System (INIS)

    Kehshih Chuang; Huang, H.K.

    1992-01-01

    It is assumed that data bits of a pixel in digital images can be divided into signal and noise bits. The signal bits occupy the most significant part of the pixel. The signal parts of each pixel are correlated while the noise parts are uncorrelated. Two statistical methods, the Moran test and the join-count statistic, are used to examine the noise parts. Images from computerized tomography, magnetic resonance and computed radiography are used for the evaluation of the noise bits. A residual image is formed by subtracting the original image from its smoothed version. The noise level in the residual image is then identical to that in the original image. Both statistical tests are then performed on the bit planes of the residual image. Results show that most digital images contain only 8-9 bits of correlated information. Both methods are easy to implement and fast to perform. (author)
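
    A rough sketch of the Moran test part of this procedure, on synthetic data (the weight scheme, image and bit depth are illustrative): Moran's I under 4-neighbour contiguity is computed per bit plane, and uncorrelated (noise) planes give values near zero.

      import numpy as np

      def morans_I(plane):
          """Moran's I for a 2-D array under rook (4-neighbour) contiguity."""
          z = plane.astype(float) - plane.mean()
          denom = (z ** 2).sum()
          if denom == 0:                    # constant plane: no spatial structure
              return 0.0
          pair_sum = (z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum()
          n_rows, n_cols = plane.shape
          n_pairs = n_rows * (n_cols - 1) + (n_rows - 1) * n_cols
          return plane.size * pair_sum / (n_pairs * denom)

      # Synthetic 12-bit image: smooth signal in the high bits, noise in the low 4.
      rng = np.random.default_rng(2)
      xx, yy = np.meshgrid(np.linspace(0, 4, 128), np.linspace(0, 4, 128))
      image = (1024 * (np.sin(xx) * np.cos(yy) + 1.5)).astype(int)
      image += rng.integers(0, 16, size=image.shape)

      for bit in range(12):
          print(f"bit {bit:2d}: I = {morans_I((image >> bit) & 1):+.4f}")
      # The paper applies such tests to the bit planes of the residual image
      # (original minus smoothed) so the noise under test matches the original.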

  14. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    Science.gov (United States)

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  15. Basic Mathematics Test Predicts Statistics Achievement and Overall First Year Academic Success

    Science.gov (United States)

    Fonteyne, Lot; De Fruyt, Filip; Dewulf, Nele; Duyck, Wouter; Erauw, Kris; Goeminne, Katy; Lammertyn, Jan; Marchant, Thierry; Moerkerke, Beatrijs; Oosterlinck, Tom; Rosseel, Yves

    2015-01-01

    In the psychology and educational science programs at Ghent University, only 36.1% of the new incoming students in 2011 and 2012 passed all exams. Despite availability of information, many students underestimate the scientific character of social science programs. Statistics courses are a major obstacle in this matter. Not all enrolling students…

  16. Prediction of failure enthalpy and reliability of irradiated fuel rod under reactivity-initiated accidents by means of statistical approach

    International Nuclear Information System (INIS)

    Nam, Cheol; Choi, Byeong Kwon; Jeong, Yong Hwan; Jung, Youn Ho

    2001-01-01

    During the last decade, the failure behavior of high-burnup fuel rods under RIA has been an extensive concern since observations of fuel rod failures at low enthalpy. Great importance is placed on the failure prediction of fuel rods from the point of view of licensing criteria and safety in extending burnup. To address the issue, a statistics-based methodology is introduced to predict the failure probability of irradiated fuel rods. Based on RIA simulation results in the literature, a failure enthalpy correlation for irradiated fuel rods is constructed as a function of oxide thickness, fuel burnup, and pulse width. From the failure enthalpy correlation, a single damage parameter, the equivalent enthalpy, is defined to reflect the effects of the three primary factors as well as peak fuel enthalpy. Moreover, the failure distribution function with equivalent enthalpy is derived by applying a two-parameter Weibull statistical model. Using these equations, a sensitivity analysis is carried out to estimate the effects of burnup, corrosion, peak fuel enthalpy, pulse width and the cladding materials used
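
    A minimal sketch of the two-parameter Weibull failure distribution with equivalent enthalpy as the damage parameter; the scale (eta) and shape (beta) values below are placeholders, not those fitted in the study.

      import numpy as np

      def failure_probability(h_eq, eta=150.0, beta=4.0):
          """F(h) = 1 - exp(-(h/eta)**beta), h_eq in illustrative enthalpy units."""
          return 1.0 - np.exp(-(np.asarray(h_eq, dtype=float) / eta) ** beta)

      for h in (50, 100, 150, 200):
          print(f"equivalent enthalpy {h:3d} -> failure probability "
                f"{failure_probability(h):.3f}")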

  17. Change detection in a time series of polarimetric SAR data by an omnibus test statistic and its factorization

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Skriver, Henning

    2016-01-01

    in the covariance matrix representation is carried out. The omnibus test statistic and its factorization detect if and when change(s) occur. The technique is demonstrated on airborne EMISAR L-band data but may be applied to Sentinel-1, Cosmo-SkyMed, TerraSAR-X, ALOS and RadarSat-2 or other dual- and quad...

  18. The search for the decay of Z boson into two gamma as a test of Bose statistics

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Joshi, G.C.; Matsuda, M.

    1994-01-01

    It is suggested that Bose statistics for photons can be tested by looking for decays of spin-1 bosons into two photons. The experimental upper limit on the decay Z → γγ is used to establish for the first time the quantitative measure of the validity of Bose symmetry for photons. 38 refs

  19. A simple and robust statistical framework for planning, analysing and interpreting faecal egg count reduction test (FECRT) studies

    DEFF Research Database (Denmark)

    Denwood, M.J.; McKendrick, I.J.; Matthews, L.

    that the notional type 1 error rate of the new statistical test is accurate. Power calculations demonstrate a power of only 65% with a sample size of 20 treatment and control animals, which increases to 69% with 40 control animals or 79% with 40 treatment animals. Discussion. The method proposed is simple...

  20. A Powerful Test of the Autoregressive Unit Root Hypothesis Based on a Tuning Parameter Free Statistic

    DEFF Research Database (Denmark)

    Nielsen, Morten Ørregaard

    This paper presents a family of simple nonparametric unit root tests indexed by one parameter, d, and containing Breitung's (2002) test as the special case d = 1. It is shown that (i) each member of the family with d > 0 is consistent, (ii) the asymptotic distribution depends on d, and thus...... reflects the parameter chosen to implement the test, and (iii) since the asymptotic distribution depends on d and the test remains consistent for all d > 0, it is possible to analyze the power of the test for different values of d. The usual Phillips-Perron or Dickey-Fuller type tests are indexed...... by bandwidth, lag length, etc., but have none of these three properties. It is shown that members of the family with d < 1 dominate the d = 1 test in terms of asymptotic local power, and when d is small the asymptotic local power of the proposed nonparametric test is relatively close to the parametric...

  1. Statistical methods in epidemiology. VII. An overview of the chi2 test for 2 x 2 contingency table analysis.

    Science.gov (United States)

    Rigby, A S

    2001-11-10

    The odds ratio is an appropriate method of analysis for data in 2 x 2 contingency tables. However, other methods of analysis exist. One such method is based on the chi2 test of goodness-of-fit. Key players in the development of statistical theory include Pearson, Fisher and Yates. Data are presented in the form of 2 x 2 contingency tables and a method of analysis based on the chi2 test is introduced. There are many variations of the basic test statistic, one of which is the chi2 test with Yates' continuity correction. The usefulness (or not) of Yates' continuity correction is discussed. Problems of interpretation when the method is applied to k x m tables are highlighted. Some properties of the chi2 test are illustrated by taking examples from the author's teaching experiences. Journal editors should be encouraged to give both observed and expected cell frequencies so that better information comes out of the chi2 test statistic.
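
    A small worked example in the spirit of that recommendation, with invented 2 x 2 counts: report the observed and the expected cell frequencies alongside the chi2 result, here with Yates' continuity correction applied.

      import numpy as np
      from scipy.stats import chi2_contingency

      observed = np.array([[20, 30],
                           [40, 10]])

      chi2, p, dof, expected = chi2_contingency(observed, correction=True)  # Yates
      print("observed:\n", observed)
      print("expected:\n", expected.round(2))
      print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")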

  2. Monitoring Composites under Bending Tests with Infrared Thermography

    Directory of Open Access Journals (Sweden)

    Carosena Meola

    2012-01-01

    Full Text Available The attention of the present paper is focused on the use of an infrared imaging device to monitor the thermal response of composite materials under cyclic bending. Three types of composites are considered, including an epoxy matrix reinforced with either carbon fibres (CFRP) or glass fibres (GFRP), and a hybrid composite involving glass fibres and aluminium layers (FRML). The specimen surface, under bending, displays temperature variations that follow the load variations, with cooling under tension and warming under compression; such temperature variations are in agreement with the bending moment. It has been observed that the amplitude of temperature variations over the specimen surface depends on the material characteristics. In particular, the presence of a defect inside the material affects the temperature distribution, with deviation from the usual bending moment trend.

  3. Fatigue testing of materials under extremal conditions by acoustic method

    NARCIS (Netherlands)

    Baranov, VM; Bibilashvili, YK; Karasevich, VA; Sarychev, GA

    2004-01-01

    Increasing fuel cycle time requires fatigue testing of the fuel clad materials for nuclear reactors. The standard high-temperature fatigue tests are complicated and tedious. Solving this task is facilitated by the proposed acoustic method, which ensures observation of the material damage dynamics,

  4. Evaluation of fuel rods behavior - under irradiation test

    International Nuclear Information System (INIS)

    Lameiras, F.S.; Terra, J.L.; Pinto, L.C.M.; Dias, M.S.; Pinheiro, R.B.

    1981-04-01

    By following the irradiation of instrumented test fuel rods that simulate the operational conditions in reactors, together with the results of post-irradiation exams, the testing, evaluation and calibration of analytic models of such fuel rods is carried out. (E.G.) [pt

  5. Spatial heterogeneity and risk factors for stunting among children under age five in Ethiopia: A Bayesian geo-statistical model.

    Directory of Open Access Journals (Sweden)

    Seifu Hagos

    Full Text Available Understanding the spatial distribution of stunting and the underlying factors operating at meso-scale is of paramount importance for intervention design and implementation. Yet, little is known about the spatial distribution of stunting, and some discrepancies are documented on the relative importance of reported risk factors. Therefore, the present study aims at exploring the spatial distribution of stunting at meso (district) scale, and evaluates the effect of spatial dependency on the identification of risk factors and their relative contribution to the occurrence of stunting and severe stunting in a rural area of Ethiopia. A community-based cross-sectional study was conducted to measure the occurrence of stunting and severe stunting among children aged 0-59 months. Additionally, we collected relevant information on anthropometric measures, dietary habits, and parent- and child-related demographic and socio-economic status. Latitude and longitude of surveyed households were also recorded. Local Anselin Moran's I was calculated to investigate the spatial variation of stunting prevalence and identify potential local pockets (hotspots) of high prevalence. Finally, we employed a Bayesian geo-statistical model, which accounted for the spatial dependency structure in the data, to identify potential risk factors for stunting in the study area. Overall, the prevalence of stunting and severe stunting in the district was 43.7% [95%CI: 40.9, 46.4] and 21.3% [95%CI: 19.5, 23.3] respectively. We identified statistically significant clusters of high prevalence of stunting (hotspots) in the eastern part of the district and clusters of low prevalence (cold spots) in the western part. We found that the inclusion of the spatial structure of the data into the Bayesian model improved the fit of the stunting model. The Bayesian geo-statistical model indicated that the risk of stunting increased as the child's age increased (OR 4.74; 95% Bayesian credible interval [BCI]:3

  6. Experimental analysis of stereotypy with applications of nonparametric statistical tests for alternating treatments designs.

    Science.gov (United States)

    Lloyd, Blair P; Finley, Crystal I; Weaver, Emily S

    2015-11-17

    Stereotypy is common in individuals with developmental disabilities and may become disruptive in the context of instruction. The purpose of this study was to embed brief experimental analyses in the context of reading instruction to evaluate effects of antecedent and consequent variables on latencies to and durations of stereotypy. We trained a reading instructor to implement a trial-based functional analysis and a subsequent antecedent analysis of stimulus features for an adolescent with autism in a reading clinic. We used alternating treatments designs with applications of nonparametric statistical analyses to control Type I error rates. Results of the experimental analyses suggested stereotypy was maintained by nonsocial reinforcement and informed the extent to which features of academic materials influenced levels of stereotypy. Results of nonparametric statistical analyses were consistent with conclusions based on visual analysis. Brief experimental analyses may be embedded in academic instruction to inform the stimulus conditions that influence stereotypy.
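
    The study's exact procedures are not reproduced here; as a generic stand-in, the following randomization (permutation) test compares mean durations of stereotypy between two alternating conditions without distributional assumptions, which is the kind of Type I error control the design calls for. Data values are invented.

      import numpy as np

      rng = np.random.default_rng(3)
      cond_a = np.array([42.0, 55.0, 38.0, 61.0, 47.0])  # durations (s), condition A
      cond_b = np.array([18.0, 25.0, 30.0, 22.0, 27.0])  # durations (s), condition B

      observed = cond_a.mean() - cond_b.mean()
      pooled = np.concatenate([cond_a, cond_b])
      n_perm, n_a, hits = 10000, len(cond_a), 0
      for _ in range(n_perm):
          perm = rng.permutation(pooled)
          if abs(perm[:n_a].mean() - perm[n_a:].mean()) >= abs(observed):
              hits += 1
      print(f"two-sided permutation p = {(hits + 1) / (n_perm + 1):.4f}")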

  7. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    Science.gov (United States)

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc

  8. Integrated testing strategy (ITS) for bioaccumulation assessment under REACH

    DEFF Research Database (Denmark)

    Lombardo, Anna; Roncaglioni, Alessandra; Benfentati, Emilio

    2014-01-01

    in a dossier. REACH promotes the use of alternative methods to replace, refine and reduce the use of animal (eco)toxicity testing. Within the EU OSIRIS project, integrated testing strategies (ITSs) have been developed for the rational use of non-animal testing approaches in chemical hazard assessment. Here we...... present an ITS for evaluating the bioaccumulation potential of organic chemicals. The scheme includes the use of all available data (also the non-optimal ones), waiving schemes, analysis of physicochemical properties related to the end point and alternative methods (both in silico and in vitro). In vivo...

  9. Ultrasonic testing of fatigue cracks under various conditions

    International Nuclear Information System (INIS)

    Jessop, T.J.; Cameron, A.G.B.

    1983-01-01

    Reliable detection of the fatigue cracks was possible under all conditions studied. Applied load affected the ultrasonic response in a variety of ways but never more than by 20dB and generally considerably less. Material variations affected the response under applied load by up to 20dB. Oxide in the crack and crack morphology affected the response by up to 9dB (12dB under load). Crack size variations and presence of water had little effect. Sizing accuracy was generally within 2mm although there was a tendency to undersize. The time of flight sizing technique gave the best accuracy if a tensile load was applied

  10. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
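
    A tiny illustration of estimation with quantified uncertainty in the Bayesian sense: a credible interval for a proportion from a conjugate Beta prior. The data (7 successes in 24 trials) and the flat Beta(1, 1) prior are invented for illustration.

      from scipy.stats import beta

      successes, trials = 7, 24
      a, b = 1 + successes, 1 + (trials - successes)  # posterior from Beta(1, 1)

      lo, hi = beta.ppf([0.025, 0.975], a, b)
      print(f"posterior mean = {a / (a + b):.3f}")
      print(f"95% credible interval = ({lo:.3f}, {hi:.3f})")
      # Unlike a frequentist confidence interval, this interval is a direct
      # probability statement about the parameter given the data.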

  11. Statistical assessment of dumpsite soil suitability to enhance methane bio-oxidation under interactive influence of substrates and temperature.

    Science.gov (United States)

    Bajar, Somvir; Singh, Anita; Kaushik, C P; Kaushik, Anubha

    2017-05-01

    Biocovers are considered the most effective and efficient way to treat methane (CH4) emission from dumpsites and landfills. Active methanotrophs in the biocovers play a crucial role in the reduction of emissions through microbiological methane oxidation. Several factors affecting methane bio-oxidation (MOX) have been well documented; however, their interactive effect on the oxidation process needs to be explored. Therefore, the present study was undertaken to investigate the suitability of a dumpsite soil to be employed as biocover, under the influence of substrate concentrations (CH4 and O2) and temperature at variable incubation periods. The statistical design matrix of Response Surface Methodology (RSM) revealed that a MOX rate of up to 69.58 μg CH4 g-1 dw h-1 could be achieved under optimum conditions. MOX was found to be more dependent on CH4 concentration at the higher level (30-40%, v/v) than on O2 concentration. However, unlike in other studies, MOX was found to be directly proportional to temperature within the range of 25-35 °C. The results obtained with the dumpsite soil biocover open up a new possibility to provide improved, sustained and environmentally friendly systems to control even high CH4 emissions from the waste sector. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. A complete sample of double-lobed radio quasars for VLBI tests of source models - Definition and statistics

    Science.gov (United States)

    Hough, D. H.; Readhead, A. C. S.

    1989-01-01

    A complete, flux-density-limited sample of double-lobed radio quasars is defined, with nuclei bright enough to be mapped with the Mark III VLBI system. It is shown that the statistics of linear size, nuclear strength, and curvature are consistent with the assumption of random source orientations and simple relativistic beaming in the nuclei. However, these statistics are also consistent with the effects of interaction between the beams and the surrounding medium. The distribution of jet velocities in the nuclei, as measured with VLBI, will provide a powerful test of physical theories of extragalactic radio sources.

  13. Bending Under Tension Test with Direct Friction Measurement

    DEFF Research Database (Denmark)

    Andreasen, Jan Lasson; Olsson, David Dam; Chodnikiewicz, K.

    2006-01-01

    A special Bending-Under-Tension (BUT) transducer has been developed in which friction around the tool radius can be directly measured when drawing a plane sheet strip around a cylindrical tool-pin under constant back tension. The front tension, back tension and torque on the tool-pin are all...... measured directly, thus enabling accurate measurement of friction and direct determination of lubricant film breakdown for varying normal pressure, sliding speed, tool radius and tool preheat temperature. The transducer is applied in an experimental investigation focusing on limits of lubrication...... in drawing of stainless steel showing the influence of varying process conditions and the performance of different lubricants....

  14. Does smoking abstinence influence distress tolerance? An experimental study comparing the response to a breath-holding test of smokers under tobacco withdrawal and under nicotine replacement therapy.

    Science.gov (United States)

    Cosci, Fiammetta; Anna Aldi, Giulia; Nardi, Antonio Egidio

    2015-09-30

    Distress tolerance has been operationalized as task persistence in stressful behavioral laboratory tasks. According to the distress tolerance perspective, how an individual responds to discomfort/distress predicts early smoking lapses. This theory seems weakly supported by experimental studies, since they are limited in number, show inconsistent results, and do not include control conditions. We tested the response to a stressful task in smokers under abstinence and under no abstinence to verify whether tobacco abstinence reduces task persistence, and thus distress tolerance. A placebo-controlled, double-blind, randomized, cross-over design was used. Twenty smokers underwent a breath holding test after the administration of nicotine on one test day and a placebo on another test day. Physiological and psychological variables were assessed at baseline and directly before and after each challenge. Abstinence induced a significantly shorter breath holding duration relative to the nicotine condition. No different response to the breath holding test was observed when nicotine and placebo conditions were compared. No response to the breath holding test was found when pre- and post-test values of heart rate, blood pressure, and Visual Analogue Scale scores for fear or discomfort were compared. In brief, tobacco abstinence reduces breath holding duration, but the breath holding test does not influence discomfort. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  15. Dynamic behavior of porous concretes under drop weight impact testing

    NARCIS (Netherlands)

    Agar Ozbek, A.S.; Weerheijm, J.; Schlangen, E.; Breugel, K. van

    2013-01-01

    Porous concrete is used as a construction material in various applications mainly as a permeable cementitious material. However, its response under impact loading is generally not considered. Due to the high percentage of its intentional meso-size air pores, porous concrete has a moderate static

  16. Statistical testing and estimation of uncertainty in pre-post case control experimental designs: are the error bars of physics too large?

    Science.gov (United States)

    Ignatius, K.; Henning, S.; Stratmann, F.

    2013-12-01

    We encountered the question of how to do statistical inference and uncertainty estimation for the aerosol particle hygroscopicity (κ) measured up- and downstream of a hilltop in two conditions: during full-cloud events (FCE), where a cap cloud was present on the hilltop, and under cloud-free conditions (non-cloud events, NCE). The aim was to show with statistical testing that particle hygroscopicity is altered by cloud processing. This type of statistical experimental design, known as a 'pre-post case control study', 'between-within design' or 'mixed design', is common in medicine and biostatistics, but it may not be familiar to all researchers in the atmospheric sciences. Therefore we review the statistical testing methods that can be applied to solve these kinds of problems. The key point is that these methods use the pre-measurement as a covariate to the post-measurement, which accounts for the daily variation and reduces variance in the analysis. All three tests, change score analysis, Analysis of Covariance (ANCOVA) and multi-way Analysis of Variance (ANOVA), gave similar results and suggested a statistically significant change in κ between FCE and NCE. Quantification of the uncertainty in hygroscopicities derived from cloud condensation nuclei (CCN) measurements implies an uncertainty interval estimation in a nonlinear expression where the uncertainty of one parameter is Gaussian with known mean and variance. We concluded that the commonly used way of estimating and showing the uncertainty intervals in hygroscopicity studies may make the error bars appear too large. Using simple Monte Carlo sampling and plotting the resulting nonlinear distribution and its quantiles may better represent the probability mass in the uncertainty distribution.
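
    A compact sketch of the covariate idea described above, on simulated data with illustrative names: the pre-measurement (upstream hygroscopicity) enters an ANCOVA-style regression as a covariate for the post-measurement, and the FCE vs. NCE contrast is tested on top of it.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(4)
      n = 40
      pre = rng.normal(0.30, 0.05, n)                  # upstream hygroscopicity
      cloud = np.repeat(["NCE", "FCE"], n // 2)        # condition per day
      shift = np.where(cloud == "FCE", 0.04, 0.0)      # simulated cloud processing
      post = pre + shift + rng.normal(0, 0.02, n)      # downstream hygroscopicity

      df = pd.DataFrame({"pre": pre, "post": post, "cloud": cloud})
      ancova = smf.ols("post ~ pre + C(cloud)", data=df).fit()
      print(ancova.summary().tables[1])
      # Using `pre` as a covariate absorbs day-to-day variation, reducing the
      # residual variance against which the FCE vs. NCE difference is judged.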

  17. Multilevel Factor Analysis by Model Segregation: New Applications for Robust Test Statistics

    Science.gov (United States)

    Schweig, Jonathan

    2014-01-01

    Measures of classroom environments have become central to policy efforts that assess school and teacher quality. This has sparked a wide interest in using multilevel factor analysis to test measurement hypotheses about classroom-level variables. One approach partitions the total covariance matrix and tests models separately on the…

  18. Hybrid Statistical Testing for Nuclear Material Accounting Data and/or Process Monitoring Data

    Energy Technology Data Exchange (ETDEWEB)

    Ticknor, Lawrence O. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hamada, Michael Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sprinkle, James K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Burr, Thomas Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-04-14

    The two tests employed in the hybrid testing scheme are Page’s cumulative sums for all streams within a Balance Period (maximum of the maximums and average of the maximums) and Crosier’s multivariate cumulative sum applied to incremental cumulative sums across Balance Periods. The role of residuals for both kinds of data is discussed.

  19. Statistical modeling of road contribution as emission sources to total suspended particles (TSP) under MCF model downtown Medellin - Antioquia - Colombia, 2004

    International Nuclear Information System (INIS)

    Gomez, Miryam; Saldarriaga, Julio; Correa, Mauricio; Posada, Enrique; Castrillon M, Francisco Javier

    2007-01-01

    Sand fields, construction sites, coal boilers, roads and biological sources, among others, are factors contributing air contaminants in downtown Valle de Aburra. The distribution of road contribution data to total suspended particles, according to the source-receptor model MCF (source correlation modeling), is nearly a gamma distribution. A chi-square goodness-of-fit test is used for the statistical modeling; this test also allows the parameters of the distribution to be estimated by the maximum likelihood method, using the expectation-maximization algorithm as the convergence criterion. The mean of the road contribution data to total suspended particles according to the source-receptor model MCF is straightforward to obtain and validates the road contribution factor to the atmospheric pollution of the zone under study

  20. Design of durability test protocol for vehicular fuel cell systems operated in power-follow mode based on statistical results of on-road data

    Science.gov (United States)

    Xu, Liangfei; Reimer, Uwe; Li, Jianqiu; Huang, Haiyan; Hu, Zunyan; Jiang, Hongliang; Janßen, Holger; Ouyang, Minggao; Lehnert, Werner

    2018-02-01

    City buses using polymer electrolyte membrane (PEM) fuel cells are considered to be the most likely fuel cell vehicles to be commercialized in China. The technical specifications of the fuel cell systems (FCSs) these buses are equipped with will differ based on the powertrain configurations and vehicle control strategies, but can generally be classified into the power-follow and soft-run modes. Each mode imposes different levels of electrochemical stress on the fuel cells. Evaluating the aging behavior of fuel cell stacks under the conditions encountered in fuel cell buses requires new durability test protocols based on statistical results obtained during actual driving tests. In this study, we propose a systematic design method for fuel cell durability test protocols that correspond to the power-follow mode based on three parameters for different fuel cell load ranges. The powertrain configurations and control strategy are described herein, followed by a presentation of the statistical data for the duty cycles of FCSs in one city bus in the demonstration project. Assessment protocols are presented based on the statistical results using mathematical optimization methods, and are compared to existing protocols with respect to common factors, such as time at open circuit voltage and root-mean-square power.

  1. Beginning and growth of defects under coatings during fatigue tests

    International Nuclear Information System (INIS)

    Flavenot, J.F.; Dumousseau, P.; Bernard, J.L.; Slama, G.; Doule, A.

    1983-01-01

    To assess the defects in some PWR tubes, tensile fatigue tests have been performed on materials having real defects at the interface between a 16 MND 5 steel and its stainless steel coating. To simulate real working conditions, these tests were carried out at 300 °C. The results obtained allow the complete evolution of a defect to be followed. The evolution of the shape and the growth of the defect in the 16 MND 5 steel and in the stainless steel are described. Prediction models concerning the initiation and growth of such defects agree with the results obtained [fr

  2. Statistical homogeneity tests applied to large data sets from high energy physics experiments

    Science.gov (United States)

    Trusina, J.; Franc, J.; Kůs, V.

    2017-12-01

    Homogeneity tests are used in high energy physics for the verification of simulated Monte Carlo samples, i.e. to check whether they have the same distribution as the measured data from a particle detector. The Kolmogorov-Smirnov, χ2 and Anderson-Darling tests are the most used techniques to assess the samples’ homogeneity. Since MC generators produce plenty of entries from different models, each entry has to be re-weighted to obtain the same sample size as the measured data. One way of testing homogeneity is through binning. If we do not want to lose any information, we can apply generalized tests based on weighted empirical distribution functions. In this paper, we propose such generalized weighted homogeneity tests and introduce some of their asymptotic properties. We present results based on a numerical analysis which focuses on estimation of the type-I error and power of the test. Finally, we present an application of our homogeneity tests to data from the DØ experiment at Fermilab.
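
    A minimal sketch of a weighted two-sample Kolmogorov-Smirnov statistic built from weighted empirical distribution functions, in the spirit of the generalized tests above (this is not the authors' exact construction, and significance would still need calibration, e.g. by permutation):

      import numpy as np

      def weighted_ecdf(values, weights, grid):
          order = np.argsort(values)
          v, cum = values[order], np.cumsum(weights[order]) / weights.sum()
          idx = np.searchsorted(v, grid, side="right") - 1
          return np.where(idx >= 0, cum[idx], 0.0)

      rng = np.random.default_rng(5)
      mc = rng.normal(0.1, 1.0, 5000)          # simulated (MC) sample
      w_mc = rng.uniform(0.5, 1.5, 5000)       # per-entry MC weights
      data = rng.normal(0.0, 1.0, 2000)        # "measured" sample
      w_data = np.ones_like(data)              # unit weights for real data

      grid = np.sort(np.concatenate([mc, data]))
      ks = np.abs(weighted_ecdf(mc, w_mc, grid)
                  - weighted_ecdf(data, w_data, grid)).max()
      print(f"weighted KS statistic = {ks:.4f}")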

  3. Freedom of expression in Azerbaijan under test : challenges and prospects

    OpenAIRE

    Madatli, Leyla

    2010-01-01

    This article discusses the ground-breaking judgment in Fatullayev v Azerbaijan in which the European Court ordered the immediate release of imprisoned journalist Eynulla Fatullayev, but who at the time of going to press nevertheless remained in custody. Fatullayev was the founder and chief editor of two newspapers in Azerbaijan well known for their harsh criticism of the Azerbaijani Government. This judgment is of great importance for Azerbaijan as it addresses topical issues under Art.10 ECH...

  4. Bank stress testing under different balance sheet assumptions

    OpenAIRE

    Busch, Ramona; Drescher, Christian; Memmel, Christoph

    2017-01-01

    Using unique supervisory survey data on the impact of a hypothetical interest rate shock on German banks, we analyse price and quantity effects on banks' net interest margin components under different balance sheet assumptions. In the first year, the cross-sectional variation of banks' simulated price effect is nearly eight times as large as the one of the simulated quantity effect. After five years, however, the importance of both effects converges. Large banks adjust their balance sheets mo...

  5. Integrated testing strategy (ITS) for bioaccumulation assessment under REACH.

    Science.gov (United States)

    Lombardo, Anna; Roncaglioni, Alessandra; Benfentati, Emilio; Nendza, Monika; Segner, Helmut; Fernández, Alberto; Kühne, Ralph; Franco, Antonio; Pauné, Eduard; Schüürmann, Gerrit

    2014-08-01

    REACH (registration, evaluation, authorisation and restriction of chemicals) regulation requires that all the chemicals produced or imported in Europe above 1 tonne/year are registered. To register a chemical, physicochemical, toxicological and ecotoxicological information needs to be reported in a dossier. REACH promotes the use of alternative methods to replace, refine and reduce the use of animal (eco)toxicity testing. Within the EU OSIRIS project, integrated testing strategies (ITSs) have been developed for the rational use of non-animal testing approaches in chemical hazard assessment. Here we present an ITS for evaluating the bioaccumulation potential of organic chemicals. The scheme includes the use of all available data (also the non-optimal ones), waiving schemes, analysis of physicochemical properties related to the end point and alternative methods (both in silico and in vitro). In vivo methods are used only as last resort. Using the ITS, in vivo testing could be waived for about 67% of the examined compounds, but bioaccumulation potential could be estimated on the basis of non-animal methods. The presented ITS is freely available through a web tool. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Principles Underlying the Bilingual Aphasia Test (BAT) and Its Uses

    Science.gov (United States)

    Paradis, Michel

    2011-01-01

    The Bilingual Aphasia Test (BAT) is designed to be objective (so it can be administered by a lay native speaker of the language) and equivalent across languages (to allow for a comparison between the languages of a given patient as well as across patients from different institutions). It has been used not only with aphasia but also with any…

  7. Exercise testing in Warmblood sport horses under field conditions

    NARCIS (Netherlands)

    Munsters, Carolien C B M; van Iwaarden, Alexandra; van Weeren, René|info:eu-repo/dai/nl/074628550; Sloet van Oldruitenborgh-Oosterbaan, Marianne M|info:eu-repo/dai/nl/075234394

    2014-01-01

    Regular exercise testing in Warmblood sport horses may, as in racing, potentially help to characterise fitness indices in different disciplines and at various competition levels and assist in understanding when a horse is 'fit to compete'. In this review an overview is given of the current state of

  8. Development of test scenarios for off-roadway crash countermeasures based on crash statistics

    Science.gov (United States)

    2002-09-01

    This report presents the results from an analysis of off-roadway crashes and proposes a set of crash-imminent scenarios to objectively test countermeasure systems for light vehicles (passenger cars, sport utility vehicles, vans, and pickup trucks) ba...

  9. A statistical analysis of the deterrence effects of the Military Services' Drug testing policies

    OpenAIRE

    Martinez, Antonio.

    1998-01-01

    This thesis examines the magnitude of the deterrence effect associated with the military services' drug testing policies. Using data from the 1995 Department of Defense Survey of Health Related Behaviors Among Military Personnel and the 1995 National Household Survey on Drug Abuse, illicit drug use rates are modeled as a function of pertinent demographic characteristics. The natural variation in drug testing policies is exploited to estimate the deterrence effects of such programs. The first analy...

  10. Statistical Analysis of Compressive and Flexural Test Results on the Sustainable Adobe Reinforced with Steel Wire Mesh

    Science.gov (United States)

    Jokhio, Gul A.; Syed Mohsin, Sharifah M.; Gul, Yasmeen

    2018-04-01

    It has been established that Adobe provides, in addition to being sustainable and economical, better indoor air quality without spending extensive amounts of energy, as opposed to modern synthetic materials. The material, however, suffers from weak structural behaviour when subjected to adverse loading conditions. A wide range of mechanical properties has been reported in the literature owing to a lack of research and standardization. The present paper presents the statistical analysis of the results that were obtained through compressive and flexural tests on Adobe samples. Adobe specimens with and without wire mesh reinforcement were tested and the results were reported. The statistical analysis of these results presents an interesting read. It has been found that the compressive strength of Adobe increases by about 43% after adding a single layer of wire mesh reinforcement. This increase is statistically significant. The flexural response of Adobe has also shown improvement with the addition of wire mesh reinforcement; however, the statistical significance of the same cannot be established.
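
    A sketch of the kind of significance check reported above: a two-sample Welch t-test on compressive strengths of plain versus wire-mesh-reinforced adobe specimens. The strength values (MPa) are invented for illustration.

      from scipy.stats import ttest_ind

      plain      = [1.21, 1.35, 1.18, 1.42, 1.30, 1.25]
      reinforced = [1.78, 1.92, 1.70, 1.88, 1.81, 1.95]

      t, p = ttest_ind(reinforced, plain, equal_var=False)   # Welch's t-test
      gain = (sum(reinforced) / len(reinforced)) / (sum(plain) / len(plain)) - 1
      print(f"mean gain = {gain:.1%}, t = {t:.2f}, p = {p:.4f}")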

  11. A test for monitoring under- and overtreatment in Dutch hospitals

    OpenAIRE

    Lenz, Oliver Urs; Oberski, Daniel L

    2017-01-01

    Over- and undertreatment harm patients and society and confound other healthcare quality measures. Despite a growing body of research covering specific conditions, we lack tools to systematically detect and measure over- and undertreatment in hospitals. We demonstrate a test used to monitor over- and undertreatment in Dutch hospitals, and illustrate its results applied to the aggregated administrative treatment data of 1,836,349 patients at 89 hospitals in 2013. We employ a random effects mod...

  12. Bending Under Tension Test with Direct Friction Measurement

    DEFF Research Database (Denmark)

    Andreasen, Jan Lasson; Olsson, David Dam; Chodnikiewicz, K.

    2004-01-01

    A special BUT-transducer has been developed in which friction around the tool radius can be directly measured when drawing a plane sheet strip around a cylindrical tool-pin under constant back tension. The front tension, back tension and torque on the tool-pin are all measured directly, thus...... enabling accurate measurement of friction and direct determination of lubricant film breakdown for varying normal pressure, sliding speed, tool radius and tool preheat temperature. The transducer is applied in an experimental investigation focusing on limits of lubrication in drawing of stainless steel...

  13. INVITATION TO PERFORM Y2K TESTING UNDER UNIX

    CERN Multimedia

    CERN Y2K Co-ordinator

    1999-01-01

    Introduction
    A special AFS cell 'y2k.cern.ch' has been established to allow service managers and users to test y2k compliance. In addition to AFS, the cluster consists of machines representing all the Unix flavours in use at CERN (AIX, DUNIX, HP-UX, IRIX, LINUX, and SOLARIS). More information can be obtained from the page: http://wwwinfo.cern.ch/pdp/bis/y2k/y2kplus.html
    Testing schedule
    The cluster will be set to 25 December 1999 on fixed days and then left running for three weeks. This gives people one week to prepare test programs in 1999 and two weeks to check the consequences of passing into year 2000. These fixed dates are set as follows:
    - 19 May 1999, date set to 25/12/99 (year 2000 starts on 26 May)
    - 9 June 1999, date set to 25/12/99 (year 2000 starts on 16 June)
    - 30 June 1999, date set to 25/12/99 (year 2000 starts on 7 July)
    If more than these three sessions are needed an announcement will be made later.
    Registration
    The following Web page should be used for r...

  14. Statistical reliability assessment of UT round-robin test data for piping welds

    International Nuclear Information System (INIS)

    Kim, H.M.; Park, I.K.; Park, U.S.; Park, Y.W.; Kang, S.C.; Lee, J.H.

    2004-01-01

    Ultrasonic NDE is one of the important technologies in the life-time maintenance of nuclear power plants. An ultrasonic inspection system consists of the operator, the equipment and the procedure, and its reliability is affected by the capability of each. A performance demonstration round robin was conducted to quantify the capability of ultrasonic in-service inspection. Several teams, employing procedures that met or exceeded ASME Sec. XI code requirements, inspected nuclear power plant piping containing various cracks to evaluate the capability of detection and sizing. In this paper, the statistical reliability assessment of ultrasonic nondestructive inspection data using probability of detection (POD) is presented. The POD results obtained using a logistic model proved useful for the reliability assessment of NDE hit/miss data. (orig.)
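
    A hedged sketch of a POD curve fitted to hit/miss data with a logistic model, similar in spirit to the assessment described above. Flaw sizes and outcomes are simulated, and a90 (the flaw size detected with 90% probability) is a conventional summary, not a quantity quoted from the paper.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      size = rng.uniform(1, 15, 200)                    # crack size, mm
      p_true = 1 / (1 + np.exp(-(size - 6) / 1.5))      # hidden true POD curve
      hit = (rng.random(200) < p_true).astype(float)    # 1 = detected, 0 = missed

      fit = sm.Logit(hit, sm.add_constant(size)).fit(disp=False)
      b0, b1 = fit.params
      a90 = (np.log(0.9 / 0.1) - b0) / b1               # invert the logit at POD = 0.9
      print(f"estimated a90 = {a90:.2f} mm")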

  15. A Statistical Test of Walrasian Equilibrium by Means of Complex Networks Theory

    Science.gov (United States)

    Bargigli, Leonardo; Viaggiu, Stefano; Lionetto, Andrea

    2016-10-01

    We represent an exchange economy in terms of statistical ensembles for complex networks by introducing the concept of market configuration. This is defined as a sequence of nonnegative discrete random variables {w_{ij}} describing the flow of a given commodity from agent i to agent j. This sequence can be arranged in a nonnegative matrix W which we can regard as the representation of a weighted and directed network or digraph G. Our main result consists in showing that general equilibrium theory imposes highly restrictive conditions upon market configurations, which are in most cases not fulfilled by real markets. An explicit example with reference to the e-MID interbank credit market is provided.

  16. Is testing the voice under sedation reliable in medialization thyroplasty?

    Science.gov (United States)

    Oishi, Natsuki; Herrero, Ricard; Martin, Ana; Basterra, Jorge; Zapater, Enrique

    2016-12-01

    Medialization thyroplasty is an accepted method for improving non-compensated unilateral vocal cord palsy. Most surgeons decide the depth of penetration of the prosthesis by monitoring the voice changes in the patient during the surgical procedure. General anesthesia with intubation is incompatible with this procedure, so sedation is recommended. In this study we aim to objectify and quantify the influence of sedation and position on the voice, in order to determine whether this anesthetic procedure is justified in medialization thyroplasties. This was a prospective study involving 15 adult patients who underwent sedation. Voice recordings were made for each patient in three different positions and conditions: the seated position without sedation, the supine position without sedation, and the supine position under the effects of sedation. The sedation drugs used were midazolam, fentanyl, and propofol. The level of sedation was monitored using the observational scale and the bispectral index. The acoustic data obtained from recordings of sustained vowel sounds showed that sedation significantly affected the voice parameters: compared to recordings without sedation, values under sedation were significantly higher for local jitter and local shimmer, and significantly lower for pitch and harmonics-to-noise ratio. The supine position was shown not to influence the voice. Sedation thus exerts an important influence on voice quality. General anesthesia could be an alternative, with attention focused on monitoring the glottis with a fibrolaryngoscope during the surgical procedure. No sedation at all can also be an alternative.

  17. Statistical model for the mechanical behavior of the tissue engineering non-woven fibrous matrices under large deformation.

    Science.gov (United States)

    Rizvi, Mohd Suhail; Pal, Anupam

    2014-09-01

    The fibrous matrices are widely used as scaffolds for the regeneration of load-bearing tissues due to their structural and mechanical similarities with the fibrous components of the extracellular matrix. These scaffolds not only provide the appropriate microenvironment for the residing cells but also act as medium for the transmission of the mechanical stimuli, essential for the tissue regeneration, from macroscopic scale of the scaffolds to the microscopic scale of cells. The requirement of the mechanical loading for the tissue regeneration requires the fibrous scaffolds to be able to sustain the complex three-dimensional mechanical loading conditions. In order to gain insight into the mechanical behavior of the fibrous matrices under large amount of elongation as well as shear, a statistical model has been formulated to study the macroscopic mechanical behavior of the electrospun fibrous matrix and the transmission of the mechanical stimuli from scaffolds to the cells via the constituting fibers. The study establishes the load-deformation relationships for the fibrous matrices for different structural parameters. It also quantifies the changes in the fiber arrangement and tension generated in the fibers with the deformation of the matrix. The model reveals that the tension generated in the fibers on matrix deformation is not homogeneous and hence the cells located in different regions of the fibrous scaffold might experience different mechanical stimuli. The mechanical response of fibrous matrices was also found to be dependent on the aspect ratio of the matrix. Therefore, the model establishes a structure-mechanics interdependence of the fibrous matrices under large deformation, which can be utilized in identifying the appropriate structure and external mechanical loading conditions for the regeneration of load-bearing tissues. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. ON THE POWER FUNCTION OF TESTS OF PERCENTAGE POINTS BASED ON THE NON-CENTRAL T-STATISTIC,

    Science.gov (United States)

    The note considers the power function of one-sided tests of the 100β% point of a normal population which are based on the non-central t-statistic. By combining two approximations given by Johnson and Welch, an approximate expression for the power function is obtained which has desirable properties from the viewpoint of power function comparisons. Use of this approximation is illustrated by several examples. (Author)
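
    A numerical counterpart to the approximation discussed above: the exact power of a one-sided t-test computed from the non-central t-distribution with scipy, rather than via the Johnson-Welch approximation. Sample size and effect size are invented.

      import numpy as np
      from scipy.stats import nct, t

      n, alpha, effect = 20, 0.05, 0.6       # invented settings
      df = n - 1
      nc = effect * np.sqrt(n)               # non-centrality parameter

      t_crit = t.ppf(1 - alpha, df)          # one-sided critical value under H0
      power = 1 - nct.cdf(t_crit, df, nc)
      print(f"power = {power:.3f}")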

  19. Weibull statistics effective area and volume in the ball-on-ring testing method

    DEFF Research Database (Denmark)

    Frandsen, Henrik Lund

    2014-01-01

    to geometries relevant for the application of the material, the effective area or volume for the test specimen must be evaluated. In this work analytical expressions for the effective area and volume of the ball-on-ring test specimen is derived. In the derivation the multiaxial stress field has been accounted...... for by use of the Weibull theory, and the multinomial theorem has been used to handle the integration of multiple terms raised to the power of the Weibull modulus. The analytical solution is verified with a high number of finite element models for various geometric parameters. The finite element model...
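
    The paper derives closed-form effective area and volume for the ball-on-ring specimen; as a generic numerical counterpart (assumed element data, not the paper's geometry), the Weibull effective volume can be evaluated from a finite-element stress field as V_eff = sum_i (sigma_i / sigma_max)^m V_i over elements in tension:

      import numpy as np

      m = 10.0                                    # assumed Weibull modulus
      rng = np.random.default_rng(7)
      sigma = rng.uniform(0.0, 300.0, 10_000)     # element first principal stress, MPa
      vol = np.full(10_000, 1e-3)                 # element volumes, mm^3

      tension = sigma > 0.0
      v_eff = np.sum((sigma[tension] / sigma.max()) ** m * vol[tension])
      print(f"effective volume = {v_eff:.4f} mm^3 of {vol.sum():.1f} mm^3 total")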

  20. Testing Boron Carbide and Silicon Carbide under Triaxial Compression

    Science.gov (United States)

    Anderson, Charles; Chocron, Sidney; Nicholls, Arthur

    2011-06-01

    Boron Carbide (B4C) and silicon carbide (SiC-N) are extensively used as armor materials. The strength of these ceramics depends mainly on surface defects, hydrostatic pressure and strain rate. This article focuses on the pressure dependence and summarizes the characterization work conducted on intact and predamaged specimens by using compression under confinement in a pressure vessel and in a thick steel sleeve. The techniques used for the characterization will be described briefly. The failure curves obtained for the two materials will be presented, although the data are limited for SiC. The data will also be compared to experimental data from Wilkins (1969), and Meyer and Faber (1997). Additionally, the results will be compared with plate-impact data.

  1. Tests of Mediation: Paradoxical Decline in Statistical Power as a Function of Mediator Collinearity

    Science.gov (United States)

    Beasley, T. Mark

    2014-01-01

    Increasing the correlation between the independent variable and the mediator (the "a" coefficient) increases the effect size ("ab") for mediation analysis; however, increasing a by definition increases collinearity in mediation models. As a result, the standard errors of product tests increase. The variance inflation caused by…

  2. Testing for Gender Related Size and Shape Differences of the Human Ear canal using Statistical methods

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Larsen, Rasmus; Ersbøll, Bjarne Kjær

    2002-01-01

    surface models are built by using the anatomical landmarks to warp a template mesh onto all shapes in the training set. Testing the gender related differences is done by initially reducing the dimensionality using principal component analysis of the vertices of the warped meshes. The number of components...

  3. Relational Aggression and Hostile Attribution Biases: Testing Multiple Statistical Methods and Models

    Science.gov (United States)

    Godleski, Stephanie A.; Ostrov, Jamie M.

    2010-01-01

    The present study used both categorical and dimensional approaches to test the association between relational and physical aggression and hostile intent attributions for both relational and instrumental provocation situations using the National Institute of Child Health and Human Development longitudinal Study of Early Child Care and Youth…

  4. A Study of the Statistical Foundations of Group Conversation Tests in Spoken English.

    Science.gov (United States)

    Liski, Erkki; Puntanen, Simo

    1983-01-01

    Analysis of error patterns in a test taken by 698 Finnish university students shows errors are made in this declining order of frequency: grammar, pronunciation, vocabulary, and use. More talkative students were proportionately more proficient per utterance, and higher proficiency also correlated with sex (female) and high matriculation test…

  5. Behaviour of Ti-doped CFCs under thermal fatigue tests

    Energy Technology Data Exchange (ETDEWEB)

    Centeno, A. [Instituto Nacional del Carbon (CSIC), Apdo. 73, 33080 Oviedo (Spain); Pintsuk, G.; Linke, J. [Forschungszentrum Juelich GmbH, EURATOM Association, 52425 Juelich (Germany); Gualco, C. [Ansaldo Energia, I-16152 Genoa (Italy); Blanco, C., E-mail: clara@incar.csic.es [Instituto Nacional del Carbon (CSIC), Apdo. 73, 33080 Oviedo (Spain); Santamaria, R.; Granda, M.; Menendez, R. [Instituto Nacional del Carbon (CSIC), Apdo. 73, 33080 Oviedo (Spain)

    2011-01-15

    In spite of the remarkable progress in the design of in-vessel components for the divertor of the first International Thermonuclear Experimental Reactor (ITER), a great effort is still being put into the development of manufacturing technologies for carbon armour with improved properties. Newly developed 3D titanium-doped carbon fibre reinforced composites (CFCs) and their corresponding undoped counterparts were brazed to a CuCrZr heat sink to produce actively cooled flat tile mock-ups. By exposing the mock-ups to thermal fatigue tests in an electron beam test facility, the material behaviour and the brazing between the individual constituents in the mock-up were qualified. The mock-ups with titanium-doped CFCs exhibited significantly improved thermal fatigue resistance compared with the undoped materials. The comparison of these mock-ups with those produced using pristine NB31, one of the reference plasma-facing materials for ITER, showed almost identical results, indicating the high potential of Ti-doped CFCs due to their improved thermal shock resistance.

  6. Relationship between the COI test and other sensory profiles by statistical procedures

    Directory of Open Access Journals (Sweden)

    Calvente, J. J.

    1994-04-01

    Full Text Available The relationships between 139 sensory attributes evaluated on 32 samples of virgin olive oil have been analysed by a statistical sensory wheel that guarantees the objectiveness and predictive power of its conclusions concerning the best clusters of attributes: green, bitter-pungent, ripe fruit, fruity, sweet fruit, undesirable attributes, and two miscellanies. The procedure allows the sensory notes of this edible oil, as evaluated for potential consumers, to be understood from the point of view of its habitual consumers, with special reference to European Communities Regulation no. 2568/91. Five different panels (Spanish, Greek, Italian, Dutch and British) were used to evaluate the samples. An analysis of the relationships between stimuli perceived by aroma, flavour, smell, mouthfeel and taste, together with Linear Sensory Profiles based on fuzzy logic, is provided. A 3-dimensional plot indicates the usefulness of the proposed procedure in the authentication of different varieties of virgin olive oil. An analysis of the volatile compounds responsible for most of the attributes gives weight to the conclusions. Directions which promise to improve the EC Regulation on the sensory quality of olive oil are also given.

  7. Development of Speditive Explosibility Test (SET): a statistical reliable method for combustible dust explosibility investigation

    OpenAIRE

    Danzi, Enrico

    2016-01-01

    This thesis investigates the explosibility sensitivity and behaviour of combustible solid materials in the form of dusts. The first phase of the work focused on the ignition sensitivity of combustible dusts, both in the form of clouds and deposited as layers. Standard test methods were used to assess the ignition parameters of the samples, i.e. UNI EN 50821:1999. MITC and MITL (the minimum ignition temperatures of dust clouds and dust layers, respectively) were measured for pure combustible dusts and for mixtures of different dusts. In particular, mixtures of ...

  8. Statistical testing of the association between annual turnover and marketing activities in SMEs using χ2

    Science.gov (United States)

    Pater, Liana; Miclea, Şerban; Izvercian, Monica

    2016-06-01

    This paper considers the impact of SMEs' annual turnover on their marketing activities (in terms of marketing responsibility, strategic planning and budgeting). Empirical results and literature reviews suggest that SME managers tend to engage in planned and profitable marketing activities depending on their turnover level. Thus, using data collected from 131 Romanian SME managers, we applied the Chi-Square test in order to validate or invalidate three research hypotheses formulated from the empirical and literature findings.
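
    The analysis reduces to a standard chi-square test of independence on a contingency table of turnover level versus marketing practice. A minimal sketch with a purely illustrative table (the paper's actual counts are not reproduced here):

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: low / medium / high turnover; columns: marketing budget yes / no
table = np.array([[35, 10],
                  [28, 22],
                  [12, 24]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# reject independence (i.e. conclude an association) when p < 0.05
```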

  9. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    Science.gov (United States)

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that economic depreciation rates of decline may be reasonably stable over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.

  10. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants.

    Science.gov (United States)

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-04-01

    Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effects on non-target organisms are compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and includes repeated measures in time following a constant, linear or quadratic pattern, possibly with some form of autocorrelation. The model also allows a set of reference varieties to be added to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
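
    The authors' Supplementary Material implements the full framework; the core idea of a prospective power analysis on zero-inflated counts can be sketched in a few lines (the distribution, effect size and test below are simplified stand-ins, not the paper's model):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def zip_counts(mean, pzero, n):
    """Zero-inflated Poisson counts: structural zeros with probability pzero."""
    counts = rng.poisson(mean, size=n)
    counts[rng.random(n) < pzero] = 0
    return counts

def difference_test_power(mean_gm, mean_ref, pzero=0.2, n=40,
                          reps=1000, alpha=0.05):
    """Simulated power of a difference test on log(count + 1)."""
    hits = 0
    for _ in range(reps):
        gm = np.log1p(zip_counts(mean_gm, pzero, n))
        ref = np.log1p(zip_counts(mean_ref, pzero, n))
        hits += stats.ttest_ind(gm, ref).pvalue < alpha
    return hits / reps

# power to detect a 50% reduction in non-target organism counts
print(difference_test_power(mean_gm=5, mean_ref=10))
```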

  12. Application of modern tests for stationarity to single-trial MEG data: transferring powerful statistical tools from econometrics to neuroscience.

    Science.gov (United States)

    Kipiński, Lech; König, Reinhard; Sielużycki, Cezary; Kordecki, Wojciech

    2011-10-01

    Stationarity is a crucial yet rarely questioned assumption in the analysis of time series of magnetoencephalography (MEG) or electroencephalography (EEG). One key drawback of the commonly used tests for stationarity of encephalographic time series is the fact that conclusions on stationarity are only indirectly inferred, either from the Gaussianity of the time series (e.g. the Shapiro-Wilk or Kolmogorov-Smirnov tests) or from its randomness and the absence of trend using very simple time-series models (e.g. the sign and trend tests by Bendat and Piersol). We present a novel approach to the analysis of the stationarity of MEG and EEG time series by applying modern statistical methods which were specifically developed in econometrics to verify the hypothesis that a time series is stationary. We report our findings from the application of three different tests of stationarity (the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test for trend or mean stationarity, the Phillips-Perron (PP) test for the presence of a unit root, and the White test for homoscedasticity) to an illustrative set of MEG data. For five stimulation sessions, we found already for short epochs of duration 250 and 500 ms that, although the majority of the studied epochs of single MEG trials were usually mean-stationary (KPSS and PP tests), they were classified as nonstationary due to their heteroscedasticity (White test). We also observed that the presence of external auditory stimulation did not significantly affect the findings regarding the stationarity of the data. We conclude that the combination of these tests allows a refined analysis of the stationarity of MEG and EEG time series.
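
    The KPSS test ships with statsmodels, and the Phillips-Perron test is available in the third-party arch package; a minimal sketch on a synthetic stand-in for one epoch (the White test additionally needs a regression design and is omitted here):

```python
import numpy as np
from statsmodels.tsa.stattools import kpss
from arch.unitroot import PhillipsPerron

rng = np.random.default_rng(0)
epoch = rng.normal(size=500)   # stand-in for a 500-sample single-trial epoch

# KPSS: H0 = (mean-)stationary; a small p-value rejects stationarity
kpss_stat, kpss_p, *_ = kpss(epoch, regression="c", nlags="auto")

# Phillips-Perron: H0 = unit root; a small p-value rejects nonstationarity
pp = PhillipsPerron(epoch)

print(f"KPSS: stat={kpss_stat:.3f}, p={kpss_p:.3f}")
print(f"PP:   stat={pp.stat:.3f}, p={pp.pvalue:.3f}")
```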

  13. An empirical test of Maslow's theory of need hierarchy using hologeistic comparison by statistical sampling.

    Science.gov (United States)

    Davis-Sharts, J

    1986-10-01

    Maslow's hierarchy of basic human needs provides a major theoretical framework in nursing science. The purpose of this study was to empirically test Maslow's need theory, specifically at the levels of physiological and security needs, using a hologeistic comparative method. Thirty cultures taken from the 60 cultural units in the Human Relations Area Files (HRAF) Probability Sample were found to have data available for examining hypotheses about thermoregulatory (physiological) and protective (security) behaviors practiced prior to sleep onset. The findings provide initial worldwide empirical evidence to support Maslow's need hierarchy.

  14. A systematic review of statistical methods used to test for reliability of medical instruments measuring continuous variables.

    Science.gov (United States)

    Zaki, Rafdzah; Bulgiba, Awang; Nordin, Noorhaire; Azina Ismail, Noor

    2013-06-01

    Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify the statistical methods used to measure the reliability of equipment measuring continuous variables. The study also aims to highlight inappropriate statistical methods used in reliability analyses and their implications for medical practice. In 2010, five electronic databases were searched between 2007 and 2009 to look for reliability studies. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 fitted the inclusion criteria. The Intra-class Correlation Coefficient (ICC) was the most popular method, used in 25 (60%) studies, followed by comparing means (8, or 19%). Out of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and types of ICC used. Most studies (71%) also tested the agreement of instruments. This review finds that the Intra-class Correlation Coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue and to be able to correctly perform analyses in reliability studies.
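
    In Python, a correctly reported ICC analysis (the type of ICC plus its confidence interval, as the review recommends) can be sketched with the third-party pingouin package on hypothetical test-retest data:

```python
import pandas as pd
import pingouin as pg

# hypothetical test-retest scores: 6 subjects measured in two sessions
df = pd.DataFrame({
    "subject": list(range(6)) * 2,
    "session": ["t1"] * 6 + ["t2"] * 6,
    "score":   [10.1, 12.3, 9.8, 11.5, 13.0, 10.7,
                10.4, 12.0, 9.9, 11.9, 12.7, 10.5],
})

icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="session", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])   # report the ICC type and its 95% CI
```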

  15. A Systematic Review of Statistical Methods Used to Test for Reliability of Medical Instruments Measuring Continuous Variables

    Directory of Open Access Journals (Sweden)

    Rafdzah Zaki

    2013-06-01

    Full Text Available Objective(s): Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify the statistical methods used to measure the reliability of equipment measuring continuous variables. The study also aims to highlight inappropriate statistical methods used in reliability analyses and their implications for medical practice. Materials and Methods: In 2010, five electronic databases were searched between 2007 and 2009 to look for reliability studies. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 fitted the inclusion criteria. Results: The Intra-class Correlation Coefficient (ICC) was the most popular method, used in 25 (60%) studies, followed by comparing means (8, or 19%). Out of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and types of ICC used. Most studies (71%) also tested the agreement of instruments. Conclusion: This review finds that the Intra-class Correlation Coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue and to be able to correctly perform analyses in reliability studies.

  16. Designing experiments for maximum information from cyclic oxidation tests and their statistical analysis using half Normal plots

    International Nuclear Information System (INIS)

    Coleman, S.Y.; Nicholls, J.R.

    2006-01-01

    Cyclic oxidation testing at elevated temperatures requires careful experimental design and the adoption of standard procedures to ensure reliable data. This is a major aim of the 'COTEST' research programme. Further, as such tests are both time-consuming and costly in terms of human effort when measurements are taken over a large number of cycles, it is important to gain maximum information from a minimum number of tests (trials). This search for standardisation of cyclic oxidation conditions led to a series of tests to determine the relative effects of cyclic parameters on the oxidation process. Following a review of the available literature, databases and the experience of partners in the COTEST project, the most influential parameters, namely upper dwell temperature (oxidation temperature), upper dwell time (hot time), lower dwell time (cold time) and environment, were investigated in the partners' laboratories. It was decided to test upper dwell temperature at 3 levels, at and equidistant from a reference temperature; upper dwell time at a reference, a higher and a lower time; lower dwell time at a reference and a higher time; and wet and dry environments. Thus an experiment consisting of nine trials was designed according to statistical criteria. The results were analysed statistically to test the main linear and quadratic effects of upper dwell temperature and hot time, and the main effects of lower dwell time (cold time) and environment. The nine trials are a quarter fraction of the 36 possible combinations of parameter levels that could have been studied. The results have been analysed by half-Normal plots, as there are only 2 degrees of freedom for the experimental error variance, which is rather low for a standard analysis of variance. Half-Normal plots give a visual indication of which factors are statistically significant. In this experiment each trial has 3 replications, and the data are analysed in terms of mean mass change, oxidation kinetics…
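
    A half-Normal plot simply orders the absolute effect estimates against half-normal quantiles; effects lying far above the line through the origin stand out as significant. A minimal sketch with purely illustrative effect values (not COTEST results):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

effects = {"T_lin": 4.1, "T_quad": 0.6, "hot_lin": 2.8,
           "hot_quad": 0.4, "cold": 0.5, "env": 1.9}

names = np.array(list(effects))
values = np.abs(list(effects.values()))
order = np.argsort(values)
k = len(values)
q = stats.halfnorm.ppf((np.arange(1, k + 1) - 0.5) / k)  # plotting quantiles

plt.scatter(q, values[order])
for xi, yi, name in zip(q, values[order], names[order]):
    plt.annotate(name, (xi, yi))
plt.xlabel("half-normal quantile")
plt.ylabel("|effect|")
plt.show()
```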

  17. Tests of Statistical Significance and Background Estimation in Gamma-Ray Air Shower Experiments

    Science.gov (United States)

    Fleysher, R.; Fleysher, L.; Nemethy, P.; Mincer, A. I.; Haines, T. J.

    2004-03-01

    In this paper we discuss established methods of significance calculation for testing the existence of a signal in the presence of unknown background and point out the limits of their applicability. We then introduce a new self-consistent scheme for source detection and discuss some of its properties. The method overcomes weaknesses of those used previously and allows incorporating background anisotropies by vetoing existing localized sources and sinks on the sky and compensating for known large-scale anisotropies. By giving an example using the Milagro gamma-ray observatory data, we demonstrate how the method can be employed to relax the detector stability assumption. The new method is universal and can be used with any large field-of-view detector, in which the object of investigation, steady or transient, point or extended, traverses its field of view.
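
    The paper's self-consistent scheme is not reproduced here; as context, a widely used significance for on/off counting in gamma-ray astronomy is the Li & Ma (1983) statistic, sketched below with hypothetical counts and background scaling:

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983), Eq. 17: significance of n_on counts in the source bin
    given n_off background counts in a region scaled by the factor alpha."""
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

print(li_ma_significance(n_on=130, n_off=1000, alpha=0.1))  # ~2.7 sigma
```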

  18. A Test Model for Fluctuation-Dissipation Theorems with Time Periodic Statistics (PREPRINT)

    Science.gov (United States)

    2010-03-09

    $$\frac{\partial\,\mathrm{Cov}(u_2, u_2^{*})}{\partial \gamma_2} = -\int_{-\infty}^{t}\!\int_{-\infty}^{t} (2t-s-r)\, e^{-\gamma_2 (2t-s-r)} \left( \langle \psi(s,t)\psi(r,t)\rangle - \langle\psi(s,t)\rangle\langle\psi(r,t)\rangle \right) f_2(s)\, f_2(r)\, ds\, dr \qquad (88)$$

  19. 77 FR 15101 - Results From Inert Ingredient Test Orders Issued Under EPA's Endocrine Disruptor Screening...

    Science.gov (United States)

    2012-03-14

    List of Subjects: Environmental protection, Endocrine disruptors, Pesticides and pests. Dated: February… Test Orders Issued Under EPA's Endocrine Disruptor Screening Program: New Data Compensation Claims… The orders required recipients to submit specific screening data on hormonal effects under EPA's Endocrine Disruptor…

  20. CUSUM Statistics for Large Item Banks: Computation of Standard Errors. Law School Admission Council Computerized Testing Report. LSAC Research Report Series.

    Science.gov (United States)

    Glas, C. A. W.

    A previous study (Glas, 1998) examined how to evaluate whether adaptive testing data used for online calibration sufficiently fit the item response model. Three approaches were suggested, based on a Lagrange multiplier (LM) statistic, a Wald statistic, and a cumulative sum (CUSUM) statistic, respectively. For all these methods,…
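
    The CUSUM idea itself is compact; a generic one-sided sketch on standardized item-fit residuals (the reference value k and threshold h are hypothetical tuning constants, and this is not Glas's exact statistic):

```python
import numpy as np

def cusum_first_signal(residuals, k=0.5, h=5.0):
    """One-sided upper CUSUM S_t = max(0, S_{t-1} + r_t - k);
    returns the first index where S_t exceeds the threshold h."""
    s = 0.0
    for t, r in enumerate(residuals):
        s = max(0.0, s + r - k)
        if s > h:
            return t
    return None

rng = np.random.default_rng(2)
# in control for 100 administrations, then the item drifts (+1.5 mean shift)
res = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.5, 1, 50)])
print(cusum_first_signal(res))   # typically signals shortly after t = 100
```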

  1. Accuracy statistics in predicting Independent Activities of Daily Living (IADL) capacity with comprehensive and brief neuropsychological test batteries.

    Science.gov (United States)

    Karzmark, Peter; Deutsch, Gayle K

    2018-01-01

    This investigation was designed to determine the predictive accuracy of a comprehensive and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics that included measures of sensitivity, specificity, positive and negative predictive power, and the positive likelihood ratio were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age >55; n = 38). Standardized neuropsychological assessments were administered to all participants and comprised the Halstead-Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. The comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery: although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
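
    These accuracy statistics all derive from a 2x2 table of predicted impairment versus observed IADL capacity; a minimal sketch with hypothetical counts (the paper's raw frequencies are not reproduced here):

```python
def accuracy_statistics(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive powers and positive likelihood
    ratio from the cells of a 2x2 classification table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                        # positive predictive power
    npv = tn / (tn + fn)                        # negative predictive power
    lr_pos = sensitivity / (1 - specificity)    # positive likelihood ratio
    return sensitivity, specificity, ppv, npv, lr_pos

print(accuracy_statistics(tp=30, fp=10, fn=8, tn=69))
```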

  2. Local tolerance testing under REACH: Accepted non-animal methods are not on equal footing with animal tests.

    Science.gov (United States)

    Sauer, Ursula G; Hill, Erin H; Curren, Rodger D; Raabe, Hans A; Kolle, Susanne N; Teubner, Wera; Mehling, Annette; Landsiedel, Robert

    2016-07-01

    In general, no single non-animal method can cover the complexity of any given animal test. Therefore, fixed sets of in vitro (and in chemico) methods have been combined into testing strategies for skin and eye irritation and skin sensitisation testing, with pre-defined prediction models for substance classification. Many of these methods have been adopted as OECD test guidelines. Various testing strategies have been successfully validated in extensive in-house and inter-laboratory studies, but they have not yet received formal acceptance for substance classification. Therefore, under the European REACH Regulation, data from testing strategies can, in general, only be used in so-called weight-of-evidence approaches. While animal testing data generated under the specific REACH information requirements are per se sufficient, the sufficiency of weight-of-evidence approaches can be questioned under the REACH system, and further animal testing can be required. This constitutes an imbalance between the regulatory acceptance of data from approved non-animal methods and animal tests that is not justified on scientific grounds. To ensure that testing strategies for local tolerance testing truly serve to replace animal testing for the REACH registration 2018 deadline (when the majority of existing chemicals have to be registered), clarity on their regulatory acceptance as complete replacements is urgently required. © 2016 FRAME.

  3. Statistical analysis on the fluence factor of surveillance test data of Korean nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Gyeong Geun; Kim, Min Chul; Yoon, Ji Hyun; Lee, Bong Sang; Lim, Sang Yeob; Kwon, Jun Hyun [Nuclear Materials Safety Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2017-06-15

    The transition temperature shift (TTS) of reactor pressure vessel materials is an important factor that determines the lifetime of a nuclear power plant. The prediction of the TTS at the end of a plant's lifespan is calculated based on the equation of Regulatory Guide 1.99 revision 2 (RG1.99/2) from the US. The fluence factor in the equation is expressed as a power function, and the exponent value was determined from early surveillance data in the US. Recently, advanced approaches to estimating the TTS have been proposed in various countries, and Korea is considering the development of a new TTS model. In this study, the TTS trend of the Korean surveillance test results was analyzed using a nonlinear regression model and a mixed-effect model based on the power function. The nonlinear regression model yielded a fluence exponent similar to that of RG1.99/2. The mixed-effect model had a higher value of the exponent and showed superior goodness of fit compared with the nonlinear regression model. Compared with RG1.99/2 and RG1.99/3, the mixed-effect model provided a more accurate prediction of the TTS.
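
    Fitting a power-function fluence factor of the kind described here is a one-liner with scipy; a minimal sketch on hypothetical surveillance points (TTS = CF · f^n, with the chemistry factor CF and exponent n both free):

```python
import numpy as np
from scipy.optimize import curve_fit

fluence = np.array([0.3, 0.8, 1.5, 2.4, 3.6, 5.0])    # 10^19 n/cm^2 (made up)
tts = np.array([18.0, 31.0, 44.0, 55.0, 68.0, 80.0])  # deg C (made up)

def power_law(f, cf, n):
    return cf * f ** n

(cf, n), _ = curve_fit(power_law, fluence, tts, p0=(30.0, 0.5))
print(f"chemistry factor = {cf:.1f}, fluence exponent = {n:.2f}")
```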

  4. Respirator Filter Efficiency Testing Against Particulate and Biological Aerosols Under Moderate to High Flow Rates

    National Research Council Canada - National Science Library

    Richardson, Aaron W; Eshbaugh, Jonathan P; Hofacre, Kent C; Gardner, Paul D

    2006-01-01

    ...) and biological test aerosols under breather flow rates associated with high work rates. The inert test challenges consisted of solid and oil aerosols having nominal diameters ranging from 0.02...

  5. GENUS STATISTICS USING THE DELAUNAY TESSELLATION FIELD ESTIMATION METHOD. I. TESTS WITH THE MILLENNIUM SIMULATION AND THE SDSS DR7

    International Nuclear Information System (INIS)

    Zhang Youcai; Yang Xiaohu; Springel, Volker

    2010-01-01

    We study the topology of cosmic large-scale structure through the genus statistics, using galaxy catalogs generated from the Millennium Simulation and observational data from the latest Sloan Digital Sky Survey Data Release (SDSS DR7). We introduce a new method for constructing galaxy density fields and for measuring the genus statistics of its isodensity surfaces. It is based on a Delaunay tessellation field estimation (DTFE) technique that allows the definition of a piece-wise continuous density field and the exact computation of the topology of its polygonal isodensity contours, without introducing any free numerical parameter. Besides this new approach, we also employ the traditional approaches of smoothing the galaxy distribution with a Gaussian of fixed width, or by adaptively smoothing with a kernel that encloses a constant number of neighboring galaxies. Our results show that the Delaunay-based method extracts the largest amount of topological information. Unlike the traditional approach for genus statistics, it is able to discriminate between the different theoretical galaxy catalogs analyzed here, both in real space and in redshift space, even though they are based on the same underlying simulation model. In particular, the DTFE approach detects with high confidence a discrepancy of one of the semi-analytic models studied here compared with the SDSS data, while the other models are found to be consistent.

  6. Statistical tests for natural selection on regulatory regions based on the strength of transcription factor binding sites

    Directory of Open Access Journals (Sweden)

    Moses Alan M

    2009-12-01

    Full Text Available Abstract Background Although cis-regulatory changes play an important role in evolution, it remains difficult to establish the contribution of natural selection to regulatory differences between species. For protein-coding regions, powerful tests of natural selection have been developed based on comparisons of synonymous and non-synonymous substitutions, and analogous tests for regulatory regions would be of great utility. Results Here, tests for natural selection on regulatory regions are proposed based on nucleotide substitutions that occur in characterized transcription factor binding sites (an important type of functional element within regulatory regions). In the absence of selection, these substitutions will tend to reduce the strength of existing binding sites. On the other hand, purifying selection will act to preserve the binding sites in regulatory regions, while positive selection can act to create or destroy binding sites, as well as change their strength. Using standard models of binding site strength and molecular evolution in the absence of selection, this intuition can be used to develop statistical tests for natural selection. Application of these tests to two well-characterized regulatory regions in Drosophila provides evidence for purifying selection. Conclusion This demonstrates that it is possible to develop tests for selection on regulatory regions based on the specific functional constraints on these sequences.

  7. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated

    OpenAIRE

    Sandurska, Elżbieta; Szulc, Aleksandra

    2016-01-01

    Sandurska Elżbieta, Szulc Aleksandra. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated. Journal of Education Health and Sport. 2016;6(13):275-287. eISSN 2391-8306. DOI http://dx.doi.org/10.5281/zenodo.293762 http://ojs.ukw.edu.pl/index.php/johs/article/view/4278

  8. RILEM technical committee 195-DTD recommendation for test methods for AD and TD of early age concrete Round Robin documentation report : program, test results and statistical evaluation

    CERN Document Server

    Bjøntegaard, Øyvind; Krauss, Matias; Budelmann, Harald

    2015-01-01

    This report presents the Round-Robin (RR) program and test results including a statistical evaluation of the RILEM TC195-DTD committee named “Recommendation for test methods for autogenous deformation (AD) and thermal dilation (TD) of early age concrete”. The task of the committee was to investigate the linear test set-up for AD and TD measurements (Dilation Rigs) in the period from setting to the end of the hardening phase some weeks after. These are the stress-inducing deformations in a hardening concrete structure subjected to restraint conditions. The main task was to carry out an RR program on testing of AD of one concrete at 20 °C isothermal conditions in Dilation Rigs. The concrete part materials were distributed to 10 laboratories (Canada, Denmark, France, Germany, Japan, The Netherlands, Norway, Sweden and USA), and in total 30 tests on AD were carried out. Some supporting tests were also performed, as well as a smaller RR on cement paste. The committee has worked out a test procedure recommenda...

  9. Statistical flaw strength distributions for glass fibres: Correlation between bundle test and AFM-derived flaw size density functions

    International Nuclear Information System (INIS)

    Foray, G.; Descamps-Mandine, A.; R’Mili, M.; Lamon, J.

    2012-01-01

    The present paper investigates glass fibre flaw size distributions. Two commercial fibre grades (HP and HD), mainly used in cement-based composite reinforcement, were studied. Glass fibre fractography is a difficult and time-consuming exercise, and thus is seldom carried out. An approach based on tensile tests on multifilament bundles and examination of the fibre surface by atomic force microscopy (AFM) was used. Bundles of more than 500 single filaments each were tested. Thus a statistically significant database of failure data was built up for the HP and HD glass fibres. Gaussian flaw distributions were derived from the filament tensile strength data or extracted from the AFM images, and the two distributions were compared. Defect sizes computed from raw AFM images agreed reasonably well with those derived from tensile strength data. Finally, the pertinence of a Gaussian distribution was discussed; the alternative Pareto distribution provided a fair approximation when dealing with AFM flaw sizes.
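
    Alongside the Gaussian flaw statistics used in the paper, filament strength data from such bundle tests are routinely summarised by a two-parameter Weibull fit; a minimal sketch on synthetic strengths:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# stand-in for ~500 single-filament strengths from a bundle test (GPa)
strengths = stats.weibull_min.rvs(c=5.0, scale=2.4, size=500, random_state=rng)

# two-parameter Weibull fit (location fixed at zero, as usual for fibres)
m, _, sigma0 = stats.weibull_min.fit(strengths, floc=0.0)
print(f"Weibull modulus m = {m:.2f}, scale sigma0 = {sigma0:.2f} GPa")
```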

  10. Hybrid Statistical Testing for Nuclear Material Accounting Data and/or Process Monitoring Data in Nuclear Safeguards

    Directory of Open Access Journals (Sweden)

    Tom Burr

    2015-01-01

    Full Text Available The aim of nuclear safeguards is to ensure that special nuclear material is used for peaceful purposes. Historically, nuclear material accounting (NMA) has provided the quantitative basis for monitoring for nuclear material loss or diversion, and process monitoring (PM) data is collected by the operator to monitor the process. PM data typically support NMA in various ways, often by providing a basis to estimate some of the in-process nuclear material inventory. We develop options for combining PM residuals and NMA residuals (residual = measurement − prediction), using a hybrid of period-driven and data-driven hypothesis testing. The modified statistical tests can be used on time series of NMA residuals (the NMA residual is the familiar material balance), or on a combination of PM and NMA residuals. The PM residuals can be generated on a fixed time schedule or as events occur.

  11. Towards optimization of chemical testing under REACH: A Bayesian network approach to Integrated Testing Strategies

    NARCIS (Netherlands)

    Jaworska, J.; Gabbert, S.G.M.; Aldenberg, T.

    2010-01-01

    Integrated Testing Strategies (ITSs) are considered tools for guiding resource efficient decision-making on chemical hazard and risk management. Originating in the mid-nineties from research initiatives on minimizing animal use in toxicity testing, ITS development still lacks a methodologically

  12. Statistical analysis and ANN modeling for predicting hydrological extremes under climate change scenarios: the example of a small Mediterranean agro-watershed.

    Science.gov (United States)

    Kourgialas, Nektarios N; Dokou, Zoi; Karatzas, George P

    2015-05-01

    The purpose of this study was to create a modeling management tool for the simulation of extreme flow events under current and future climatic conditions. This tool is a combination of different components and can be applied in complex hydrogeological river basins, where frequent flood and drought phenomena occur. The first component is the statistical analysis of the available hydro-meteorological data. Specifically, principal components analysis was performed in order to quantify the importance of the hydro-meteorological parameters that affect the generation of extreme events. The second component is a prediction-forecasting artificial neural network (ANN) model that simulates, accurately and efficiently, river flow on an hourly basis. This model is based on a methodology that attempts to resolve a very difficult problem related to the accurate estimation of extreme flows. For this purpose, the available measurements (5 years of hourly data) were divided in two subsets: one for the dry and one for the wet periods of the hydrological year. This way, two ANNs were created, trained, tested and validated for a complex Mediterranean river basin in Crete, Greece. As part of the second management component a statistical downscaling tool was used for the creation of meteorological data according to the higher and lower emission climate change scenarios A2 and B1. These data are used as input in the ANN for the forecasting of river flow for the next two decades. The final component is the application of a meteorological index on the measured and forecasted precipitation and flow data, in order to assess the severity and duration of extreme events. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. A Study of Unsupervised Change Detection Based on Test Statistic and Gaussian Mixture Model Using PolSAR SAR Data

    Science.gov (United States)

    Yang, Y.; Liu, W.

    2017-09-01

    To address the shortcomings of existing change detection methods for fully polarimetric SAR, which do not take full advantage of the polarimetric information and suffer from a high false alarm rate, a method based on a test statistic and a Gaussian mixture model is proposed in this paper. For the case of the flood disaster in Wuhan city in 2016, a difference image is obtained from a likelihood-ratio parameter built from the coherency matrix T3 or the covariance matrix C3 of the fully polarimetric SAR data using a test statistic, and the change information is then extracted automatically from the parameters of a Gaussian mixture model (GMM) fitted to the difference image with the expectation-maximization (EM) iterative algorithm. The experimental results show that, compared with the traditional constant false alarm rate (CFAR) method, the overall accuracy of the change detection results is improved and the false alarm rate is reduced, demonstrating the validity and feasibility of the method.
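
    The GMM step can be sketched with scikit-learn: fit a two-component mixture to the difference-image values by EM and label the component with the larger mean as "changed" (the data below are synthetic stand-ins, not the Wuhan scene):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# flattened difference image: many unchanged pixels plus a changed population
diff = np.concatenate([rng.normal(0.2, 0.1, 9000),
                       rng.normal(1.0, 0.3, 1000)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(diff)  # EM fit
labels = gmm.predict(diff)
changed = labels == np.argmax(gmm.means_.ravel())  # larger-mean component
print(f"changed pixels: {changed.sum()} of {diff.size}")
```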

  15. Accurate single nucleotide variant detection in viral populations by combining probabilistic clustering with a statistical test of strand bias

    Science.gov (United States)

    2013-01-01

    Background Deep sequencing is a powerful tool for assessing viral genetic diversity. Such experiments harness the high coverage afforded by next-generation sequencing protocols by treating sequencing reads as a population sample. Distinguishing true single nucleotide variants (SNVs) from sequencing errors remains challenging, however. Current protocols are characterised by high false positive rates, with results requiring time-consuming manual checking. Results By statistical modelling, we show that if multiple variant sites are considered at once, SNVs can be called reliably from high coverage viral deep sequencing data at frequencies lower than the error rate of the sequencing technology, and that SNV calling accuracy increases as true sequence diversity within a read length increases. We demonstrate these findings on two control data sets, showing that SNV detection is more reliable on a high diversity human immunodeficiency virus sample as compared to a moderate diversity sample of hepatitis C virus. Finally, we show that in situations where probabilistic clustering retains false positive SNVs (for instance due to insufficient sample diversity or systematic errors), applying a strand bias test based on a beta-binomial model of forward read distribution can improve precision, with negligible cost to true positive recall. Conclusions By combining probabilistic clustering (implemented in the program ShoRAH) with a statistical test of strand bias, SNVs may be called from deeply sequenced viral populations with high accuracy. PMID:23879730
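
    The strand-bias step can be sketched with scipy's beta-binomial distribution; the shape parameters below are hypothetical stand-ins for a calibrated model of forward-read counts, not ShoRAH's actual values:

```python
from scipy.stats import betabinom

def strand_bias_pvalue(fwd, total, a=10.0, b=10.0):
    """Two-sided tail probability of observing `fwd` forward-strand reads out
    of `total` variant-supporting reads under a beta-binomial(a, b) model."""
    p_lower = betabinom.cdf(fwd, total, a, b)
    p_upper = betabinom.sf(fwd - 1, total, a, b)
    return min(1.0, 2.0 * min(p_lower, p_upper))

# a candidate SNV supported by 58 of 60 reads on the forward strand
print(strand_bias_pvalue(fwd=58, total=60))   # small p -> likely an artifact
```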

  16. FLUCTUATION IN PENSION FUND ASSETS PRIVATELY MANAGED UNDER THE INFLUENCE OF CERTAIN FACTORS. STATISTICAL STUDY IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Dracea Raluca

    2011-07-01

    Full Text Available At the international level, the economic and financial crisis has caused a diminution in the asset value of compulsory pension funds, reflecting a reallocation of funds towards alternative or low-risk investments. The present paper examines whether the net asset value of privately managed pension funds in Romania is affected by certain influence factors, in direct correlation with different asset allocation strategies of pension funds. In the literature, many studies have analyzed the fluctuation of pension fund assets and better reallocations of their investments in order to improve efficiency. Understanding the fluctuation of privately administered pension fund net assets is highly important, firstly because of its effects on the increase and decrease of the values invested in the insured persons' accounts, under the circumstances of constantly maintained contributions and, implicitly, on the results achieved through these investments. The research methodology consists in testing five variables (currency exchange rate, credit interest rate, bank deposit interest rate, reference interest rate and the value of the stock exchange market index, the BET-C index) by means of multiple linear regression. The conclusion is that only two of these factors, namely the currency exchange rate and the reference interest rate, influence the net asset value of privately managed pension funds (the second pillar), one in direct and the other in indirect correlation. In order to neutralize the effects generated by a diminution of the net asset value of privately managed pension funds over a short time horizon, a dynamic investment mix able to adapt to fluctuations of the influence factors should be elaborated. Thus, new opportunities will be generated to achieve the efficiency of pension funds and to prevent the diminution of the value of insured individuals…
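
    The regression itself is a standard multiple OLS fit; a minimal sketch with synthetic monthly data using the five variables named in the abstract (all values are made up):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 120
X = pd.DataFrame({
    "fx_rate":        rng.normal(4.4, 0.2, n),
    "credit_rate":    rng.normal(7.0, 0.5, n),
    "deposit_rate":   rng.normal(3.0, 0.4, n),
    "reference_rate": rng.normal(2.5, 0.3, n),
    "bet_c":          rng.normal(5000, 400, n),
})
nav = 2.0 * X["fx_rate"] - 1.5 * X["reference_rate"] + rng.normal(0, 0.5, n)

model = sm.OLS(nav, sm.add_constant(X)).fit()
print(model.summary())   # inspect which coefficients are significant
```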

  17. Understanding Statistics - Cancer Statistics

    Science.gov (United States)

    Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

  18. TESTING MODELS OF MAGNETIC FIELD EVOLUTION OF NEUTRON STARS WITH THE STATISTICAL PROPERTIES OF THEIR SPIN EVOLUTIONS

    International Nuclear Information System (INIS)

    Zhang Shuangnan; Xie Yi

    2012-01-01

    We test models for the evolution of neutron star (NS) magnetic fields (B). Our model for the evolution of the NS spin is taken from an analysis of pulsar timing noise presented by Hobbs et al. We first test the standard model of a pulsar's magnetosphere, in which B does not change with time and magnetic dipole radiation is assumed to dominate the pulsar's spin-down. We find that this model fails to predict both the magnitudes and signs of the second derivatives of the spin frequencies (ν̈). We then construct a phenomenological model of the evolution of B, which contains a long-term decay (LTD) modulated by short-term oscillations; a pulsar's spin is thus modified by its B-evolution. We find that an exponential LTD is not favored by the observed statistical properties of ν̈ for young pulsars and fails to explain the fact that ν̈ is negative for roughly half of the old pulsars. A simple power-law LTD can explain all the observed statistical properties of ν̈. Finally, we discuss some physical implications of our results for models of the B-decay of NSs and suggest that reliable determination of the true ages of many young NSs is needed in order to further constrain the physical mechanisms of their B-decay. Our model can be further tested with the measured evolutions of ν̇ and ν̈ for an individual pulsar; the decay index, oscillation amplitude and period can also be determined this way for the pulsar.

  19. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    Science.gov (United States)

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
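
    The flipping trick that lets right-censoring survival software handle left-censored concentrations is easy to sketch in Python with the third-party lifelines package (the values and detection flags below are hypothetical):

```python
import numpy as np
from lifelines import KaplanMeierFitter

conc = np.array([0.5, 1.2, 0.8, 2.5, 0.3, 1.9, 0.7])   # concentrations
detected = np.array([1, 1, 0, 1, 0, 1, 1], dtype=bool)  # 0 = below detection limit

# flip about a constant larger than the maximum so "<DL" becomes right-censored
M = conc.max() + 1.0
kmf = KaplanMeierFitter().fit(M - conc, event_observed=detected)

# flip back: M minus the K-M median of the flipped data is the median conc.
print(M - kmf.median_survival_time_)
```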

  20. Exploiting the full power of temporal gene expression profiling through a new statistical test: Application to the analysis of muscular dystrophy data

    Directory of Open Access Journals (Sweden)

    Turk Rolf

    2006-04-01

    Full Text Available Abstract Background The identification of biologically interesting genes in a temporal expression profiling dataset is challenging and complicated by high levels of experimental noise. Most statistical methods used in the literature do not fully exploit the temporal ordering in the dataset and are not suited to the case where temporal profiles are measured for a number of different biological conditions. We present a statistical test that makes explicit use of the temporal order in the data by fitting polynomial functions to the temporal profile of each gene and for each biological condition. A Hotelling T2-statistic is derived to detect the genes for which the parameters of these polynomials are significantly different from each other. Results We validate the temporal Hotelling T2-test on muscular gene expression data from four mouse strains which were profiled at different ages: dystrophin-, beta-sarcoglycan and gamma-sarcoglycan deficient mice, and wild-type mice. The first three are animal models for different muscular dystrophies. Extensive biological validation shows that the method is capable of finding genes with temporal profiles significantly different across the four strains, as well as identifying potential biomarkers for each form of the disease. The added value of the temporal test compared to an identical test which does not make use of temporal ordering is demonstrated via a simulation study, and through confirmation of the expression profiles from selected genes by quantitative PCR experiments. The proposed method maximises the detection of the biologically interesting genes, whilst minimising false detections. Conclusion The temporal Hotelling T2-test is capable of finding relatively small and robust sets of genes that display different temporal profiles between the conditions of interest. The test is simple, it can be used on gene expression data generated from any experimental design and for any number of conditions, and it
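
    A simplified two-condition version of the idea can be sketched in a few lines: fit a polynomial per replicate profile, then compare the coefficient vectors with a two-sample Hotelling T² test (the ages, profiles and group sizes below are synthetic stand-ins, not the paper's four-strain test):

```python
import numpy as np
from scipy import stats

def hotelling_t2_pvalue(A, B):
    """Two-sample Hotelling T^2 test on rows of A and B (one polynomial
    coefficient vector per replicate)."""
    n1, p = A.shape
    n2 = B.shape[0]
    diff = A.mean(axis=0) - B.mean(axis=0)
    S = ((n1 - 1) * np.cov(A.T) + (n2 - 1) * np.cov(B.T)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S, diff)
    f = t2 * (n1 + n2 - p - 1) / (p * (n1 + n2 - 2))
    return stats.f.sf(f, p, n1 + n2 - p - 1)

rng = np.random.default_rng(6)
ages = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
wt = rng.normal(0.5 * ages, 0.3, size=(8, 5))                    # wild-type-like
dys = rng.normal(0.5 * ages + 0.05 * ages**2, 0.3, size=(8, 5))  # dystrophic-like
coef_wt = np.polyfit(ages, wt.T, deg=2).T     # rows = replicates
coef_dys = np.polyfit(ages, dys.T, deg=2).T
print(hotelling_t2_pvalue(coef_wt, coef_dys))  # small p -> profiles differ
```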

  1. Statistical comparative study on a combined radioiodine test and extended protirelin test and correlation with the common in vitro parameters of hyroid function

    International Nuclear Information System (INIS)

    Kraemer, H.A.

    1982-01-01

    Using the data of 339 patients, the following parameters of thyroid function were statistically evaluated: the in vitro parameters ET3U, TT4(D), FT4 index and PB127I, and the radioiodine test with determination of PB131I before i.v. injection of 400 μg protirelin and 120 minutes after the injection. There was no correlation between the percentage change of the PB131I level 120 min after protirelin administration and the percentage change of the TSH level 30 min after protirelin administration. The accuracies of the in vitro parameters ET3U, TT4(D) and FT4 index on the one hand and the extended protirelin test on the other hand were compared. (orig./MG) [de]

  2. Test-retest reliability of a field-based anaerobic threshold test: examining the between-day test-retest reliability of an anaerobic threshold field test.

    OpenAIRE

    Munkebye, Øyvind Bruhn

    2013-01-01

    The purpose of this study was to test the test-retest reliability of a field-based anaerobic threshold test using heart rate measurements alone. The results showed a very high correlation between anaerobic threshold test 1 and anaerobic threshold test 2.

  3. Homogeneity tests for variances and mean test under heterogeneity conditions in a single way ANOVA method

    International Nuclear Information System (INIS)

    Morales P, J.R.; Avila P, P.

    1996-01-01

    If we consider the maximum permissible levels shown for the case of oysters, collecting oysters at the four stations of the El Chijol Channel (Veracruz, Mexico), as well as along the channel itself, is forbidden, because the metal concentrations studied exceed these limits. In this case the application of Welch tests was not necessary. For the water hyacinth, the means of the treatments were unequal for Fe, Cu, Ni and Zn. This case is more illustrative, as the conclusion has been reached through the application of Welch tests to treatments with heterogeneous variances. (Author)
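
    In Python, the usual workflow (check homogeneity of variances, then fall back on Welch's heteroscedasticity-robust comparison of means) can be sketched with scipy and the third-party pingouin package on hypothetical station data:

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(7)
data = pd.DataFrame({
    "conc": np.concatenate([rng.normal(1.0, 0.1, 20),
                            rng.normal(1.2, 0.4, 20),
                            rng.normal(1.5, 0.8, 20)]),
    "station": np.repeat(["S1", "S2", "S3"], 20),
})

groups = [g["conc"].to_numpy() for _, g in data.groupby("station")]
print(stats.levene(*groups))   # H0: variances are homogeneous
print(pg.welch_anova(data=data, dv="conc", between="station"))  # robust means test
```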

  4. Test of safety injection supply by diesel generator under reactor vessel closed condition

    International Nuclear Information System (INIS)

    Zhang Hao; Bi Fengchuan; Che Junxia; Zhang Jianwen; Yang Bo

    2014-01-01

    The paper describes the test of diesel generator full-load take-up under conditions of actual safety injection with the reactor vessel closed in Ningde nuclear power project unit 1. The test result met the design criteria; moreover, the test was removed from the critical path of the project schedule, which saved considerable cost. (authors)

  5. Validity and reliability of new agility test among elite and subelite under 14-soccer players.

    Directory of Open Access Journals (Sweden)

    Younés Hachana

    Full Text Available BACKGROUND: Agility is a determinant component of soccer performance. This study aimed to evaluate the reliability and sensitivity of a "Modified Illinois change of direction test" (MICODT) in ninety-five U-14 soccer players. METHODS: A total of 95 U-14 soccer players (mean ± SD: age: 13.61 ± 1.04 years; body mass: 30.52 ± 4.54 kg; height: 1.57 ± 0.1 m) from a professional and a semi-professional soccer academy participated in this study. Sixty of them took part in the reliability analysis and thirty-two in the sensitivity analysis. RESULTS: The intraclass correlation coefficient (ICC) assessing the relative reliability of the MICODT was 0.99, and its standard error of measurement (SEM) for absolute reliability was <5% (1.24%). The MICODT's capacity to detect change is "good": its SEM (0.10 s) was ≤ SWC (0.33 s). The MICODT is significantly correlated with the Illinois change of direction speed test (ICODT) (r = 0.77; p<0.0001). The ICODT's MDC95 (0.64 s) was about twice the MICODT's MDC95 (0.28 s), indicating that the MICODT presents a better ability to detect true changes than the ICODT. The MICODT provided good sensitivity, since elite U-14 soccer players were better than non-elite ones on the MICODT (p = 0.005; dz = 1.01 [large]). This was supported by an area under the ROC curve of 0.77 (CI 95%, 0.59 to 0.89, p<0.0008). The difference observed between these two groups on the ICODT was not statistically significant (p = 0.14; dz = 0.51 [small]), showing poor discriminant ability. CONCLUSION: The MICODT can be considered a more suitable protocol for assessing agility performance level than the ICODT in U-14 soccer players.

  6. Statistics Clinic

    Science.gov (United States)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  7. Dynamic PMU Compliance Test under C37.118.1aTM-2014

    DEFF Research Database (Denmark)

    Ghiga, Radu; Wu, Qiuwei; Martin, K.

    2015-01-01

    This paper presents a flexible testing methodology and the dynamic compliance of PMUs as per the new C37.118.1a amendment published in 2014. The test platform consists of a test signal generator, a Doble F6150 amplifier, the PMUs under test, and a PMU test result analysis kit. The Doble amplifier is used to provide three-phase voltage and current injections to the PMUs. Three PMUs from different vendors were tested simultaneously in order to provide a fair comparison of the devices. The new 2014 amendment introduces significant changes over the C37.118.1-2011 standard regarding the dynamic tests…

  8. Frequency of the adequate use of statistical tests of hypothesis in original articles published in the Revista Brasileira de Anestesiologia between January 2008 and December 2009.

    Science.gov (United States)

    Barbosa, Fabiano Timbó; de Souza, Diego Agra

    2010-01-01

    Statistical analysis is necessary for adequate evaluation of an original article by the reader, allowing him or her to better visualize and comprehend the results. The objective of the present study was to determine the frequency of the adequate use of statistical tests in original articles published in the Revista Brasileira de Anestesiologia from January 2008 to December 2009. Original articles published in the Revista Brasileira de Anestesiologia between January 2008 and December 2009 were selected: 76 articles containing a total of 179 statistical tests. The use of statistical tests was deemed appropriate when the selection of the tests was adequate for continuous and categorical variables and for parametric and non-parametric tests, the correction factor was described when the use of multiple comparisons was reported, and the specific use of a statistical test for the analysis of one variable was mentioned. The most frequently used statistical tests were: Chi-square, 20.11%; Student t test, 19.55%; ANOVA, 10.05%; and Fisher exact test, 9.49%. The frequency of adequate use of statistical tests was 56.42% (95% CI 49.16% to 63.68%), of erroneous use 13.41% (95% CI 8.42% to 18.40%), and of inconclusive results 30.16% (95% CI 23.44% to 36.88%). The frequency of adequate use of statistical tests in the articles published in the Revista Brasileira de Anestesiologia between January 2008 and December 2009 was 56.42%. Copyright © 2010 Elsevier Editora Ltda. All rights reserved.
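
    The reported interval is reproducible as a normal-approximation confidence interval for a proportion (101 of the 179 tests judged adequate gives 56.42%); a one-line check with statsmodels:

```python
from statsmodels.stats.proportion import proportion_confint

low, high = proportion_confint(count=101, nobs=179, alpha=0.05, method="normal")
print(f"95% CI: {low:.2%} to {high:.2%}")   # ~49.16% to ~63.68%
```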

  9. Statistical test of a null hypothesis: Taser shocks have not caused or contributed to subsequent in-custody deaths

    Science.gov (United States)

    Lundquist, Marjorie

    2009-03-01

    Since 1999 over 425 in-custody deaths have occurred in the USA after law enforcement officers (LEOs) used an M26 or X26 Taser, causing Amnesty International and the ACLU to call for a moratorium on Taser use until its physiological effects on people have been better studied. A person's Taser dose is defined as the total duration (in seconds) of all Taser shocks received by that person during a given incident. Utilizing the concept of Taser dose for these deaths, TASER International's claim of Taser safety can be treated as a null hypothesis and its validity scientifically tested. Such a test using chi-square as the test statistic is presented. It shows that the null hypothesis should be rejected; i.e., model M26 and X26 Tasers are capable of producing lethal effects non-electrically and so have played a causal or contributory role in a great many of the in-custody deaths following their use. This implies that the Taser is a lethal weapon, and that LEOs have not been adequately trained in its safe use!
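
    The abstract gives neither the dose bins nor the expected counts, so the sketch below only illustrates the general shape of such a test: a chi-square goodness-of-fit comparison of observed death counts across hypothetical Taser-dose bins against the counts a dose-independent null would predict (all numbers invented).

    ```python
    from scipy.stats import chisquare

    # Hypothetical in-custody death counts binned by Taser dose (seconds);
    # under a "dose is irrelevant" null the 425 deaths spread uniformly.
    observed = [160, 120, 85, 60]
    expected = [106.25, 106.25, 106.25, 106.25]

    stat, p = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-square = {stat:.2f}, p = {p:.2g}")  # small p rejects the null
    ```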

  10. Experimental observations of Lagrangian sand grain kinematics under bedload transport: statistical description of the step and rest regimes

    Science.gov (United States)

    Guala, M.; Liu, M.

    2017-12-01

    The kinematics of sediment particles is investigated by non-intrusive imaging methods to provide a statistical description of bedload transport in conditions near the threshold of motion. In particular, we focus on the cyclic transition between motion and rest regimes to quantify the waiting time statistics inferred to be responsible for anomalous diffusion, and so far elusive. Despite obvious limitations in the spatio-temporal domain of the observations, we are able to identify the probability distributions of the particle step time and length, velocity, acceleration, waiting time, and thus distinguish which quantities exhibit well converged mean values, based on the thickness of their respective tails. The experimental results shown here for four different transport conditions highlight the importance of the waiting time distribution and represent a benchmark dataset for the stochastic modeling of bedload transport.

  11. Principal Components of Superhigh-Dimensional Statistical Features and Support Vector Machine for Improving Identification Accuracies of Different Gear Crack Levels under Different Working Conditions

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2015-01-01

    Full Text Available Gears are widely used in gearboxes to transmit power from one shaft to another. Gear crack is one of the most frequent gear fault modes found in industry. Identification of different gear crack levels is beneficial in preventing unexpected machine breakdown and reducing economic loss, because gear cracks lead to gear tooth breakage. In this paper, an intelligent fault diagnosis method for identification of different gear crack levels under different working conditions is proposed. First, superhigh-dimensional statistical features are extracted from the continuous wavelet transform at different scales. The number of statistical features extracted by the proposed method is 920, making the feature set superhigh dimensional. To reduce the dimensionality of the extracted statistical features and generate new significant low-dimensional statistical features, a simple and effective method called principal component analysis is used. To further improve identification accuracies of different gear crack levels under different working conditions, a support vector machine is employed. Three experiments are investigated to show the superiority of the proposed method. Comparisons with other existing gear crack level identification methods are conducted. The results show that the proposed method has the highest identification accuracies among all existing methods.
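
    A minimal scikit-learn sketch of the pipeline described above, with random stand-in data in place of the 920 wavelet-based statistical features (the feature extraction itself is not shown):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-in data: 200 signals x 920 features, 4 gear crack levels
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 920))
    y = rng.integers(0, 4, size=200)

    # Standardize, project onto leading principal components, classify with an SVM
    clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```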

  12. Classification based hypothesis testing in neuroscience: Below-chance level classification rates and overlooked statistical properties of linear parametric classifiers.

    Science.gov (United States)

    Jamalabadi, Hamidreza; Alizadeh, Sarah; Schönauer, Monika; Leibold, Christian; Gais, Steffen

    2016-05-01

    Multivariate pattern analysis (MVPA) has recently become a popular tool for data analysis. Often, classification accuracy as quantified by correct classification rate (CCR) is used to illustrate the size of the effect under investigation. However, we show that in low sample size (LSS), low effect size (LES) data, which is typical in neuroscience, the distribution of CCRs from cross-validation of linear MVPA is asymmetric and can show classification rates considerably below what would be expected from chance classification. Conversely, the mode of the distribution in these cases is above expected chance levels, leading to a spuriously high number of above chance CCRs. This unexpected distribution has strong implications when using MVPA for hypothesis testing. Our analyses warrant the conclusion that CCRs do not well reflect the size of the effect under investigation. Moreover, the skewness of the null-distribution precludes the use of many standard parametric tests to assess significance of CCRs. We propose that MVPA results should be reported in terms of P values, which are estimated using randomization tests. Also, our results show that cross-validation procedures using a low number of folds, e.g. twofold, are generally more sensitive, even though the average CCRs are often considerably lower than those obtained using a higher number of folds. Hum Brain Mapp 37:1842-1855, 2016. © 2016 Wiley Periodicals, Inc.
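
    A sketch of the label-randomization test the authors recommend for attaching P values to CCRs, here with a linear classifier, twofold cross-validation, and invented low-sample-size data (scikit-learn's permutation_test_score provides a ready-made equivalent):

    ```python
    import numpy as np
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.svm import LinearSVC

    def ccr_p_value(X, y, n_perm=200, seed=0):
        """P-value for a cross-validated CCR estimated by permuting labels."""
        rng = np.random.default_rng(seed)
        cv = StratifiedKFold(n_splits=2)  # low fold counts were more sensitive
        observed = cross_val_score(LinearSVC(), X, y, cv=cv).mean()
        null = np.array([
            cross_val_score(LinearSVC(), X, rng.permutation(y), cv=cv).mean()
            for _ in range(n_perm)
        ])
        # one-sided p-value with the +1 correction used in randomization tests
        return observed, (np.sum(null >= observed) + 1) / (n_perm + 1)

    rng = np.random.default_rng(2)
    X, y = rng.normal(size=(24, 50)), np.repeat([0, 1], 12)
    print(ccr_p_value(X, y))
    ```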

  13. A Correlated Study of the Response of a Satellite to Acoustic Radiation Using Statistical Energy Analysis and Acoustic Test Data

    International Nuclear Information System (INIS)

    CAP, JEROME S.; TRACEY, BRIAN

    1999-01-01

    Aerospace payloads, such as satellites, are subjected to vibroacoustic excitation during launch. Sandia's MTI satellite has recently been certified to this environment using a combination of base input random vibration and reverberant acoustic noise. The initial choices for the acoustic and random vibration test specifications were obtained from the launch vehicle Interface Control Document (ICD). In order to tailor the random vibration levels for the laboratory certification testing, it was necessary to determine whether vibration energy was flowing across the launch vehicle interface from the satellite to the launch vehicle or in the other direction. For frequencies below 120 Hz this issue was addressed using response limiting techniques based on results from the Coupled Loads Analysis (CLA). However, since the CLA Finite Element Analysis (FEA) model was only correlated for frequencies below 120 Hz, Statistical Energy Analysis (SEA) was considered to be a better choice for predicting the direction of the energy flow for frequencies above 120 Hz. The existing SEA model of the launch vehicle had been developed using the VibroAcoustic Payload Environment Prediction System (VAPEPS) computer code [1]. Therefore, the satellite would have to be modeled using VAPEPS as well. As is the case for any computational model, the confidence in its predictive capability increases if one can correlate a sample prediction against experimental data. Fortunately, Sandia had the ideal data set for correlating an SEA model of the MTI satellite--the measured response of a realistic assembly to a reverberant acoustic test that was performed during MTI's qualification test series. The first part of this paper will briefly describe the VAPEPS modeling effort and present the results of the correlation study for the VAPEPS model. The second part of this paper will present the results from a study that used a commercial SEA software package [2] to study the effects of in-plane modes and to evaluate

  14. Three Statistical Testing Procedures in Logistic Regression: Their Performance in Differential Item Functioning (DIF) Investigation. Research Report. ETS RR-09-35

    Science.gov (United States)

    Paek, Insu

    2009-01-01

    Three statistical testing procedures well known in the maximum likelihood approach are the Wald, likelihood ratio (LR), and score tests. Although well known, the application of these three testing procedures in the logistic regression method to investigate differential item functioning (DIF) has not yet been rigorously examined. Employing a variety of…
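
    For orientation, the sketch below contrasts two of the three procedures (Wald and LR) for detecting uniform DIF with a logistic regression of item correctness on a matching score and group membership; the data are simulated, and the score test, which evaluates the full-model gradient at the null fit, is omitted for brevity:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(3)
    n = 500
    total = rng.normal(0, 1, n)        # matching variable (total test score)
    group = rng.integers(0, 2, n)      # focal vs. reference group
    logit_p = -0.2 + 1.1 * total + 0.5 * group  # 0.5 acts as a uniform DIF effect
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

    full = sm.Logit(y, sm.add_constant(np.column_stack([total, group]))).fit(disp=0)
    null = sm.Logit(y, sm.add_constant(total)).fit(disp=0)

    lr_stat = 2 * (full.llf - null.llf)   # likelihood ratio test, df = 1
    print("LR p =", chi2.sf(lr_stat, df=1), "Wald p =", full.pvalues[2])
    ```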

  15. An Efficient Stepwise Statistical Test to Identify Multiple Linked Human Genetic Variants Associated with Specific Phenotypic Traits.

    Directory of Open Access Journals (Sweden)

    Iksoo Huh

    Full Text Available Recent advances in genotyping methodologies have allowed genome-wide association studies (GWAS) to accurately identify genetic variants that associate with common or pathological complex traits. Although most GWAS have focused on associations with single genetic variants, joint identification of multiple genetic variants, and how they interact, is essential for understanding the genetic architecture of complex phenotypic traits. Here, we propose an efficient stepwise method based on the Cochran-Mantel-Haenszel (CMH) test for stratified categorical data to identify causal joint multiple genetic variants in GWAS. This method combines the CMH statistic with a stepwise procedure to detect multiple genetic variants associated with specific categorical traits, using a series of associated I × J contingency tables and a null hypothesis of no phenotype association. Through a new stratification scheme based on the sum of minor allele count criteria, we make the method feasible for GWAS data having sample sizes of several thousand. We also examine the properties of the proposed stepwise method via simulation studies, and show that the stepwise CMH test performs better than other existing methods (e.g., logistic regression and detection of associations by Markov blanket) for identifying multiple genetic variants. Finally, we apply the proposed approach to two genomic sequencing datasets to detect linked genetic variants associated with bipolar disorder and obesity, respectively.
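
    A minimal illustration of the CMH building block that the stepwise search wraps, using statsmodels on invented 2x2 case-control tables, one per stratum (the paper's generalization to I × J tables and the stepwise selection itself are not shown):

    ```python
    import numpy as np
    from statsmodels.stats.contingency_tables import StratifiedTable

    # Invented 2x2 tables (carrier vs. non-carrier x case vs. control),
    # one table per stratum of the minor-allele-count stratification.
    tables = [
        np.array([[30, 20], [15, 35]]),
        np.array([[25, 25], [10, 40]]),
        np.array([[40, 10], [28, 22]]),
    ]
    result = StratifiedTable(tables).test_null_odds(correction=True)
    print(result.statistic, result.pvalue)  # CMH test of no association
    ```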

  16. Model test study of evaporation mechanism of sand under constant atmospheric condition

    OpenAIRE

    CUI, Yu Jun; DING, Wenqi; SONG, Weikang

    2014-01-01

    The evaporation mechanism of Fontainebleau sand is studied using a large-scale model chamber. First, an evaporation test on a layer of water above the sand surface is performed under various atmospheric conditions, validating the performance of the chamber and the calculation method of the actual evaporation rate by comparing the calculated and measured cumulative evaporations. Second, an evaporation test on sand without a water layer is conducted under constant atmospheric conditions. Both the evoluti...

  17. Validity of tests under covariate-adaptive biased coin randomization and generalized linear models.

    Science.gov (United States)

    Shao, Jun; Yu, Xinxin

    2013-12-01

    Some covariate-adaptive randomization methods have been used in clinical trials for a long time, but little theoretical work had been done about testing hypotheses under covariate-adaptive randomization until Shao et al. (2010), who provided a theory with detailed discussion for responses under linear models. In this article, we establish some asymptotic results for covariate-adaptive biased coin randomization under generalized linear models with possibly unknown link functions. We show that the simple t-test without using any covariate is conservative under covariate-adaptive biased coin randomization in terms of its Type I error rate, and that a valid test using the bootstrap can be constructed. This bootstrap test, utilizing covariates in the randomization scheme, is shown to be asymptotically as efficient as Wald's test correctly using covariates in the analysis. Thus, the efficiency loss due to not using covariates in the analysis can be recovered by utilizing covariates in covariate-adaptive biased coin randomization. Our theory is illustrated with the two most popular types of discrete outcomes, binary responses and event counts under the Poisson model, and with exponentially distributed continuous responses. We also show that an alternative simple test without using any covariate under the Poisson model has an inflated Type I error rate under simple randomization, but is valid under covariate-adaptive biased coin randomization. Effects on the validity of tests due to model misspecification are also discussed. Simulation studies of the Type I errors and powers of several tests are presented for both discrete and continuous responses. © 2013, The International Biometric Society.
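
    As a rough sketch of the bootstrap idea (simplified to plain i.i.d. resampling; the paper's valid test additionally has to respect the covariate-adaptive randomization scheme), a two-sample bootstrap test for event counts might look like:

    ```python
    import numpy as np

    def bootstrap_two_sample_test(y_treat, y_ctrl, n_boot=2000, seed=0):
        """Two-sided bootstrap test of equal means: centre the bootstrap
        distribution of the mean difference and compare it with the observed."""
        rng = np.random.default_rng(seed)
        obs = y_treat.mean() - y_ctrl.mean()
        diffs = np.array([
            rng.choice(y_treat, y_treat.size).mean()
            - rng.choice(y_ctrl, y_ctrl.size).mean()
            for _ in range(n_boot)
        ])
        return np.mean(np.abs(diffs - obs) >= np.abs(obs))

    rng = np.random.default_rng(4)
    print(bootstrap_two_sample_test(rng.poisson(2.0, 100), rng.poisson(2.2, 100)))
    ```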

  18. Are specific testing protocols required for organic onion varieties? Analysis of onion variety testing under conventional and organic growing conditions

    NARCIS (Netherlands)

    Lammerts Van Bueren, E.; Osman, A.M.; Tiemens-Hulscher, M.; Struik, P.C.; Burgers, S.L.G.E.; Broek, van den R.C.F.M.

    2012-01-01

    Organic growers need information on variety performance under their growing conditions. A 4-year onion variety research project was carried out to investigate whether setting up a variety testing system combining conventional and organic variety trials is feasible and efficient rather than

  19. Calculation of Tajima's D and other neutrality test statistics from low depth next-generation sequencing data

    DEFF Research Database (Denmark)

    Korneliussen, Thorfinn Sand; Moltke, Ida; Albrechtsen, Anders

    2013-01-01

    A number of different statistics are used for detecting natural selection using DNA sequencing data, including statistics that are summaries of the frequency spectrum, such as Tajima's D. These statistics are now often being applied in the analysis of Next Generation Sequencing (NGS) data. However......, estimates of frequency spectra from NGS data are strongly affected by low sequencing coverage; the inherent technology dependent variation in sequencing depth causes systematic differences in the value of the statistic among genomic regions....
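
    For orientation, Tajima's D can be computed from called genotypes with scikit-allel as below (invented data). Note this does not address the paper's central point: with low-coverage NGS the genotype calls themselves bias the frequency spectrum, which is why genotype-likelihood-based estimators are preferred there.

    ```python
    import numpy as np
    import allel  # scikit-allel

    # Invented genotypes: 100 variants x 20 diploid individuals
    rng = np.random.default_rng(5)
    gt = allel.GenotypeArray(rng.integers(0, 2, size=(100, 20, 2)))
    ac = gt.count_alleles()

    print(allel.tajima_d(ac))  # summary of the site frequency spectrum
    ```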

  20. 16 CFR 1610.35 - Procedures for testing special types of textile fabrics under the standard.

    Science.gov (United States)

    2010-01-01

    Commercial Practices; Consumer Product Safety Commission; Flammable Fabrics Act Regulations; Standard for the Flammability of Clothing Textiles; Rules and Regulations. § 1610.35 Procedures for testing special types of textile fabrics under the standard. (a) Fabric...

  1. Test method research on weakening interface strength of steel-concrete under cyclic loading

    Science.gov (United States)

    Liu, Ming-wei; Zhang, Fang-hua; Su, Guang-quan

    2018-02-01

    The mechanical properties of the steel-concrete interface under cyclic loading are key factors affecting the rule of horizontal load transfer, the calculation of bearing capacity, and cumulative horizontal deformation. The cyclic shear test is an effective method to study the strength reduction of the steel-concrete interface. A test system composed of a large repeated direct shear apparatus, a hydraulic servo system, a data acquisition system, and test control software was independently designed, and a complete test method, covering specimen preparation, instrument preparation, and the loading procedure, is put forward. The validity of the test method is verified with a set of test results. The test system, and the test method based on it, provide a reference for experimental studies of the mechanical properties of the steel-concrete interface.

  2. Test Equating under the NEAT Design: A Necessary Condition for Anchor Items

    Science.gov (United States)

    Raykov, Tenko

    2010-01-01

    Mroch, Suh, Kane, & Ripkey (2009); Suh, Mroch, Kane, & Ripkey (2009); and Kane, Mroch, Suh, & Ripkey (2009) provided elucidating discussions on critical properties of linear equating methods under the nonequivalent groups with anchor test (NEAT) design. In this popular equating design, two test forms are administered to different…

  3. Collaborative testing for key-term definitions under representative conditions: Efficiency costs and no learning benefits.

    Science.gov (United States)

    Wissman, Kathryn T; Rawson, Katherine A

    2018-01-01

    Students are expected to learn key-term definitions across many different grade levels and academic disciplines. Thus, investigating ways to promote understanding of key-term definitions is of critical importance for applied purposes. A recent survey showed that learners report engaging in collaborative practice testing when learning key-term definitions, with outcomes also shedding light on the way in which learners report engaging in collaborative testing in real-world contexts (Wissman & Rawson, 2016, Memory, 24, 223-239). However, no research has directly explored the effectiveness of engaging in collaborative testing under representative conditions. Accordingly, the current research evaluates the costs (with respect to efficiency) and the benefits (with respect to learning) of collaborative testing for key-term definitions under representative conditions. In three experiments (ns = 94, 74, 95), learners individually studied key-term definitions and then completed retrieval practice, which occurred either individually or collaboratively (in dyads). Two days later, all learners completed a final individual test. Results from Experiments 1-2 showed a cost (with respect to efficiency) and no benefit (with respect to learning) of engaging in collaborative testing for key-term definitions. Experiment 3 evaluated a theoretical explanation for why collaborative benefits do not emerge under representative conditions. Collectively, outcomes indicate that collaborative testing versus individual testing is less effective and less efficient when learning key-term definitions under representative conditions.

  4. Comparison of the release of constituents from granular materials under batch and column testing.

    Science.gov (United States)

    Lopez Meza, Sarynna; Garrabrants, Andrew C; van der Sloot, Hans; Kosson, David S

    2008-01-01

    Column leaching testing can be considered a better basis for assessing field impact data than any other available batch test method and thus provides a fundamental basis from which to estimate constituent release under a variety of field conditions. However, column testing is time-intensive compared to the more simplified batch testing, and may not always be a viable option when making decisions for material reuse. Batch tests are used most frequently as a simple tool for compliance or quality control reasons. Therefore, it is important to compare the release that occurs under batch and column testing, and establish conservative interpretation protocols for extrapolation from batch data when column data are not available. Five different materials (concrete, construction debris, aluminum recycling residue, coal fly ash and bottom ash) were evaluated via batch and column testing, including different column flow regimes (continuously saturated and intermittent unsaturated flow). Constituent release data from batch and column tests were compared. Results showed no significant difference between the column flow regimes when constituent release data from batch and column tests were compared. In most cases batch and column testing agreed when presented in the form of cumulative release. For arsenic in carbonated materials, however, batch testing underestimates the column constituent release for most LS ratios and also on a cumulative basis. For cases when As is a constituent of concern, column testing may be required.

  5. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    Science.gov (United States)

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8 and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, correspondingly, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events has doubled and all of them become exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
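
    The quoted significance levels follow from a simple binomial argument: if earthquakes occurred independently of the alarms, each target event would fall inside the alarmed fraction tau of the space-time volume with probability tau. A sketch (not the authors' code) that reproduces the 81% figure for the magnitude 7.5+ case:

    ```python
    from scipy.stats import binom

    def alarm_confidence(n_events, n_hits, tau):
        """Confidence that the hit count beats chance: under the null the
        number of hits is Binomial(n_events, tau)."""
        return 1.0 - binom.sf(n_hits - 1, n_events, tau)  # 1 - P(X >= n_hits)

    print(alarm_confidence(19, 10, 0.40))  # ~0.81 for M8 on magnitude 7.5+
    ```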

  6. Steady-State PMU Compliance Test under C37.118.1a-2014

    DEFF Research Database (Denmark)

    Ghiga, Radu; Wu, Qiuwei; Martin, Kenneth E.

    2016-01-01

    This paper presents a flexible testing method and the steady-state compliance of PMUs under the C37.118.1a amendment. The work is focused on the changes made to the standard for the harmonic rejection and out-of-band interference tests for which the ROCOF Error limits have been suspended. The paper...... vendors were tested simultaneously in order to provide a fair comparison of the devices. The results for the steady state tests are discussed in the paper together with the strengths and weaknesses of the PMUs and of the test setup....

  7. Seed vigour tests for predicting field emergence of maize under severe conditions

    OpenAIRE

    García de Yzaguirre, Álvaro; Lasa Dolhagaray, José Manuel

    1989-01-01

    With 40 to 50 different seed vigour tests available, appropriate procedures are needed for choosing the best single test, or combination of tests, as predictors of seedling emergence of maize (Zea mays L.) under severe conditions. Thirteen vigour tests and various field emergence trials were performed on six inbred lines and two commercial hybrids. The best single predictors of field emergence were identified by calculating simple correlation coefficients. The calculation of the geometric mean of the res...

  8. Standard Test Method for Testing Polymeric Seal Materials for Geothermal and/or High Temperature Service Under Sealing Stress

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1985-01-01

    1.1 This test method covers the initial evaluation of (screening) polymeric materials for seals under static sealing stress and at elevated temperatures. 1.2 This test method applies to geothermal service only if used in conjunction with Test Method E 1068. 1.3 The test fluid is distilled water. 1.4 The values stated in SI units are to be regarded as the standard. The values in parentheses are for information only. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  9. Statistical optimization and mutagenesis for high level of phytase production by Rhizopus oligosporus MTCC 556 under solid state fermentation.

    Science.gov (United States)

    Suresh, S; Radha, K V

    2016-03-01

    The present study deals with the production of phytase from Rhizopus oligosporus MTCC 556 by solid state fermentation (SSF) using different rice bran varieties (ADT27, IR20, PAIYUR1, KG, and RASI), of which ADT27 rice bran yielded a maximum of 6.2 U gds⁻¹ phytase. Statistical optimization was performed by Central Composite Design (CCD); the results showed that 3.0 g dextrose, 2.5 g ammonium nitrate, a substrate size of 80 mesh, 10 mg calcium chloride, and a fermentation time of 116 hr were optimal for phytase production by SSF, with a maximum of 23.14 U gds⁻¹. Phytase production improved 4-fold (31.3 U gds⁻¹) through chemical mutagenesis (mutant Rhizopus oligosporus MTCC 1116) in the optimized media composition. The partially purified phytase showed a molecular mass of approximately 90 kDa and was optimally active at pH 5.5 and 50°C. Substrate specificity was exhibited with sodium phytate, and phytase activity was stimulated by Zn²⁺ and Ca²⁺.

  10. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    Science.gov (United States)

    Lu, Kaifeng

    2016-05-01

    We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margin for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
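
    A compact simulation in the spirit of the algorithm described, under illustrative design assumptions (two-sided alpha = 0.05, 80% power at a standardized effect of 0.5; the paper's exact-distribution results and non-inferiority adjustments are not reproduced):

    ```python
    import numpy as np
    from scipy.stats import ttest_ind

    def blinded_ssr_trial(delta, sigma, n_pilot_arm, n_target, rng):
        """One trial: blinded variance estimate from the pooled pilot data,
        sample size re-estimation, then the standard two-sample t-test."""
        a = rng.normal(0.0, sigma, n_pilot_arm)
        b = rng.normal(delta, sigma, n_pilot_arm)
        sigma_hat = np.concatenate([a, b]).std(ddof=1)  # one-sample (blinded) estimator
        n_final = max(n_pilot_arm, n_target(sigma_hat))
        a = np.concatenate([a, rng.normal(0.0, sigma, n_final - n_pilot_arm)])
        b = np.concatenate([b, rng.normal(delta, sigma, n_final - n_pilot_arm)])
        return ttest_ind(a, b).pvalue

    rng = np.random.default_rng(6)
    n_target = lambda s: int(np.ceil(2 * ((1.96 + 0.84) * s / 0.5) ** 2))
    pvals = [blinded_ssr_trial(0.0, 1.0, 20, n_target, rng) for _ in range(2000)]
    print("empirical type I error:", np.mean(np.array(pvals) < 0.05))
    ```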

  11. 2D statistical analysis of Non-Diffusive transport under attached and detached plasma conditions of the linear divertor simulator

    International Nuclear Information System (INIS)

    Tanaka, H.; Ohno, N.; Tsuji, Y.; Kajita, S.

    2010-01-01

    We have analyzed the 2D convective motion of coherent structures, which is associated with plasma blobs, under attached and detached plasma conditions of a linear divertor simulator, NAGDIS-II. Data analysis of probes and a fast-imaging camera by spatio-temporal correlation with three decomposition and proper orthogonal decomposition (POD) was carried out to determine the basic properties of coherent structures detached from a bulk plasma column. Under the attached plasma condition, the spatio-temporal correlation with three decomposition based on the probe measurement showed that two types of coherent structures with different sizes detached from the bulk plasma and the azimuthally localized structure radially propagated faster than the larger structure. Under the detached plasma condition, movies taken by the fast-imaging camera clearly showed the dynamics of a 2D spiral structure at peripheral regions of the bulk plasma; these dynamics caused the broadening of the plasma profile. The POD method was used for the data processing of the movies to obtain low-dimensional mode shapes. It was found that the m=1 and m=2 ring-shaped coherent structures were dominant. Comparison between the POD analysis of both the movie and the probe data suggested that the coherent structure could be detached from the bulk plasma mainly associated with the m=2 fluctuation. These phenomena could play an important role in the reduction of the particle and heat flux as well as the plasma recombination processes in plasma detachment (copyright 2010 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  12. Reliability data collection on IC and VLSI devices tested under accelerated life conditions

    International Nuclear Information System (INIS)

    Barry, D.M.; Meniconi, M.

    1986-01-01

    As part of a more general investigation into the reliability and failure causes of semiconductor devices, statistical samples of integrated circuit devices (LM741C) and dynamic random access memory devices (TMS4116) were tested destructively to failure using elevated temperature as the accelerating stress. The devices were operated during the life test and the failure data generated were collected automatically using a multiple question-and-answer program and a process control computer. The failure data were modelled from the lognormal, inverse Gaussian and Weibull distribution using an Arrhenius reaction rate model. The failed devices were later decapsulated for failure cause determination. (orig./DG)
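
    For illustration, accelerated life data of this kind are typically fit distribution-wise at each stress level and linked through an Arrhenius model; a sketch with invented failure times (not the paper's data):

    ```python
    import numpy as np
    from scipy import stats

    # Invented times-to-failure (hours) at two elevated temperatures
    t_125C = np.array([210., 340., 415., 520., 640., 800., 950.])
    t_150C = np.array([90., 130., 170., 230., 280., 360., 430.])

    # Two-parameter Weibull fit at each stress level (location fixed at 0)
    shape1, _, scale1 = stats.weibull_min.fit(t_125C, floc=0)
    shape2, _, scale2 = stats.weibull_min.fit(t_150C, floc=0)

    # Arrhenius life model L(T) = A * exp(Ea / kT) gives the activation energy
    k = 8.617e-5                                  # Boltzmann constant, eV/K
    T1, T2 = 125 + 273.15, 150 + 273.15
    Ea = k * np.log(scale1 / scale2) / (1 / T1 - 1 / T2)
    print(f"Weibull shapes {shape1:.2f}/{shape2:.2f}, Ea = {Ea:.2f} eV")
    ```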

  13. Standard Test Method for Electrical Performance of Concentrator Terrestrial Photovoltaic Modules and Systems Under Natural Sunlight

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers the determination of the electrical performance of photovoltaic concentrator modules and systems under natural sunlight using a normal incidence pyrheliometer. 1.2 The test method is limited to module assemblies and systems where the geometric concentration ratio specified by the manufacturer is greater than 5. 1.3 This test method applies to concentrators that use passive cooling where the cell temperature is related to the air temperature. 1.4 Measurements under a variety of conditions are allowed; results are reported under a select set of concentrator reporting conditions to facilitate comparison of results. 1.5 This test method applies only to concentrator terrestrial modules and systems. 1.6 This test method assumes that the module or system electrical performance characteristics do not change during the period of test. 1.7 The performance rating determined by this test method applies only at the period of the test, and implies no past or future performance level. 1.8...

  14. C-Scan Performance Test of Under-Sodium ultrasonic Waveguide Sensor in Sodium

    International Nuclear Information System (INIS)

    Joo, Young Sang; Bae, Jin Ho; Kim, Jong Bum

    2011-01-01

    The reactor core and in-vessel structures of a sodium-cooled fast reactor (SFR) are submerged in opaque liquid sodium in the reactor vessel. Ultrasonic inspection techniques must therefore be applied for observing the in-vessel structures under hot liquid sodium. Ultrasonic sensors such as immersion sensors and rod-type waveguide sensors have been developed for the under-sodium viewing of the in-vessel structures of an SFR. Recently, a novel plate-type ultrasonic waveguide sensor has been developed for the versatile application of under-sodium viewing in SFRs. In previous studies, the ultrasonic waveguide sensor module was designed and manufactured, and a feasibility study of the ultrasonic waveguide sensor was performed. To improve the performance of the ultrasonic waveguide sensor in under-sodium applications, a new concept of ultrasonic waveguide sensor with a Be-coated SS304 plate is suggested for the effective generation of a leaky wave in liquid sodium and the non-dispersive propagation of the A0-mode Lamb wave in the waveguide. In this study, the C-scan performance of the under-sodium ultrasonic waveguide sensor has been investigated by an experimental test in sodium. The under-sodium ultrasonic waveguide sensor and a sodium test facility with a glove box system and a sodium tank were designed and manufactured to carry out the performance test of the under-sodium ultrasonic waveguide sensor under sodium environment conditions.

  15. Potencial suicida en el test persona bajo la lluvia Suicide potential in "the person under the rain" test

    Directory of Open Access Journals (Sweden)

    Anabela Piccone

    2006-12-01

    Full Text Available In the last 6 years the suicide rate in Argentina has increased considerably, and suicidal behaviour becomes more frequent in situations of vital or social crisis. The objective of this work is to isolate indicators of suicide potential in the "Person Under the Rain" Test (PBLL). The test instruction asks the subject to draw a person under the rain, so that the drawing reveals the emotional reaction to a stress situation. The instruments used are the Suicide Potential Scale for Adults (ESPA) of the Rorschach Test and the PBLL Test. The sample comprises 41 subjects between 18 and 68 years of age. The analysis of the indicators supports the hypothesis that the PBLL can be a useful instrument for the screening of pathological cases.

  16. Model-free tests of equality in binary data under an incomplete block design.

    Science.gov (United States)

    Lui, Kung-Jong; Zhu, Lixia

    2018-02-16

    Using Prescott's model-free approach, we develop an asymptotic procedure and an exact procedure for testing equality between treatments with binary responses under an incomplete block crossover design. We employ Monte Carlo simulation and note that these test procedures not only perform well in small-sample cases but also outperform the corresponding test procedures, published elsewhere, that account only for patients with discordant responses. We use data taken as a part of the crossover trial comparing two different doses of an analgesic with placebo for the relief of primary dysmenorrhea to illustrate the use of the test procedures discussed here.

  17. Action Memorandum for Decommissioning the Engineering Test Reactor Complex under the Idaho Cleanup Project

    International Nuclear Information System (INIS)

    A. B. Culp

    2007-01-01

    This Action Memorandum documents the selected alternative for decommissioning of the Engineering Test Reactor at the Idaho National Laboratory under the Idaho Cleanup Project. Since the missions of the Engineering Test Reactor Complex have been completed, an engineering evaluation/cost analysis that evaluated alternatives to accomplish the decommissioning of the Engineering Test Reactor Complex was prepared and released for public comment. The scope of this Action Memorandum is to encompass the final end state of the Complex and disposal of the Engineering Test Reactor vessel. The selected removal action includes removing and disposing of the vessel at the Idaho CERCLA Disposal Facility and demolishing the reactor building to ground surface

  18. Action Memorandum for the Engineering Test Reactor under the Idaho Cleanup Project

    Energy Technology Data Exchange (ETDEWEB)

    A. B. Culp

    2007-01-26

    This Action Memorandum documents the selected alternative for decommissioning of the Engineering Test Reactor at the Idaho National Laboratory under the Idaho Cleanup Project. Since the missions of the Engineering Test Reactor Complex have been completed, an engineering evaluation/cost analysis that evaluated alternatives to accomplish the decommissioning of the Engineering Test Reactor Complex was prepared and released for public comment. The scope of this Action Memorandum is to encompass the final end state of the Complex and disposal of the Engineering Test Reactor vessel. The selected removal action includes removing and disposing of the vessel at the Idaho CERCLA Disposal Facility and demolishing the reactor building to ground surface.

  19. Risk Factors for Inadequate Defibrillation Safety Margins Vary With the Underlying Cardiac Disease: Implications for Selective Testing Strategies.

    Science.gov (United States)

    Bonnes, Judith L; Westra, Sjoerd W; Bouwels, Leon H R; DE Boer, Menko Jan; Brouwer, Marc A; Smeets, Joep L R M

    2016-05-01

    In view of the shift from routine toward no or selective defibrillation testing, optimization of the current risk stratification for inadequate defibrillation safety margins (DSMs) could improve individualized testing decisions. Given the pathophysiological differences in myocardial substrate between ischemic and nonischemic heart disease (IHD/non-IHD) and the accompanying differences in clinical characteristics, we studied inadequate DSMs and their predictors in relation to the underlying etiology. The cohort comprised routine defibrillation tests (n = 785) after first implantable cardioverter defibrillator (ICD) implantations at the Radboud UMC (2005-2014). A defibrillation threshold >25 J was regarded as an inadequate DSM. In total, 4.3% of patients had an inadequate DSM; 2.5% in IHD versus 7.3% in non-IHD (P = 0.002). We identified a group of non-IHD patients at high risk (13-42% inadequate DSM); the remainder of the cohort (>70%) had a risk of only 2% (C-statistic entire cohort 0.74; C-statistic non-IHD 0.82). This was based upon two identified interaction terms: (1) non-IHD and age (aOR 0.94 [95% CI 0.91-0.97]); (2) non-IHD and the indexed left ventricular (LV) internal diastolic diameter (aOR 3.50 [95% CI 2.10-5.82]). The present study on risk stratification for an inadequate DSM not only confirms the importance of making a distinction between IHD and non-IHD, but also shows that risk factors in an entire cohort (LV dilatation, age) may only apply to a subgroup (non-IHD). Appreciation of this concept could favorably affect current risk stratification. If confirmed, our approach may be used to optimize individualized testing decisions in an upcoming era of non-routine testing. © 2016 Wiley Periodicals, Inc.

  20. Effect of Metformin and Flutamide on Anthropometric Indices and Laboratory Tests in Obese/Overweight PCOS Women under Hypocaloric Diet.

    Science.gov (United States)

    Amiri, Mania; Golsorkhtabaramiri, Masoumeh; Esmaeilzadeh, Sedigheh; Ghofrani, Faeze; Bijani, Ali; Ghorbani, Leila; Delavar, Moloud Agajani

    2014-10-01

    This study was designed to investigate the effect of metformin and flutamide, alone or in combination, on anthropometric indices and laboratory tests of obese/overweight PCOS women under a hypocaloric diet. This single-blind clinical trial was performed on 120 PCOS women. At the beginning, a hypocaloric diet was recommended for the patients. After one month on the diet, the patients were randomly divided into 4 groups: metformin (500 mg, 3/day); flutamide (250 mg, 2/day); combined metformin (500 mg, 3/day) with flutamide (250 mg, 2/day); and finally a placebo group. The patients were treated for 6 months. Anthropometric indices and laboratory tests (fasting and glucose-stimulated insulin levels, lipid profile, and androgens) were measured. A one-way ANOVA (post hoc) and paired t-test were performed to analyze the data. A p ≤ 0.05 was considered statistically significant. After treatment, the reduction in weight, BMI, and hip circumference was significantly greater in the metformin group than in the other groups of obese/overweight PCOS women under the hypocaloric diet.

  1. Testing for Stationarity and Nonlinearity of Daily Streamflow Time Series Based on Different Statistical Tests (Case Study: Upstream Basin Rivers of Zarrineh Roud Dam

    Directory of Open Access Journals (Sweden)

    Farshad Fathian

    2017-02-01

    Full Text Available Introduction: Time series models are one of the most important tools for investigating and modeling hydrological processes in order to solve problems related to water resources management. Many hydrological time series show nonstationary and nonlinear behaviors. One of the important hydrological modeling tasks is determining the existence of nonstationarity and the way through which we can reach stationarity accordingly. On the other hand, streamflow processes are usually considered as nonlinear mechanisms, while in many studies linear time series models are used to model streamflow time series. However, it is not clear what kind of nonlinearity is acting underlying the streamflow processes and how intensive it is. Materials and Methods: Streamflow time series of 6 hydro-gauge stations located in the upstream basin rivers of Zarrineh Roud dam (located in the southern part of the Urmia Lake basin) have been considered to investigate stationarity and nonlinearity. All data series used here start from January 1, 1997, and end on December 31, 2011. In this study, stationarity is tested by the ADF and KPSS tests and nonlinearity is tested by the BDS, Keenan, and TLRT tests. The stationarity test is carried out with two methods. The first is the augmented Dickey-Fuller (ADF) unit root test, first proposed by Dickey and Fuller (1979) and modified by Said and Dickey (1984), which examines the presence of unit roots in time series. The second is the KPSS test, proposed by Kwiatkowski et al. (1992), which examines stationarity around a deterministic trend (trend stationarity) and stationarity around a fixed level (level stationarity). The BDS test (Brock et al., 1996) is a nonparametric method for testing serial independence and nonlinear structure in time series based on the correlation integral of the series. The null hypothesis is that the time series sample comes from an independent identically distributed (i.i.d.) process. The alternative hypothesis
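
    The ADF, KPSS, and BDS tests cited above are all available in statsmodels (the Keenan and TLRT tests are not shown); a minimal sketch on a synthetic random walk, where ADF should fail to reject its unit-root null while KPSS rejects stationarity, with BDS applied to the differenced series:

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import adfuller, bds, kpss

    rng = np.random.default_rng(7)
    flow = 50 + np.cumsum(rng.normal(0, 1, 500))   # synthetic "streamflow"

    adf_stat, adf_p, *_ = adfuller(flow, autolag="AIC")               # H0: unit root
    kpss_stat, kpss_p, *_ = kpss(flow, regression="c", nlags="auto")  # H0: stationarity
    bds_stat, bds_p = bds(np.diff(flow), max_dim=3)                   # H0: i.i.d.

    print(f"ADF p={adf_p:.3f}  KPSS p={kpss_p:.3f}  BDS min p={np.min(bds_p):.3f}")
    ```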

  2. 49 CFR 40.277 - Are alcohol tests other than saliva or breath permitted under these regulations?

    Science.gov (United States)

    2010-10-01

    § 40.277 Are alcohol tests other than saliva or breath permitted under these regulations? No, other types of alcohol tests (e.g., blood and urine) are not authorized for testing done under this part...

  3. TESTS ON 10.9 BOLTS UNDER COMBINED TENSION AND SHEAR

    Directory of Open Access Journals (Sweden)

    Anne Katherine Kawohl

    2016-04-01

    Full Text Available Prior investigations of the load-bearing capacity of bolts during fire have shown differing behaviour between bolts that have been loaded by shear or by tensile loads. A combination of the two loads has not yet been examined under fire conditions. This paper describes a series of tests on high-strength bolts of property class 10.9 both during and after fire under a combined shear and tensile load.

  4. The underlying dimensionality of PTSD in the diagnostic and statistical manual of mental disorders: where are we going?

    Science.gov (United States)

    Armour, Cherie

    2015-01-01

    There has been a substantial body of literature devoted to answering one question: Which latent model of posttraumatic stress disorder (PTSD) best represents PTSD's underlying dimensionality? This research summary will, therefore, focus on the literature pertaining to PTSD's latent structure as represented in the fourth (DSM-IV, 1994) to the fifth (DSM-5, 2013) edition of the DSM. This article will begin by providing a clear rationale as to why this is a pertinent research area, then the body of literature pertaining to the DSM-IV and DSM-IV-TR will be summarised, and this will be followed by a summary of the literature pertaining to the recently published DSM-5. To conclude, there will be a discussion with recommendations for future research directions, namely that researchers must investigate the applicability of the new DSM-5 criteria and the newly created DSM-5 symptom sets to trauma survivors. In addition, researchers must continue to endeavour to identify the "correct" constellations of symptoms within symptom sets to ensure that diagnostic algorithms are appropriate and aid in the development of targeted treatment approaches and interventions. In particular, the newly proposed DSM-5 anhedonia model, externalising behaviours model, and hybrid models must be further investigated. It is also important that researchers follow up on the idea that a more parsimonious latent structure of PTSD may exist.

  6. Testing of one-inch UF₆ cylinder valves under simulated fire conditions

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, P.G. [Martin Marietta Energy Systems, Inc., Paducah, KY (United States)

    1991-12-31

    Accurate computational models which predict the behavior of UF₆ cylinders exposed to fires are required to validate existing firefighting and emergency response procedures. Since the cylinder valve is a factor in the containment provided by the UF₆ cylinder, its behavior under fire conditions has been a necessary assumption in the development of such models. Consequently, test data are needed to substantiate these assumptions. Several studies cited in this document provide data related to the behavior of a 1-inch UF₆ cylinder valve in fire situations. To acquire additional data, a series of tests was conducted at the Paducah Gaseous Diffusion Plant (PGDP) under a unique set of test conditions. This document describes this testing and the resulting data.

  7. Exact tests in binary data under an incomplete block crossover design.

    Science.gov (United States)

    Lui, Kung-Jong; Chang, Kuang-Chao

    2018-02-01

    To improve the power of a parallel-groups design and reduce the duration of a crossover trial, we may consider an incomplete block crossover design. Under a distribution-free random effects logistic regression model, we derive an exact test and a Mantel-Haenszel type summary test procedure for testing non-equality in binary data when comparing three treatments. We employ Monte Carlo simulation to evaluate the performance of these test procedures. We find that both test procedures developed here can perform well in a variety of situations. We use data taken as a part of the crossover trial comparing the low and high doses of an analgesic with a placebo for the relief of pain in primary dysmenorrhea to illustrate the use of the proposed test procedures.

  8. Using Small Punch tests in environment under static load for fracture toughness estimation in hydrogen embrittlement

    Science.gov (United States)

    Arroyo, B.; Álvarez, J. A.; Lacalle, R.; González, P.; Gutiérrez-Solana, F.

    2017-12-01

    In this paper, the response of three medium- and high-strength steels to hydrogen embrittlement is analyzed by means of the quasi-non-destructive test known as the Small Punch Test (SPT). SPT tests on notched specimens under static load are carried out, applying Lacalle's methodology to estimate the fracture toughness for crack initiation and comparing the results to the KIEAC fracture toughness obtained from C(T) precracked specimens tested in the same environment; the SPT showed good correlation with the standard tests. A novel expression is proposed to define the parameter KIEAC-SP as a suitable estimate of the fracture toughness for crack initiation under hydrogen embrittlement conditions by Small Punch means, obtaining good accuracy. Finally, Slow Rate Small Punch Tests (SRSPT) are proposed as a more efficient alternative, introducing an order of magnitude for the adequate test rate to be employed.

  9. Factors underlying anxiety in HIV testing: risk perceptions, stigma, and the patient-provider power dynamic.

    Science.gov (United States)

    Worthington, Catherine; Myers, Ted

    2003-05-01

    Client anxiety is often associated with diagnostic testing. In this study, the authors used a grounded theory approach to examine the situational and social factors underlying anxiety associated with HIV testing, analyzing transcripts from semistructured interviews with 39 HIV test recipients in Ontario, Canada (selected based on HIV serostatus, risk experience, geographic region, gender, and number of HIV tests), then integrating emergent themes with existing research literature. Analysis revealed four themes: perceptions of risk and responsibility for health, stigma associated with HIV, the patient-provider power dynamic, and techniques used by test recipients to enhance control in their interactions with providers. Service implications include modifications to information provision during the test session, attention to privacy and anonymity, and sensitivity to patient-provider interactions.

  10. Evaluation of the integrated testing strategy for PNEC derivation under REACH.

    Science.gov (United States)

    May, Martin; Drost, Wiebke; Germer, Sabine; Juffernholz, Tanja; Hahn, Stefan

    2016-07-01

    Species sensitivity evaluation represents an approach to avoid chronic toxicity testing of aquatic vertebrates in accordance with the animal welfare concept of the EU chemicals regulation. In this study a data set of chemicals is analysed for the relative species sensitivity between Daphnia and fish in chronic testing, to evaluate under what conditions chronic fish tests can be waived without underestimating the environmental hazard. For 84% of the evaluated substances, chronic fish toxicity is covered by the chronic invertebrate test and an assessment factor of 50. Thus, animal testing can be avoided in environmental hazard assessment for many chemicals. Moreover, it is shown that species sensitivity in chronic testing is associated with species sensitivity in acute testing. The more sensitive species in chronic testing is predicted with high probability if a species is >5x more sensitive in acute testing. If substances are comparably or more toxic to Daphnia than to fish in acute testing, chronic fish toxicity is covered by the chronic Daphnia test and an assessment factor of 50 in about 95% of the evaluated cases. To provide decision support for the regulation of chemicals, a categorization scheme based on relative sensitivity comparison is presented. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. WIND project tests and analysis on the integrity of small size pipe under severe accident condition

    International Nuclear Information System (INIS)

    Nakamura, Naohiko; Hashimoto, Kazuichiro; Maruyama, Yu; Igarashi, Minoru; Hidaka, Akihide; Sugimoto, Jun

    1996-01-01

    In a severe accident of a light water reactor(LWR), fission products (FPs) released from fuel rods will be transported to the primary cooling system piping as aerosol and some of them will be deposited on the inner surface of piping. In such conditions the primary cooling system piping might be subjected to both of elevated temperature load due to decay heat of FPs and pressure load, and as a consequence the integrity of piping might be threatened. The WIND (Wide Range Piping Integrity Demonstration) Project is being performed at Japan Atomic Energy Research Institute (JAERI) to investigate the FP aerosol behavior in reactor piping and the integrity of reactor piping under severe accident condition (K. Hashimoto et al., 1994, K. Hashimoto et al., 1995). In order to meet these two objectives, the Project comprises two test series: an aerosol behavior test series and a piping integrity test series. In the piping integrity test a straight stainless steel pipe is used to simulate a partial fraction of reactor piping under severe accident conditions. In parallel with conducting the tests, test analyses are performed with ABAQUS code (Hibbitt, Karlsson and Sorensen Inc. 1989) using the test conditions to investigate the behavior of straight pipe against thermal and pressure loads. This paper describes the comparison of the scoping piping integrity test results and the analysis results with ABAQUS

  12. Statistical techniques to construct assays for identifying likely responders to a treatment under evaluation from cell line genomic data

    International Nuclear Information System (INIS)

    Huang, Erich P; Fridlyand, Jane; Lewin-Koh, Nicholas; Yue, Peng; Shi, Xiaoyan; Dornan, David; Burington, Bart

    2010-01-01

    Developing the right drugs for the right patients has become a mantra of drug development. In practice, it is very difficult to identify subsets of patients who will respond to a drug under evaluation. Most of the time, no single diagnostic will be available, and more complex decision rules will be required to define a sensitive population, using, for instance, mRNA expression, protein expression or DNA copy number. Moreover, diagnostic development will often begin with in-vitro cell-line data and a high-dimensional exploratory platform, only later to be transferred to a diagnostic assay for use with patient samples. In this manuscript, we present a novel approach to developing robust genomic predictors that are not only capable of generalizing from in-vitro to patient, but are also amenable to clinically validated assays such as qRT-PCR. Using our approach, we constructed a predictor of sensitivity to dacetuzumab, an investigational drug for CD40-expressing malignancies such as lymphoma using genomic measurements of cell lines treated with dacetuzumab. Additionally, we evaluated several state-of-the-art prediction methods by independently pairing the feature selection and classification components of the predictor. In this way, we constructed several predictors that we validated on an independent DLBCL patient dataset. Similar analyses were performed on genomic measurements of breast cancer cell lines and patients to construct a predictor of estrogen receptor (ER) status. The best dacetuzumab sensitivity predictors involved ten or fewer genes and accurately classified lymphoma patients by their survival and known prognostic subtypes. The best ER status classifiers involved one or two genes and led to accurate ER status predictions more than 85% of the time. The novel method we proposed performed as well or better than other methods evaluated. We demonstrated the feasibility of combining feature selection techniques with classification methods to develop assays

  13. Test plan for reactions between spent fuel and J-13 well water under unsaturated conditions

    International Nuclear Information System (INIS)

    Finn, P.A.; Wronkiewicz, D.J.; Hoh, J.C.; Emery, J.W.; Hafenrichter, L.D.; Bates, J.K.

    1993-01-01

    The Yucca Mountain Site Characterization Project is evaluating the long-term performance of a high-level nuclear waste form, spent fuel from commercial reactors. Permanent disposal of the spent fuel is possible in a potential repository to be located in the volcanic tuff beds near Yucca Mountain, Nevada. During the post-containment period the spent fuel could be exposed to water condensation, since some of the cladding is assumed to fail during this time. Spent fuel leach (SFL) tests are designed to simulate and monitor the release of radionuclides from the spent fuel under this condition. This Test Plan addresses the anticipated conditions whereby spent fuel is contacted by small amounts of water that trickle through the spent fuel container. Two complementary test plans are presented: one to examine the reaction of spent fuel and J-13 well water under unsaturated conditions, and the second to examine the reaction of unirradiated UO2 pellets and J-13 well water under unsaturated conditions. The former test plan examines the importance of the water content, the oxygen content as affected by radiolysis, the fuel burnup, the fuel surface area, and temperature. The latter test plan examines the effect of the absence of Teflon in the test vessel.

  14. Double torsion fracture mechanics testing of shales under chemically reactive conditions

    Science.gov (United States)

    Chen, X.; Callahan, O. A.; Holder, J. T.; Olson, J. E.; Eichhubl, P.

    2015-12-01

    Fracture properties of shales are vital for applications such as shale and tight gas development and the seal performance of carbon storage reservoirs. We analyze the fracture behavior of samples of Marcellus, Woodford, and Mancos shales using double-torsion (DT) load relaxation fracture tests. The DT test allows the determination of the mode-I fracture toughness (KIC), the subcritical crack growth index (SCI), and the stress-intensity factor vs. crack velocity (K-V) curves. Samples are tested in ambient air and under aqueous conditions with variable ionic concentrations of NaCl and CaCl2, and temperatures up to 70 °C, to determine the effects of chemical/environmental conditions on fracture. Under ambient air conditions, KIC determined from DT tests is 1.51±0.32, 0.85±0.25, and 1.08±0.17 MPa·m^1/2 for Marcellus, Woodford, and Mancos shales, respectively. Tests under water showed considerable changes of KIC compared to the ambient condition, with a 10.6% increase for Marcellus, a 36.5% decrease for Woodford, and a 6.7% decrease for Mancos shales. The SCI under ambient air conditions is between 56 and 80 for the shales tested. The presence of water results in a significant reduction of the SCI, by 70% to 85%, compared to the air condition. Tests under chemically reactive solutions are currently being performed with temperature control. K-V curves under ambient air conditions are linear, with a stable SCI throughout the load-relaxation period. However, tests conducted under water result in an initial cracking period with SCI values comparable to ambient air tests, which then gradually transitions into stable but significantly lower SCI values of 10-20. The non-linear K-V curves reveal that crack propagation in shales is initially limited by the transport of chemical agents due to their low permeability. Only after the initial cracking do interactions at the crack tip lead to cracking controlled by faster stress corrosion reactions. The decrease of the SCI in water indicates higher crack propagation velocity due to
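
    The K-V curves and the SCI mentioned above are commonly summarized by a power law relating crack velocity to stress intensity; a hedged sketch of the standard (Charles-law) form follows — whether the authors fit exactly this form is an assumption:

    ```latex
    % Standard power-law form often used to fit subcritical K-V data:
    v = A \left( \frac{K_I}{K_{IC}} \right)^{n}
    % v    : crack-tip velocity
    % K_I  : mode-I stress-intensity factor
    % K_IC : fracture toughness
    % n    : subcritical crack growth index (SCI),
    %        the slope of log v vs. log K_I
    ```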

  15. Advanced power cycler with intelligent monitoring strategy of IGBT module under test

    DEFF Research Database (Denmark)

    Choi, U. M.; Blaabjerg, F.; Iannuzzo, F.

    2017-01-01

    Power cycling (PC) testing is one of the important test methods to assess the reliability performance of power device modules related to packaging technology with respect to temperature stress. In this paper, an advanced power cycler with real-time VCE_ON and VF measurement circuits for the IGBT and diode, which are used for wear-out condition monitoring, is presented. This advanced power cycler allows power cycling tests to be performed cost-effectively under conditions close to real power converter applications. In addition, an intelligent monitoring strategy for the separation of package-related wear-out...

  16. Public attitudes toward ancillary information revealed by pharmacogenetic testing under limited information conditions.

    Science.gov (United States)

    Haga, Susanne B; O'Daniel, Julianne M; Tindall, Genevieve M; Lipkus, Isaac R; Agans, Robert

    2011-08-01

    Pharmacogenetic testing can inform drug dosing and selection by aiding in estimating a patient's genetic risk of adverse response and/or failure to respond. Some pharmacogenetic tests may generate ancillary clinical information unrelated to the drug treatment question for which testing is done: an informational "side effect." We aimed to assess public interest and concerns about pharmacogenetic tests and ancillary information. We conducted a random-digit-dial phone survey of a sample of the US public. We achieved an overall response rate of 42% (n = 1139). When the potential for ancillary information was presented, 85% (±2.82%) of respondents expressed interest in pharmacogenetic testing, compared with 82% (±3.02%) before discussion of ancillary information. Most respondents (89% ± 2.27%) indicated that physicians should inform patients that a pharmacogenetic test may reveal ancillary risk information before testing is ordered. Respondents' interest in actually learning of the ancillary risk finding significantly differed based on disease severity, availability of an intervention, and test validity, even after adjusting for age, gender, education, and race. Under the limited information conditions presented in the survey, the potential of ancillary information does not negatively impact public interest in pharmacogenetic testing. Interest in learning ancillary information is well aligned with the public's desire to be informed about potential benefits and risks before testing, promoting patient autonomy.

  17. Testing the robustness of two water distribution system layouts under changing drinking water demand

    NARCIS (Netherlands)

    Agudelo-Vera, Claudia; Blokker, M; Vreeburg, J; Vogelaar, H.; Hillegers, S; van der Hoek, J.P.

    2016-01-01

    A drinking water distribution system (DWDS) is a critical and costly asset with a long lifetime. Drinking water demand is likely to change in the coming decades. Quantifying these changes involves large uncertainties. This paper proposes a stress test on the robustness of existing DWDS under

  18. Comparing Relationships among Yield and Its Related Traits in Mycorrhizal and Nonmycorrhizal Inoculated Wheat Cultivars under Different Water Regimes Using Multivariate Statistics

    Directory of Open Access Journals (Sweden)

    Armin Saed-Moucheshi

    2013-01-01

    Full Text Available Multivariate statistical techniques were used to compare the relationships between yield and its related traits in noninoculated wheat cultivars and cultivars inoculated with the mycorrhizal fungus (Glomus intraradices); each treatment comprised three wheat cultivars and four water regimes. Results showed that, under the inoculated condition, spike weight per plant and total chlorophyll content of the flag leaf were the most important variables contributing to variation in wheat grain yield, while, under the noninoculated condition, grain weight per spike and leaf area were also important variables accounting for wheat grain yield variation, in addition to the two traits mentioned. Therefore, spike weight per plant and chlorophyll content of the flag leaf can be used as selection criteria in breeding programs for both inoculated and noninoculated wheat cultivars under different water regimes, and grain weight per spike and leaf area can additionally be considered for the noninoculated condition. Furthermore, inoculated wheat cultivars showed higher values for most measured traits, and the results indicated that the inoculation treatment could change the relationships among morphological traits of wheat cultivars under drought stress. It also appears that stepwise regression as a selection method, together with principal component and factor analysis, provides a strong approach for screening important traits in breeding programs.
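
    Stepwise regression of the kind used for trait screening can be sketched as forward selection against cross-validated fit; the trait names and data below are illustrative assumptions, not the study's measurements:

    ```python
    # Hypothetical forward stepwise selection of yield-related traits.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    traits = ["spike_weight", "chlorophyll", "grain_weight_per_spike", "leaf_area"]
    X = rng.normal(size=(48, len(traits)))          # 48 plots x 4 candidate traits
    grain_yield = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=48)

    selected, remaining, best_score = [], list(range(len(traits))), -np.inf
    while remaining:
        # Try adding each remaining trait; keep the one improving CV R^2 most.
        score, j = max((cross_val_score(LinearRegression(), X[:, selected + [j]],
                                        grain_yield, cv=5).mean(), j)
                       for j in remaining)
        if score <= best_score:
            break
        best_score = score
        selected.append(j)
        remaining.remove(j)
    print([traits[j] for j in selected], round(best_score, 3))
    ```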

  19. Fission product aerosol removal test by containment spray under accident management conditions (3)

    International Nuclear Information System (INIS)

    Watanabe, Atsushi; Nagasaka, Hideo; Yokobori, Seiichi; Akinaga, Makoto

    2000-01-01

    In order to demonstrate effective FP aerosol removal by containment spray under Japanese AM conditions, two system integral tests and two separate effect tests were carried out using a full-height simulation test facility. In the case of a PWR LOCA, the aerosol concentration in the upper containment vessel decreased even under a low spray flow rate. In the case of a BWR LOCA with water injection into the RPV, the aerosol concentration in the entire vessel also decreased rapidly after the aerosol supply was stopped. In both cases, the removal rate estimated from NUREG-1465 coincided with the test results. The aerosol washing effect by spray was confirmed to be predominant by conducting a suppression chamber isolation test. An insoluble aerosol injection test showed that the effect of aerosol solubility and density on aerosol removal by spray was quite small. After modification of the spray aerosol removal model and the hygroscopic aerosol model in the original MELCOR 1.8.4, the calculated aerosol concentration transient in the containment vessel agreed well with the test data. (author)
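
    Spray removal rates of the kind compared against NUREG-1465 are often expressed as a first-order decay constant; a minimal numeric sketch of estimating such a rate, with made-up numbers rather than test data:

    ```python
    # Hypothetical sketch: estimate a first-order spray removal rate lambda
    # from aerosol concentration decay, assuming C(t) = C0 * exp(-lambda * t).
    import numpy as np

    t = np.array([0.0, 5.0, 10.0, 20.0, 30.0])      # minutes after spray start
    c = np.array([1.0, 0.55, 0.31, 0.095, 0.030])   # normalized concentration

    lam = -np.polyfit(t, np.log(c), 1)[0]           # slope of ln C vs t is -lambda
    print(f"removal rate ~ {lam:.3f} 1/min, half-life ~ {np.log(2)/lam:.1f} min")
    ```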

  20. Analysis of North Korea's Nuclear Tests under Prospect Theory

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Han Myung; Ryu, Jae Soo; Lee, Kwang Seok; Lee, Dong Hoon; Jun, Eunju; Kim, Mi Jin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    North Korea has chosen nuclear weapons as the means to protect its sovereignty. Despite the international community's endeavors and sanctions to encourage North Korea to abandon its nuclear ambition, North Korea has repeatedly conducted nuclear testing. In this paper, the reasons for North Korea's attachment to a nuclear arsenal are addressed within the framework of cognitive psychology. Prospect theory offers an epistemological approach usually overlooked in rational choice theories. It provides useful insight into why North Korea, under a crisis situation, has thrown out a stable choice and taken on a risky one such as nuclear testing. From the viewpoint of prospect theory, the nuclear tests by North Korea can be understood as follows: the first nuclear test in 2006 is seen as a trial to escape from loss areas such as financial sanctions and regime threats; the second test in 2009 is interpreted as a consequence of the strategy to recover losses by making a direct confrontation with the United States; and the third test in 2013 is understood as an attempt to strengthen internal solidarity after Kim Jong-eun inherited the dynasty, as well as to enhance bargaining power against the United States. Thus, it can be summarized that Pyongyang repeated its nuclear tests to escape from a negative domain and to settle into a positive one. In addition, in the future, North Korea may not be willing to readily give up its nuclear capabilities, in order to ensure the survival of its own regime.
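
    The loss/gain framing that drives this interpretation rests on the prospect-theory value function; a hedged illustration with the commonly cited Kahneman-Tversky parameter estimates (applying these particular numbers to state decision-making is an assumption):

    ```python
    # Prospect-theory value function: concave for gains, convex and steeper
    # for losses (loss aversion). Parameters are the oft-cited estimates
    # alpha = beta = 0.88, lambda = 2.25; illustrative only.
    def value(x, alpha=0.88, beta=0.88, lam=2.25):
        return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

    # A loss looms larger than an equal-sized gain, pushing risk-seeking
    # behavior in the loss domain:
    print(value(10.0), value(-10.0))   # ~7.6 vs ~-17.1
    ```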

  1. Cosmic Statistics of Statistics

    OpenAIRE

    Szapudi, I.; Colombi, S.; Bernardeau, F.

    1999-01-01

    The errors on statistics measured in finite galaxy catalogs are exhaustively investigated. The theory of errors on factorial moments by Szapudi & Colombi (1996) is applied to cumulants via a series expansion method. All results are subsequently extended to the weakly non-linear regime. Together with previous investigations this yields an analytic theory of the errors for moments and connected moments of counts in cells from highly nonlinear to weakly nonlinear scales. The final analytic formu...

  2. Testing for central symmetry

    NARCIS (Netherlands)

    Einmahl, John; Gan, Zhuojiong

    Omnibus tests for central symmetry of a bivariate probability distribution are proposed. The test statistics compare empirical measures of opposite regions. Under rather weak conditions, we establish the asymptotic distribution of the test statistics under the null hypothesis; it follows that they
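
    The idea of comparing empirical measures of opposite regions can be sketched with a sign-flip calibration, which is valid under the null of central symmetry; this is an illustration of the principle, not the authors' exact omnibus statistic:

    ```python
    # Minimal sketch: test central symmetry of a bivariate sample about a
    # known center by comparing empirical masses of a region R and -R,
    # calibrated by random point reflections (valid when X ~ -X).
    import numpy as np

    def sym_stat(x):
        in_r = (x[:, 0] > 0) & (x[:, 1] > 0)          # R = positive quadrant
        in_minus_r = (x[:, 0] < 0) & (x[:, 1] < 0)    # -R = opposite quadrant
        return abs(in_r.mean() - in_minus_r.mean())

    rng = np.random.default_rng(2)
    x = rng.normal(size=(200, 2)) + 0.3               # shifted => asymmetric
    t_obs = sym_stat(x)
    flips = np.array([sym_stat(x * rng.choice([-1.0, 1.0], size=(200, 1)))
                      for _ in range(999)])
    print("p-value ~", (1 + (flips >= t_obs).sum()) / 1000)
    ```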

  3. Estimating the Contact Endurance of the AISI 321 Stainless Steel Under Contact Gigacycle Fatigue Tests

    Science.gov (United States)

    Savrai, R. A.; Makarov, A. V.; Osintseva, A. L.; Malygina, I. Yu.

    2018-02-01

    Mechanical testing of the AISI 321 corrosion resistant austenitic steel for contact gigacycle fatigue has been conducted with the application of a new method of contact fatigue testing with ultrasonic frequency of loading according to a pulsing impact "plane-to-plane" contact scheme. It has been found that the contact endurance (the ability to resist the fatigue spalling) of the AISI 321 steel under contact gigacycle fatigue loading is determined by its plasticity margin and the possibility of additional hardening under contact loading. It is demonstrated that the appearance of localized deep and long areas of spalling on a material surface can serve as a qualitative characteristic for the loss of the fatigue strength of the AISI 321 steel under impact contact fatigue loading. The value of surface microhardness measured within contact spots and the maximum depth of contact damages in the peripheral zone of contact spots can serve as quantitative criteria for that purpose.

  4. Injury Statistics

    Science.gov (United States)


  5. Enhanced statistical tests for GWAS in admixed populations: assessment using African Americans from CARe and a Breast Cancer Consortium.

    Directory of Open Access Journals (Sweden)

    Bogdan Pasaniuc

    2011-04-01

    Full Text Available While genome-wide association studies (GWAS) have primarily examined populations of European ancestry, more recent studies often involve additional populations, including admixed populations such as African Americans and Latinos. In admixed populations, linkage disequilibrium (LD) exists both at a fine scale in ancestral populations and at a coarse scale (admixture-LD) due to chromosomal segments of distinct ancestry. Disease association statistics in admixed populations have previously considered SNP association (LD mapping) or admixture association (mapping by admixture-LD), but not both. Here, we introduce a new statistical framework for combining SNP and admixture association in case-control studies, as well as methods for local ancestry-aware imputation. We illustrate the gain in statistical power achieved by these methods by analyzing data of 6,209 unrelated African Americans from the CARe project genotyped on the Affymetrix 6.0 chip, in conjunction with both simulated and real phenotypes, as well as by analyzing the FGFR2 locus using breast cancer GWAS data from 5,761 African-American women. We show that, at typed SNPs, our method yields an 8% increase in statistical power for finding disease risk loci compared to the power achieved by standard methods in case-control studies. At imputed SNPs, we observe an 11% increase in statistical power for mapping disease loci when our local ancestry-aware imputation framework and the new scoring statistic are jointly employed. Finally, we show that our method increases statistical power in regions harboring the causal SNP in the case when the causal SNP is untyped and cannot be imputed. Our methods and our publicly available software are broadly applicable to GWAS in admixed populations.
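
    The general principle of combining a SNP signal and an admixture signal can be sketched as summing two (assumed independent) 1-df chi-square statistics into a 2-df test; this illustrates the idea only and is not the paper's exact scoring statistic:

    ```python
    # Hedged sketch: combine genotype (LD) association and local-ancestry
    # (admixture) association into a single 2-df chi-square test.
    from scipy.stats import chi2

    chi2_snp = 6.1        # 1-df statistic from genotype association (made up)
    chi2_ancestry = 3.8   # 1-df statistic from local-ancestry association

    combined = chi2_snp + chi2_ancestry
    print(f"combined chi2 = {combined:.1f}, p = {chi2.sf(combined, df=2):.3g}")
    ```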

  6. Testing machine for fatigue crack kinetic investigation in specimens under bending

    International Nuclear Information System (INIS)

    Panasyuk, V.V.; Ratych, L.V.; Dmytrakh, I.N.

    1978-01-01

    A kinematic diagram of a testing machine for investigating fatigue crack kinetics in prismatic specimens subjected to pure bending is described. A technique is suggested for choosing an optimum ratio of the parameters of the "testing machine-specimen" system, which provides stabilization of the stress intensity coefficient over a certain region of crack development under hard loading. The compliance of a machine constructed according to the described diagram, designed for a maximum bending moment of 300 N·m, is demonstrated using specimens of 40KhS and 15Kh2MFA steels. The results obtained can be used in designing testing machines for studying pure bending under hard loading, and in choosing the sizes of specimens with rectangular cross sections for investigations into fatigue crack kinetics.

  7. Virial Coefficients from Unified Statistical Thermodynamics of Quantum Gases Trapped under Generic Power Law Potential in d Dimension and Equivalence of Quantum Gases

    Science.gov (United States)

    Bahauddin, Shah Mohammad; Mehedi Faruk, Mir

    2016-09-01

    From the unified statistical thermodynamics of quantum gases, the virial coefficients of ideal Bose and Fermi gases trapped under a generic power-law potential are derived systematically. From the general result for the virial coefficients, one can reproduce the known results in d = 3 and d = 2. More importantly, we find that the virial coefficients of Bose and Fermi gases become identical (except for the second virial coefficient, where the sign is different) when the gases are trapped under a harmonic potential in d = 1. This result supports the equivalence between Bose and Fermi gases established in d = 1 (J. Stat. Phys. DOI 10.1007/s10955-015-1344-4). Also, it is found that the virial coefficients of the two-dimensional free Bose (Fermi) gas are equal to the virial coefficients of the one-dimensional harmonically trapped Bose (Fermi) gas.
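
    A hedged sketch of the standard bookkeeping behind such results (the exact exponent and conventions used by the authors are an assumption): for an ideal quantum gas in d dimensions in a power-law trap U(x) = Σ|x_i/a_i|^{n_i}, the cluster coefficients carry a single trap-dependent exponent, from which the virial coefficients follow by the usual fugacity-expansion relations:

    ```latex
    % Cluster coefficients (+ for bosons, - for fermions):
    b_\ell = (\pm 1)^{\ell+1}\, \ell^{-\chi}, \qquad
    \chi = 1 + \frac{d}{2} + \sum_{i=1}^{d}\frac{1}{n_i},
    % and the virial coefficients follow from the fugacity expansion, e.g.
    a_2 = \mp\, 2^{-\chi}, \qquad a_3 = 4\,b_2^{2} - 2\,b_3 .
    % For a 1d harmonic trap (d = 1, n = 2): chi = 2, so only the sign of
    % a_2 distinguishes Bose from Fermi, consistent with the abstract.
    ```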

  8. Back to the basics: Identifying and addressing underlying challenges in achieving high quality and relevant health statistics for indigenous populations in Canada.

    Science.gov (United States)

    Smylie, Janet; Firestone, Michelle

    Canada is known internationally for excellence in both the quality and public policy relevance of its health and social statistics. There is a double standard however with respect to the relevance and quality of statistics for Indigenous populations in Canada. Indigenous specific health and social statistics gathering is informed by unique ethical, rights-based, policy and practice imperatives regarding the need for Indigenous participation and leadership in Indigenous data processes throughout the spectrum of indicator development, data collection, management, analysis and use. We demonstrate how current Indigenous data quality challenges including misclassification errors and non-response bias systematically contribute to a significant underestimate of inequities in health determinants, health status, and health care access between Indigenous and non-Indigenous people in Canada. The major quality challenge underlying these errors and biases is the lack of Indigenous specific identifiers that are consistent and relevant in major health and social data sources. The recent removal of an Indigenous identity question from the Canadian census has resulted in further deterioration of an already suboptimal system. A revision of core health data sources to include relevant, consistent, and inclusive Indigenous self-identification is urgently required. These changes need to be carried out in partnership with Indigenous peoples and their representative and governing organizations.

  9. Intermediate statistics a modern approach

    CERN Document Server

    Stevens, James P

    2007-01-01

    Written for those who use statistical techniques, this text focuses on a conceptual understanding of the material. It uses definitional formulas on small data sets to provide conceptual insight into what is being measured. It emphasizes the assumptions underlying each analysis, and shows how to test the critical assumptions using SPSS or SAS.

  10. PRESTO: Rapid calculation of order statistic distributions and multiple-testing adjusted P-values via permutation for one and two-stage genetic association studies

    Directory of Open Access Journals (Sweden)

    Browning Brian L

    2008-07-01

    Full Text Available Abstract Background Large-scale genetic association studies can test hundreds of thousands of genetic markers for association with a trait. Since the genetic markers may be correlated, a Bonferroni correction is typically too stringent a correction for multiple testing. Permutation testing is a standard statistical technique for determining statistical significance when performing multiple correlated tests for genetic association. However, permutation testing for large-scale genetic association studies is computationally demanding and calls for optimized algorithms and software. PRESTO is a new software package for genetic association studies that performs fast computation of multiple-testing adjusted P-values via permutation of the trait. Results PRESTO is an order of magnitude faster than other existing permutation testing software, and can analyze a large genome-wide association study (500 K markers, 5 K individuals, 1 K permutations) in approximately one hour of computing time. PRESTO has several unique features that are useful in a wide range of studies: it reports empirical null distributions for the top-ranked statistics (i.e., order statistics), it performs user-specified combinations of allelic and genotypic tests, it performs stratified analysis when sampled individuals are from multiple populations and each individual's population of origin is specified, and it determines significance levels for one- and two-stage genotyping designs. PRESTO is designed for case-control studies, but can also be applied to trio data (parents and affected offspring) if transmitted parental alleles are coded as case alleles and untransmitted parental alleles are coded as control alleles. Conclusion PRESTO is a platform-independent software package that performs fast and flexible permutation testing for genetic association studies. The PRESTO executable file, Java source code, example data, and documentation are freely available at http://www.stat.auckland.ac.nz/~browning/presto/presto.html.
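
    The kind of permutation adjustment PRESTO computes can be sketched in a few lines (Westfall-Young style max-statistic); this illustrates the general technique only and does not reproduce PRESTO's internals or optimizations:

    ```python
    # Minimal sketch: permute the trait, recompute all marker statistics,
    # and compare each observed statistic with the permutation distribution
    # of the maximum to obtain multiple-testing adjusted p-values.
    import numpy as np

    rng = np.random.default_rng(3)
    n, m = 500, 1000                        # individuals, markers (toy sizes)
    geno = rng.integers(0, 3, size=(n, m)).astype(float)
    trait = rng.integers(0, 2, size=n).astype(float)

    def abs_corr_stats(y):
        yc = y - y.mean()
        gc = geno - geno.mean(axis=0)
        return np.abs(yc @ gc) / (np.linalg.norm(yc) * np.linalg.norm(gc, axis=0))

    obs = abs_corr_stats(trait)
    max_null = np.array([abs_corr_stats(rng.permutation(trait)).max()
                         for _ in range(200)])
    adj_p = np.array([(1 + (max_null >= t).sum()) / (len(max_null) + 1)
                      for t in obs])
    print("smallest adjusted p:", adj_p.min())
    ```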

  11. Statistical thermodynamics

    International Nuclear Information System (INIS)

    Lim, Gyeong Hui

    2008-03-01

    This book consists of 15 chapters, covering: basic concepts and meaning of statistical thermodynamics, Maxwell-Boltzmann statistics, ensembles, thermodynamic functions and fluctuations, statistical dynamics of independent particle systems, ideal molecular systems, chemical equilibrium and chemical reaction rates in ideal gas mixtures, classical statistical thermodynamics, the ideal lattice model, lattice statistics and nonideal lattice models, imperfect gas theory and the theory of liquids, the theory of solutions, statistical thermodynamics of interfaces, statistical thermodynamics of high-molecular systems, and quantum statistics.

  12. Routine TP53 testing for breast cancer under age 30: ready for prime time?

    Science.gov (United States)

    McCuaig, Jeanna M; Armel, Susan R; Novokmet, Ana; Ginsburg, Ophira M; Demsky, Rochelle; Narod, Steven A; Malkin, David

    2012-12-01

    It is well known that early-onset breast cancer may be due to an inherited predisposition. When evaluating women diagnosed with breast cancer under age 30, two important syndromes are typically considered: Hereditary Breast and Ovarian Cancer Syndrome and Li-Fraumeni syndrome. Many women are offered genetic testing for mutations in the BRCA1 and BRCA2 genes; however, few are offered genetic testing for mutations in the TP53 gene. There is a concern that overly restrictive testing of TP53 may fail to recognize families with Li-Fraumeni syndrome. We reviewed the genetic test results and family histories of all women with early-onset breast cancer who had genetic testing of the TP53 gene at the Toronto Hospital for Sick Children. Of the 28 women tested, six (33.3 %) had a mutation in the TP53 gene; a mutation was found in 7.7 % of women who did not meet current criteria for Li-Fraumeni syndrome. By reviewing similar data published between 2000 and 2011, we estimate that 5-8 % of women diagnosed with early-onset breast cancer, and who have a negative family history, may have a mutation in the TP53 gene. Given the potential benefits versus harms of this testing, we discuss the option of simultaneous testing of all three genes (BRCA1, BRCA2, and TP53) for women diagnosed with breast cancer before age 30.

  13. Non-destructive Testing by Infrared Thermography Under Random Excitation and ARMA Analysis

    Science.gov (United States)

    Bodnar, J. L.; Nicolas, J. L.; Candoré, J. C.; Detalle, V.

    2012-11-01

    Photothermal thermography is a non-destructive testing (NDT) method which has many applications in the field of control and characterization of thin materials. This technique is usually implemented under CW or flash excitation. Such excitations are not suited to the control of fragile materials or to multi-frequency analysis. To allow these analyses, this article proposes the use of a new control mode: infrared thermography under random excitation and autoregressive moving average (ARMA) analysis. First, the principle of this NDT method is presented. Then, the method is shown to permit detection, with low energy constraints, of detachments situated in mural paintings.
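
    An ARMA analysis of a thermal response measured under random excitation can be sketched as follows; the signal, model order, and interpretation are illustrative assumptions, not the article's procedure:

    ```python
    # Hedged sketch: fit an ARMA model to a (simulated) surface-temperature
    # response to random excitation, then inspect the fitted coefficients.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)
    excitation = rng.normal(size=2000)
    # Toy thermal response: low-pass filtered excitation plus sensor noise.
    response = np.convolve(excitation, np.exp(-np.arange(50) / 10.0), mode="same")
    response += 0.05 * rng.normal(size=response.size)

    # ARMA(p, q) is ARIMA(p, 0, q); the order here is an arbitrary choice.
    result = ARIMA(response, order=(2, 0, 1)).fit()
    print(result.params)   # AR/MA coefficients; shifts may signal a defect
    ```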

  14. Transformational Leadership and Organizational Citizenship Behavior: A Meta-Analytic Test of Underlying Mechanisms.

    Science.gov (United States)

    Nohe, Christoph; Hertel, Guido

    2017-01-01

    Based on social exchange theory, we examined and contrasted attitudinal mediators (affective organizational commitment, job satisfaction) and relational mediators (trust in leader, leader-member exchange; LMX) of the positive relationship between transformational leadership and organizational citizenship behavior (OCB). Hypotheses were tested using meta-analytic path models with correlations from published meta-analyses (761 samples with 227,419 individuals overall). When testing single-mediator models, results supported our expectations that each of the mediators explained the relationship between transformational leadership and OCB. When testing a multi-mediator model, LMX was the strongest mediator. When testing a model with a latent attitudinal mechanism and a latent relational mechanism, the relational mechanism was the stronger mediator of the relationship between transformational leadership and OCB. Our findings help to better understand the underlying mechanisms of the relationship between transformational leadership and OCB.

  15. An assessment of consistence of exhaust gas emission test results obtained under controlled NEDC conditions

    Science.gov (United States)

    Balawender, K.; Jaworski, A.; Kuszewski, H.; Lejda, K.; Ustrzycki, A.

    2016-09-01

    Measurement of the pollutants contained in automobile combustion engine exhaust gases is of primary importance in view of their harmful impact on the natural environment. This paper presents the results of tests aimed at determining exhaust gas pollutant emissions from a passenger car engine, obtained under repeatable conditions on a chassis dynamometer. The test set-up was installed in a controlled climate chamber allowing the temperature to be maintained within the range from -20°C to +30°C. The analysis covered emissions of such components as CO, CO2, NOx, CH4, THC, and NMHC. The purpose of the study was to assess the repeatability of results obtained in a number of tests performed per the NEDC test plan. The study is an introductory stage of a wider research project concerning the effect of climate conditions and fuel type on the emission of pollutants contained in exhaust gases generated by automotive vehicles.
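
    Run-to-run repeatability of this kind is commonly quantified with the coefficient of variation; a minimal numeric sketch with made-up values, not the paper's measurements:

    ```python
    # Hypothetical repeatability check across repeated NEDC runs.
    import numpy as np

    co2_g_per_km = np.array([148.2, 147.5, 149.1, 148.8, 147.9])  # 5 runs
    cv = co2_g_per_km.std(ddof=1) / co2_g_per_km.mean()
    print(f"CV = {100 * cv:.2f}%")   # small CV indicates consistent results
    ```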

  16. Statistical Power in Meta-Analysis

    Science.gov (United States)

    Liu, Jin

    2015-01-01

    Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation for the two-sample mean difference test under different situations: (1) the discrepancy between the analytical power and…
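
    The analytical power referred to above can be computed directly for a two-sample mean-difference (t) test; a hedged example with illustrative effect and sample sizes:

    ```python
    # Analytical power for a two-sample t test; numbers are illustrative.
    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower().power(effect_size=0.5, nobs1=64, alpha=0.05, ratio=1.0)
    print(f"power ~ {power:.2f}")   # ~0.80 for d = 0.5 with 64 per group
    ```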

  17. Effectiveness of Combining Statistical Tests and Effect Sizes When Using Logistic Discriminant Function Regression to Detect Differential Item Functioning for Polytomous Items

    Science.gov (United States)

    Gómez-Benito, Juana; Hidalgo, Maria Dolores; Zumbo, Bruno D.

    2013-01-01

    The objective of this article was to find an optimal decision rule for identifying polytomous items with large or moderate amounts of differential functioning. The effectiveness of combining statistical tests with effect size measures was assessed using logistic discriminant function analysis and two effect size measures: R² and…

  18. Plot of expected distributions of the test statistics q=log(L(0^+)/L(2^+)) for the spin-0 and spin-2 (produced by gluon fusion) hypotheses

    CERN Multimedia

    ATLAS, Collaboration

    2013-01-01

    Expected distributions of the test statistics q=log(L(0^+)/L(2^+)) for the spin-0 and spin-2 (produced by gluon fusion) hypotheses. The observed value is indicated by a vertical line. The coloured areas correspond to the integrals of the expected distributions used to compute the p-values for the rejection of each hypothesis.
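
    The p-values described here come from tail integrals of the two expected distributions relative to the observed value; a toy numeric illustration, with Gaussian stand-ins assumed in place of the true pseudo-experiment distributions:

    ```python
    # Toy illustration: simulate the test statistic q under each hypothesis
    # and integrate the tails at the observed value.
    import numpy as np

    rng = np.random.default_rng(5)
    q_spin0 = rng.normal(loc=+2.0, scale=1.0, size=100_000)  # q under 0+
    q_spin2 = rng.normal(loc=-2.0, scale=1.0, size=100_000)  # q under 2+
    q_obs = 1.5

    p_spin0 = (q_spin0 <= q_obs).mean()   # tail toward the 2+-like side
    p_spin2 = (q_spin2 >= q_obs).mean()   # tail toward the 0+-like side
    print(p_spin0, p_spin2)               # small p_spin2 disfavors spin-2
    ```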

  19. Stability of selected volatile contact allergens in different patch test chambers under different storage conditions

    DEFF Research Database (Denmark)

    Mose, Kristian Fredløv; Andersen, Klaus Ejner; Christensen, Lars Porskjaer

    2012-01-01

    Petrolatum samples of methyl methacrylate (MMA), 2-hydroxyethyl methacrylate (2-HEMA), 2-hydroxypropyl acrylate (2-HPA), cinnamal and eugenol in patch test concentrations were stored in three different test chambers (IQ chamber™, IQ Ultimate™, and Van der Bend® transport container) under different storage conditions, including storage in the refrigerator. For the two IQ chamber systems, the contact allergen concentration dropped below the stability limit in the following order: MMA, cinnamal, 2-HPA, eugenol, and 2-HEMA. In the Van der Bend® transport container, the contact allergens exhibited acceptable stability under...

  20. The PRAXIS I Math Study Guide Questions and the PRAXIS I Math Skills Test Questions: A Statistical Study

    Science.gov (United States)

    Wilkins, M. Elaine

    2012-01-01

    In 2001, No Child Left Behind introduced the highly qualified status for K-12 teachers, which mandated successful scores on a series of high-stakes tests; within this series is the Pre-Professional Skills Test (PPST), or PRAXIS I. The PPST measures basic K-12 skills in reading, writing, and mathematics. The mathematics sub-test is a national…

  1. EM algorithm for one-shot device testing with competing risks under exponential distribution

    International Nuclear Information System (INIS)

    Balakrishnan, N.; So, H.Y.; Ling, M.H.

    2015-01-01

    This paper provides an extension of the work of Balakrishnan and Ling [1] by introducing a competing risks model into a one-shot device testing analysis under an accelerated life test setting. An Expectation Maximization (EM) algorithm is then developed for the estimation of the model parameters. An extensive Monte Carlo simulation study is carried out to assess the performance of the EM algorithm and to compare the obtained results with the initial estimates obtained by the Inequality Constrained Least Squares (ICLS) method of estimation. Finally, we apply the EM algorithm to clinical data, ED01, to illustrate the method of inference developed here. - Highlights: • ALT data analysis for one-shot devices with competing risks is considered. • EM algorithm is developed for the determination of the MLEs. • The estimations of lifetime under normal operating conditions are presented. • The EM algorithm improves the convergence rate
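
    A minimal EM sketch for one-shot data with two exponential competing risks, in the spirit of the paper: each device is inspected once at time tau and seen either surviving or failed from cause 1 or 2. A single stress level, observed failure causes, and the counts below are simplifying assumptions, not the paper's model or data:

    ```python
    # EM for latent exponential competing-risk times T1, T2 (rates lam1, lam2).
    import numpy as np

    tau = 10.0
    n_surv = 60
    n_fail = np.array([25, 15])            # failures by cause 1, cause 2
    n_tot = n_surv + n_fail.sum()

    lam = np.array([0.01, 0.01])           # initial guesses for (lam1, lam2)
    for _ in range(500):
        tot = lam.sum()
        # E-step: T = min(T1, T2) ~ Exp(tot); for a device failed by tau,
        # E[T | T <= tau] is:
        e_T = 1.0 / tot - tau * np.exp(-tot * tau) / (1.0 - np.exp(-tot * tau))
        exposure = np.zeros(2)
        for j in (0, 1):
            k = 1 - j
            exposure[j] += n_fail[j] * e_T                     # failed cause j: T_j = T
            exposure[j] += n_fail[k] * (e_T + 1.0 / lam[j])    # memoryless residual
            exposure[j] += n_surv * (tau + 1.0 / lam[j])       # survivors: T_j > tau
        # M-step: each device contributes one latent T_j, so
        # lam_j = n / E[sum of T_j].
        lam = n_tot / exposure
    print("lam1, lam2 ~", lam)             # ~ (0.032, 0.019) for these counts
    ```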

  2. How diagnostic tests help to disentangle the mechanisms underlying neuropathic pain symptoms in painful neuropathies.

    Science.gov (United States)

    Truini, Andrea; Cruccu, Giorgio

    2016-02-01

    Neuropathic pain, ie, pain arising directly from a lesion or disease affecting the somatosensory afferent pathway, manifests with various symptoms, the commonest being ongoing burning pain, electrical shock-like sensations, and dynamic mechanical allodynia. Reliable insights into the mechanisms underlying neuropathic pain symptoms come from diagnostic tests documenting and quantifying somatosensory afferent pathway damage in patients with painful neuropathies. Neurophysiological investigation and skin biopsy studies suggest that ongoing burning pain primarily reflects spontaneous activity in nociceptive-fiber pathways. Electrical shock-like sensations presumably arise from high-frequency ectopic bursts generated in demyelinated, nonnociceptive, Aβ fibers. Although the mechanisms underlying dynamic mechanical allodynia remain debatable, normally innocuous stimuli might cause pain by activating spared and sensitized nociceptive afferents. Extending the mechanistic approach to neuropathic pain symptoms might advance targeted therapy for the individual patient and improve testing for new drugs.

  3. Test Results of Selected Commercial DC/DC Converters under Cryogenic Temperatures - A Digest

    Science.gov (United States)

    Patterson, Richard; Hammoud, Ahmad

    2010-01-01

    DC/DC converters are widely used in space power systems in the areas of power management and distribution, signal conditioning, and motor control. Designing DC/DC converters to survive cryogenic temperatures will improve power system performance, simplify design, and reduce development and launch costs. In this work, the performance of nine COTS modular, low-to-medium power DC/DC converters was investigated under cryogenic temperatures. The converters were evaluated in terms of their output regulation, efficiency, and input and output currents. At a given temperature, these properties were obtained at various input voltages and at different load levels. A summary of the performance of the tested converters is given. More comprehensive testing and in-depth analysis of performance under long-term exposure to extreme temperatures are deemed necessary to establish the suitability of these and other devices for use in the harsh environments of space exploration missions.

  4. Helium leak testing of a radioactive contaminated vessel under high pressure in a contaminated environment

    International Nuclear Information System (INIS)

    Winter, M.E.

    1996-01-01

    At ANL-W, with the shutdown of EBR-II, R&D has evolved from advanced reactor design to the safe handling, processing, packaging, and transporting of spent nuclear fuel and nuclear waste. New methods of processing spent fuel rods and transforming contaminated material into acceptable waste forms are now in development. Storage of nuclear waste is a high-interest item. ANL-W is participating in research on the safe storage of nuclear waste, with the WIPP (Waste Isolation Pilot Plant) site in New Mexico as the repository. The vessel under test simulates gas generation by contaminated materials stored underground at the WIPP site. The test vessel is 90% filled with a mixture of contaminated material and salt brine (from the WIPP site) and pressurized with N2-1% He at 2500 psia. The test acceptance criterion is a leakage rate of less than 10^-7 cc/s at 2500 psia. The bell jar method is used to determine the leakage rate using a mass spectrometer leak detector (MSLD). The efficient MSLD and an Al bell jar replaced a costly, time-consuming pressure decay test setup. Misinterpretation of test criterion data caused lengthy delays, resulting in the development of a unique procedure. Reevaluation of the initial intent of the test criteria resulted in the leak tolerances being corrected and test efficiency improved.

  5. A novel test rig to investigate under-platform damper dynamics

    Science.gov (United States)

    Botto, Daniele; Umer, Muhammad

    2018-02-01

    In the field of turbomachinery, vibration amplitude is often reduced by dissipating the kinetic energy of the blades with devices that utilize dry friction. Under-platform dampers, for example, are often placed on the underside of two consecutive turbine blades. Dampers are kept in contact with the under-platforms of the respective blades by means of the centrifugal force. If the damper is well designed, vibration of the blades instigates a relative motion between the under-platform and the damper. A friction force, which is a non-conservative force, arises in the contact and partly dissipates the vibration energy. Several contact models are available in the literature to simulate the contact between the damper and the under-platform. However, the actual dynamics of the blade-damper interaction have not yet been fully understood. Several test rigs have previously been developed to experimentally investigate the performance of under-platform dampers. The majority of these experimental setups aim to evaluate the overall damper efficiency in terms of the reduction in response amplitude of the blade for a given exciting force that simulates the aerodynamic loads. Unfortunately, the experimental data acquired on the blade dynamics do not provide enough information to understand the damper dynamics. Therefore, the uncertainty about the damper behavior remains a big issue. In this work, a novel experimental test rig has been developed to extensively investigate the damper dynamic behavior. A single replaceable blade is clamped in the rig with a specific clamping device. With this device the blade root is pressed against a groove machined in the test rig. The pushing force is controllable and measurable, to better simulate the actual centrifugal load acting on the blade. Two dampers, one on each side of the blade, are in contact with the blade under-platforms and with platforms on force-measuring supports. These supports have been specifically designed to measure the contact forces on the

  6. Safety Evaluation of Radioactive Material Transport Package under Stacking Test Condition

    International Nuclear Information System (INIS)

    Lee, Ju Chan; Seo, Ki Seog; Yoo, Seong Yeon

    2012-01-01

    A radioactive waste transport package was developed to transport eight drums of low and intermediate level waste (LILW) in accordance with IAEA and related domestic regulations. The package is classified as an industrial package, IP-2. An IP-2 package is required to undergo a free drop test and a stacking test. After the free drop and stacking tests, it should prevent the loss or dispersal of radioactive contents, and the loss of shielding integrity that would result in more than a 20% increase in the radiation level at any external surface of the package. The objective of this study is to establish the safety test method and procedure for the stacking test and to prove the structural integrity of the IP-2 package. The stacking test and analysis were performed with a compressive load equal to five times the weight of the package for a period of 24 hours, using a full-scale model. Strains and displacements were measured at the corner fittings of the package during the stacking test. The measured strains and displacements were compared with the analysis results, and there was good agreement. Because it is very difficult to measure the deflection at the container base, the maximum deflection of the container base was calculated by analysis. The maximum displacement at the corner fittings and the deflection at the container base were less than their allowable values. The dimensions of the test model, the thickness of the shielding material, and the bolt torque were measured before and after the stacking test. Throughout the stacking test, it was found that there was no loss or dispersal of radioactive contents and no loss of shielding integrity. Thus, the package was shown to comply with the requirements to maintain structural integrity under the stacking condition.

  7. Testing the rationality of DOE's energy price forecasts under asymmetric loss preferences

    International Nuclear Information System (INIS)

    Mamatzakis, E.; Koutsomanoli-Filippaki, A.

    2014-01-01

    This paper examines the rationality of the United States Department of Energy's (DOE) price forecasts for energy commodities, departing from the common assumption in the literature that DOE's forecasts are based on a symmetric underlying loss function with respect to positive vs. negative forecast errors. Instead, we opt for the methodology of Elliott et al. (2005), which allows testing the joint hypothesis of an asymmetric loss function and rationality, and reveals the underlying preferences of the forecaster. Results indicate the existence of asymmetries in the shape of the loss function for most energy categories, with preferences leaning towards optimism. Moreover, we also examine whether there is a structural break in those preferences over the examined period, 1997–2012. - Highlights: • Examine the rationality of DOE energy forecasts. • Departing from a symmetric underlying loss function. • Asymmetries exist in most energy prices. • Preferences lean towards optimism. • Examine structural breaks in those preferences
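
    The Elliott et al. (2005) methodology is built around a flexible loss family; the following is the commonly cited form (that this is the exact parameterization used in the paper is inferred from the cited methodology):

    ```latex
    % Flexible asymmetric loss over the forecast error e = y - \hat{y}:
    L(e; \alpha, p) = \left[\alpha + (1 - 2\alpha)\,\mathbf{1}(e < 0)\right] |e|^{p},
    \qquad 0 < \alpha < 1,\; p \in \{1, 2\}.
    % Symmetry corresponds to \alpha = 1/2; estimates of \alpha \neq 1/2
    % reveal asymmetric preferences (e.g., optimism), and rationality is
    % tested jointly with the estimated \alpha.
    ```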

  8. Degradation testing and failure analysis of DC film capacitors under high humidity conditions

    DEFF Research Database (Denmark)

    Wang, Huai; Nielsen, Dennis Achton; Blaabjerg, Frede

    2015-01-01

    Metallized polypropylene film capacitors are widely used for high-voltage DC-link applications in power electronic converters. They generally have better reliability performance compared to aluminum electrolytic capacitors under electro-thermal stresses within specifications. However, the degradation of the film capacitors is a concern in applications exposed to high humidity environments. This paper investigates the degradation of a type of plastic-boxed metallized DC film capacitors under different humidity conditions based on a total of 8700 h of accelerated testing and also post failure analysis. The test results are given by the measured data of capacitance and the equivalent series resistance. The degradation curves in terms of capacitance reduction are obtained under the conditions of 85% Relative Humidity (RH), 70% RH, and 55% RH. The post failure analysis of the degraded samples...

  9. Yo-Yo Intermittent Recovery Test Performance in Subelite Gaelic Football Players From Under Thirteen to Senior Age Groups.

    Science.gov (United States)

    Roe, Mark; Malone, Shane

    2016-11-01

    Roe, M and Malone, S. Yo-Yo intermittent recovery test performance in subelite Gaelic football players from under thirteen to senior age groups. J Strength Cond Res 30(11): 3187-3193, 2016. Gaelic football is indigenous to Ireland and has locomotion profiles similar to soccer and Australian Football. Given the increasing attention on long-term player development, investigations of age-related variation in Yo-Yo intermittent recovery test level 1 (Yo-YoIR1) performance may provide useful information for talent identification, program design, and player monitoring. Therefore, the aim of this study was to evaluate Yo-YoIR1 performance across Gaelic football age groups. Male participants (n = 355) were recruited from division one Gaelic football teams. Participants were allocated to one of seven groups according to age, from under 13 (U13), under 14 (U14), under 15 (U15), under 16 (U16), minor, and under 21 (U21), to senior. Total Yo-YoIR1 distance (m) increased progressively from U13 (885 ± 347 m) to U16 (1,595 ± 380 m), equating to a rate of change of 180.2%. In comparison to U13, total distance at minor (1,206 ± 327 m) increased by 136.4%. Subsequent increases were observed in U21 (1,585 ± 445 m) and senior players (2,365 ± 489 m). Minimum (800-880 m) and maximum (2,240-2,280 m) total distances were comparable for U15, U16, and U21 players. Differences in total distance (m) for all age groups were statistically significant when compared to U13 players (p < 0.05). The magnitude of the difference between age groups for total distance was deemed to be large (effect size > 0.8). Similar trends were observed for maximum velocity and estimated V[Combining Dot Above]O2max. The evolution of Yo-YoIR1 performance in Gaelic football players from adolescence to adulthood highlights how maturation may influence sport-related running ability. Changes in Yo-YoIR1 performance should be closely monitored to optimize interventions for individuals transitioning across age groups.
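
    Estimated VO2max values of the kind reported here are commonly derived from Yo-Yo IR1 distance with the Bangsbo et al. (2008) equation; whether this study used exactly that equation is an assumption:

    ```python
    # Commonly cited conversion from Yo-Yo IR1 distance (m) to estimated
    # VO2max (ml/kg/min); group distances taken from the abstract.
    def yoyo_ir1_vo2max(distance_m: float) -> float:
        return distance_m * 0.0084 + 36.4

    for group, dist in [("U13", 885), ("minor", 1206), ("senior", 2365)]:
        print(group, round(yoyo_ir1_vo2max(dist), 1))
    ```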

  10. 76 FR 41838 - Order Approving Adjustment for Inflation of the Dollar Amount Tests in Rule 205-3 Under the...

    Science.gov (United States)

    2011-07-15

    ... COMMISSION Order Approving Adjustment for Inflation of the Dollar Amount Tests in Rule 205-3 Under the... of assets under management, relationship with a registered investment adviser, and such other factors... investment adviser immediately after entering into the advisory contract (``assets-under-management test...

  11. Comparative statistical properties of expected utility and area under the ROC curve for laboratory studies of observer performance in screening mammography

    Science.gov (United States)

    Abbey, Craig K; Gallas, Brandon D; Boone, John M; Niklason, Loren T; Hadjiiski, Lubomir M; Sahiner, Berkman; Samuelson, Frank W

    2014-01-01

    Rationale and Objectives: Our objective is to determine whether expected utility (EU) and the area under the ROC curve (AUC) are consistent with one another as endpoints of observer performance studies in mammography. These two measures characterize ROC performance somewhat differently. We compare these two study endpoints at the level of individual reader effects, statistical inference, and components of variance across readers and cases. Materials and Methods: We reanalyze three previously published laboratory observer performance studies that investigate various x-ray breast imaging modalities using EU and AUC. The EU measure is based on recent estimates of relative utility for screening mammography. Results: The AUC and EU measures are correlated across readers for individual modalities (r = 0.93) and differences in modalities (r = 0.94 to 0.98). Statistical inference for modality effects based on multi-reader multi-case analysis is very similar, with significant results (p < 0.05) in exactly the same conditions. Power analyses show mixed results across studies, with a small increase in power on average for EU that corresponds to approximately a 7% reduction in the number of readers. Despite a large number of crossing ROC curves (59% of readers), modality effects only rarely have opposite signs for EU and AUC (6%). Conclusions: We do not find any evidence of systematic differences between EU and AUC in screening mammography observer studies. Thus, when utility approaches are viable (i.e. an appropriate value of relative utility exists), practical effects such as statistical efficiency may be used to choose study endpoints. PMID:24594418

  12. Hyperspectral Imaging in Tandem with R Statistics and Image Processing for Detection and Visualization of pH in Japanese Big Sausages Under Different Storage Conditions.

    Science.gov (United States)

    Feng, Chao-Hui; Makino, Yoshio; Yoshimura, Masatoshi; Thuyet, Dang Quoc; García-Martín, Juan Francisco

    2018-02-01

    The potential of hyperspectral imaging over the 380 to 1000 nm range was used to determine the pH of cooked sausages after different storage conditions (4 °C for 1 d; 35 °C for 1, 3, and 5 d). The mean spectra of the sausages were extracted from the hyperspectral images, and a partial least squares regression (PLSR) model was developed to relate the spectral profiles to the pH of the cooked sausages. Eleven important wavelengths were selected based on the regression coefficient values. The PLSR model established using the optimal wavelengths showed good precision, with a prediction coefficient of determination (Rp²) of 0.909 and a root mean square error of prediction (RMSEP) of 0.035. The prediction map illustrating pH indices in sausages was for the first time developed using R statistics. The overall results suggest that hyperspectral imaging combined with PLSR and R statistics is capable of quantifying and visualizing the pH evolution of sausages under different storage conditions. In this paper, hyperspectral imaging is for the first time used to detect pH in cooked sausages using R statistics, which provides useful information for researchers who do not have access to Matlab. Eleven optimal wavelengths were successfully selected and used to simplify the PLSR model established on the full wavelength range. This simplified model achieved a high Rp² (0.909) and a low RMSEP (0.035), which can be useful for the design of multispectral imaging systems. © 2017 Institute of Food Technologists®.
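
    The PLSR workflow described above (fit on mean spectra, then rank wavelengths by regression-coefficient magnitude) can be sketched as follows; the study itself used R, so this scikit-learn version with simulated data is an illustrative assumption:

    ```python
    # Hedged PLSR sketch: fit, compute RMSEP, and pick important wavelengths.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(6)
    wavelengths = np.linspace(380, 1000, 300)
    X = rng.normal(size=(80, wavelengths.size))      # mean spectra, 80 samples
    ph = 6.2 + 0.4 * X[:, 150] + 0.1 * rng.normal(size=80)   # toy pH target

    X_tr, X_te, y_tr, y_te = train_test_split(X, ph, random_state=0)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)

    rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
    top = wavelengths[np.argsort(np.abs(pls.coef_.ravel()))[-11:]]
    print(f"RMSEP = {rmsep:.3f}; 11 most important wavelengths: {np.sort(top)}")
    ```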

  13. Response of unirradiated and irradiated PWR fuel rods tested under power-cooling-mismatch conditions

    International Nuclear Information System (INIS)

    MacDonald, P.E.; Quapp, W.J.; Martinson, Z.R.; McCardell, R.K.; Mehner, A.S.

    1978-01-01

    This report summarizes the results from the single-rod power-cooling-mismatch (PCM) and irradiation effects (IE) tests conducted to date in the Power Burst Facility (PBF) at the U.S. DOE Idaho National Engineering Laboratory. This work was performed for the U.S. NRC under contract to the Department of Energy. These tests are part of the NRC Fuel Behavior Program, which is designed to provide data for the development and verification of analytical fuel behavior models that are used to predict fuel response to abnormal or postulated accident conditions in commercial LWRs. The mechanical, chemical, and thermal response of both previously unirradiated and previously irradiated LWR-type fuel rods tested under power-cooling-mismatch conditions is discussed. A brief description of the test designs is presented. The results of the PCM thermal-hydraulic studies are summarized. Primary emphasis is placed on the behavior of the fuel and cladding during and after stable film boiling. (orig.)

  14. Development of in-situ rock shear test under low compressive to tensile normal stress

    International Nuclear Information System (INIS)

    Nozaki, Takashi; Shin, Koichi

    2003-01-01

    The purpose of this study is to develop an in-situ rock shear testing method to evaluate the shear strength under low normal stress condition including tensile stress, which is usually ignored in the assessment of safety factor of the foundations for nuclear power plants against sliding. The results are as follows. (1) A new in-situ rock shear testing method is devised, in which tensile normal stress can be applied on the shear plane of a specimen by directly pulling up a steel box bonded to the specimen. By applying the counter shear load to cancel the moment induced by the main shear load, it can obtain shear strength under low normal stress. (2) Some model tests on Oya tuff and diatomaceous mudstone have been performed using the developed test method. The shear strength changed smoothly from low values at tensile normal stresses to higher values at compressive normal stresses. The failure criterion has been found to be bi-linear on the shear stress vs normal stress plane. (author)

  15. Statistical tests against systematic errors in data sets based on the equality of residual means and variances from control samples: theory and applications.

    Science.gov (United States)

    Henn, Julian; Meindl, Kathrin

    2015-03-01

    Statistical tests are applied for the detection of systematic errors in data sets from least-squares refinements or other residual-based reconstruction processes. Samples of the residuals of the data are tested against the hypothesis that they belong to the same distribution. For this it is necessary that they show the same mean values and variances within the limits given by statistical fluctuations. When the samples differ significantly from each other, they are not from the same distribution within the limits set by the significance level. Therefore they cannot originate from a single Gaussian function in this case. It is shown that a significance cutoff results in exactly this case. Significance cutoffs are still frequently used in charge-density studies. The tests are applied to artificial data with and without systematic errors and to experimental data from the literature.
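
    The kind of check described — whether two residual samples share the same mean and variance within statistical fluctuations — can be sketched with standard two-sample tests; the paper's exact test construction may differ:

    ```python
    # Hedged sketch: flag systematic differences between residual samples.
    import numpy as np
    from scipy.stats import ttest_ind, levene

    rng = np.random.default_rng(7)
    res_a = rng.normal(0.0, 1.0, size=400)   # residuals from control sample A
    res_b = rng.normal(0.3, 1.0, size=400)   # residuals from sample B (shifted)

    t_stat, p_mean = ttest_ind(res_a, res_b, equal_var=False)
    w_stat, p_var = levene(res_a, res_b)
    print(f"mean-equality p = {p_mean:.3g}, variance-equality p = {p_var:.3g}")
    # Small p-values indicate the samples cannot originate from a single
    # Gaussian distribution within the chosen significance level.
    ```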

  16. Introduction to Large-sized Test Facility for validating Containment Integrity under Severe Accidents

    International Nuclear Information System (INIS)

    Na, Young Su; Hong, Seongwan; Hong, Seongho; Min, Beongtae

    2014-01-01

    An overall assessment of containment integrity can be conducted properly by examining the hydrogen behavior in the containment building. Under severe accidents, a large amount of hydrogen gas can be generated by metal oxidation and corium-concrete interaction. Hydrogen behavior in the containment building strongly depends on complicated thermal hydraulic conditions with mixed gases and steam. The performance of a PAR can be directly affected by the thermal hydraulic conditions, steam contents, gas mixture behavior, and aerosol characteristics, as well as by the operation of other engineered safety systems such as a spray. The models in computer codes for severe accident assessment can be validated based on experimental results from a large-sized test facility. The Korea Atomic Energy Research Institute (KAERI) is now preparing a large-sized test facility to examine in detail the safety issues related to hydrogen, including the performance of safety devices such as a PAR, in various severe accident situations. This paper introduces the KAERI test facility for validating containment integrity under severe accidents. To validate containment integrity, a large-sized test facility is necessary for simulating the complicated phenomena induced by the large amounts of steam and gases, especially hydrogen, released into the containment building under severe accidents. A pressure vessel 9.5 m in height and 3.4 m in diameter was designed for the KAERI test facility for validating containment integrity, based on the THAI test facility, whose experimental safety and reliable measurement systems have been certified over a long period. This large-sized pressure vessel, operated with steam and iodine as a corrosive agent, was made of stainless steel 316L for corrosion resistance over a long operating time, and the vessel was installed at KAERI in March 2014. In the future, the control systems for temperature and pressure in the vessel will be constructed, and the measurement system

  17. Test and Analyses of a Composite Multi-Bay Fuselage Panel Under Uni-Axial Compression

    Science.gov (United States)

    Li, Jian; Baker, Donald J.

    2004-01-01

    A composite panel containing three stringers and two frames, cut from a vacuum-assisted resin transfer molded (VaRTM) stitched fuselage article, was tested under uni-axial compression loading. The stringers and frames divided the panel into six bays, with two columns of three bays each along the compressive loading direction. The two frames were supported at the ends with pins to restrict out-of-plane translation. The free edges of the panel were constrained by knife-edges. The panel was modeled with shell finite elements and analyzed with the ABAQUS nonlinear solver. The nonlinear predictions were compared with the test results in terms of out-of-plane displacements, back-to-back surface strains on the stringer flanges, and back-to-back surface strains at the centers of the skin bays. The analysis predictions were in good agreement with the test data up to post-buckling.

  18. Standard test method for damage to contacting solid surfaces under fretting conditions

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the study or ranking of the susceptibility of candidate materials to fretting corrosion or fretting wear, for the purpose of material selection for applications where fretting corrosion or fretting wear can limit serviceability. 1.2 This test method uses a tribological bench test apparatus with a mechanism or device that produces the necessary relative motion between a contacting hemispherical rider and a flat counterface. The rider is pressed against the flat counterface with a loading mass. The test method is intended for use in room-temperature air, but future editions could include fretting in the presence of lubricants or other environments. 1.3 The purpose of this test method is to rub two solid surfaces together under controlled fretting conditions and to quantify the damage to both surfaces in units of volume loss. 1.4 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.5...

  19. Illustration of the WPS benefit through BATMAN test series: Tests on large specimens under WPS loading configurations

    Energy Technology Data Exchange (ETDEWEB)

    Yuritzinn, T.; Ferry, L.; Chapuliot, S.; Mongabure, P. [CEA, DEN/DANS/DM2S/SEMT/LISN, Nucl Engn Div, Syst and Struct Modeling Dept, F-91191 Gif Sur Yvette, (France); Moinereau, D.; Dahl, A. [EdF/MMC, F-77818 Moret Sur Loing, (France); Gilles, P. [AREVA-NP, F-92084 Paris, (France)

    2008-07-01

    To study the effects of warm pre-stressing (WPS) on the toughness of reactor pressure vessel steel, the Commissariat à l'Énergie Atomique, in collaboration with Électricité de France and AREVA-NP, has carried out a study combining modeling and a series of experiments on large specimens subjected to a thermal shock or to isothermal cooling. The tests were made on 18MND5 ferritic steel bars containing either a short or a long fatigue pre-crack. The warm pre-stressing effect was confirmed in both cases: a fast thermal shock creating a gradient across the thickness of the bar, and gradual uniform cooling. In both cases, no propagation was observed during the thermal transient. Fracture occurred under low-temperature conditions at the end of the test, when the tensile load was increased, and the failure loads recorded were substantially higher than those during pre-stressing. To illustrate the benefit of the WPS effect, numerical interpretations were performed using either global-approach or local-approach criteria. The WPS effect, and the capability of the models to predict it, were thus clearly demonstrated. (authors)

  20. Zircaloy PWR fuel cladding deformation tests under mainly convective cooling conditions

    International Nuclear Information System (INIS)

    Hindle, E.D.; Mann, C.A.

    1980-01-01

    In a loss-of-coolant accident the temperature of the cladding of the fuel rods may rise to levels (650-810 °C) where the ductility of Zircaloy is high (approximately 80%). The net outward pressure that results if the coolant pressure falls to a small fraction of its normal working value produces stresses in the cladding which can result in large strains through secondary creep. An earlier study of the deformation of specimens of PWR Zircaloy cladding tubing 450 mm long under internal pressure had shown that strains of over 50% could be produced over considerable lengths (greater than twenty tube diameters). Extended deformation of this sort might be unacceptable if it occurred in a fuel element. The previous tests had been carried out under conditions of uniform radiative heat loss, and the work reported here extends the study to conditions of mainly convective heat loss, believed to be more representative of a fuel element following a loss of coolant. Zircaloy-4 cladding specimens 450 mm long were filled with alumina pellets and tested at temperatures between 630 and 845 °C in flowing steam at atmospheric pressure. Internal test pressures were in the range 2.9-11.0 MPa (400-1600 lbf/in²). Maximum strains were observed of the same magnitude as those seen in the previous tests, but the shape of the deformation differed; in these tests the deformation progressively increased in the direction of the steam flow. These results are compared with those from multi-rod tests elsewhere, and it is suggested that heat transfer has a dominant effect in determining deformation. The implications for the behaviour of fuel elements in a loss-of-coolant accident are outlined. (author)

  1. Reaction time in the agility test under simulated competitive and noncompetitive conditions.

    Science.gov (United States)

    Zemková, Erika; Vilman, Tomáš; Kováčiková, Zuzana; Hamar, Dušan

    2013-12-01

    The study evaluates reaction time in the Agility Test under simulated competitive and noncompetitive conditions. A group of 16 fit men performed, in random order, 2 versions of the Agility Test: the noncompetitive Agility Single and the Agility Dual in the form of a simulated competition. In both cases, subjects had to touch, as fast as possible, with either the left or the right foot, 1 of 4 mats located in the 4 corners outside an 80 cm square. Mats had to be touched in accordance with the location of the stimulus in one of the corners of the screen. The test consisted of 20 visual stimuli whose locations on the screen were generated randomly, with stimulus times varying from 500 to 2,500 milliseconds. The result was the total reaction time (RT) for all 20 reactions, measured by the PC-based system FiTRO Agility Check. Results showed a significantly shorter RT in the Agility Dual than in the Agility Single Test (690.6 ± 83.8 milliseconds and 805.8 ± 101.1 milliseconds, respectively). Further comparison of RT under noncompetitive and simulated competitive conditions for the best 8 subjects, who proceeded to a second match, showed a decrease from 781.3 ± 111.2 milliseconds to 693.6 ± 97.8 milliseconds in the first match and to 637.0 ± 53.0 milliseconds in the second match. It may be concluded that RT is shorter when the Agility Test is performed under simulated competitive rather than noncompetitive conditions. The Agility Test in the form of a competition may be used for children and young athletes to enhance their attention level and motivation.

  2. Optimizing the impact of temperature on bio-hydrogen production from food waste and its derivatives under no pH control using statistical modelling

    Science.gov (United States)

    Arslan, C.; Sattar, A.; Ji, C.; Sattar, S.; Yousaf, K.; Hashim, S.

    2015-11-01

    The effect of temperature on bio-hydrogen production by co-digestion of sewerage sludge with food waste and two of its derivatives, i.e. noodle waste and rice waste, was investigated by statistical modelling. Experimental results showed that increasing the temperature from mesophilic (37 °C) to thermophilic (55 °C) was an effective means of increasing bio-hydrogen production from food waste and noodle waste, but it had a negative impact on bio-hydrogen production from rice waste. The maximum cumulative bio-hydrogen production of 650 mL was obtained from noodle waste under the thermophilic temperature condition. Most of the production was observed during the first 48 h of incubation and continued until 72 h of incubation. Over this interval the pH declined from a starting value of 7 to 4.3 and 4.4 under mesophilic and thermophilic conditions, respectively. Most of the glucose consumption was also observed within 72 h of incubation, with the maximum consumption during the first 24 h, the same period in which the maximum pH drop occurred. The maximum hydrogen yields of 82.47 mL VS⁻¹, 131.38 mL COD⁻¹, and 44.90 mL glucose⁻¹ were obtained from thermophilic food waste, thermophilic noodle waste and mesophilic rice waste, respectively. The production of volatile fatty acids increased with time and temperature in the food waste and noodle waste reactors, whereas it decreased with temperature in the rice waste reactors. The statistical modelling returned good results, with high values of the coefficient of determination (R²) for each waste type, and 3-D response surface plots were generated from the fitted models. These plots gave a better understanding of the impact of temperature and incubation time on the bio-hydrogen production trend, glucose consumption during incubation, and volatile fatty acid production.
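    The second-order response-surface fit described above can be sketched with ordinary least squares; the data points, factor levels and model form below are invented placeholders, not the study's measurements:

```python
# Quadratic response surface: H2 yield as a function of temperature and time.
import numpy as np

temp = np.array([37, 37, 37, 46, 46, 46, 55, 55, 55], dtype=float)  # deg C
time = np.array([24, 48, 72, 24, 48, 72, 24, 48, 72], dtype=float)  # h
h2 = np.array([210, 380, 480, 260, 420, 560, 300, 500, 650], dtype=float)  # mL

# Design matrix for y = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t
X = np.column_stack([np.ones_like(temp), temp, time,
                     temp**2, time**2, temp * time])
beta, *_ = np.linalg.lstsq(X, h2, rcond=None)

pred = X @ beta
r2 = 1.0 - np.sum((h2 - pred)**2) / np.sum((h2 - h2.mean())**2)
print(f"R^2 = {r2:.3f}")  # high R^2, as reported for the fitted models
```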

  3. ASSESSMENT OF SOIL COMPACTION UNDER DIFFERENT MANAGEMENT REGIMES USING DOUBLE-CYCLE UNIAXIAL COMPRESSION TEST

    Directory of Open Access Journals (Sweden)

    Víctor Manuel Vaca García

    2014-04-01

    Full Text Available The impact of wheeled farm machinery traffic on soil compaction has not been well documented in Mexico, particularly in the maize-producing area of the Toluca-Atlacomulco Valley, which features a Vertisol soil type. In addition, laboratory methods are needed that can imitate field conditions and provide measurements that are sensitive, reliable and appropriate for monitoring changes in compaction and other physical soil properties while reducing destructive sampling in the field. The objective of this research was to use double-cycle uniaxial compression, penetration resistance and cutting force tests to assess the response of a Vertisol, in terms of hardness, cohesiveness and adhesiveness, when compacted by wheel traffic in three different tillage systems: zero tillage (ZT), minimal tillage (MT) and conventional tillage (CT). The study was conducted in Toluca, State of Mexico, in 2011. Soil samples were collected from the tractor's wheel track, with three repetitions at two depths. All of the variables were measured using a universal testing machine; for the penetration resistance and cutting force tests, standard screwdrivers were used as probes. According to the uniaxial compression test, CT increased soil hardness relative to the other systems (47% higher on average). MT showed the highest adhesiveness value (0.1 N s⁻¹), but no statistically significant differences in cohesiveness were found among tillage systems. In the ZT system, higher penetration resistance was observed in the subsoil than in the topsoil. MT gave the maximum cutting force value (54.55 N), while there were no significant differences between the other two systems. In these trials the universal testing machine was sensitive enough to detect differences in the soil physical properties of the different tillage systems.

  4. A Novel Approach for Dynamic Testing of Total Hip Dislocation under Physiological Conditions.

    Directory of Open Access Journals (Sweden)

    Sven Herrmann

    Full Text Available Constantly high rates of dislocation-related complications of total hip replacements (THRs) show that contributing factors like implant position and design, soft tissue condition and the dynamics of physiological motions are not yet fully understood. As in vivo measurements of excessive motions are not possible due to ethical objections, a comprehensive approach is proposed that is capable of testing THR stability under dynamic, reproducible and physiological conditions. The approach is based on a hardware-in-the-loop (HiL) simulation in which a robotic physical setup interacts with a computational musculoskeletal model based on inverse dynamics. A major objective of this work was the validation of the HiL test system against in vivo data derived from patients with instrumented THRs. Moreover, the impact of certain test conditions, such as joint lubrication, implant position, load level in terms of body mass, and removal of muscle structures, was evaluated within several HiL simulations. The outcomes for a normal sitting-down and standing-up maneuver revealed good agreement in trend and magnitude with in vivo measured hip joint forces. For a deep maneuver with femoral adduction, lubrication was shown to cause lower friction torques than dry conditions. Similarly, it could be demonstrated that smaller cup anteversion and inclination lead to earlier impingement in flexion motion, including pelvic tilt, for selected combinations of cup and stem positions. Reducing body mass did not influence the impingement-free range of motion or dislocation behavior; however, higher resisting torques were observed under higher loads. Muscle removal emulating a posterior surgical approach altered the THR loading and the instability process in contrast to a reference case with intact musculature. Based on the presented data, it can be concluded that the HiL test system is able to reproduce joint dynamics comparable to those present in THR patients.

  5. Training and testing ERP-BCIs under different mental workload conditions

    Science.gov (United States)

    Ke, Yufeng; Wang, Peiyuan; Chen, Yuqian; Gu, Bin; Qi, Hongzhi; Zhou, Peng; Ming, Dong

    2016-02-01

    Objective. As one of the most popular and extensively studied paradigms of brain-computer interfaces (BCIs), event-related potential-based BCIs (ERP-BCIs) are usually built and tested in ideal laboratory settings, with subjects concentrating on the stimuli and intentionally avoiding possible distractors. This study examines the effect of simultaneous mental activities on ERP-BCIs by manipulating the level of mental workload during the training and/or testing of an ERP-BCI. Approach. Mental workload was manipulated during the training or testing of a row-column P300 speller to investigate how, and to what extent, spelling performance and the ERPs evoked by the oddball stimuli are affected by simultaneous mental workload. Main results. The responses of certain ERP components (the temporal-occipital N200 and the late reorienting negativity) evoked by the oddball stimuli, and the classifiability of ERP features between targets and non-targets, decreased as the mental workload encountered by the subject increased. However, the effect of mental workload on the performance of an ERP-BCI was not always negative, but depended on the conditions under which the ERP-BCI was built and applied. The performance of an ERP-BCI built under an ideal lab setting, without any irrelevant mental activities, declined with increasing mental workload in the testing data. However, performance was significantly improved when an ERP-BCI was built under an appropriate mental workload level, compared to one built under speller-only conditions. Significance. The adverse effect of concurrent mental activities may present a challenge for ERP-BCIs trained in ideal lab settings but used in daily work, especially when users are performing demanding mental processing. On the other hand, the positive effects of mental workload in the training data suggest that introducing an appropriate mental workload during the training of ERP-BCIs is of potential benefit to the

  6. Neuro magnetic resonance spectroscopy using wavelet decomposition and statistical testing identifies biochemical changes in people with spinal cord injury and pain.

    Science.gov (United States)

    Stanwell, Peter; Siddall, Philip; Keshava, Nirmal; Cocuzzo, Daniel; Ramadan, Saadallah; Lin, Alexander; Herbert, David; Craig, Ashley; Tran, Yvonne; Middleton, James; Gautam, Shiva; Cousins, Michael; Mountford, Carolyn

    2010-11-01

    Spinal cord injury (SCI) can be accompanied by chronic pain, the mechanisms of which are poorly understood. Here we report that magnetic resonance spectroscopy measurements from the brain, collected at 3T and processed using wavelet-based feature extraction and classification algorithms, can identify biochemical changes that distinguish control subjects from subjects with SCI, as well as subdivide the SCI group into those with and without chronic pain. The results from control subjects (n=10) were compared to those from subjects with SCI (n=10). The SCI cohort comprised subjects with chronic neuropathic pain (n=5) and subjects without chronic pain (n=5). The wavelet-based decomposition of frequency-domain MRS signals employs statistical significance testing to identify the features best suited to discriminating the different classes. Moreover, the features benefit from careful attention to the post-processing of the spectroscopy data prior to the comparison of the three cohorts. The spectroscopy data from the thalamus best distinguished control subjects from those with SCI, with a sensitivity and specificity of 0.9 (percentage of correct classification). The spectroscopy data obtained from the prefrontal cortex and the anterior cingulate cortex both distinguished between SCI subjects with chronic neuropathic pain and those without pain, with a sensitivity and specificity of 1.0. In this study, where two underlying mechanisms co-exist (i.e. SCI and pain), the thalamic changes appear to be linked more strongly to SCI, while the anterior cingulate cortex and prefrontal cortex changes appear to be specifically linked to the presence of pain. Copyright 2010 Elsevier Inc. All rights reserved.
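    The wavelet-plus-significance-testing pipeline can be outlined as below, assuming the PyWavelets package; the toy "spectra", cohort sizes and the db4/level-3 decomposition are placeholder choices, not the authors' pipeline:

```python
# Wavelet-based feature extraction with per-feature significance testing.
import numpy as np
import pywt
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(0, 1, (10, 256))                             # toy class 1
patient = rng.normal(0, 1, (10, 256)) + np.linspace(0, 0.5, 256)  # toy class 2

def wavelet_features(x):
    # Concatenate approximation and detail coefficients into one vector
    return np.concatenate(pywt.wavedec(x, 'db4', level=3))

F_c = np.array([wavelet_features(x) for x in control])
F_p = np.array([wavelet_features(x) for x in patient])

# Keep only features whose between-class difference is significant
_, p = stats.ttest_ind(F_c, F_p, axis=0)
selected = np.where(p < 0.01)[0]
print(f"{selected.size} of {F_c.shape[1]} wavelet features retained")
```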

  7. Validity of Alcohol Use Disorder Identification Test-Korean Revised Version for Screening Alcohol Use Disorder according to Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition Criteria.

    Science.gov (United States)

    Chang, Jung Wei; Kim, Jong Sung; Jung, Jin Gyu; Kim, Sung Soo; Yoon, Seok Joon; Jang, Hak Sun

    2016-11-01

    The Alcohol Use Disorder Identification Test (AUDIT) has been widely used to identify alcohol use disorder (AUD). This study evaluated the validity of the AUDIT-Korean revised version (AUDIT-KR) for screening AUD according to Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5) criteria. The research was conducted with 443 subjects who visited the Chungnam National University Hospital for a comprehensive medical examination. All subjects completed the demographic questionnaire and the AUDIT-KR without assistance. Subjects were divided into two groups according to DSM-5 criteria: an AUD group, comprising patients who met the criteria for AUD (120 males and 21 females), and a non-AUD group, comprising 146 males and 156 females who did not. The appropriate cut-off values, sensitivity, specificity, and positive and negative predictive values of the AUDIT-KR were evaluated. The mean±standard deviation AUDIT-KR scores were 10.32±7.48 points in males and 3.23±4.42 points in females. The area under the receiver operating characteristic curve (95% confidence interval, CI) of the AUDIT-KR for identifying AUD was 0.884 (0.840-0.920) in males and 0.962 (0.923-0.985) in females. The optimal cut-off value of the AUDIT-KR was 10 points for males (sensitivity, 81.90%; specificity, 81.33%; positive predictive value, 77.2%; negative predictive value, 85.3%) and 5 points for females (sensitivity, 100.00%; specificity, 88.54%; positive predictive value, 52.6%; negative predictive value, 100.0%). The AUDIT-KR has high reliability and validity for identifying AUD according to DSM-5 criteria.
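    The cut-off selection reported here is a standard ROC analysis; a hedged sketch assuming scikit-learn, with synthetic scores and labels in place of the study data, using Youden's J to pick the threshold:

```python
# ROC curve, AUC and an optimal screening cut-off from questionnaire scores.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
labels = np.r_[np.ones(140), np.zeros(300)]  # 1 = AUD by DSM-5 (synthetic)
scores = np.r_[rng.normal(12, 5, 140), rng.normal(5, 4, 300)]  # AUDIT-like

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.3f}; cut-off ~ {thresholds[best]:.1f} "
      f"(sens = {tpr[best]:.2f}, spec = {1 - fpr[best]:.2f})")
```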

  8. Preference option randomized design (PORD) for comparative effectiveness research: Statistical power for testing comparative effect, preference effect, selection effect, intent-to-treat effect, and overall effect.

    Science.gov (United States)

    Heo, Moonseong; Meissner, Paul; Litwin, Alain H; Arnsten, Julia H; McKee, M Diane; Karasz, Alison; McKinley, Paula; Rehm, Colin D; Chambers, Earle C; Yeh, Ming-Chin; Wylie-Rosett, Judith

    2017-01-01

    Comparative effectiveness research trials in real-world settings may require participants to choose between preferred intervention options. A randomized clinical trial with parallel experimental and control arms is straightforward and regarded as the gold-standard design, but by design it requires participants to comply with a randomly assigned intervention regardless of their preference. The randomized clinical trial may therefore impose impractical limitations when planning comparative effectiveness research trials. To accommodate participants' preferences when they are expressed, and to maintain randomization, we propose an alternative design that allows participants to express a preference after randomization, which we call a "preference option randomized design (PORD)". In contrast to other preference designs, which ask whether or not participants consent to the assigned intervention after randomization, the crucial feature of the preference option randomized design is its unique informed-consent process before randomization. Specifically, the consent process informs participants that they can opt out and switch to the other intervention only if, after randomization, they actively express the desire to do so. Participants who do not independently express an explicit alternate preference, or who assent to the randomly assigned intervention, are considered not to have an alternate preference. In sum, the preference option randomized design aims to maximize retention, minimize the possibility of forced assignment for any participant, and maintain randomization by allowing participants with no or equal preference to represent random assignments. This design scheme makes it possible to define five effects (comparative, preference, selection, intent-to-treat, and overall/as-treated), interconnected with each other through common design parameters, which collectively guide decision making between interventions. Statistical power functions for testing
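    The paper's power functions are specific to the five PORD effects; as a generic stand-in, a conventional two-arm power calculation (assuming statsmodels, with an invented effect size) shows the kind of computation involved:

```python
# Sample size and power for a two-arm comparison at a given effect size.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
power_at_200 = analysis.power(effect_size=0.3, nobs1=200, alpha=0.05)
print(f"n per arm for 80% power: {n_per_arm:.0f}; "
      f"power with 200 per arm: {power_at_200:.2f}")
```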

  9. Optimized lower leg injury probability curves from postmortem human subject tests under axial impacts.

    Science.gov (United States)

    Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Szabo, Aniko

    2014-01-01

    Derive optimum injury probability curves describing human tolerance of the lower leg using parametric survival analysis. The study reexamined lower leg postmortem human subject (PMHS) data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot; both injury and noninjury tests were included. Outcomes were identified from pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and the distal tibia-fibula complex (including pylon fractures), representing severities at Abbreviated Injury Scale (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age as the covariable. Censoring statuses depended on the experimental outcomes. Parameters of the survival models were estimated by maximum likelihood, and the dfbetas statistic was used to identify overly influential samples. The best fit among the Weibull, log-normal, and log-logistic distributions was chosen by the Akaike information criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution, and the relative sizes of the intervals were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. The mean age, stature, and weight were 58.2±15.1 years, 1.74±0.08 m, and 74.9±13.8 kg, respectively. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the optimum function compared to the other 2 distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for 25-, 45-, and 65-year-olds at the 5, 25, and 50% risk levels for lower leg fracture. For ages 25, 45, and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 k
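    The model-selection step (censored fits compared by AIC) can be sketched with the lifelines package; the forces and censoring flags below are synthetic placeholders for the PMHS data:

```python
# Fit Weibull, log-normal and log-logistic models to censored force data
# and retain the distribution with the lowest AIC, as the record describes.
import numpy as np
from lifelines import LogLogisticFitter, LogNormalFitter, WeibullFitter

rng = np.random.default_rng(3)
peak_force = rng.weibull(2.5, 40) * 8.0  # kN, synthetic stand-in
injured = rng.random(40) < 0.7           # False = right-censored (no injury)

for name, f in [("weibull", WeibullFitter()),
                ("log-normal", LogNormalFitter()),
                ("log-logistic", LogLogisticFitter())]:
    f.fit(peak_force, event_observed=injured)
    print(f"{name}: AIC = {f.AIC_:.1f}")
```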

  10. Cancer Statistics

    Science.gov (United States)


  11. Caregiving Statistics

    Science.gov (United States)


  12. Identification of Elderly Falling Risk by Balance Tests Under Dual Tasks Conditions

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Aslankhani

    2010-03-01

    Full Text Available Objectives: This study aimed to distinguish elderly fallers from non-fallers using balance tests under dual-task conditions. Methods & Materials: This was a comparative analytical study. Subjects were recruited from three parks in Tehran: 20 older adults with no history of falls (aged 75.95±6.28 years) and 21 older adults with a history of 2 or more falls in the previous year (aged 72.50±7.31 years). All subjects performed the Timed Up & Go test under 3 conditions: the standard Timed Up & Go, Timed Up & Go while counting numbers randomly [TUG cognitive], and Timed Up & Go while carrying a full cup of water [TUG manual]. A multivariate analysis of variance and logistic regression analyses were performed. Results: The results showed a significant difference between elderly fallers and non-fallers on the composite fall-risk dependent variable (P=0.0005), with non-fallers scoring better than fallers. The results also showed that the TUG cognitive condition can predict falls in the elderly (P=0.013). Conclusion: Balance under cognitive dual-task conditions could therefore be a useful method for identifying fall risk and for planning dual-task exercise programs and physiotherapy to prevent falls.
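    The logistic-regression step named in the methods can be sketched as follows, assuming scikit-learn; the TUG times and fall labels are synthetic, not the study's measurements:

```python
# Predict faller status from TUG, TUG-cognitive and TUG-manual times.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 41
times = np.column_stack([rng.normal(10, 2, n),     # TUG (s)
                         rng.normal(13, 3, n),     # TUG cognitive (s)
                         rng.normal(12, 2.5, n)])  # TUG manual (s)
# Synthetic labels loosely driven by the cognitive condition
faller = (times[:, 1] + rng.normal(0, 1.5, n)) > 13

model = LogisticRegression().fit(times, faller)
p_fall = model.predict_proba([[11.0, 15.5, 13.0]])[0, 1]
print(f"predicted probability of being a faller: {p_fall:.2f}")
```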

  13. Bioclim Deliverable D6b: application of statistical down-scaling within the BIOCLIM hierarchical strategy: methods, data requirements and underlying assumptions

    International Nuclear Information System (INIS)

    2004-01-01

    Statistical down-scaling methods applicable to the case-study regions were identified, together with the additional issues which arise in applying these techniques to output from the BIOCLIM simulations. This preliminary work is described in this BIOCLIM technical note. It provides an overview of statistical down-scaling methods, together with their underlying assumptions and advantages/disadvantages. Specific issues relating to their application within the BIOCLIM context (i.e., application to the IPSL-CM4_D snapshot simulations) are identified, for example the stationarity issue. The predictor and predictand data sets that would be required to implement these methods within the BIOCLIM hierarchical strategy are also outlined, together with the methodological steps involved. Implementation of these techniques was delayed in order to give priority to the application of the rule-based down-scaling method developed in WP3 to WP2 EMIC output (see Deliverable D8a). This task was not originally planned, but it has allowed a more comprehensive comparison and evaluation of the BIOCLIM scenarios and down-scaling methods to be undertaken.

  14. Long term statistics (1845-2014) of daily runoff maxima, monthly rainfall and runoff in the Adda basin (Italian Alps) under natural and anthropogenic changes.

    Science.gov (United States)

    Ranzi, Roberto; Goatelli, Federica; Castioni, Camilla; Tomirotti, Massimo; Crespi, Alice; Mattea, Enrico; Brunetti, Michele; Maugeri, Maurizio

    2017-04-01

    neighbouring stations, considering both the distance and the elevation differences between the stations and the considered cell. Finally, the secular precipitation records at each DEM cell of the Adda basin are computed by multiplying the local estimated anomalies by the corresponding climatological values. A statistically significant decreasing trend of precipitation results from the Mann-Kendall and Sen-Theil tests.
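    The two trend statistics named here have direct scipy building blocks: Kendall's tau of the series against time (the core of the Mann-Kendall test, ignoring serial correlation) and the Sen-Theil slope. A sketch on a synthetic precipitation series:

```python
# Mann-Kendall-type trend test and Sen-Theil slope for an annual series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
years = np.arange(1845, 2015)
precip = 1200 - 0.8 * (years - 1845) + rng.normal(0, 60, years.size)  # mm

tau, p = stats.kendalltau(years, precip)
slope, intercept, lo, hi = stats.theilslopes(precip, years)
print(f"tau = {tau:.2f}, p = {p:.2g}; "
      f"Sen slope = {slope:.2f} mm/yr (95% CI {lo:.2f} to {hi:.2f})")
```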

  15. Determination of the intra- and interlaboratory reproducibility of the low volume eye test and its statistical relationship to the Draize eye test.

    Science.gov (United States)

    Cormier, E M; Parker, R D; Henson, C; Cruse, L W; Merritt, A K; Bruce, R D; Osborne, R

    1996-04-01

    The reproducibility of toxicologic test methods, including alternative tests, is a key scientific and regulatory concern. In the present work, historical rabbit eye irritation data were used to determine the intra- and interlaboratory reproducibility of the low volume eye test (LVET), with the standard Draize eye irritation test as the basis for comparison. The LVET and Draize tests had similar degrees of intra- and interlaboratory reproducibility as determined by examination of their coefficients of variation, although the variability in LVET results was directionally lower. Results from 70 parallel Draize and LVET tests indicated a strong positive association between results from the two tests for corneal, iridial, conjunctival, and maximum average scores (MAS); the corresponding correlation coefficients were 0.60, 0.73, 0.69, and 0.73, all statistically significant. The relationship between LVET and Draize MAS values was examined by regression analysis and found to follow LVET MAS = 0.522 (Draize MAS). Thus, the LVET is at least as reproducible as the Draize test and gives responses that are linearly correlated with the Draize. The previous findings that the LVET is more predictive of human eye responses than the Draize test lend additional support for its use as a refined alternative to the Draize test.
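    The reported relationship LVET MAS = 0.522 (Draize MAS) is a no-intercept regression, which reduces to a one-line slope estimate; the score pairs below are invented for illustration:

```python
# Least-squares regression through the origin: b = sum(x*y) / sum(x^2).
import numpy as np

draize = np.array([4.0, 10.5, 18.0, 26.0, 35.5, 48.0])  # Draize MAS (invented)
lvet = np.array([2.2, 5.3, 9.6, 13.1, 19.0, 25.4])      # LVET MAS (invented)

b = np.sum(draize * lvet) / np.sum(draize**2)
print(f"LVET MAS = {b:.3f} x (Draize MAS)")
```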

  16. Dissolution comparisons using a Multivariate Statistical Distance (MSD) test and a comparison of various approaches for calculating the measurements of dissolution profile comparison.

    Science.gov (United States)

    Cardot, J-M; Roudier, B; Schütz, H

    2017-07-01

    The f2 test is generally used for comparing dissolution profiles. In cases of high variability the f2 test is not applicable, and the Multivariate Statistical Distance (MSD) test is frequently proposed as an alternative by the FDA and the EMA; the guidelines, however, provide only general recommendations. MSD tests can be performed either on raw data, with or without time as a variable, or on the parameters of fitted models. In addition, the data can be limited, as in the case of the f2 test, to dissolution values of up to 85%, or all available data can be used. In the context of the present paper, the recommended calculation includes all raw dissolution data up to the first point greater than 85% as variables, without the various time points as parameters. The proposed MSD overcomes several drawbacks found in other methods.
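    The MSD itself is a Mahalanobis distance between mean dissolution profiles on a pooled covariance; a minimal sketch with synthetic 12-unit batches at four time points (the regulatory acceptance limit derived from a similarity boundary is omitted for brevity):

```python
# Multivariate statistical distance between reference and test profiles.
import numpy as np

rng = np.random.default_rng(6)
ref = rng.normal([20, 45, 70, 88], 3.0, (12, 4))   # % dissolved, reference
test = rng.normal([25, 52, 75, 90], 3.0, (12, 4))  # % dissolved, test

d = ref.mean(axis=0) - test.mean(axis=0)           # mean-profile difference
S_pooled = (np.cov(ref.T) + np.cov(test.T)) / 2.0  # pooled covariance
msd = float(np.sqrt(d @ np.linalg.inv(S_pooled) @ d))
print(f"MSD = {msd:.2f}")
```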

  17. Trend analysis of runoff and sediment fluxes in the Upper Blue Nile basin: A combined analysis of statistical tests, physically-based models and landuse maps

    Science.gov (United States)

    Gebremicael, T. G.; Mohamed, Y. A.; Betrie, G. D.; van der Zaag, P.; Teferi, E.

    2013-03-01

    Summary: The landuse/cover changes in the Ethiopian highlands have significantly increased the variability of the runoff and sediment fluxes of the Blue Nile River during the last few decades. The objectives of this study were (i) to understand the long-term variations of runoff and sediment fluxes using statistical models, (ii) to interpret and corroborate the statistical results using a physically-based hydrological model, the Soil and Water Assessment Tool (SWAT), and (iii) to validate the interpretation of the SWAT results by assessing changes in landuse maps. Firstly, Mann-Kendall and Pettitt tests were used to test the trends of Blue Nile flow (1970-2009) and sediment load (1980-2009) at the outlet of the Upper Blue Nile basin at the El Diem station. These tests showed statistically significant increasing trends of annual stream flow, wet-season stream flow and sediment load at the 5% significance level, while the dry-season flow showed a significantly decreasing trend. However, over the same period the annual rainfall over the basin showed no significant trend, and the results of the statistical tests were sensitive to the time domain. Secondly, the SWAT model was used to simulate the runoff and sediment fluxes in the early 1970s and at the end of the time series in the 2000s, in order to interpret the physical causes of the trends and corroborate the statistical results. A comparison of model parameter values between the 1970s and 2000s shows significant change, which could explain the changes in catchment response over the 28 years of record. Thirdly, a comparison of landuse maps of the 1970s against the 2000s shows conversion of vegetation cover into agriculture and grass lands over wide areas of the Upper Blue Nile basin. The combined results of the statistical tests, the SWAT model, and the landuse change detection are consistent with the hypothesis that landuse change has caused a significant change in runoff and sediment load from the Upper Blue Nile during the last four decades. This is an important
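    Complementing the Mann-Kendall sketch given above, the Pettitt change-point test used here can be written compactly with its usual large-sample p-value approximation; the flow series is synthetic:

```python
# Pettitt test: locate the most likely change point in a series.
import numpy as np

def pettitt(x):
    n = x.size
    # U_t = sum of sign(x_j - x_i) over all pairs split at index t
    U = np.array([np.sign(x[t + 1:, None] - x[:t + 1]).sum()
                  for t in range(n - 1)])
    t_change = int(np.abs(U).argmax())
    K = np.abs(U).max()
    p = 2.0 * np.exp(-6.0 * K**2 / (n**3 + n**2))  # approximate p-value
    return t_change, p

rng = np.random.default_rng(7)
flow = np.r_[rng.normal(100, 10, 20), rng.normal(130, 10, 20)]  # annual flow
t, p = pettitt(flow)
print(f"change point after index {t}, p = {p:.3g}")
```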

  18. Interpretation of Cone Penetration Testing in Silty Soils Conducted under Partially Drained Conditions

    DEFF Research Database (Denmark)

    Holmsgaard, Rikke; Nielsen, Benjaminn Nordahl; Ibsen, Lars Bo

    2016-01-01

    penetration rate. Also evaluated and presented in this paper is how the cone resistance obtained under partially drained conditions underestimates the interpreted relative density Dr and friction angle φ; triaxial test results on undisturbed silt samples were applied for this analysis. ... The soil consisted primarily of sandy silt with clay bands. The results illustrated that when the penetration rate is reduced, the cone resistance increases but the pore pressure decreases. The transition between undrained and fully drained penetration was determined by converting the results into a normalized...

  19. Sub-recoil cooling down to the nanokelvin level. Direct measurement of the spatial coherence length. New tests of Lévy statistics

    International Nuclear Information System (INIS)

    Saubamea, B.

    1998-12-01

    This thesis presents a new method to measure the temperature of ultracold atoms from the spatial autocorrelation function of the atomic wave-packets. We determine the temperature of metastable helium-4 atoms cooled by velocity-selective dark resonance, a method known to cool atoms below the recoil temperature, i.e. the temperature associated with the emission or absorption of a single photon by an atom at rest. This cooling mechanism prepares each atom in a coherent superposition of two wave-packets with opposite mean momenta, which are initially superimposed and then drift apart. By measuring the temporal decay of their overlap, we gain access to the Fourier transform of the momentum distribution of the atoms. Using this method, we can measure temperatures as low as 5 nK, 800 times smaller than the recoil temperature. Moreover, we study in detail the exact shape of the momentum distribution and compare the experimental results with two different theoretical approaches: a quantum Monte Carlo simulation and an analytical model based on Lévy statistics. We compare the calculated line shape with the one deduced from the simulations, and each theoretical model with the experimental data. Very good agreement is found with each approach. We thus demonstrate the validity of the statistical model of sub-recoil cooling and give the first experimental evidence of some of its characteristics: the absence of a steady state, the self-similarity, and the non-Lorentzian shape of the momentum distribution of the cooled atoms. All these aspects are related to the non-ergodicity of sub-recoil cooling. (author)

  20. A reanalysis of Lord's statistical treatment of football numbers

    NARCIS (Netherlands)

    Zand Scholten, A.; Borsboom, D.

    2009-01-01

    Stevens’ theory of admissible statistics [Stevens, S. S. (1946). On the theory of scales of measurement. Science, 103, 677-680] states that measurement levels should guide the choice of statistical test, such that the truth value of statements based on a statistical analysis remains invariant under