WorldWideScience

Sample records for statistically designed test

  1. A Modified Jonckheere Test Statistic for Ordered Alternatives in Repeated Measures Design

    Directory of Open Access Journals (Sweden)

    Hatice Tül Kübra AKDUR

    2016-09-01

    Full Text Available In this article, a new test based on the Jonckheere test [1] is presented for randomized blocks with dependent observations within blocks. A weighted sum of the per-block statistics, rather than the unweighted sum proposed by Jonckheere, is used. The main assumption of Jonckheere-type statistics is independence of observations within a block; in repeated measures designs this assumption is violated. The weighted Jonckheere-type statistic accommodates within-block dependence under different variance-covariance structures and the ordered alternative hypothesis structure of each block in the design. The proposed statistic is also compared with the existing Jonckheere-based test in terms of type I error rates by Monte Carlo simulation. For strong correlations, the circular bootstrap version of the proposed Jonckheere test provides lower type I error rates.
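    For readers who want to experiment with the idea, a minimal Python sketch of a weighted Jonckheere-type statistic is given below; the function names, weights, and data layout are illustrative assumptions, not the authors' implementation, and the circular bootstrap calibration is omitted.

```python
import numpy as np

def jonckheere_block(groups):
    """Jonckheere-Terpstra count for one block: `groups` is a list of 1-D
    arrays in the hypothesized increasing order of treatments."""
    u = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in groups[i]:
                u += np.sum(groups[j] > x) + 0.5 * np.sum(groups[j] == x)
    return u

def weighted_jonckheere(blocks, weights=None):
    """Weighted sum of per-block statistics; equal weights recover the
    classical unweighted randomized-block extension."""
    per_block = np.array([jonckheere_block(b) for b in blocks])
    w = np.ones_like(per_block) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * per_block))

# Example: 2 blocks, 3 ordered treatment groups of 5 observations each
rng = np.random.default_rng(1)
blocks = [[rng.normal(m, 1.0, 5) for m in (0.0, 0.5, 1.0)] for _ in range(2)]
print(weighted_jonckheere(blocks))
```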

  2. Study designs, use of statistical tests, and statistical analysis software choice in 2015: Results from two Pakistani monthly Medline indexed journals.

    Science.gov (United States)

    Shaikh, Masood Ali

    2017-09-01

    Assessment of research articles in terms of study designs used, statistical tests applied, and statistical analysis programmes used helps determine the research activity profile and trends in a country. In this descriptive study, all original articles published in 2015 by the Journal of Pakistan Medical Association (JPMA) and the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) were reviewed in terms of study designs used, statistical tests applied, and statistical analysis programmes used. JPMA and JCPSP published 192 and 128 original articles, respectively, in 2015. The results indicate that the cross-sectional study design, bivariate inferential analysis comparing two variables/groups, and the statistical software SPSS were the most common study design, inferential statistical analysis, and statistical analysis software, respectively. These results echo the previously published assessment of these two journals for the year 2014.

  3. Applied statistical designs for the researcher

    CERN Document Server

    Paulson, Daryl S

    2003-01-01

    Research and Statistics; Basic Review of Parametric Statistics; Exploratory Data Analysis; Two Sample Tests; Completely Randomized One-Factor Analysis of Variance; One and Two Restrictions on Randomization; Completely Randomized Two-Factor Factorial Designs; Two-Factor Factorial Completely Randomized Blocked Designs; Useful Small Scale Pilot Designs; Nested Statistical Designs; Linear Regression; Nonparametric Statistics; Introduction to Research Synthesis and "Meta-Analysis" and Conclusory Remarks; References; Index.

  4. A statistical design for testing apomictic diversification through linkage analysis.

    Science.gov (United States)

    Zeng, Yanru; Hou, Wei; Song, Shuang; Feng, Sisi; Shen, Lin; Xia, Guohua; Wu, Rongling

    2014-03-01

    The capacity of apomixis to generate maternal clones through seed reproduction has made it a useful characteristic for the fixation of heterosis in plant breeding. It has been observed that apomixis displays pronounced intra- and interspecific diversification, but the genetic mechanisms underlying this diversification remain elusive, obstructing the exploitation of this phenomenon in practical breeding programs. By capitalizing on molecular information in mapping populations, we describe and assess a statistical design that deploys linkage analysis to estimate and test the pattern and extent of apomictic differences at various levels from genotypes to species. The design is based on two reciprocal crosses between two individuals each chosen from a hermaphrodite or monoecious species. A multinomial distribution likelihood is constructed by combining marker information from the two crosses. The EM algorithm is implemented to estimate the rate of apomixis and test its difference between the two plant populations or species used as parents. The design is validated by computer simulation. A real data analysis of two reciprocal crosses between hickory (Carya cathayensis) and pecan (C. illinoensis) demonstrates the utilization and usefulness of the design in practice. The design provides a tool to address fundamental and applied questions related to the evolution and breeding of apomixis.

  5. Empirical Statistical Power for Testing Multilocus Genotypic Effects under Unbalanced Designs Using a Gibbs Sampler

    Directory of Open Access Journals (Sweden)

    Chaeyoung Lee

    2012-11-01

    Full Text Available Epistasis, which may explain a large portion of the phenotypic variation for complex economic traits of animals, has been ignored in many genetic association studies. A Bayesian method was introduced to draw inferences about multilocus genotypic effects based on their marginal posterior distributions obtained by a Gibbs sampler. A simulation study was conducted to provide statistical powers under various unbalanced designs using this method. Data were simulated by combined designs of number of loci, within-genotype variance, and sample size in unbalanced designs with or without null combined-genotype cells. Mean empirical statistical power was estimated for testing the posterior mean estimate of the combined genotype effect. A practical example of obtaining empirical statistical power estimates with a given sample size was provided under unbalanced designs. The empirical statistical powers would be useful for determining an optimal design when interactive associations of multiple loci with complex phenotypes are examined.
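    The record describes estimating power by simulation; since the Gibbs sampler itself is not reproduced here, the following sketch only illustrates the generic Monte Carlo power calculation for an unbalanced design, with all sample sizes and effect sizes assumed for illustration.

```python
import numpy as np
from scipy import stats

def empirical_power(n1, n2, effect, sigma=1.0, alpha=0.05, n_sim=5000, seed=0):
    """Fraction of simulated datasets in which the test rejects at level alpha."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(0.0, sigma, n1)     # reference genotype class
        y = rng.normal(effect, sigma, n2)  # contrasted genotype class
        if stats.ttest_ind(x, y, equal_var=False).pvalue < alpha:
            hits += 1
    return hits / n_sim

print(empirical_power(n1=60, n2=15, effect=0.6))  # unbalanced cell sizes
```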

  6. Effect of non-normality on test statistics for one-way independent groups designs.

    Science.gov (United States)

    Cribbie, Robert A; Fiksenbaum, Lisa; Keselman, H J; Wilcox, Rand R

    2012-02-01

    The data obtained from one-way independent groups designs are typically non-normal in form and rarely equally variable across treatment populations (i.e., population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e., the analysis of variance F test) typically provides invalid results (e.g., too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non-normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied either to the usual least squares estimators of central tendency and variability, or the Welch test with robust estimators (i.e., trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non-normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non-normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non-normal. © 2011 The British Psychological Society.
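    As a pointer for readers, recent SciPy versions expose Yuen's trimmed-means variant of the Welch test directly; the sketch below compares it with the ordinary Welch test on skewed, heteroscedastic synthetic data (all values invented).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
g1 = rng.lognormal(0.0, 0.8, 30)   # skewed group
g2 = rng.lognormal(0.3, 1.4, 20)   # skewed group with larger variance

welch = stats.ttest_ind(g1, g2, equal_var=False)            # Welch on means
yuen  = stats.ttest_ind(g1, g2, equal_var=False, trim=0.2)  # Yuen: 20% trimmed means
print(welch.pvalue, yuen.pvalue)
```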

  7. Some challenges with statistical inference in adaptive designs.

    Science.gov (United States)

    Hung, H M James; Wang, Sue-Jane; Yang, Peiling

    2014-01-01

    Adaptive designs have generated a great deal of attention in the clinical trials community. The literature contains many statistical methods to deal with the added statistical uncertainties concerning the adaptations. Increasingly encountered in regulatory applications are adaptive statistical information designs, which allow modification of sample size or related statistical information, and adaptive selection designs, which allow selection of doses or patient populations during the course of a clinical trial. For adaptive statistical information designs, a few statistical testing methods are mathematically equivalent, as a number of articles have stipulated, but arguably there are large differences in their practical ramifications. We pinpoint some undesirable features of these methods in this work. For adaptive selection designs, selection based on biomarker data for testing the correlated clinical endpoints may increase statistical uncertainty in terms of type I error probability, and most importantly the increased statistical uncertainty may be impossible to assess.

  8. [The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].

    Science.gov (United States)

    Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel

    2017-01-01

    Statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference consists in drawing conclusions from tests performed on data obtained from a sample of a population. Statistical tests are used to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was drawn. However, choosing the appropriate statistical test generally poses a challenge for novice researchers. Choosing a statistical test requires taking into account three aspects: the research design, the number of measurements, and the measurement scale of the variables. Statistical tests fall into two sets, parametric and nonparametric. Parametric tests can only be used if the data follow a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.

  9. The research protocol VI: How to choose the appropriate statistical test. Inferential statistics

    Directory of Open Access Journals (Sweden)

    Eric Flores-Ruiz

    2017-10-01

    Full Text Available Statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference consists in drawing conclusions from tests performed on data obtained from a sample of a population. Statistical tests are used to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was drawn. However, choosing the appropriate statistical test generally poses a challenge for novice researchers. Choosing a statistical test requires taking into account three aspects: the research design, the number of measurements, and the measurement scale of the variables. Statistical tests fall into two sets, parametric and nonparametric. Parametric tests can only be used if the data follow a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.

  10. 100 statistical tests

    CERN Document Server

    Kanji, Gopal K

    2006-01-01

    This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.

  11. Robust inference from multiple test statistics via permutations: a better alternative to the single test statistic approach for randomized trials.

    Science.gov (United States)

    Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie

    2013-01-01

    Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
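    A condensed sketch of the permutation min-p procedure described above, assuming two illustrative candidate statistics (Welch t and Wilcoxon rank-sum); the paper's own candidates, e.g. for survival endpoints, would replace these.

```python
import numpy as np
from scipy import stats

def min_p(x, y):
    """Smallest p-value over the pre-specified candidate statistics."""
    p_t = stats.ttest_ind(x, y, equal_var=False).pvalue
    p_w = stats.mannwhitneyu(x, y, alternative='two-sided').pvalue
    return min(p_t, p_w)

def min_p_permutation_test(x, y, n_perm=2000, seed=0):
    """Calibrate the observed min-p against its permutation distribution."""
    rng = np.random.default_rng(seed)
    observed = min_p(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # permute treatment labels
        if min_p(pooled[:len(x)], pooled[len(x):]) <= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)  # permutation p-value

x = np.random.default_rng(1).normal(0.0, 1.0, 25)
y = np.random.default_rng(2).normal(0.7, 2.0, 25)
print(min_p_permutation_test(x, y))
```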

  12. Statistical tests for person misfit in computerized adaptive testing

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Meijer, R.R.; van Krimpen-Stoop, Edith

    1998-01-01

    Recently, several person-fit statistics have been proposed to detect nonfitting response patterns. This study is designed to generalize an approach followed by Klauer (1995) to an adaptive testing system using the two-parameter logistic model (2PL) as a null model. The approach developed by Klauer

  13. Statistical core design

    International Nuclear Information System (INIS)

    Oelkers, E.; Heller, A.S.; Farnsworth, D.A.; Kearfott, K.J.

    1978-01-01

    The report describes the statistical analysis of the DNBR thermal-hydraulic margin of a 3800 MWt, 205-FA core under design overpower conditions. The analysis used LYNX-generated data at predetermined values of the input variables whose uncertainties were to be statistically combined. LYNX data were used to construct an efficient response surface model in the region of interest; the statistical analysis was accomplished through the evaluation of core reliability, utilizing propagation of the uncertainty distributions of the inputs. The response surface model was implemented in both the analytical error propagation and Monte Carlo techniques. The basic structural units relating to the acceptance criteria are fuel pins. Therefore, the statistical population of pins with minimum DNBR values smaller than specified values is determined. The specified values are designated relative to the most probable and maximum design DNBR values on the power-limiting pin used in present design analysis, so that gains over the present design criteria could be assessed for specified probabilistic acceptance criteria. The results are equivalent to gains ranging from 1.2 to 4.8 percent of rated power, dependent on the acceptance criterion. The corresponding acceptance criteria range from 95 percent confidence that no pin will be in DNB to 99.9 percent of the pins expected to avoid DNB.

  14. Statistical analysis and planning of multihundred-watt impact tests

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Waterman, M.S.

    1977-10-01

    Modular multihundred-watt (MHW) radioisotope thermoelectric generators (RTGs) are used as a power source for spacecraft. Due to possible environmental contamination by radioactive materials, numerous tests are required to determine and verify the safety of the RTG. Results are available from 27 fueled MHW impact tests regarding hoop failure, fingerprint failure, and fuel failure. Data from the 27 tests are statistically analyzed for relationships that exist between the test design variables and the failure types. Next, these relationships are used to develop a statistical procedure for planning and conducting either future MHW impact tests or similar tests on other RTG fuel sources. Finally, some conclusions are given.

  15. Testing statistical hypotheses

    CERN Document Server

    Lehmann, E L

    2005-01-01

    The third edition of Testing Statistical Hypotheses updates and expands upon the classic graduate text, emphasizing optimality theory for hypothesis testing and confidence sets. The principal additions include a rigorous treatment of large sample optimality, together with the requisite tools. In addition, an introduction to the theory of resampling methods such as the bootstrap is developed. The sections on multiple testing and goodness of fit testing are expanded. The text is suitable for Ph.D. students in statistics and includes over 300 new problems out of a total of more than 760. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands and the University of Chicago. He is the author of Elements of Large-Sample Theory and (with George Casella) he is also the author of Theory of Point Estimat...

  16. Designing experiments for maximum information from cyclic oxidation tests and their statistical analysis using half Normal plots

    International Nuclear Information System (INIS)

    Coleman, S.Y.; Nicholls, J.R.

    2006-01-01

    Cyclic oxidation testing at elevated temperatures requires careful experimental design and the adoption of standard procedures to ensure reliable data. This is a major aim of the 'COTEST' research programme. Further, as such tests are both time consuming and costly in terms of the human effort needed to take measurements over a large number of cycles, it is important to gain maximum information from a minimum number of tests (trials). This search for standardisation of cyclic oxidation conditions led to a series of tests to determine the relative effects of cyclic parameters on the oxidation process. Following a review of the available literature, databases and the experience of partners to the COTEST project, the most influential parameters, upper dwell temperature (oxidation temperature) and time (hot time), lower dwell time (cold time) and environment, were investigated in partners' laboratories. It was decided to test upper dwell temperature at 3 levels, at and equidistant from a reference temperature; to test upper dwell time at a reference, a higher and a lower time; to test lower dwell time at a reference and a higher time; and to test wet and dry environments. Thus an experiment consisting of nine trials was designed according to statistical criteria. The results of the trials were analysed statistically to test the main linear and quadratic effects of upper dwell temperature and hot time and the main effects of lower dwell time (cold time) and environment. The nine trials are a quarter fraction of the 36 possible combinations of parameter levels that could have been studied. The results have been analysed by half Normal plots, as there are only 2 degrees of freedom for the experimental error variance, which is rather low for a standard analysis of variance. Half Normal plots give a visual indication of which factors are statistically significant. In this experiment each trial has 3 replications, and the data are analysed in terms of mean mass change, oxidation kinetics
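    As an illustration of the half Normal plot analysis mentioned above, the sketch below plots absolute effect estimates against half-normal quantiles; the factor names and effect values are invented placeholders, not COTEST results.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Illustrative absolute effect estimates from a small factorial experiment
effects = {'T_upper (lin)': 4.1, 'T_upper (quad)': 0.4, 'hot_time': 2.3,
           'cold_time': 0.3, 'environment': 1.8}
names = np.array(list(effects))
vals = np.abs(np.array(list(effects.values())))
idx = np.argsort(vals)                     # sort effects by magnitude
n = len(vals)
q = stats.halfnorm.ppf((np.arange(1, n + 1) - 0.5) / n)  # plotting positions

plt.scatter(q, vals[idx])
for qi, vi, name in zip(q, vals[idx], names[idx]):
    plt.annotate(name, (qi, vi), fontsize=8)
plt.xlabel('half-normal quantile'); plt.ylabel('|effect estimate|')
plt.title('Half Normal plot: large effects depart from the noise line')
plt.show()
```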

  17. Statistical treatment of fatigue test data

    International Nuclear Information System (INIS)

    Raske, D.T.

    1980-01-01

    This report discusses several aspects of fatigue data analysis in order to provide a basis for the development of statistically sound design curves. Included is a discussion of the choice of the dependent variable, the assumptions associated with least squares regression models, the variability of fatigue data, the treatment of data from suspended tests and outlying observations, and various strain-life relations.

  18. Research design and statistical analysis

    CERN Document Server

    Myers, Jerome L; Lorch Jr, Robert F

    2013-01-01

    Research Design and Statistical Analysis provides comprehensive coverage of the design principles and statistical concepts necessary to make sense of real data.  The book's goal is to provide a strong conceptual foundation to enable readers to generalize concepts to new research situations.  Emphasis is placed on the underlying logic and assumptions of the analysis and what it tells the researcher, the limitations of the analysis, and the consequences of violating assumptions.  Sampling, design efficiency, and statistical models are emphasized throughout. As per APA recommendations

  19. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    Science.gov (United States)

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.

  20. [Clinical research IV. Relevancy of the statistical test chosen].

    Science.gov (United States)

    Talavera, Juan O; Rivas-Ruiz, Rodolfo

    2011-01-01

    When we look at the difference between two therapies or the association of a risk factor or prognostic indicator with its outcome, we need to evaluate the accuracy of the result. This assessment is based on a judgment that uses information about the study design and the statistical management of the information. This paper specifically addresses the relevance of the statistical test selected. Statistical tests are chosen mainly on two characteristics: the objective of the study and the type of variables. The objective can be divided into three test groups: a) those in which you want to show differences between groups, or within a group before and after a maneuver; b) those that seek to show the relationship (correlation) between variables; and c) those that aim to predict an outcome. The types of variables are divided into two: quantitative (continuous and discontinuous) and qualitative (ordinal and dichotomous). For example, if we seek to demonstrate differences in age (a quantitative variable) among patients with systemic lupus erythematosus (SLE) with and without neurological disease (two groups), the appropriate test is the "Student t test for independent samples." But if the comparison concerns the frequency of females (a binomial variable), then the appropriate statistical test is the χ² test.
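    A minimal illustration of the abstract's two examples using SciPy, with invented data: an independent-samples t test for a quantitative variable (age) and a χ² test for a dichotomous variable (sex).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
age_neuro    = rng.normal(38, 9, 40)   # SLE with neurological disease (synthetic)
age_no_neuro = rng.normal(34, 9, 60)   # SLE without neurological disease
print(stats.ttest_ind(age_neuro, age_no_neuro))  # independent-samples t test

# 2x2 table: sex (female, male) by neurological involvement (yes, no)
table = np.array([[34, 6],
                  [48, 12]])
print(stats.chi2_contingency(table))             # chi-squared test
```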

  1. Selecting the most appropriate inferential statistical test for your quantitative research study.

    Science.gov (United States)

    Bettany-Saltikov, Josette; Whittaker, Victoria Jane

    2014-06-01

    To discuss the issues and processes relating to the selection of the most appropriate statistical test. A review of the basic research concepts together with a number of clinical scenarios is used to illustrate this. Quantitative nursing research generally features the use of empirical data which necessitates the selection of both descriptive and statistical tests. Different types of research questions can be answered by different types of research designs, which in turn need to be matched to a specific statistical test(s). Discursive paper. This paper discusses the issues relating to the selection of the most appropriate statistical test and makes some recommendations as to how these might be dealt with. When conducting empirical quantitative studies, a number of key issues need to be considered. Considerations for selecting the most appropriate statistical tests are discussed and flow charts provided to facilitate this process. When nursing clinicians and researchers conduct quantitative research studies, it is crucial that the most appropriate statistical test is selected to enable valid conclusions to be made. © 2013 John Wiley & Sons Ltd.

  2. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Full Text Available Estimating parameters with confidence intervals and testing statistical hypotheses are used in statistical analysis to draw conclusions about a population from an extracted sample. The case study presented aims to highlight the importance of the sample size used in a study and how this is reflected in the results obtained when using confidence intervals and hypothesis tests. While statistical hypothesis testing gives only a "yes" or "no" answer to some questions, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and from findings that are "marginally significant" or "almost significant" (p very close to 0.05).
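    A small numeric illustration of the abstract's point, with invented data: at n = 8 the test reduces to a borderline reject/accept call near p = 0.05, while the confidence interval shows how wide the uncertainty actually is.

```python
import numpy as np
from scipy import stats

sample = np.array([5.2, 6.1, 4.8, 7.0, 6.4, 5.9, 6.8, 5.5])  # n = 8, synthetic
t, p = stats.ttest_1samp(sample, popmean=5.3)                 # hypothesized mean
m, se = sample.mean(), stats.sem(sample)
lo, hi = stats.t.interval(0.95, df=len(sample) - 1, loc=m, scale=se)
print(f"p = {p:.3f} (marginally significant); 95% CI = ({lo:.2f}, {hi:.2f})")
```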

  3. The insignificance of statistical significance testing

    Science.gov (United States)

    Johnson, Douglas H.

    1999-01-01

    Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.

  4. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment.With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the

  5. Statistical hypothesis testing with SAS and R

    CERN Document Server

    Taeger, Dirk

    2014-01-01

    A comprehensive guide to statistical hypothesis testing with examples in SAS and R. When analyzing datasets the following questions often arise: Is there a shorthand procedure for a statistical test available in SAS or R? If so, how do I use it? If not, how do I program the test myself? This book answers these questions and provides an overview of the most common statistical test problems in a comprehensive way, making it easy to find and perform an appropriate statistical test. A general summary of statistical test theory is presented, along with a basic description for each test, including the

  6. Introduction to Statistically Designed Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Heaney, Mike

    2016-09-13

    Statistically designed experiments can save researchers time and money by reducing the number of necessary experimental trials, while resulting in more conclusive experimental results. Surprisingly, many researchers are still not aware of this efficient and effective experimental methodology. As reported in a 2013 article from Chemical & Engineering News, there has been a resurgence of this methodology in recent years (http://cen.acs.org/articles/91/i13/Design-Experiments-Makes-Comeback.html?h=2027056365). This presentation will provide a brief introduction to statistically designed experiments. The main advantages will be reviewed along with some basic concepts such as factorial and fractional factorial designs. The recommended sequential approach to experiments will be introduced and finally a case study will be presented to demonstrate this methodology.
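    To make the factorial and fractional factorial vocabulary concrete, here is a small Python sketch generating a 2^3 full factorial and its I = ABC half fraction; the factor names are placeholders.

```python
from itertools import product

factors = ['A', 'B', 'C']
full = list(product([-1, +1], repeat=len(factors)))   # 2^3 = 8 runs
# Defining relation I = ABC keeps the runs where A*B*C = +1 (2^(3-1) = 4 runs)
half = [run for run in full if run[0] * run[1] * run[2] == +1]

print('full factorial:', full)
print('half fraction (I = ABC):', half)
```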

  7. A weighted generalized score statistic for comparison of predictive values of diagnostic tests.

    Science.gov (United States)

    Kosinski, Andrzej S

    2013-03-15

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.

  8. Quantum Statistical Testing of a Quantum Random Number Generator

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL

    2014-01-01

    The unobservable elements in a quantum technology, e.g., the quantum state, complicate system verification against promised behavior. Using model-based system engineering, we present methods for verifying the operation of a prototypical quantum random number generator (QRNG). We begin with the algorithmic design of the QRNG followed by the synthesis of its physical design requirements. We next discuss how quantum statistical testing can be used to verify device behavior as well as detect device bias. We conclude by highlighting how system design and verification methods must influence efforts to certify future quantum technologies.

  9. Polarimetric Segmentation Using Wishart Test Statistic

    DEFF Research Database (Denmark)

    Skriver, Henning; Schou, Jesper; Nielsen, Allan Aasbjerg

    2002-01-01

    A newly developed test statistic for equality of two complex covariance matrices following the complex Wishart distribution, and an associated asymptotic probability for the test statistic, has been used in a segmentation algorithm. The segmentation algorithm is based on the MUM (merge using moments) approach, which is a merging algorithm for single channel SAR images. The polarimetric version described in this paper uses the above-mentioned test statistic for merging. The segmentation algorithm has been applied to polarimetric SAR data from the Danish dual-frequency, airborne polarimetric SAR, EMISAR...

  10. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTIC ASSESSMENT

    Science.gov (United States)

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  11. A simplification of the likelihood ratio test statistic for testing ...

    African Journals Online (AJOL)

    The traditional likelihood ratio test statistic for testing hypotheses about goodness of fit of multinomial probabilities in one-, two- and multi-dimensional contingency tables was simplified. Advantageously, using the simplified version of the statistic to test the null hypothesis is easier and faster because calculating the expected ...

  12. Test for the statistical significance of differences between ROC curves

    International Nuclear Information System (INIS)

    Metz, C.E.; Kronman, H.B.

    1979-01-01

    A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions
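    A sketch of the two-degrees-of-freedom chi-square comparison described above, assuming maximum likelihood estimates of the binormal parameters and their 2x2 covariance matrices are available for each ROC curve; the numerical values are invented.

```python
import numpy as np
from scipy import stats

def roc_binormal_chi2(theta1, cov1, theta2, cov2):
    """Chi-square (df = 2) for the difference between two binormal ROC fits."""
    d = np.asarray(theta1, float) - np.asarray(theta2, float)
    x2 = d @ np.linalg.inv(np.asarray(cov1) + np.asarray(cov2)) @ d
    return x2, stats.chi2.sf(x2, df=2)

# Illustrative (a, b) estimates and covariance matrices from two ML fits
theta1, cov1 = [1.20, 0.85], [[0.040, 0.005], [0.005, 0.020]]
theta2, cov2 = [0.90, 0.95], [[0.050, 0.004], [0.004, 0.025]]
print(roc_binormal_chi2(theta1, cov1, theta2, cov2))
```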

  13. Explorations in Statistics: Hypothesis Tests and P Values

    Science.gov (United States)

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…

  14. Distinguish Dynamic Basic Blocks by Structural Statistical Testing

    DEFF Research Database (Denmark)

    Petit, Matthieu; Gotlieb, Arnaud

    Statistical testing aims at generating random test data that respect selected probabilistic properties. A probability distribution is associated with the program input space in order to achieve the statistical test purpose: to test the most frequent usage of the software or to maximize the probability of ... (control flow path) during test data selection. We implemented this algorithm in a statistical test data generator for Java programs. A first experimental validation is presented...

  15. Graphical Tests for Power Comparison of Competing Designs.

    Science.gov (United States)

    Hofmann, H; Follett, L; Majumder, M; Cook, D

    2012-12-01

    Lineups have been established as tools for visual testing similar to standard statistical inference tests, allowing us to evaluate the validity of graphical findings in an objective manner. In simulation studies lineups have been shown to be efficient: the power of visual tests is comparable to classical tests while being much less stringent in terms of the distributional assumptions made. This makes lineups versatile, yet powerful, tools in situations where conditions for regular statistical tests are not or cannot be met. In this paper we introduce lineups as a tool for evaluating the power of competing graphical designs. We highlight some of the theoretical properties and then show results from two studies evaluating competing designs: both studies are designed to go to the limits of our perceptual abilities to highlight differences between designs. We use both accuracy and speed of evaluation as measures of a successful design. The first study compares the choice of coordinate system: polar versus Cartesian coordinates. The results show strong support in favor of Cartesian coordinates for finding fast and accurate answers when spotting patterns. The second study is aimed at finding shift differences between distributions. Both studies are motivated by data problems that we have recently encountered, and explore using simulated data to evaluate the plot designs under controlled conditions. Amazon Mechanical Turk (MTurk) is used to conduct the studies. The lineups provide an effective mechanism for objectively evaluating plot designs.
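    A bare-bones sketch of the lineup protocol discussed above, under assumed data: the observed scatterplot is hidden among null panels generated by permutation, and a viewer who can single it out provides evidence of real structure. Plotting details are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
x = rng.normal(0, 1, 100)
y = 0.5 * x + rng.normal(0, 1, 100)   # observed response with weak trend

fig, axes = plt.subplots(4, 5, figsize=(10, 8))
true_panel = rng.integers(20)         # secret position of the real data
for k, ax in enumerate(axes.flat):
    # Null panels permute y, breaking any association with x
    yk = y if k == true_panel else rng.permutation(y)
    ax.scatter(x, yk, s=5)
    ax.set_xticks([]); ax.set_yticks([]); ax.set_title(k, fontsize=8)
plt.show()
```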

  16. Intermediate/Advanced Research Design and Statistics

    Science.gov (United States)

    Ploutz-Snyder, Robert

    2009-01-01

    The purpose of this module is to provide Institutional Researchers (IRs) with an understanding of the principles of advanced research design and the intermediate/advanced statistical procedures consistent with such designs.

  17. Statistical Methods for the detection of answer copying on achievement tests

    NARCIS (Netherlands)

    Sotaridona, Leonardo

    2003-01-01

    This thesis contains a collection of studies where statistical methods for the detection of answer copying on achievement tests in multiple-choice format are proposed and investigated. Although all methods are suited to detect answer copying, each method is designed to address specific

  18. Content-based VLE designs improve learning efficiency in constructivist statistics education.

    Science.gov (United States)

    Wessa, Patrick; De Rycker, Antoon; Holliday, Ian Edward

    2011-01-01

    We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under investigation. The findings demonstrate that a

  19. Content-based VLE designs improve learning efficiency in constructivist statistics education.

    Directory of Open Access Journals (Sweden)

    Patrick Wessa

    Full Text Available BACKGROUND: We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific-purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. OBJECTIVES: The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. METHODS: Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. RESULTS: The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student

  20. Content-Based VLE Designs Improve Learning Efficiency in Constructivist Statistics Education

    Science.gov (United States)

    Wessa, Patrick; De Rycker, Antoon; Holliday, Ian Edward

    2011-01-01

    Background We introduced a series of computer-supported workshops in our undergraduate statistics courses, in the hope that it would help students to gain a deeper understanding of statistical concepts. This raised questions about the appropriate design of the Virtual Learning Environment (VLE) in which such an approach had to be implemented. Therefore, we investigated two competing software design models for VLEs. In the first system, all learning features were a function of the classical VLE. The second system was designed from the perspective that learning features should be a function of the course's core content (statistical analyses), which required us to develop a specific–purpose Statistical Learning Environment (SLE) based on Reproducible Computing and newly developed Peer Review (PR) technology. Objectives The main research question is whether the second VLE design improved learning efficiency as compared to the standard type of VLE design that is commonly used in education. As a secondary objective we provide empirical evidence about the usefulness of PR as a constructivist learning activity which supports non-rote learning. Finally, this paper illustrates that it is possible to introduce a constructivist learning approach in large student populations, based on adequately designed educational technology, without subsuming educational content to technological convenience. Methods Both VLE systems were tested within a two-year quasi-experiment based on a Reliable Nonequivalent Group Design. This approach allowed us to draw valid conclusions about the treatment effect of the changed VLE design, even though the systems were implemented in successive years. The methodological aspects about the experiment's internal validity are explained extensively. Results The effect of the design change is shown to have substantially increased the efficiency of constructivist, computer-assisted learning activities for all cohorts of the student population under

  1. Evaluating Two Models of Collaborative Tests in an Online Introductory Statistics Course

    Science.gov (United States)

    Björnsdóttir, Auðbjörg; Garfield, Joan; Everson, Michelle

    2015-01-01

    This study explored the use of two different types of collaborative tests in an online introductory statistics course. A study was designed and carried out to investigate three research questions: (1) What is the difference in students' learning between using consensus and non-consensus collaborative tests in the online environment?, (2) What is…

  2. Statistical considerations of graphite strength for assessing design allowable stresses

    International Nuclear Information System (INIS)

    Ishihara, M.; Mogi, H.; Ioka, I.; Arai, T.; Oku, T.

    1987-01-01

    Several aspects of statistics need to be considered to determine design allowable stresses for graphite structures. These include: 1) statistical variation of graphite material strength; 2) uncertainty of calculated stress; 3) reliability (survival probability) required by the operational and safety performance of graphite structures. This paper deals with some statistical considerations of structural graphite for assessing design allowable stresses. Firstly, probability distribution functions of tensile and compressive strengths are investigated for experimental Very High Temperature candidate graphites. Normal, logarithmic normal and Weibull distribution functions are compared in terms of their coefficient of correlation to the measured strength data. This leads to the adoption of the normal distribution function. Then, the relation between factor of safety and fracture probability is discussed for the following items: 1) As graphite strength is more variable than that of metallic materials, the effect of strength variation on the fracture probability is evaluated. 2) Fracture probability depending on a survival probability of 99 ∼ 99.9 (%) with a confidence level of 90 ∼ 95 (%) is discussed. 3) As the material properties used in the design analysis are usually the mean values of their variation, the additional effect of these variations on the fracture probability is discussed. Finally, the way to assure the minimum ultimate strength with the required survival probability and confidence level is discussed in view of the statistical treatment of strength data from varying sample numbers in a material acceptance test. (author)

  3. Comparison of small n statistical tests of differential expression applied to microarrays

    Directory of Open Access Journals (Sweden)

    Lee Anna Y

    2009-02-01

    Full Text Available Abstract Background DNA microarrays provide data for genome wide patterns of expression between observation classes. Microarray studies often have small sample sizes, however, due to cost constraints or specimen availability. This can lead to poor random error estimates and inaccurate statistical tests of differential expression. We compare the performance of the standard t-test, fold change, and four small n statistical test methods designed to circumvent these problems. We report results of various normalization methods for empirical microarray data and of various random error models for simulated data. Results Three Empirical Bayes methods (CyberT, BRB, and limma t-statistics) were the most effective statistical tests across simulated and both 2-colour cDNA and Affymetrix experimental data. The CyberT regularized t-statistic in particular was able to maintain expected false positive rates with simulated data showing high variances at low gene intensities, although at the cost of low true positive rates. The Local Pooled Error (LPE) test introduced a bias that lowered false positive rates below theoretically expected values and had lower power relative to the top performers. The standard two-sample t-test and fold change were also found to be sub-optimal for detecting differentially expressed genes. The generalized log transformation was shown to be beneficial in improving results with certain data sets, in particular high variance cDNA data. Conclusion Pre-processing of data influences performance and the proper combination of pre-processing and statistical testing is necessary for obtaining the best results. All three Empirical Bayes methods assessed in our study are good choices for statistical tests for small n microarray studies for both Affymetrix and cDNA data. Choice of method for a particular study will depend on software and normalization preferences.
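    For orientation, the sketch below implements a regularized t-statistic in the spirit of Cyber-T, shrinking per-gene variances toward a background value s0² with prior weight n0 (both assumed here); the published software's estimation of s0² from genes of similar intensity is omitted.

```python
import numpy as np

def regularized_t(x1, x2, s0_sq, n0=10):
    """Two-sample t with variance shrunk toward a background estimate s0_sq."""
    n1, n2 = len(x1), len(x2)
    v1 = (n0 * s0_sq + (n1 - 1) * np.var(x1, ddof=1)) / (n0 + n1 - 2)
    v2 = (n0 * s0_sq + (n2 - 1) * np.var(x2, ddof=1)) / (n0 + n2 - 2)
    return (np.mean(x1) - np.mean(x2)) / np.sqrt(v1 / n1 + v2 / n2)

rng = np.random.default_rng(3)
gene_a = rng.normal(8.0, 0.4, 3)   # tiny n per group, as in many array studies
gene_b = rng.normal(8.9, 0.4, 3)
print(regularized_t(gene_a, gene_b, s0_sq=0.2))
```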

  4. Optimal testlet pool assembly for multistage testing designs

    NARCIS (Netherlands)

    Ariel, A.; Veldkamp, Bernard P.; Breithaupt, Krista

    2006-01-01

    Computerized multistage testing (MST) designs require sets of test questions (testlets) to be assembled to meet strict, often competing criteria. Rules that govern testlet assembly may dictate the number of questions on a particular subject or may describe desirable statistical properties for the

  5. A study on the advanced statistical core thermal design methodology

    International Nuclear Information System (INIS)

    Lee, Seung Hyuk

    1992-02-01

    A statistical core thermal design methodology for generating the limit DNBR and the nominal DNBR is proposed and used in assessing the best-estimate thermal margin in a reactor core. Firstly, the Latin Hypercube Sampling Method, instead of the conventional Experimental Design Technique, is utilized as an input sampling method for a regression analysis to evaluate its sampling efficiency. Secondly, and as the main topic, the Modified Latin Hypercube Sampling and Hypothesis Test Statistics method is proposed as a substitute for the current statistical core thermal design method. This new methodology adopts a 'Modified Latin Hypercube Sampling Method' which uses the mean values of each interval of the input variables instead of random values, to avoid the extreme cases that arise in the tail areas of some parameters. Next, the independence between the input variables is verified through a 'Correlation Coefficient Test' for statistical treatment of their uncertainties. The distribution type of the DNBR response is then determined through a 'Goodness of Fit Test'. Finally, the limit DNBR with one-sided 95% probability and 95% confidence level, DNBR_95/95, is estimated. The advantage of this methodology over the conventional statistical method using Response Surface and Monte Carlo simulation techniques lies in its simplicity of analysis procedure, while maintaining the same level of confidence in the limit DNBR result. This methodology is applied to two cases of DNBR margin calculation. The first case is the application to the determination of the limit DNBR, where the DNBR margin is determined by the difference between the nominal DNBR and the limit DNBR. The second case is the application to the determination of the nominal DNBR, where the DNBR margin is determined by the difference between the lower limit value of the nominal DNBR and the CHF correlation limit being used. From this study, it is deduced that the proposed methodology gives good agreement in the DNBR results.
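    A minimal sketch of the 'Modified Latin Hypercube Sampling' idea as described: each equal-probability stratum contributes its midpoint rather than a random draw, and the strata are randomly paired across input variables. The dimensions and the mapping to normal uncertainties are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def modified_lhs(n_samples, n_vars, seed=0):
    """Latin hypercube with stratum midpoints instead of random draws,
    avoiding the extreme tail values noted in the abstract."""
    rng = np.random.default_rng(seed)
    u_mid = (np.arange(n_samples) + 0.5) / n_samples  # interval midpoints
    design = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        design[:, j] = rng.permutation(u_mid)         # shuffle stratum order
    return design

u = modified_lhs(20, 3)
z = stats.norm.ppf(u)   # map to normally distributed input uncertainties
print(z[:5])
```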

  6. Testing and qualification of confidence in statistical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Serghiuta, D.; Tholammakkil, J.; Hammouda, N. [Canadian Nuclear Safety Commission (Canada)]; O'Hagan, A. [Sheffield Univ. (United Kingdom)]

    2014-07-01

    This paper discusses a framework for designing artificial test problems, evaluation criteria, and two of the benchmark tests developed under a research project initiated by the Canadian Nuclear Safety Commission to investigate the approaches for qualification of tolerance limit methods and algorithms proposed for application in optimization of CANDU regional/neutron overpower protection trip setpoints for aged conditions. A significant component of this investigation has been the development of a series of benchmark problems of gradually increased complexity, from simple 'theoretical' problems up to complex problems closer to the real application. The first benchmark problem discussed in this paper is a simplified scalar problem which does not involve extremal, maximum or minimum, operations, typically encountered in the real applications. The second benchmark is a high dimensional, but still simple, problem for statistical inference of maximum channel power during normal operation. Bayesian algorithms have been developed for each benchmark problem to provide an independent way of constructing tolerance limits from the same data and allow assessing how well different methods make use of those data and, depending on the type of application, evaluating what the level of 'conservatism' is. The Bayesian method is not, however, used as a reference method, or 'gold' standard, but simply as an independent review method. The approach and the tests developed can be used as a starting point for developing a generic suite (generic in the sense of potentially applying whatever the proposed statistical method) of empirical studies, with clear criteria for passing those tests. Some lessons learned, in particular concerning the need to assure the completeness of the description of the application and the role of completeness of input information, are also discussed. It is concluded that a formal process which includes extended and detailed benchmark

  7. Applying Statistical Design to Control the Risk of Over-Design with Stochastic Simulation

    Directory of Open Access Journals (Sweden)

    Yi Wu

    2010-02-01

    Full Text Available By comparing a hard real-time system and a soft real-time system, this article highlights the risk of over-design in soft real-time system design. To deal with this risk, a novel concept of statistical design is proposed. Statistical design is the process of accurately accounting for and mitigating the effects of variation in part geometry and other environmental conditions, while at the same time optimizing a target performance factor. However, statistical design can be a very difficult and complex task when using classical mathematical methods. Thus, a simulation methodology to optimize the design is proposed in order to bridge the gap between real-time analysis and optimization for robust and reliable system design.

  8. Simplified Freeman-Tukey test statistics for testing probabilities in ...

    African Journals Online (AJOL)

    This paper presents the simplified version of the Freeman-Tukey test statistic for testing hypothesis about multinomial probabilities in one, two and multidimensional contingency tables that does not require calculating the expected cell frequencies before test of significance. The simplified method established new criteria of ...

  9. Development of modelling algorithm of technological systems by statistical tests

    Science.gov (United States)

    Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.

    2018-03-01

    The paper tackles the problem of the economic assessment of design efficiency for various technological systems at the stage of their operation. The modelling algorithm for a technological system was implemented using statistical tests; taking the reliability index into account, it allows the level of machinery technical excellence to be estimated and the efficiency of design reliability to be defined against its performance. The economic feasibility of its application shall be determined on the basis of the service quality of a technological system, with further forecasting of the volumes and range of spare parts supply.

  10. Analysis of Preference Data Using Intermediate Test Statistic

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-06-01

    West African Journal of Industrial and Academic Research, Vol. 7, No. 1 (June 2013). Keywords: preference data, Friedman statistic, multinomial test statistic, intermediate test statistic.

  11. New Graphical Methods and Test Statistics for Testing Composite Normality

    Directory of Open Access Journals (Sweden)

    Marc S. Paolella

    2015-07-01

    Full Text Available Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.

  12. Design of durability test protocol for vehicular fuel cell systems operated in power-follow mode based on statistical results of on-road data

    Science.gov (United States)

    Xu, Liangfei; Reimer, Uwe; Li, Jianqiu; Huang, Haiyan; Hu, Zunyan; Jiang, Hongliang; Janßen, Holger; Ouyang, Minggao; Lehnert, Werner

    2018-02-01

    City buses using polymer electrolyte membrane (PEM) fuel cells are considered to be the most likely fuel cell vehicles to be commercialized in China. The technical specifications of the fuel cell systems (FCSs) these buses are equipped with will differ based on the powertrain configurations and vehicle control strategies, but can generally be classified into the power-follow and soft-run modes. Each mode imposes different levels of electrochemical stress on the fuel cells. Evaluating the aging behavior of fuel cell stacks under the conditions encountered in fuel cell buses requires new durability test protocols based on statistical results obtained during actual driving tests. In this study, we propose a systematic design method for fuel cell durability test protocols that correspond to the power-follow mode based on three parameters for different fuel cell load ranges. The powertrain configurations and control strategy are described herein, followed by a presentation of the statistical data for the duty cycles of FCSs in one city bus in the demonstration project. Assessment protocols are presented based on the statistical results using mathematical optimization methods, and are compared to existing protocols with respect to common factors, such as time at open circuit voltage and root-mean-square power.

  13. Modified Distribution-Free Goodness-of-Fit Test Statistic.

    Science.gov (United States)

    Chun, So Yeon; Browne, Michael W; Shapiro, Alexander

    2018-03-01

    Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.

  14. Log-concave Probability Distributions: Theory and Statistical Testing

    DEFF Research Database (Denmark)

    An, Mark Yuing

    1996-01-01

    This paper studies the broad class of log-concave probability distributions that arise in the economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics...

  15. Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods

    Science.gov (United States)

    Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan

    2016-01-01

    The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.

  16. Similar tests and the standardized log likelihood ratio statistic

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1986-01-01

    When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast to this there is a 'primitive' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n -3...

  17. Two independent pivotal statistics that test location and misspecification and add-up to the Anderson-Rubin statistic

    NARCIS (Netherlands)

    Kleibergen, F.R.

    2002-01-01

    We extend the novel pivotal statistics for testing the parameters in the instrumental variables regression model. We show that these statistics result from a decomposition of the Anderson-Rubin statistic into two independent pivotal statistics. The first statistic is a score statistic that tests

  18. Research design and statistical methods in Indian medical journals: a retrospective survey.

    Science.gov (United States)

    Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S; Mayya, Shreemathi S

    2015-01-01

    Good quality medical research generally requires not only an expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles which have been published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten (10) leading Indian medical journals were selected based on impact factors and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were study design types and their frequencies, the proportion of errors/defects in study design, statistical analyses, and implementation of the CONSORT checklist in RCTs (randomized clinical trials). From 2003 to 2013: The proportion of erroneous statistical analyses did not decrease (χ²=0.592, Φ=0.027, p=0.4418), 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ²=26.96, Φ=0.16, p<0.001) and the proportion of errors in study design decreased significantly (χ²=16.783, Φ=0.12, p<0.001). The proportion of randomized clinical trial designs has remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. Major decreases in error proportions were observed in both results presentation (χ²=24.477, Φ=0.17, p<0.001) and interpretation. Indian medical research seems to have made no major progress regarding the use of correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are quite rarely published and have a high proportion of

  19. Assessment of the beryllium lymphocyte proliferation test using statistical process control.

    Science.gov (United States)

    Cher, Daniel J; Deubner, David C; Kelsh, Michael A; Chapman, Pamela S; Ray, Rose M

    2006-10-01

    Despite more than 20 years of surveillance and epidemiologic studies using the beryllium blood lymphocyte proliferation test (BeBLPT) as a measure of beryllium sensitization (BeS) and as an aid for diagnosing subclinical chronic beryllium disease (CBD), improvements in specific understanding of the inhalation toxicology of CBD have been limited. Although epidemiologic data suggest that BeS and CBD risks vary by process/work activity, it has proven difficult to reach specific conclusions regarding the dose-response relationship between workplace beryllium exposure and BeS or subclinical CBD. One possible reason for this uncertainty could be misclassification of BeS resulting from variation in BeBLPT testing performance. The reliability of the BeBLPT, a biological assay that measures beryllium sensitization, is unknown. To assess the performance of four laboratories that conducted this test, we used data from a medical surveillance program that offered testing for beryllium sensitization with the BeBLPT. The study population was workers exposed to beryllium at various facilities over a 10-year period (1992-2001). Workers with abnormal results were offered diagnostic workups for CBD. Our analyses used a standard statistical technique, statistical process control (SPC), to evaluate test reliability. The study design involved a repeated measures analysis of BeBLPT results generated from the company-wide, longitudinal testing. Analytical methods included use of (1) statistical process control charts that examined temporal patterns of variation for the stimulation index, a measure of cell reactivity to beryllium; (2) correlation analysis that compared prior perceptions of BeBLPT instability to the statistical measures of test variation; and (3) assessment of the variation in the proportion of missing test results and how time periods with more missing data influenced SPC findings. During the period of this study, all laboratories displayed variation in test results that
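
    The statistical process control technique described above can be illustrated with a minimal Shewhart individuals chart. This is a generic sketch, not the study's analysis; the stimulation index values and the 3-sigma limits are assumptions.

    ```python
    # Individuals control chart for a lab's stimulation index (data invented).
    import numpy as np

    def control_limits(x):
        """Individuals-chart limits using the moving-range estimate of sigma."""
        x = np.asarray(x, dtype=float)
        mr = np.abs(np.diff(x))        # moving ranges of successive results
        sigma_hat = mr.mean() / 1.128  # d2 constant for subgroups of size 2
        center = x.mean()
        return center - 3 * sigma_hat, center, center + 3 * sigma_hat

    si = [2.1, 1.9, 2.3, 2.0, 2.2, 3.4, 2.1, 1.8]   # hypothetical indices
    lcl, center, ucl = control_limits(si)
    flagged = [(i, v) for i, v in enumerate(si) if not lcl <= v <= ucl]
    print(f"LCL={lcl:.2f} CL={center:.2f} UCL={ucl:.2f} flagged={flagged}")
    ```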

  20. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

    This article raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice... We argue that applying statistical significance tests and mechanically adhering to their results are highly problematic and detrimental to critical thinking. We claim that the use of such tests does not provide any advantages in relation to deciding whether differences between citation indicators...

  1. A study of statistical tests for near-real-time materials accountancy using field test data of Tokai reprocessing plant

    International Nuclear Information System (INIS)

    Ihara, Hitoshi; Nishimura, Hideo; Ikawa, Koji; Miura, Nobuyuki; Iwanaga, Masayuki; Kusano, Toshitsugu.

    1988-03-01

    A Near-Real-Time Materials Accountancy (NRTA) system had been developed as an advanced safeguards measure for the PNC Tokai Reprocessing Plant; a minicomputer system for NRTA data processing was designed and constructed. A full-scale field test was carried out as a JASPAS (Japan Support Program for Agency Safeguards) project with the Agency's participation, and the NRTA data processing system was used. Using this field test data, an investigation of the detection power under real circumstances was carried out for five statistical tests, i.e., a significance test of MUF, the CUMUF test, the average loss test, the MUF residual test and Page's test on MUF residuals. The results show that the CUMUF test, the average loss test, the MUF residual test and Page's test on MUF residuals are useful for detecting a significant loss or diversion. An unmeasured inventory estimation model for the PNC reprocessing plant was also developed in this study. Using this model, the field test data from the C-1 to 85-2 campaigns were re-analyzed. (author)
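
    Of the five tests listed, Page's test on MUF residuals is the easiest to sketch: it is a one-sided CUSUM. The reference value k and decision threshold h below are illustrative tuning constants, not values from the field test.

    ```python
    # Minimal sketch of Page's (CUSUM) test on a sequence of MUF residuals.
    import numpy as np

    def page_cusum(residuals, k=0.5, h=5.0):
        """Return the CUSUM path and the index of the first alarm (or None)."""
        s, path, alarm = 0.0, [], None
        for t, x in enumerate(np.asarray(residuals, dtype=float)):
            s = max(0.0, s + x - k)   # accumulate only drift exceeding k
            path.append(s)
            if alarm is None and s > h:
                alarm = t
        return np.array(path), alarm

    rng = np.random.default_rng(1)
    # Simulated residuals: in control for 30 periods, then a sustained loss
    muf = np.concatenate([rng.normal(0, 1, 30), rng.normal(1.0, 1, 30)])
    _, alarm = page_cusum(muf)
    print("first alarm at balance period:", alarm)
    ```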

  2. Teaching Statistics in Language Testing Courses

    Science.gov (United States)

    Brown, James Dean

    2013-01-01

    The purpose of this article is to examine the literature on teaching statistics for useful ideas that teachers of language testing courses can draw on and incorporate into their teaching toolkits as they see fit. To those ends, the article addresses eight questions: What is known generally about teaching statistics? Why are students so anxious…

  3. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Science.gov (United States)

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.

  4. Design of experiments for test of fuel element reliability

    International Nuclear Information System (INIS)

    Boehmert, J.; Juettner, C.; Linek, J.

    1989-01-01

    Changes of fuel element design and modifications of the operational conditions have to be tested in experiments and pilot projects for nuclear safety. Experimental design is a useful statistical method for minimizing the costs and risks of this procedure. The main problem of our work was to investigate the connection between the failure rate of fuel elements, sample size, confidence interval, and error probability. Using the statistical model of the binomial distribution, appropriate relations were derived and discussed. A stepwise procedure based on a modified sequential analysis according to Wald was developed as a strategy for introducing modifications of the fuel element design and of the operational conditions. (author)
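
    The kind of binomial relation the abstract refers to can be sketched for the simplest acceptance plan, a zero-failure test; this is a standard special case, not necessarily the exact relation derived in the paper.

    ```python
    # Smallest sample size n such that observing zero failures demonstrates,
    # at confidence level `confidence`, that the failure rate is below p_max:
    # (1 - p_max)^n <= 1 - confidence.
    import math

    def zero_failure_sample_size(p_max, confidence):
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_max))

    # 299 fuel elements must survive to show p <= 1% at 95% confidence
    print(zero_failure_sample_size(0.01, 0.95))
    ```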

  5. Experimental design techniques in statistical practice a practical software-based approach

    CERN Document Server

    Gardiner, W P

    1998-01-01

    Provides an introduction to the diverse subject area of experimental design, with many practical and applicable exercises to help the reader understand, present and analyse the data. The pragmatic approach offers technical training for the use of designs and teaches statistical and non-statistical skills in the design and analysis of project studies throughout science and industry. Discusses one-factor designs and blocking designs, factorial experimental designs, Taguchi methods and response surface methods, among other topics.

  6. A NEW TEST OF THE STATISTICAL NATURE OF THE BRIGHTEST CLUSTER GALAXIES

    International Nuclear Information System (INIS)

    Lin, Yen-Ting; Ostriker, Jeremiah P.; Miller, Christopher J.

    2010-01-01

    A novel statistic is proposed to examine the hypothesis that all cluster galaxies are drawn from the same luminosity distribution (LD). In such a 'statistical model' of galaxy LD, the brightest cluster galaxies (BCGs) are simply the statistical extreme of the galaxy population. Using a large sample of nearby clusters, we show that BCGs in high luminosity clusters (e.g., L_tot ≳ 4 × 10^11 h_70^-2 L_sun) are unlikely (probability ≤ 3 × 10^-4) to be drawn from the LD defined by all red cluster galaxies more luminous than M_r = -20. On the other hand, BCGs in less luminous clusters are consistent with being the statistical extreme. Applying our method to the second brightest galaxies, we show that they are consistent with being the statistical extreme, which implies that the BCGs are also distinct from non-BCG luminous, red, cluster galaxies. We point out some issues with the interpretation of the classical tests proposed by Tremaine and Richstone (TR) that are designed to examine the statistical nature of BCGs, investigate the robustness of both our statistical test and those of TR against difficulties in the photometry of galaxies of large angular size, and discuss the implications of our findings for surveys that use luminous red galaxies to measure the baryon acoustic oscillation features in the galaxy power spectrum.

  7. Significance levels for studies with correlated test statistics.

    Science.gov (United States)

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.

  8. The use of statistical tools in field testing of putative effects of genetically modified plants on nontarget organisms.

    Science.gov (United States)

    Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F

    2013-08-01

    To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTO's). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTO's. This review examines existing practices in GM plant field testing such as the way of randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTO's are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed to decide on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches - for example, analysis of variance (ANOVA) - are appropriate, for discontinuous data (counts) only generalized linear models (GLM) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and to assess whether such analyses were correctly applied. We offer generic advice to risk assessors and applicants that will
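
    The point about generalized linear models for count data can be made concrete with a short sketch. It assumes the Python statsmodels package and invented plot counts; it shows a Poisson GLM for NTO counts, not an analysis from the review.

    ```python
    # Poisson GLM comparing nontarget-organism counts on GM vs. control plots.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    treatment = np.repeat([0, 1], 20)                    # 0 = control, 1 = GM
    counts = rng.poisson(lam=np.where(treatment, 6, 8))  # counts per plot

    X = sm.add_constant(treatment)
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(fit.summary().tables[1])   # treatment effect on the log-count scale
    ```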

  9. SPSS for applied sciences basic statistical testing

    CERN Document Server

    Davis, Cole

    2013-01-01

    This book offers a quick and basic guide to using SPSS and provides a general approach to solving problems using statistical tests. It is both comprehensive in terms of the tests covered and the applied settings it refers to, and yet is short and easy to understand. Whether you are a beginner or an intermediate level test user, this book will help you to analyse different types of data in applied settings. It will also give you the confidence to use other statistical software and to extend your expertise to more specific scientific settings as required.The author does not use mathematical form

  10. A Statistical Approach to Optimizing Concrete Mixture Design

    OpenAIRE

    Ahmad, Shamsad; Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates...

  11. Variability aware compact model characterization for statistical circuit design optimization

    Science.gov (United States)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.

  12. Statistical metrology - measurement and modeling of variation for advanced process development and design rule generation

    International Nuclear Information System (INIS)

    Boning, Duane S.; Chung, James E.

    1998-01-01

    Advanced process technology will require more detailed understanding and tighter control of variation in devices and interconnects. The purpose of statistical metrology is to provide methods to measure and characterize variation, to model systematic and random components of that variation, and to understand the impact of variation on both yield and performance of advanced circuits. Of particular concern are spatial or pattern-dependencies within individual chips; such systematic variation within the chip can have a much larger impact on performance than wafer-level random variation. Statistical metrology methods will play an important role in the creation of design rules for advanced technologies. For example, a key issue in multilayer interconnect is the uniformity of interlevel dielectric (ILD) thickness within the chip. For the case of ILD thickness, we describe phases of statistical metrology development and application to understanding and modeling thickness variation arising from chemical-mechanical polishing (CMP). These phases include screening experiments, including the design of test structures and test masks to gather electrical or optical data, techniques for statistical decomposition and analysis of the data, and approaches to calibrating empirical and physical variation models. These models can be integrated with circuit CAD tools to evaluate different process integration or design rule strategies. One focus for the generation of interconnect design rules is guidelines for the use of 'dummy fill' or 'metal fill' to improve the uniformity of underlying metal density and thus improve the uniformity of oxide thickness within the die. Trade-offs that can be evaluated via statistical metrology include the improvements to uniformity possible versus the effect of increased capacitance due to additional metal

  13. Analysis and Evaluation of Statistical Models for Integrated Circuits Design

    Directory of Open Access Journals (Sweden)

    Sáenz-Noval J.J.

    2011-10-01

    Full Text Available Statistical models for integrated circuits (IC) allow us to estimate the percentage of acceptable devices in a batch before fabrication. At present, Pelgrom's model is the statistical model most widely accepted in industry; however, it was derived from a micrometer technology, which does not guarantee reliability in nanometric manufacturing processes. This work considers three of the most relevant statistical models in the industry and evaluates their limitations and advantages in analog design, so that the designer has a better criterion for making a choice. Moreover, it shows how several statistical models can be used for each one of the stages and design purposes.
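
    For readers unfamiliar with it, the core of Pelgrom's model is a one-line mismatch law: the standard deviation of the parameter difference between two identically drawn transistors scales inversely with the square root of gate area. The matching constant below is an assumed example value, not one from the article.

    ```python
    # Pelgrom mismatch law: sigma(Delta V_T) = A_VT / sqrt(W * L).
    import math

    def pelgrom_sigma_dvt(a_vt_mV_um, w_um, l_um):
        """sigma(Delta V_T) in mV, for A_VT in mV*um and gate dimensions in um."""
        return a_vt_mV_um / math.sqrt(w_um * l_um)

    # Assumed A_VT = 3.5 mV*um and a 1.0 um x 0.5 um device pair
    print(f"{pelgrom_sigma_dvt(3.5, 1.0, 0.5):.2f} mV")   # ~4.95 mV
    ```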

  14. A Statistical Perspective on Highly Accelerated Testing

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Highly accelerated life testing has been heavily promoted at Sandia (and elsewhere) as a means to rapidly identify product weaknesses caused by flaws in the product's design or manufacturing process. During product development, a small number of units are forced to fail at high stress. The failed units are then examined to determine the root causes of failure. The identification of the root causes of product failures exposed by highly accelerated life testing can instigate changes to the product's design and/or manufacturing process that result in a product with increased reliability. It is widely viewed that this qualitative use of highly accelerated life testing (often associated with the acronym HALT) can be useful. However, highly accelerated life testing has also been proposed as a quantitative means for "demonstrating" the reliability of a product where unreliability is associated with loss of margin via an identified and dominating failure mechanism. It is assumed that the dominant failure mechanism can be accelerated by changing the level of a stress factor that is assumed to be related to the dominant failure mode. In extreme cases, a minimal number of units (often from a pre-production lot) are subjected to a single highly accelerated stress relative to normal use. If no (or, sufficiently few) units fail at this high stress level, some might claim that a certain level of reliability has been demonstrated (relative to normal use conditions). Underlying this claim are assumptions regarding the level of knowledge associated with the relationship between the stress level and the probability of failure. The primary purpose of this document is to discuss (from a statistical perspective) the efficacy of using accelerated life testing protocols (and, in particular, "highly accelerated" protocols) to make quantitative inferences concerning the performance of a product (e.g., reliability) when in fact there is lack-of-knowledge and uncertainty concerning

  15. A comparison of test statistics for the recovery of rapid growth-based enumeration tests

    NARCIS (Netherlands)

    van den Heuvel, Edwin R.; IJzerman-Boon, Pieta C.

    This paper considers five test statistics for comparing the recovery of a rapid growth-based enumeration test with respect to the compendial microbiological method using a specific nonserial dilution experiment. The finite sample distributions of these test statistics are unknown, because they are

  16. A statistical test for outlier identification in data envelopment analysis

    Directory of Open Access Journals (Sweden)

    Morteza Khodabin

    2010-09-01

    Full Text Available In the use of peer group data to assess individual, typical or best practice performance, the effective detection of outliers is critical for achieving useful results. In these 'deterministic' frontier models, statistical theory is now mostly available. This paper deals with the statistical pared sample method and its capability of detecting outliers in data envelopment analysis. In the presented method, each observation is deleted from the sample once and the resulting linear program is solved, leading to a distribution of efficiency estimates. Based on the achieved distribution, a pared test is designed to identify the potential outlier(s). We illustrate the method through a real data set. The method could be used in a first step, as an exploratory data analysis, before using any frontier estimation.

  17. Design of a testing strategy using non-animal based test methods: lessons learnt from the ACuteTox project.

    Science.gov (United States)

    Kopp-Schneider, Annette; Prieto, Pilar; Kinsner-Ovaskainen, Agnieszka; Stanzel, Sven

    2013-06-01

    In the framework of toxicology, a testing strategy can be viewed as a series of steps which are taken to come to a final prediction about a characteristic of a compound under study. The testing strategy is performed either as a single-step procedure, usually called a test battery, using simultaneously all information collected on different endpoints, or as a tiered approach in which a decision tree is followed. The design of a testing strategy involves statistical considerations, such as the development of a statistical prediction model. During the EU FP6 ACuteTox project, several prediction models were proposed on the basis of statistical classification algorithms, which we illustrate here. The final choice of testing strategies was not based on statistical considerations alone; however, without thorough statistical evaluations a testing strategy cannot be identified. We present here a number of observations made from the statistical viewpoint which relate to the development of testing strategies. The points we make were derived from problems we had to deal with during the evaluation of this large research project. A central issue during the development of a prediction model is the danger of overfitting. Procedures are presented to deal with this challenge. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Instruction of Statistics via Computer-Based Tools: Effects on Statistics' Anxiety, Attitude, and Achievement

    Science.gov (United States)

    Ciftci, S. Koza; Karadag, Engin; Akdal, Pinar

    2014-01-01

    The purpose of this study was to determine the effect of statistics instruction using computer-based tools, on statistics anxiety, attitude, and achievement. This study was designed as quasi-experimental research and the pattern used was a matched pre-test/post-test with control group design. Data was collected using three scales: a Statistics…

  19. Trends in study design and the statistical methods employed in a leading general medicine journal.

    Science.gov (United States)

    Gosho, M; Sato, Y; Nagashima, K; Takahashi, S

    2018-02-01

    Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. The study of the comprehensive details and current trends of study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information about evidence-based medicine. Our purpose was to illustrate the study designs and statistical methods employed in recent medical literature. This was an extension study of Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Due to the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (e.g., the Kaplan-Meier estimator and the Cox regression model) were most frequently applied, the Gray test and the Fine-Gray proportional hazard model for considering competing risks were sometimes used for a more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were more frequently used than single imputation methods. These methods are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential design with interim analyses was one of the standard designs, and novel designs, such as adaptive dose selection and sample size re-estimation, were sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analysis in the light of the information found in some publications. Use of adaptive designs with interim analyses is increasing

  20. A statistical approach to optimizing concrete mixture design.

    Science.gov (United States)

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.
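
    The workflow described above can be sketched end to end: generate the 3³ full factorial design in coded levels, then fit a quadratic polynomial for compressive strength by least squares. The response values below are synthetic; only the design structure mirrors the study.

    ```python
    import itertools
    import numpy as np

    levels = [-1, 0, 1]                                           # coded levels
    design = np.array(list(itertools.product(levels, repeat=3)))  # 27 runs
    w, c, f = design.T        # w/cm ratio, cementitious content, fine/total

    # Quadratic model matrix: intercept, linear, two-way and squared terms
    X = np.column_stack([np.ones(27), w, c, f, w*c, w*f, c*f, w**2, c**2, f**2])

    rng = np.random.default_rng(3)
    strength = 40 - 5*w + 3*c + 1.5*f - 2*w**2 + rng.normal(0, 1, 27)  # fake MPa

    beta, *_ = np.linalg.lstsq(X, strength, rcond=None)
    print(np.round(beta, 2))  # estimated polynomial coefficients
    ```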

  1. A Statistical Approach to Optimizing Concrete Mixture Design

    Directory of Open Access Journals (Sweden)

    Shamsad Ahmad

    2014-01-01

    Full Text Available A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.

  2. Design and Statistics in Quantitative Translation (Process) Research

    DEFF Research Database (Denmark)

    Balling, Laura Winther; Hvelplund, Kristian Tangsgaard

    2015-01-01

    Traditionally, translation research has been qualitative, but quantitative research is becoming increasingly important, especially in translation process research but also in other areas of translation studies. This poses problems to many translation scholars since this way of thinking is unfamiliar. In this article, we attempt to mitigate these problems by outlining our approach to good quantitative research, all the way from research questions and study design to data preparation and statistics. We concentrate especially on the nature of the variables involved, both in terms of their scale and their role in the design; this has implications for both design and choice of statistics. Although we focus on quantitative research, we also argue that such research should be supplemented with qualitative analyses and considerations of the translation product.

  3. Using partial safety factors in wind turbine design and testing

    Energy Technology Data Exchange (ETDEWEB)

    Musial, W.D. [National Renewable Energy Lab., Golden, CO (United States)

    1997-12-31

    This paper describes the relationship between wind turbine design and testing in terms of the certification process. An overview of the current status of international certification is given along with a description of limit-state design basics. Wind turbine rotor blades are used to illustrate the principles discussed. These concepts are related to both International Electrotechnical Commission and Germanischer Lloyd design standards, and are covered using schematic representations of statistical load and material strength distributions. Wherever possible, interpretations of the partial safety factors are given with descriptions of their intended meaning. Under some circumstances, the authors' interpretations may be subjective. Next, the test-load factors are described in concept and then related to the design factors. Using technical arguments, it is shown that some of the design factors for both load and materials must be used in the test loading, but some should not be used. In addition, some test factors not used in the design may be necessary for an accurate test of the design. The results show that if the design assumptions do not clearly state the effects and uncertainties that are covered by the design's partial safety factors, outside parties such as test labs or certification agencies could impose their own meaning on these factors.

  4. Ensuring Positiveness of the Scaled Difference Chi-square Test Statistic.

    Science.gov (United States)

    Satorra, Albert; Bentler, Peter M

    2010-06-01

    A scaled difference test statistic T̃(d) that can be computed from the standard software output of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (2001). The statistic T̃(d) is asymptotically equivalent to the scaled difference test statistic T̄(d) introduced in Satorra (2000), which requires more involved computations beyond the standard output of SEM software. The test statistic T̃(d) has been widely used in practice, but in some applications it is negative due to negativity of its associated scaling correction. Using the implicit function theorem, this note develops an improved scaling correction leading to a new scaled difference statistic T̄(d) that avoids negative chi-square values.
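
    The Satorra-Bentler (2001) hand computation that this note improves on takes one line; the denominator below can be negative in small samples, which is exactly the problem the improved correction addresses. All numerical inputs are invented.

    ```python
    # Scaled chi-square difference test for nested SEMs: T0, c0, d0 belong to
    # the restricted model, T1, c1, d1 to the unrestricted one.
    def scaled_difference(t0, d0, c0, t1, d1, c1):
        cd = (d0 * c0 - d1 * c1) / (d0 - d1)  # scaling correction of the difference
        return (t0 - t1) / cd, d0 - d1

    td, dd = scaled_difference(t0=95.6, d0=24, c0=1.28, t1=60.1, d1=20, c1=1.31)
    print(f"scaled difference chi2 = {td:.2f} on {dd} df")
    ```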

  5. On the application of design of experiments to accelerated life testing

    International Nuclear Information System (INIS)

    Hakim-Mashhadi, M.

    1992-01-01

    Today, there is an increasing demand for improved quality and reliability due to increasing system complexity and increasing demands from customers. Continuous improvement of quality is not only a means of competition but also a matter of staying in the market. Accelerated life testing and statistical design of experiments are two methods needed for the improvement of quality, and their combined use is very advantageous and increases test efficiency. Accelerated life testing is a quick way to provide information on the life distribution of materials and products. By subjecting the test unit to conditions more severe than those of normal usage, the test time can be greatly reduced. Estimates of life at normal stress levels are obtained by extrapolating the available information through a reasonable acceleration model. Accelerated life testing has mostly been used to measure reliability, but it is high time to use it for the improvement of quality. Design of experiments serves to find out the effect of design parameters and other interesting factors on the performance measure and its variability. The information obtained is essential for a continuous improvement of quality. As an illustration, two sets of experiments are designed and performed at highly increased stress levels. The results are analysed and discussed, and a time-saving alternative is proposed. The combination of experimental design and accelerated life testing is discussed and illustrated. The combined use of these methods can be argued for in two different cases: one is an exploratory improvement investigation, and the other is verification of reliability. In either case, the combined use is advantageous and improves testing efficiency. Some general conclusions are drawn to be used for the planning and performance of statistically designed accelerated life testing experiments. (70 refs.) (au)
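
    The extrapolation step can be sketched with the widely used Arrhenius model; the abstract does not commit to one acceleration model, and the activation energy below is an assumed example value.

    ```python
    # Arrhenius acceleration factor between use and stress temperatures.
    import math

    K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV/K

    def arrhenius_af(ea_ev, t_use_c, t_stress_c):
        t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
        return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

    af = arrhenius_af(ea_ev=0.7, t_use_c=40, t_stress_c=125)
    print(f"1000 h at 125 C ~ {1000 * af:.0f} h at 40 C (AF = {af:.0f})")
    ```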

  6. Experimental toxicology: Issues of statistics, experimental design, and replication.

    Science.gov (United States)

    Briner, Wayne; Kirwan, Jeral

    2017-01-01

    The difficulty of replicating experiments has drawn considerable attention. Issues with replication occur for a variety of reasons ranging from experimental design to laboratory errors to inappropriate statistical analysis. Here we review a variety of guidelines for statistical analysis, design, and execution of experiments in toxicology. In general, replication can be improved by using hypothesis driven experiments with adequate sample sizes, randomization, and blind data collection techniques. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Statistical testing and power analysis for brain-wide association study.

    Science.gov (United States)

    Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng

    2018-04-05

    The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, the multiple correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on the Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis testings using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR), it can reduce false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need of non-parametric permutation to correct for multiple comparison, thus, it can efficiently tackle large datasets with high resolution fMRI images. The utility of our method is shown in a case-control study. Our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Optimal Design and Related Areas in Optimization and Statistics

    CERN Document Server

    Pronzato, Luc

    2009-01-01

    This edited volume, dedicated to Henry P. Wynn, reflects his broad range of research interests, focusing in particular on the applications of optimal design theory in optimization and statistics. It covers algorithms for constructing optimal experimental designs, general gradient-type algorithms for convex optimization, majorization and stochastic ordering, algebraic statistics, Bayesian networks and nonlinear regression. Written by leading specialists in the field, each chapter contains a survey of the existing literature along with substantial new material. This work will appeal to both the

  9. Statistical tests to compare motif count exceptionalities

    Directory of Open Access Journals (Sweden)

    Vandewalle Vincent

    2007-03-01

    Full Text Available Abstract. Background: Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results: We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion: The exact binomial test is particularly adapted for small counts. For large counts, we advise using the likelihood ratio test, which is asymptotic but strongly correlated with the exact binomial test and very simple to use.
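
    The exact binomial test can be sketched in its simplest form, ignoring the composition adjustment and the overlap corrections described above: under the null hypothesis, the count in sequence 1 given the total count is binomial with success probability set by the relative sequence lengths. Counts and lengths below are invented.

    ```python
    from scipy.stats import binomtest

    def compare_motif_counts(n1, len1, n2, len2):
        """Exact test that a motif is equally frequent (per base) in both sequences."""
        p0 = len1 / (len1 + len2)   # expected share of occurrences in sequence 1
        return binomtest(n1, n1 + n2, p0, alternative="two-sided")

    res = compare_motif_counts(n1=87, len1=4.0e6, n2=42, len2=1.5e6)
    print(f"p-value = {res.pvalue:.3g}")
    ```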

  10. Testing the statistical compatibility of independent data sets

    International Nuclear Information System (INIS)

    Maltoni, M.; Schwetz, T.

    2003-01-01

    We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ² minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistic is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed
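
    Assuming the construction in which the individual χ² minima are subtracted from the minimum of the combined fit, the test reduces to a few lines; the numerical inputs below are invented.

    ```python
    from scipy.stats import chi2

    def compatibility_test(chi2_comb_min, chi2_indiv_mins, n_par_comb, n_par_indiv):
        """Compatibility statistic and p-value for independent data sets."""
        stat = chi2_comb_min - sum(chi2_indiv_mins)
        dof = sum(n_par_indiv) - n_par_comb   # parameters fitted in each case
        return stat, dof, chi2.sf(stat, dof)

    stat, dof, p = compatibility_test(12.9, [1.2, 4.3], 2, [2, 2])
    print(f"stat = {stat:.1f} on {dof} df, p = {p:.3f}")
    ```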

  11. Adaptive designs for the one-sample log-rank test.

    Science.gov (United States)

    Schmidt, Rene; Faldum, Andreas; Kwiecien, Robert

    2017-09-22

    Traditional designs in phase IIa cancer trials are single-arm designs with a binary outcome, for example, tumor response. In some settings, however, a time-to-event endpoint might appear more appropriate, particularly in the presence of loss to follow-up. Then the one-sample log-rank test might be the method of choice. It allows the survival curve of the patients under treatment to be compared to a prespecified reference survival curve. The reference curve usually represents the expected survival under the standard of care. In this work, convergence of the one-sample log-rank statistic to Brownian motion is proven using Rebolledo's martingale central limit theorem while accounting for staggered entry times of the patients. On this basis, a confirmatory adaptive one-sample log-rank test is proposed where provision is made for data-dependent sample size reassessment. The focus is to apply the inverse normal method. This is done in two different directions. The first strategy exploits the independent increments property of the one-sample log-rank statistic. The second strategy is based on the patient-wise separation principle. It is shown by simulation that the proposed adaptive test might help to rescue an underpowered trial and at the same time lowers the average sample number (ASN) under the null hypothesis as compared to a single-stage fixed sample design. © 2017, The International Biometric Society.
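
    The inverse normal method used by both strategies combines stage-wise one-sided p-values with prespecified weights; the weights and p-values below are illustrative.

    ```python
    import math
    from scipy.stats import norm

    def inverse_normal_combination(p1, p2, w1, w2):
        """Combined z-score and one-sided p-value for two stage-wise p-values."""
        z = (w1 * norm.isf(p1) + w2 * norm.isf(p2)) / math.hypot(w1, w2)
        return z, norm.sf(z)

    z, p = inverse_normal_combination(p1=0.08, p2=0.03,
                                      w1=math.sqrt(0.5), w2=math.sqrt(0.5))
    print(f"combined z = {z:.2f}, one-sided p = {p:.4f}")
    ```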

  12. Demonstration of a software design and statistical analysis methodology with application to patient outcomes data sets.

    Science.gov (United States)

    Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard

    2013-11-01

    With the emergence of clinical outcomes databases as tools utilized routinely within institutions comes the need for software tools to support automated statistical analysis of these large data sets and interinstitutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. A software application was constructed to automate analysis of patient outcomes data using a wide range of statistical metrics, by combining use of C#.Net and R code. The accuracy and speed of the code were evaluated using benchmark data sets. The approach provides the data needed to evaluate combinations of statistical measurements for their ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operating characteristic curves to identify a threshold value and combines use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. The work demonstrates the viability of the design approach and the software tool for analysis of large data sets.
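
    The filtering pipeline can be sketched with standard SciPy tests: pick a dose threshold from an ROC curve (here via the Youden index), then screen with Fisher exact, Welch t and Kolmogorov-Smirnov tests. The synthetic data and the brute-force threshold search below are illustrative, not the paper's implementation.

    ```python
    import numpy as np
    from scipy.stats import fisher_exact, ttest_ind, ks_2samp

    rng = np.random.default_rng(7)
    dose = np.concatenate([rng.normal(18, 4, 60), rng.normal(26, 4, 40)])
    event = np.concatenate([np.zeros(60, int), np.ones(40, int)])  # toxicity flag

    # Brute-force ROC: choose the threshold maximizing sensitivity+specificity-1
    youden = [(((dose >= t) & (event == 1)).sum() / 40        # sensitivity
               + ((dose < t) & (event == 0)).sum() / 60 - 1,  # + specificity - 1
               t) for t in np.unique(dose)]
    _, t_star = max(youden)

    high = dose >= t_star
    table = [[int((high & (event == 1)).sum()), int((high & (event == 0)).sum())],
             [int((~high & (event == 1)).sum()), int((~high & (event == 0)).sum())]]
    print("threshold:", round(t_star, 1), " Fisher p:", fisher_exact(table)[1])
    print("Welch p:", ttest_ind(dose[event == 1], dose[event == 0], equal_var=False)[1])
    print("KS p:", ks_2samp(dose[event == 1], dose[event == 0])[1])
    ```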

  13. Paradigms for adaptive statistical information designs: practical experiences and strategies.

    Science.gov (United States)

    Wang, Sue-Jane; Hung, H M James; O'Neill, Robert

    2012-11-10

    In the last decade or so, interest in adaptive design clinical trials has gradually been directed towards their use in regulatory submissions by pharmaceutical drug sponsors to evaluate investigational new drugs. Methodological advances of adaptive designs have been abundant in the statistical literature since the 1970s. The adaptive design paradigm has been enthusiastically perceived to increase efficiency and to be more cost-effective than the fixed design paradigm for drug development. Much interest in adaptive designs is in those studies with two stages, where stage 1 is exploratory and stage 2 depends upon stage 1 results, but where the data of both stages will be combined to yield statistical evidence for use as that of a pivotal registration trial. It was not until the recent release of the US Food and Drug Administration Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics (2010) that the boundaries of flexibility for adaptive designs were specifically considered for regulatory purposes, including what are exploratory goals, and what are the goals of adequate and well-controlled (A&WC) trials (2002). The guidance carefully described these distinctions in an attempt to minimize the confusion between the goals of preliminary learning phases of drug development, which are inherently substantially uncertain, and the definitive inference-based phases of drug development. In this paper, in addition to discussing some aspects of adaptive designs in a confirmatory study setting, we underscore the value of adaptive designs when used in exploratory trials to improve the planning of subsequent A&WC trials. One type of adaptation that is receiving attention is the re-estimation of the sample size during the course of the trial. We refer to this type of adaptation as an adaptive statistical information design. Specifically, a case example is used to illustrate how challenging it is to plan a confirmatory adaptive statistical information

  14. West Valley high-level nuclear waste glass development: a statistically designed mixture study

    Energy Technology Data Exchange (ETDEWEB)

    Chick, L.A.; Bowen, W.M.; Lokken, R.O.; Wald, J.W.; Bunnell, L.R.; Strachan, D.M.

    1984-10-01

    The first full-scale conversion of high-level commercial nuclear wastes to glass in the United States will be conducted at West Valley, New York, by West Valley Nuclear Services Company, Inc. (WVNS), for the US Department of Energy. Pacific Northwest Laboratory (PNL) is supporting WVNS in the design of the glass-making process and the chemical formulation of the glass. This report describes the statistically designed study performed by PNL to develop the glass composition recommended for use at West Valley. The recommended glass contains 28 wt% waste, as limited by process requirements. The waste loading and the silica content (45 wt%) are similar to those in previously developed waste glasses; however, the new formulation contains more calcium and less boron. A series of tests verified that the increased calcium results in improved chemical durability and does not adversely affect the other modeled properties. The optimization study assessed the effects of seven oxide components on glass properties. Over 100 melts combining the seven components into a wide variety of statistically chosen compositions were tested. Viscosity, electrical conductivity, thermal expansion, crystallinity, and chemical durability were measured and empirically modeled as a function of the glass composition. The mathematical models were then used to predict the optimum formulation. This glass was tested and adjusted to arrive at the final composition recommended for use at West Valley. 56 references, 49 figures, 18 tables.
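
    The property-modeling step the study describes is, in essence, regression of measured properties on composition. Below is a minimal sketch using a Scheffé quadratic mixture model (linear blending terms plus pairwise interactions, no intercept) for three components; the actual study varied seven oxides, and all compositions and property values here are synthetic.

```python
# Hedged sketch: fit a Scheffe quadratic mixture model and predict a property.
import numpy as np

rng = np.random.default_rng(2)
comp = rng.dirichlet(np.ones(3), size=20)            # 20 glasses, 3 components
true_b = np.array([2.0, 1.0, 3.0])
y = comp @ true_b + 4.0 * comp[:, 0] * comp[:, 1] + rng.normal(0, 0.05, 20)

# Design matrix: linear terms plus pairwise blending terms, no intercept.
pairs = [(0, 1), (0, 2), (1, 2)]
D = np.column_stack([comp] + [comp[:, i] * comp[:, j] for i, j in pairs])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)

new = np.array([0.28, 0.45, 0.27])                   # candidate composition
row = np.concatenate([new, [new[i] * new[j] for i, j in pairs]])
print("predicted property:", row @ coef)
```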

  15. HOW TO SELECT APPROPRIATE STATISTICAL TEST IN SCIENTIFIC ARTICLES

    Directory of Open Access Journals (Sweden)

    Vladimir TRAJKOVSKI

    2016-09-01

    Full Text Available Statistics is a mathematical science dealing with the collection, analysis, interpretation, and presentation of masses of numerical data in order to draw relevant conclusions. Statistics is a form of mathematical analysis that uses quantified models, representations and synopses for a given set of experimental data or real-life studies. Students and young researchers in biomedical sciences and in special education and rehabilitation often declare that they chose to enroll in those study programs because they lack knowledge of, or interest in, mathematics. This is a sad statement, but there is much truth in it. The aim of this editorial is to help young researchers select the statistical techniques and statistical software appropriate for the purposes and conditions of a particular analysis. The most important statistical tests are reviewed in the article. Knowing how to choose the right statistical test is an important asset and decision in research data processing and in the writing of scientific papers. Young researchers and authors should know how to choose and how to use statistical methods. The competent researcher will need knowledge of statistical procedures. That might include an introductory statistics course, and it most certainly includes using a good statistics textbook. For this purpose, there is a need to reinstate Statistics as a mandatory subject in the curriculum of the Institute of Special Education and Rehabilitation at the Faculty of Philosophy in Skopje. Young researchers need additional courses in statistics. They need to train themselves to use statistical software in an appropriate way.
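
    The editorial's selection logic for the common two-group case can be written down mechanically. The helper below is a minimal sketch of one such decision path (normality via Shapiro-Wilk, variance homogeneity via Levene), not a substitute for the fuller guidance in the article.

```python
# Hedged sketch: choose Student's t, Welch's t, or Mann-Whitney from the data.
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Pick and run a two-sample location test based on assumption checks."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if not normal:
        return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue
    equal_var = stats.levene(a, b).pvalue > alpha
    name = "Student t" if equal_var else "Welch t"
    return name, stats.ttest_ind(a, b, equal_var=equal_var).pvalue

rng = np.random.default_rng(3)
print(compare_two_groups(rng.normal(0, 1, 30), rng.lognormal(0, 1, 30)))
```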

  16. Fuel rod design by statistical methods for MOX fuel

    International Nuclear Information System (INIS)

    Heins, L.; Landskron, H.

    2000-01-01

    Statistical methods in fuel rod design have received more and more attention in recent years. One of several possible ways to use statistical methods in fuel rod design can be described as follows: Monte Carlo calculations are performed using the fuel rod code CARO. For each run with CARO, the set of input data is modified: parameters describing the design of the fuel rod (geometrical data, density, etc.) and modeling parameters are randomly selected according to their individual distributions. Power histories are varied systematically in such a way that each power history of the relevant core management calculation is represented in the Monte Carlo calculations with equal frequency. The frequency distributions of results such as rod internal pressure and cladding strain generated by the Monte Carlo calculation are evaluated and compared with the design criteria. Up to now, this methodology has been applied to licensing calculations for PWRs and BWRs, UO2 and MOX fuel, in 3 countries. Especially for the insertion of MOX fuel, which results in power histories with relatively high linear heat generation rates at higher burnup, the statistical methodology is an appropriate approach to demonstrate compliance with licensing requirements. (author)
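
    A toy version of the statistical shell around such a calculation is sketched below: sample fabrication and modeling parameters from their distributions, run a rod model, and compare the output distribution with a design limit. CARO itself is proprietary, so the rod model and the limit here are deliberately simplistic, hypothetical stand-ins.

```python
# Hedged sketch: Monte Carlo propagation of fuel rod input uncertainties.
import numpy as np

rng = np.random.default_rng(4)
N, LIMIT = 10_000, 11.0                       # runs, MPa design limit (assumed)

gap = rng.normal(0.17, 0.01, N)               # mm, pellet-clad gap
density = rng.normal(10.4, 0.05, N)           # g/cm^3
model_bias = rng.normal(1.0, 0.03, N)         # modeling-parameter factor

def rod_internal_pressure(gap, density, bias):
    """Stand-in for a CARO run; not a physical model."""
    return bias * (9.0 + 25.0 * (0.20 - gap) + 0.8 * (density - 10.3))

p = rod_internal_pressure(gap, density, model_bias)
print("P(limit exceeded) ~", np.mean(p > LIMIT))
```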

  17. Monte Carlo testing in spatial statistics, with applications to spatial residuals

    DEFF Research Database (Denmark)

    Mrkvička, Tomáš; Soubeyrand, Samuel; Myllymäki, Mari

    2016-01-01

    This paper reviews recent advances made in testing in spatial statistics and discussed at the Spatial Statistics conference in Avignon 2015. The rank and directional quantile envelope tests are discussed and practical rules for their use are provided. These tests are global envelope tests with an appropriate type I error probability. Two novel examples are given of their usage. First, in addition to the test based on a classical one-dimensional summary function, the goodness-of-fit of a point process model is evaluated by means of the test based on a higher-dimensional functional statistic, namely...

  18. A novel genome-information content-based statistic for genome-wide association analysis designed for next-generation sequencing data.

    Science.gov (United States)

    Luo, Li; Zhu, Yun; Xiong, Momiao

    2012-06-01

    The genome-wide association studies (GWAS) designed for next-generation sequencing data involve testing association of genomic variants, including common, low frequency, and rare variants. The current strategies for association studies are well developed for identifying association of common variants with common diseases, but may be ill-suited when large amounts of allelic heterogeneity are present in sequence data. Recently, group tests that analyze collective frequency differences between cases and controls have shifted the variant-by-variant analysis paradigm of GWAS of common variants to the collective testing of multiple variants in the association analysis of rare variants. However, group tests ignore differences in genetic effects among SNPs at different genomic locations. As an alternative to group tests, we developed a novel genome-information content-based statistic for testing association of the entire allele frequency spectrum of genomic variation with disease. To evaluate the performance of the proposed statistic, we use large-scale simulations based on whole genome low coverage pilot data in the 1000 Genomes Project to calculate the type I error rates and power of seven alternative statistics: the genome-information content-based statistic, the generalized T², the collapsing method, the combined multivariate and collapsing (CMC) method, the individual χ² test, the weighted-sum statistic, and the variable threshold statistic. Finally, we apply the seven statistics to a published resequencing dataset from the ANGPTL3, ANGPTL4, ANGPTL5, and ANGPTL6 genes in the Dallas Heart Study. We report that the genome-information content-based statistic has significantly improved type I error rates and higher power than the other six statistics in both simulated and empirical datasets.
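
    Of the comparison statistics named above, the collapsing family is the easiest to show compactly. The sketch below is a stripped-down, CMC-style carrier test on simulated rare-variant genotypes; the information-content statistic itself is more involved and is not reproduced here.

```python
# Hedged sketch: collapse rare variants to carrier status and test by Fisher.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(15)
n, m = 500, 30                                  # subjects per group, rare sites
cases = rng.random((n, m)) < 0.015              # carrier indicators per site
controls = rng.random((n, m)) < 0.008

case_carrier = cases.any(axis=1).sum()          # carries any rare variant
ctl_carrier = controls.any(axis=1).sum()
table = [[case_carrier, n - case_carrier], [ctl_carrier, n - ctl_carrier]]
print("collapsing-test p-value:", fisher_exact(table)[1])
```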

  19. Scalable Algorithms for Adaptive Statistical Designs

    Directory of Open Access Journals (Sweden)

    Robert Oehmke

    2000-01-01

    Full Text Available We present a scalable, high-performance solution to multidimensional recurrences that arise in adaptive statistical designs. Adaptive designs are an important class of learning algorithms for a stochastic environment, and we focus on the problem of optimally assigning patients to treatments in clinical trials. While adaptive designs have significant ethical and cost advantages, they are rarely utilized because of the complexity of optimizing and analyzing them. Computational challenges include massive memory requirements, few calculations per memory access, and multiply-nested loops with dynamic indices. We analyze the effects of various parallelization options, and while standard approaches do not work well, with effort an efficient, highly scalable program can be developed. This allows us to solve problems thousands of times more complex than those solved previously, which helps make adaptive designs practical. Further, our work applies to many other problems involving neighbor recurrences, such as generalized string matching.
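
    The flavor of the recurrences being scaled up can be seen in a toy version: backward induction for optimal adaptive allocation between two Bernoulli arms under uniform priors. The growth of the (s1, f1, s2, f2) state space with horizon is precisely the memory problem the paper addresses; the horizon here is kept tiny.

```python
# Hedged sketch: tiny dynamic program for optimal two-armed adaptive allocation.
from functools import lru_cache

HORIZON = 20  # patients to allocate; state space explodes as this grows

@lru_cache(maxsize=None)
def value(s1, f1, s2, f2):
    """Expected successes under optimal play from this success/failure state."""
    if s1 + f1 + s2 + f2 == HORIZON:
        return 0.0
    p1 = (s1 + 1) / (s1 + f1 + 2)            # posterior mean, Beta(1,1) prior
    p2 = (s2 + 1) / (s2 + f2 + 2)
    arm1 = p1 * (1 + value(s1 + 1, f1, s2, f2)) + (1 - p1) * value(s1, f1 + 1, s2, f2)
    arm2 = p2 * (1 + value(s1, f1, s2 + 1, f2)) + (1 - p2) * value(s1, f1, s2, f2 + 1)
    return max(arm1, arm2)

print("expected successes under optimal adaptive allocation:", value(0, 0, 0, 0))
```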

  20. Kolmogorov complexity, pseudorandom generators and statistical models testing

    Czech Academy of Sciences Publication Activity Database

    Šindelář, Jan; Boček, Pavel

    2002-01-01

    Roč. 38, č. 6 (2002), s. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords : Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002

  1. statistical tests for frequency distribution of mean gravity anomalies

    African Journals Online (AJOL)

    ES Obe

    1980-03-01

    Mar 1, 1980 ... STATISTICAL TESTS FOR FREQUENCY DISTRIBUTION OF MEAN GRAVITY ANOMALIES. By ... approach. Kaula [1, 2] discussed the method of applying statistical techniques in the ... mathematical foundation of physical ...

  2. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Science.gov (United States)

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  3. Survey of editors and reviewers of high-impact psychology journals: statistical and research design problems in submitted manuscripts.

    Science.gov (United States)

    Harris, Alex; Reeder, Rachelle; Hyun, Jenny

    2011-01-01

    The authors surveyed 21 editors and reviewers from major psychology journals to identify and describe the statistical and design errors they encounter most often and to get their advice regarding prevention of these problems. Content analysis of the text responses revealed themes in 3 major areas: (a) problems with research design and reporting (e.g., lack of an a priori power analysis, lack of congruence between research questions and study design/analysis, failure to adequately describe statistical procedures); (b) inappropriate data analysis (e.g., improper use of analysis of variance, too many statistical tests without adjustments, inadequate strategy for addressing missing data); and (c) misinterpretation of results. If researchers attended to these common methodological and analytic issues, the scientific quality of manuscripts submitted to high-impact psychology journals might be significantly improved.

  4. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    Science.gov (United States)

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.

  5. Statistical Analysis of Designed Experiments Theory and Applications

    CERN Document Server

    Tamhane, Ajit C

    2012-01-01

    An indispensable guide to understanding and designing modern experiments. The tools and techniques of Design of Experiments (DOE) allow researchers to successfully collect, analyze, and interpret data across a wide array of disciplines. Statistical Analysis of Designed Experiments provides a modern and balanced treatment of DOE methodology with thorough coverage of the underlying theory and standard designs of experiments, guiding the reader through applications to research in various fields such as engineering, medicine, business, and the social sciences. The book supplies a foundation for the

  6. Design Driven Testing Test Smarter, Not Harder

    CERN Document Server

    Stephens, M

    2010-01-01

    The groundbreaking book Design Driven Testing brings sanity back to the software development process by flipping around the concept of Test Driven Development (TDD) - restoring the concept of using testing to verify a design instead of pretending that unit tests are a replacement for design. Anyone who feels that TDD is "Too Damn Difficult" will appreciate this book. Design Driven Testing shows that, by combining a forward-thinking development process with cutting-edge automation, testing can be a finely targeted, business-driven, rewarding effort. In other words, you'll learn how to test

  7. Inferential statistics, power estimates, and study design formalities continue to suppress biomedical innovation

    OpenAIRE

    Kern, Scott E.

    2014-01-01

    Innovation is the direct intended product of certain styles in research, but not of others. Fundamental conflicts between descriptive vs inferential statistics, deductive vs inductive hypothesis testing, and exploratory vs pre-planned confirmatory research designs have been played out over decades, with winners and losers and consequences. Longstanding warnings from both academics and research-funding interests have failed to influence effectively the course of these battles. The NIH publicly...

  8. Statistical inferences for bearings life using sudden death test

    Directory of Open Access Journals (Sweden)

    Morariu Cristin-Olimpiu

    2017-01-01

    Full Text Available In this paper we propose a calculation method for estimating reliability indicators and complete statistical inference for the three-parameter Weibull distribution of bearing life. Using experimental values for the durability of bearings tested on stands by sudden death testing involves a series of particularities in estimation by the maximum likelihood method and in carrying out the statistical inference. The paper details these features and also provides an example calculation.
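
    For the estimation step alone (ignoring the sudden-death censoring scheme, which needs a custom likelihood), a three-parameter Weibull can be fitted by maximum likelihood with scipy, as sketched below on synthetic lives. Three-parameter fits can be numerically delicate, so treat this as illustrative.

```python
# Hedged sketch: MLE fit of a three-parameter Weibull (shape, location, scale).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
lives = stats.weibull_min.rvs(1.8, loc=2.0e6, scale=1.2e7, size=60,
                              random_state=rng)   # synthetic revolutions

shape, loc, scale = stats.weibull_min.fit(lives)  # unconstrained 3-parameter MLE
L10 = stats.weibull_min.ppf(0.10, shape, loc, scale)  # 10th-percentile life
print(f"beta={shape:.2f}, gamma={loc:.3g}, eta={scale:.3g}, L10={L10:.3g}")
```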

  9. Fundamentals of statistical experimental design and analysis

    CERN Document Server

    Easterling, Robert G

    2015-01-01

    Professionals in all areas - business; government; the physical, life, and social sciences; engineering; medicine, etc. - benefit from using statistical experimental design to better understand their worlds and then use that understanding to improve the products, processes, and programs they are responsible for. This book aims to provide the practitioners of tomorrow with a memorable, easy to read, engaging guide to statistics and experimental design. This book uses examples, drawn from a variety of established texts, and embeds them in a business or scientific context, seasoned with a dash of humor, to emphasize the issues and ideas that led to the experiment and the what-do-we-do-next? steps after the experiment. Graphical data displays are emphasized as means of discovery and communication and formulas are minimized, with a focus on interpreting the results that software produce. The role of subject-matter knowledge, and passion, is also illustrated. The examples do not require specialized knowledge, and t...

  10. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    Science.gov (United States)

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  11. Guidelines for the design and statistical analysis of experiments in papers submitted to ATLA.

    Science.gov (United States)

    Festing, M F

    2001-01-01

    In vitro experiments need to be well designed and correctly analysed if they are to achieve their full potential to replace the use of animals in research. An "experiment" is a procedure for collecting scientific data in order to answer a hypothesis, or to provide material for generating new hypotheses, and differs from a survey because the scientist has control over the treatments that can be applied. Most experiments can be classified into one of a few formal designs, the most common being completely randomised, and randomised block designs. These are quite common with in vitro experiments, which are often replicated in time. Some experiments involve a single independent (treatment) variable, while other "factorial" designs simultaneously vary two or more independent variables, such as drug treatment and cell line. Factorial designs often provide additional information at little extra cost. Experiments need to be carefully planned to avoid bias, be powerful yet simple, provide for a valid statistical analysis and, in some cases, have a wide range of applicability. Virtually all experiments need some sort of statistical analysis in order to take account of biological variation among the experimental subjects. Parametric methods using the t test or analysis of variance are usually more powerful than non-parametric methods, provided the underlying assumptions of normality of the residuals and equal variances are approximately valid. The statistical analyses of data from a completely randomised design, and from a randomised-block design are demonstrated in Appendices 1 and 2, and methods of determining sample size are discussed in Appendix 3. Appendix 4 gives a checklist for authors submitting papers to ATLA.
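
    In the spirit of the worked analyses in the appendices, the sketch below runs a two-way ANOVA for a randomised block design (treatment plus block) with statsmodels. The viability data and factor names are invented.

```python
# Hedged sketch: ANOVA for a randomised block design via statsmodels OLS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(6)
treatments, blocks = ["control", "low", "high"], ["run1", "run2", "run3", "run4"]
rows = [(t, b, {"control": 100, "low": 92, "high": 80}[t]      # treatment effect
         + {"run1": 0, "run2": 3, "run3": -2, "run4": 1}[b]    # block (run) effect
         + rng.normal(0, 4))                                   # residual noise
        for t in treatments for b in blocks]
df = pd.DataFrame(rows, columns=["treatment", "block", "viability"])

fit = smf.ols("viability ~ C(treatment) + C(block)", data=df).fit()
print(anova_lm(fit))
```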

  12. Statistical Estimation of Heterogeneities: A New Frontier in Well Testing

    Science.gov (United States)

    Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.

    2001-12-01

    Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.

  13. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    Science.gov (United States)

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and the Satorra-Bentler mean-scaled statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness in small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book are used to illustrate the real-world performance of this statistic.

  14. Statistical characteristics of mechanical heart valve cavitation in accelerated testing.

    Science.gov (United States)

    Wu, Changfu; Hwang, Ned H C; Lin, Yu-Kweng M

    2004-07-01

    Cavitation damage has been observed on mechanical heart valves (MHVs) undergoing accelerated testing. Cavitation itself can be modeled as a stochastic process, as it varies from beat to beat of the testing machine. This in-vitro study was undertaken to investigate the statistical characteristics of MHV cavitation. A 25-mm St. Jude Medical bileaflet MHV (SJM 25) was tested in an accelerated tester at various pulse rates, ranging from 300 to 1,000 bpm, with stepwise increments of 100 bpm. A miniature pressure transducer was placed near a leaflet tip on the inflow side of the valve, to monitor regional transient pressure fluctuations at instants of valve closure. The pressure trace associated with each beat was passed through a 70 kHz high-pass digital filter to extract the high-frequency oscillation (HFO) components resulting from the collapse of cavitation bubbles. Three intensity-related measures were calculated for each HFO burst: its time span; its local root-mean-square (LRMS) value; and the area enveloped by the absolute value of the HFO pressure trace and the time axis, referred to as cavitation impulse. These were treated as stochastic processes, of which the first-order probability density functions (PDFs) were estimated for each test rate. Both the LRMS value and cavitation impulse were log-normal distributed, and the time span was normal distributed. These distribution laws were consistent at different test rates. The present investigation was directed at understanding MHV cavitation as a stochastic process. The results provide a basis for establishing further the statistical relationship between cavitation intensity and time-evolving cavitation damage on MHV surfaces. These data are required to assess and compare the performance of MHVs of different designs.
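
    The distributional claim in this abstract (log-normal intensity measures, normal time spans) is straightforward to check on any per-beat series. Below is a hedged sketch on simulated impulse values comparing log-normal and normal fits by log-likelihood.

```python
# Hedged sketch: compare log-normal vs normal fits to a cavitation measure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
impulse = rng.lognormal(mean=-1.0, sigma=0.6, size=600)   # per-beat impulses

s, loc, scale = stats.lognorm.fit(impulse, floc=0)        # log-normal MLE
mu, sd = stats.norm.fit(impulse)                          # normal MLE
ll_logn = stats.lognorm.logpdf(impulse, s, loc, scale).sum()
ll_norm = stats.norm.logpdf(impulse, mu, sd).sum()
print("log-normal preferred:", ll_logn > ll_norm)
```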

  15. 688,112 statistical results : Content mining psychology articles for statistical test results

    NARCIS (Netherlands)

    Hartgerink, C.H.J.

    2016-01-01

    In this data deposit, I describe a dataset that is the result of content mining 167,318 published articles for statistical test results reported according to the standards prescribed by the American Psychological Association (APA). Articles published by the APA, Springer, Sage, and Taylor & Francis

  16. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    Science.gov (United States)

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

  17. Test design requirements: Thermal conductivity probe testing

    International Nuclear Information System (INIS)

    Heath, R.E.

    1985-01-01

    This document establishes the test design requirements for development of a thermal conductivity probe test. The thermal conductivity probe determines in situ thermal conductivity using a line source transient heat conduction analysis. This document presents the rationale for thermal conductivity measurement using a thermal conductivity probe. A general test description is included. Support requirements along with design constraints are detailed to allow simple design of the thermal conductivity probe and test. The schedule and delivery requirements of the responsible test designer are also included. 7 refs., 1 fig
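
    The line-source analysis the probe relies on reduces to a linear fit: at late times the temperature rise follows T(t) ≈ (q/4πk)·ln t + C, so the conductivity k is recovered from the slope of T against ln t. The sketch below demonstrates this on synthetic probe data with an assumed heater power q.

```python
# Hedged sketch: line-source (transient) thermal conductivity estimation.
import numpy as np

q = 20.0                                   # W/m, heater power per unit length (assumed)
k_true = 1.5                               # W/(m K), used only to generate data
t = np.linspace(30, 300, 50)               # s, late-time window
rng = np.random.default_rng(8)
T = q / (4 * np.pi * k_true) * np.log(t) + 25.0 + rng.normal(0, 0.02, t.size)

slope, _ = np.polyfit(np.log(t), T, 1)     # T vs ln(t) is linear at late times
print("estimated k =", q / (4 * np.pi * slope), "W/(m K)")
```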

  18. Sb2Te3 and Its Superlattices: Optimization by Statistical Design.

    Science.gov (United States)

    Behera, Jitendra K; Zhou, Xilin; Ranjan, Alok; Simpson, Robert E

    2018-05-02

    The objective of this work is to demonstrate the usefulness of fractional factorial design for optimizing the crystal quality of chalcogenide van der Waals (vdW) crystals. We statistically analyze the growth parameters of highly c-axis-oriented Sb2Te3 crystals and Sb2Te3-GeTe phase change vdW heterostructured superlattices. The statistical significance of the growth parameters of temperature, pressure, power, buffer materials, and buffer layer thickness was found by fractional factorial design and response surface analysis. Temperature, pressure, power, and their second-order interactions are the major factors that significantly influence the quality of the crystals. Additionally, using tungsten rather than molybdenum as a buffer layer significantly enhances the crystal quality. Fractional factorial design minimizes the number of experiments that are necessary to find the optimal growth conditions, resulting in an order of magnitude improvement in the crystal quality. We highlight that statistical design of experiment methods, which is more commonly used in product design, should be considered more broadly by those designing and optimizing materials.
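
    To make the design idea concrete: a half-fraction of five two-level factors needs only 16 of the 32 full-factorial runs. The sketch below generates a generic 2^(5-1) design with defining relation E = ABCD; the factor labels stand in for growth parameters such as temperature, pressure, and power, and are not the study's actual settings.

```python
# Hedged sketch: generate a 2^(5-1) fractional factorial design, E = ABCD.
from itertools import product

runs = []
for a, b, c, d in product([-1, 1], repeat=4):
    e = a * b * c * d                 # generator: fifth factor aliased with ABCD
    runs.append((a, b, c, d, e))

for run in runs:
    print(run)
print(len(runs), "runs instead of 32")
```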

  19. CUSUM-based person-fit statistics for adaptive testing

    NARCIS (Netherlands)

    van Krimpen-Stoop, Edith; Meijer, R.R.

    2001-01-01

    Item scores that do not fit an assumed item response theory model may cause the latent trait value to be inaccurately estimated. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT),
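
    A schematic of the CUSUM idea behind these statistics: accumulate positive and negative sums of item-score residuals and flag a response pattern when either sum drifts past a boundary h. The residual scaling and the boundary below are illustrative choices, not the calibrated values from these papers.

```python
# Hedged sketch: two-sided CUSUM on item-score residuals for person fit.
import numpy as np

def cusum_flags(x, p, h=0.1):
    """x: 0/1 item scores; p: model-implied success probabilities."""
    resid = (x - p) / len(x)                 # scaled residual per item
    c_plus = c_minus = 0.0
    for r in resid:
        c_plus = max(0.0, c_plus + r)        # upward drift accumulator
        c_minus = min(0.0, c_minus + r)      # downward drift accumulator
        if c_plus > h or c_minus < -h:
            return True                      # pattern flagged as misfitting
    return False

rng = np.random.default_rng(9)
p = rng.uniform(0.3, 0.9, 40)
x = (rng.random(40) < p).astype(int)         # model-consistent response pattern
print(cusum_flags(x, p))
```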

  20. CUSUM-based person-fit statistics for adaptive testing

    NARCIS (Netherlands)

    van Krimpen-Stoop, Edith; Meijer, R.R.

    1999-01-01

    Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT),

  1. Statistical test of anarchy

    International Nuclear Information System (INIS)

    Gouvea, Andre de; Murayama, Hitoshi

    2003-01-01

    'Anarchy' is the hypothesis that there is no fundamental distinction among the three flavors of neutrinos. It describes the mixing angles as random variables, drawn from well-defined probability distributions dictated by the group Haar measure. We perform a Kolmogorov-Smirnov (KS) statistical test to verify whether anarchy is consistent with all neutrino data, including the new result presented by KamLAND. We find a KS probability for Nature's choice of mixing angles equal to 64%, quite consistent with the anarchical hypothesis. In turn, assuming that anarchy is indeed correct, we compute lower bounds on |U_e3|², the remaining unknown 'angle' of the leptonic mixing matrix.

  2. Corrections of the NIST Statistical Test Suite for Randomness

    OpenAIRE

    Kim, Song-Ju; Umeno, Ken; Hasegawa, Akio

    2004-01-01

    It is well known that the NIST statistical test suite was used for the evaluation of AES candidate algorithms. We have found that the test settings of the Discrete Fourier Transform test and the Lempel-Ziv test in this suite are wrong. We give four corrections of mistakes in the test settings. This suggests that re-evaluation of the test results is needed.

  3. Statistical alignment: computational properties, homology testing and goodness-of-fit

    DEFF Research Database (Denmark)

    Hein, J; Wiuf, Carsten; Møller, Martin

    2000-01-01

    The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical alignment algorithms by several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity-based alignment, to get good initial guesses of the evolutionary parameters, and to apply an efficient numerical optimisation algorithm for finding the maximum ... analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test...

  4. Statistics 101 for Radiologists.

    Science.gov (United States)

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
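
    Several of the quantities reviewed here come straight from a 2x2 table. The snippet below computes sensitivity, specificity, accuracy, and likelihood ratios from made-up counts.

```python
# Hedged sketch: diagnostic-test summary statistics from a 2x2 table.
tp, fn, fp, tn = 90, 10, 30, 170    # invented counts: TP, FN, FP, TN

sens = tp / (tp + fn)               # sensitivity: P(test+ | disease)
spec = tn / (tn + fp)               # specificity: P(test- | no disease)
accuracy = (tp + tn) / (tp + fn + fp + tn)
lr_pos = sens / (1 - spec)          # how much a positive result raises the odds
lr_neg = (1 - sens) / spec          # how much a negative result lowers the odds
print(f"sens={sens:.2f} spec={spec:.2f} acc={accuracy:.2f} "
      f"LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
```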

  5. Challenges and Approaches to Statistical Design and Inference in High Dimensional Investigations

    Science.gov (United States)

    Garrett, Karen A.; Allison, David B.

    2015-01-01

    Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other “omic” data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology, and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative. PMID:19588106

  6. Challenges and approaches to statistical design and inference in high-dimensional investigations.

    Science.gov (United States)

    Gadbury, Gary L; Garrett, Karen A; Allison, David B

    2009-01-01

    Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.
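
    One standard tool for the "testing multiple propositions simultaneously" problem raised here is Benjamini-Hochberg control of the false discovery rate. The sketch below applies it to a simulated batch of HDE-scale p-values, mostly true nulls with a block of signals.

```python
# Hedged sketch: FDR control across many simultaneous tests (Benjamini-Hochberg).
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(16)
pvals = np.concatenate([rng.uniform(0, 1, 9500),       # true nulls
                        rng.uniform(0, 1e-4, 500)])    # true signals

reject, p_adj, *_ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("discoveries at FDR 0.05:", reject.sum())
```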

  7. Comparing statistical tests for detecting soil contamination greater than background

    International Nuclear Information System (INIS)

    Hardin, J.W.; Gilbert, R.O.

    1993-12-01

    The Washington State Department of Ecology (WSDE) recently issued a report that provides guidance on statistical issues regarding investigation and cleanup of soil and groundwater contamination under the Model Toxics Control Act Cleanup Regulation. Included in the report are procedures for determining a background-based cleanup standard and for conducting a 3-step statistical test procedure to decide if a site is contaminated above the background standard. The guidance specifies that the State test should only be used if the background and site data are lognormally distributed. The guidance allows alternative tests to be used on a site-specific basis if prior approval is obtained from WSDE. This report presents the results of a Monte Carlo computer simulation study conducted to evaluate the performance of the State test and several alternative tests for various contamination scenarios (background and site data distributions). The primary test performance criteria are (1) the probability that the test will indicate that a contaminated site is indeed contaminated, and (2) the probability that the test will indicate that an uncontaminated site is contaminated. The simulation study was conducted assuming the background concentrations were from lognormal or Weibull distributions. The site data were drawn from distributions selected to represent various contamination scenarios. The statistical tests studied are the State test, t test, Satterthwaite's t test, five distribution-free tests, and several tandem tests (wherein two or more tests are conducted using the same data set).
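
    The structure of such a simulation is compact, as the hedged sketch below shows: draw background and site samples from lognormal distributions and tally how often each candidate test declares contamination. The scenario parameters are arbitrary stand-ins for those in the report.

```python
# Hedged sketch: Monte Carlo comparison of tests for site-vs-background data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

def detection_rate(shift, n=30, reps=2000, alpha=0.05):
    """Fraction of simulations in which each test flags the site as elevated."""
    hits_t = hits_w = 0
    for _ in range(reps):
        bg = rng.lognormal(0.0, 0.5, n)
        site = rng.lognormal(shift, 0.5, n)
        hits_t += stats.ttest_ind(site, bg, alternative="greater").pvalue < alpha
        hits_w += stats.mannwhitneyu(site, bg, alternative="greater").pvalue < alpha
    return hits_t / reps, hits_w / reps

print("false positive rates:", detection_rate(0.0))   # uncontaminated site
print("detection power:     ", detection_rate(0.5))   # contaminated site
```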

  8. Optimal design of tests for heat exchanger fouling identification

    International Nuclear Information System (INIS)

    Palmer, Kyle A.; Hale, William T.; Such, Kyle D.; Shea, Brian R.; Bollas, George M.

    2016-01-01

    Highlights: • Built-in test design that optimizes the information extractable from the test. • Method minimizes the covariance of a fault with system uncertainty. • Method applied to the identification and quantification of heat exchanger fouling. • Heat exchanger fouling is identifiable despite the uncertainty in inputs and states. - Abstract: Particulate fouling in plate fin heat exchangers of aircraft environmental control systems is a recurring issue in environments rich in foreign object debris. Heat exchanger fouling detection, in terms of quantification of its severity, is critical for aircraft maintenance scheduling and safe operation. In this work, we focus on methods for offline fouling detection during aircraft ground handling, where the allowable variability range of admissible inputs is wider. We explore methods of optimal experimental design to estimate heat exchanger inputs and input trajectories that maximize the identifiability of fouling. In particular, we present a methodology in which D-optimality is used as a criterion for statistically significant inference of heat exchanger fouling in uncertain environments. The optimal tests are designed on the basis of a heat exchanger model of the inherent mass, energy and momentum balances, validated against literature data. The model is then used to infer sensitivities of the heat exchanger outputs with respect to fouling metrics and to maximize them by manipulating input trajectories, thus enhancing the accuracy in quantifying the fouling extent. The proposed methodology is evaluated with statistical indices of the confidence in estimating thermal fouling resistance at uncertain operating conditions, explored in a series of case studies.

  9. Statistical aspects of quantitative real-time PCR experiment design.

    Science.gov (United States)

    Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales

    2010-04-01

    Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, that gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
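
    A back-of-envelope version of the prediction step that powerNest automates is shown below: combine pilot estimates of biological and technical variance into the standard error of a group mean, then convert that into prospective power with a normal approximation. All variance components, replicate counts, and the effect size are assumed for illustration only.

```python
# Hedged sketch: prospective power from pilot variance components (nested design).
import numpy as np
from scipy.stats import norm

var_bio, var_tech = 0.30, 0.05      # pilot variance components (log-scale units)
n_bio, n_tech = 6, 3                # biological replicates, technical reps each
delta, alpha = 1.0, 0.05            # effect to detect, two-sided level

se_mean = np.sqrt(var_bio / n_bio + var_tech / (n_bio * n_tech))
se_diff = np.sqrt(2) * se_mean      # SE of a two-group difference of means
power = norm.cdf(delta / se_diff - norm.ppf(1 - alpha / 2))
print(f"prospective power: {power:.2f}")
```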

  10. Statistical methods in the mechanical design of fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Radsak, C.; Streit, D.; Muench, C.J. [AREVA NP GmbH, Erlangen (Germany)

    2013-07-01

    The mechanical design of a fuel assembly is still mainly performed in a deterministic way. This conservative approach is, however, not suitable for providing a realistic quantification of the design margins with respect to licensing criteria for more and more demanding operating conditions (power upgrades, burnup increase, ...). This quantification can be provided by statistical methods utilizing all available information (e.g. from manufacturing, experience feedback, etc.) on the topic under consideration. During optimization, e.g. of the holddown system, certain objectives in the mechanical design of a fuel assembly (FA) can contradict each other, such as holddown forces high enough to prevent fuel assembly lift-off versus holddown forces low enough to minimize axial loads on the fuel assembly structure and so avoid any negative effect on control rod movement. By using a statistical method, the fuel assembly design can be optimized much better with respect to these objectives than would be possible with a deterministic approach. This leads to a more realistic assessment and a safer way of operating fuel assemblies. Statistical models are defined on the one hand by the quantile that has to be maintained concerning the design limit requirements (e.g. a one-FA quantile) and on the other hand by the confidence level which has to be met. Using the above example of the holddown force, a feasible quantile can be defined based on the requirement that less than one fuel assembly (quantile > 192/193 = 99.5%) in the core violates the holddown force limit with a confidence of 95%. (orig.)
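
    Under a normality assumption, a quantile/confidence statement like the one above (99.5% of assemblies covered with 95% confidence) corresponds to a one-sided tolerance bound computable from the noncentral t distribution, as sketched below on synthetic holddown margins.

```python
# Hedged sketch: one-sided normal tolerance bound for a 99.5%/95% requirement.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
margin = rng.normal(50.0, 8.0, 25)     # N, hypothetical holddown margins

n, p, conf = margin.size, 0.995, 0.95
# Tolerance factor k from the noncentral t distribution.
k = stats.nct.ppf(conf, df=n - 1, nc=stats.norm.ppf(p) * np.sqrt(n)) / np.sqrt(n)
lower = margin.mean() - k * margin.std(ddof=1)
print(f"99.5%/95% lower tolerance bound: {lower:.1f}")
```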

  11. Normality Tests for Statistical Analysis: A Guide for Non-Statisticians

    Science.gov (United States)

    Ghasemi, Asghar; Zahediasl, Saleh

    2012-01-01

    Statistical errors are common in scientific literature and about 50% of the published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to overview checking for normality in statistical analysis using SPSS. PMID:23843808

  12. Debate on GMOs health risks after statistical findings in regulatory tests.

    Science.gov (United States)

    de Vendômois, Joël Spiroux; Cellier, Dominique; Vélot, Christian; Clair, Emilie; Mesnage, Robin; Séralini, Gilles-Eric

    2010-10-05

    We summarize the major points of international debate on health risk studies for the main commercialized edible GMOs. These GMOs are soy, maize and oilseed rape designed to contain new pesticide residues, since they have been modified to be herbicide-tolerant (mostly to Roundup) or to produce mutated Bt toxins. The debated alimentary chronic risks may come from unpredictable insertional mutagenesis effects, from metabolic effects, or from the new pesticide residues. The most detailed regulatory tests on the GMOs are three-month-long feeding trials of laboratory rats, which are biochemically assessed. The tests are not compulsory, and are not independently conducted. The test data and the corresponding results are kept secret by the companies. Our previous analyses of regulatory raw data at these levels, taking the representative examples of three GM maizes, NK 603, MON 810, and MON 863, led us to conclude that hepatorenal toxicities were possible, and that longer testing was necessary. Our study was criticized by the company developing the GMOs in question and by the regulatory bodies, mainly on the divergent biological interpretations of statistically significant biochemical and physiological effects. We present the scientific reasons for the crucially different biological interpretations and also highlight the shortcomings in the experimental protocols designed by the company. The debate implies an enormous responsibility towards public health and is essential given the nonexistent traceability and epidemiological studies in the GMO-producing countries.

  13. Statistically designed experiments to screen chemical mixtures for possible interactions

    NARCIS (Netherlands)

    Groten, J.P.; Tajima, O.; Feron, V.J.; Schoen, E.D.

    1998-01-01

    For the accurate analysis of possible interactive effects of chemicals in a defined mixture, statistical designs are necessary to develop clear and manageable experiments. For instance, factorial designs have been successfully used to detect two-factor interactions. Particularly useful for this

  14. Statistical experimental design for refractory coatings

    International Nuclear Information System (INIS)

    McKinnon, J.A.; Standard, O.C.

    2000-01-01

    The production of refractory coatings on metal casting moulds is critically dependent on the development of suitable rheological characteristics, such as viscosity and thixotropy, in the initial coating slurry. In this paper, the basic concepts of mixture design and analysis are applied to the formulation of a refractory coating, with illustration by a worked example. Experimental data of coating viscosity versus composition are fitted to a statistical model to obtain a reliable method of predicting the optimal formulation of the coating. Copyright (2000) The Australian Ceramic Society

  15. Reliability-based econometrics of aerospace structural systems: Design criteria and test options. Ph.D. Thesis - Georgia Inst. of Tech.

    Science.gov (United States)

    Thomas, J. M.; Hanagud, S.

    1974-01-01

    The design criteria and test options for aerospace structural reliability were investigated. A decision methodology was developed for selecting a combination of structural tests and structural design factors. The decision method involves the use of Bayesian statistics and statistical decision theory. Procedures are discussed for obtaining and updating data-based probabilistic strength distributions for aerospace structures when test information is available and for obtaining subjective distributions when data are not available. The techniques used in developing the distributions are explained.

  16. Which statistics should tropical biologists learn?

    Science.gov (United States)

    Loaiza Velásquez, Natalia; González Lutz, María Isabel; Monge-Nájera, Julián

    2011-09-01

    Tropical biologists study the richest and most endangered biodiversity on the planet, and in these times of climate change and mega-extinctions, the need for efficient, good-quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection, mediocre or bad experimental design, and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, during one year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student's T Test, Linear Regression, Pearson's Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon's Diversity Index, Tukey's Test, Cluster Analysis, Spearman's Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation, and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well-designed one-semester course should be enough for their basic requirements.

  17. Four Papers on Contemporary Software Design Strategies for Statistical Methodologists

    OpenAIRE

    Carey, Vincent; Cook, Dianne

    2014-01-01

    Software design impacts much of statistical analysis, and as technology has changed dramatically in recent years, it is exciting to learn how statistical software is adapting and changing. This leads to the collection of papers published here, written by John Chambers, Duncan Temple Lang, Michael Lawrence, Martin Morgan, Yihui Xie, Heike Hofmann and Xiaoyue Cheng.

  18. A critique of statistical hypothesis testing in clinical research

    Directory of Open Access Journals (Sweden)

    Somik Raha

    2011-01-01

    Full Text Available Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are those of the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview, requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability to an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not of any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of aspirin on heart attacks in a sample population of doctors. Since a major reason for the prevalence of RCTs in academia is legislation requiring them, the ethics of legislating the use of statistical methods for clinical research is also examined.
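
    In the Bayesian decision spirit described above, the sketch below compares two event rates via Beta posteriors and reports the posterior probability that the treatment lowers the rate. The counts are invented placeholders, not the aspirin trial's actual data.

```python
# Hedged sketch: Beta-Binomial posterior comparison of two event rates.
import numpy as np

rng = np.random.default_rng(12)
events_trt, n_trt = 104, 11_000     # hypothetical treatment arm counts
events_ctl, n_ctl = 189, 11_000     # hypothetical control arm counts

# Beta(1, 1) priors updated by the observed counts; sample both posteriors.
post_trt = rng.beta(1 + events_trt, 1 + n_trt - events_trt, 100_000)
post_ctl = rng.beta(1 + events_ctl, 1 + n_ctl - events_ctl, 100_000)
print("P(treatment rate < control rate) =", np.mean(post_trt < post_ctl))
```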

  19. Statistical evaluation of SAGE libraries: consequences for experimental design

    NARCIS (Netherlands)

    Ruijter, Jan M.; van Kampen, Antoine H. C.; Baas, Frank

    2002-01-01

    Since the introduction of serial analysis of gene expression (SAGE) as a method to quantitatively analyze the differential expression of genes, several statistical tests have been published for the pairwise comparison of SAGE libraries. Testing the difference between the number of specific tags

  20. Statistical core design methodology using the VIPRE thermal-hydraulics code

    International Nuclear Information System (INIS)

    Lloyd, M.W.; Feltus, M.A.

    1995-01-01

    An improved statistical core design methodology for developing a computational departure from nucleate boiling ratio (DNBR) correlation has been developed and applied in order to analyze the nominal 1.3 DNBR limit on Westinghouse Pressurized Water Reactor (PWR) cores. This analysis, although limited in scope, found that the DNBR limit can be reduced from 1.3 to a lower value while remaining accurate at an adequate confidence level of 95%, for three particular FSAR operational transients: turbine trip, complete loss of flow, and inadvertent opening of a pressurizer relief valve. The VIPRE-01 thermal-hydraulics code, the SAS/STAT statistical package, and the EPRI/Columbia University DNBR experimental data base were used in this research to develop the Pennsylvania State Statistical Core Design Methodology (PSSCDM). The VIPRE code was used to perform the necessary sensitivity studies and generate the EPRI correlation-calculated DNBR predictions. The SAS package used these EPRI correlation-calculated DNBR predictions from VIPRE as a data set to determine the best fit for the empirical model and to perform the statistical analysis. (author)

  1. Statistical test theory for the behavioral sciences

    CERN Document Server

    de Gruijter, Dato N M

    2007-01-01

    Since the development of the first intelligence test in the early 20th century, educational and psychological tests have become important measurement techniques to quantify human behavior. Focusing on this ubiquitous yet fruitful area of research, Statistical Test Theory for the Behavioral Sciences provides both a broad overview and a critical survey of assorted testing theories and models used in psychology, education, and other behavioral science fields. Following a logical progression from basic concepts to more advanced topics, the book first explains classical test theory, covering true score, measurement error, and reliability. It then presents generalizability theory, which provides a framework to deal with various aspects of test scores. In addition, the authors discuss the concept of validity in testing, offering a strategy for evidence-based validity. In the two chapters devoted to item response theory (IRT), the book explores item response models, such as the Rasch model, and applications, incl...

  2. A method of statistical analysis in the field of sports science when assumptions of parametric tests are violated

    Directory of Open Access Journals (Sweden)

    Elżbieta Sandurska

    2016-12-01

    Full Text Available Introduction: Application of statistical software typically does not require extensive statistical knowledge, allowing even complex analyses to be performed easily. Consequently, test selection criteria and important assumptions may be easily overlooked or given insufficient consideration. In such cases, the results may well lead to wrong conclusions. Aim: To discuss issues related to assumption violations in the case of Student's t-test and one-way ANOVA, two parametric tests frequently used in the field of sports science, and to recommend solutions. Description of the state of knowledge: Student's t-test and ANOVA are parametric tests, and therefore some of the assumptions that need to be satisfied include normal distribution of the data and homogeneity of variances across groups. If the assumptions are violated, the original design of the test is impaired, and the test may then be compromised, giving spurious results. A simple method to normalize the data and to stabilize the variance is to use transformations. If such an approach fails, a good alternative to consider is a nonparametric test, such as the Mann-Whitney, Kruskal-Wallis, or Wilcoxon signed-rank test. Summary: Thorough verification of the assumptions of parametric tests allows for correct selection of statistical tools, which is the basis of well-grounded statistical analysis. With a few simple rules, testing patterns in data characteristic of sports science studies comes down to a straightforward procedure.
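
    The "transform first, fall back to nonparametrics second" advice translates directly into code. Below is a small sketch that Box-Cox-transforms two skewed (simulated) samples and then re-checks the t-test assumptions.

```python
# Hedged sketch: Box-Cox transformation followed by assumption re-checks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
a = rng.lognormal(0.0, 0.8, 40)            # skewed group A (simulated)
b = rng.lognormal(0.3, 0.8, 40)            # skewed group B (simulated)

pooled, lam = stats.boxcox(np.concatenate([a, b]))   # one lambda for both groups
ta, tb = pooled[:40], pooled[40:]
print("lambda:", round(lam, 2))
print("normality p-values:", stats.shapiro(ta).pvalue, stats.shapiro(tb).pvalue)
print("Levene p-value:", stats.levene(ta, tb).pvalue)
print("t-test p-value:", stats.ttest_ind(ta, tb).pvalue)
```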

  3. Application of a statistical thermal design procedure to evaluate the PWR DNBR safety analysis limits

    International Nuclear Information System (INIS)

    Robeyns, J.; Parmentier, F.; Peeters, G.

    2001-01-01

    In the framework of safety analysis for the Belgian nuclear power plants and for the reload compatibility studies, Tractebel Energy Engineering (TEE) has developed, to define a 95/95 DNBR criterion, a statistical thermal design method based on the analytical full statistical approach: the Statistical Thermal Design Procedure (STDP). In that methodology, each DNBR value in the core assemblies is calculated with an adapted CHF (Critical Heat Flux) correlation implemented in the sub-channel code Cobra for core thermal hydraulic analysis. The uncertainties of the correlation are represented by the statistical parameters calculated from an experimental database. The main objective of a sub-channel analysis is to prove that in all class 1 and class 2 situations, the minimum DNBR (Departure from Nucleate Boiling Ratio) remains higher than the Safety Analysis Limit (SAL). The SAL value is calculated from the Statistical Design Limit (SDL) value adjusted with some penalties and deterministic factors. The search for a realistic value for the SDL is the objective of the statistical thermal design methods. In this report, we apply a full statistical approach to define the DNBR criterion or SDL (Statistical Design Limit) with strict observance of the design criteria defined in the Standard Review Plan. The same statistical approach is used to define the expected number of rods experiencing DNB. (author)

  4. Efficient statistical tests to compare Youden index: accounting for contingency correlation.

    Science.gov (United States)

    Chen, Fangyao; Xue, Yuqiang; Tan, Ming T; Chen, Pingyan

    2015-04-30

    The Youden index is widely utilized in studies evaluating the accuracy of diagnostic tests and the performance of predictive, prognostic, or risk models. However, both one- and two-independent-sample tests on the Youden index have been derived ignoring the dependence (association) between sensitivity and specificity, resulting in potentially misleading findings. Besides, a paired sample test on the Youden index is currently unavailable. This article develops efficient statistical inference procedures for one sample, independent, and paired sample tests on the Youden index by accounting for contingency correlation, namely associations between sensitivity and specificity and paired samples typically represented in contingency tables. For the one and two independent sample tests, the variances are estimated by the Delta method, and the statistical inference is based on central limit theory; the results are then verified by bootstrap estimates. For the paired sample test, we show that the estimated covariance of the two sensitivities and specificities can be represented as a function of the kappa statistic, so the test can be readily carried out. We then show the remarkable accuracy of the estimated variance using a constrained optimization approach. Simulation is performed to evaluate the statistical properties of the derived tests. The proposed approaches yield more stable type I errors at the nominal level and substantially higher power (efficiency) than does the original Youden's approach. Therefore, the simple explicit large sample solution performs very well. Because we can readily implement the asymptotic and exact bootstrap computation with common software like R, the method is broadly applicable to the evaluation of diagnostic tests and model performance. Copyright © 2015 John Wiley & Sons, Ltd.
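
    For orientation, the sketch below computes the Youden index from a 2x2 table together with the naive Delta-method confidence interval that treats sensitivity and specificity as independent binomials; the paper's contribution is precisely to refine this variance by accounting for contingency correlation, so the counts and the simple variance here are illustrative assumptions only.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical 2x2 diagnostic table: rows = true status, columns = test result.
    tp, fn = 80, 20   # diseased:     test positive / test negative
    fp, tn = 15, 85   # non-diseased: test positive / test negative

    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    youden = se + sp - 1

    # Naive large-sample variance (independent-binomial Delta method).
    var = se * (1 - se) / (tp + fn) + sp * (1 - sp) / (tn + fp)
    half = stats.norm.ppf(0.975) * np.sqrt(var)
    print(f"J = {youden:.3f}, naive 95% CI = ({youden - half:.3f}, {youden + half:.3f})")
    ```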

  5. Statistical evaluation of design-error related accidents

    International Nuclear Information System (INIS)

    Ott, K.O.; Marchaterre, J.F.

    1980-01-01

    In a recently published paper (Campbell and Ott, 1979), a general methodology was proposed for the statistical evaluation of design-error related accidents. The evaluation aims at an estimate of the combined residual frequency of yet unknown types of accidents lurking in a certain technological system. Here, the original methodology is extended so as to apply to a variety of systems that evolve during the development of large-scale technologies. A special categorization of incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of nuclear power reactor technology, considering serious accidents that involve a particular design inadequacy in the accident progression.

  6. Use of run statistics to validate tensile tests

    International Nuclear Information System (INIS)

    Eatherly, W.P.

    1981-01-01

    In tensile testing of irradiated graphites, it is difficult to assure alignment of the sample and load train for tensile measurements. By recording the location of fractures, run (sequential) statistics can readily detect a lack of randomness. The technique is based on partitioning binomial distributions.
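
    A minimal sketch of the underlying idea, assuming fracture locations are coded as a binary sequence (e.g., grip versus gauge-section failures): the Wald-Wolfowitz runs test flags non-random clustering that would suggest misalignment. The data are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical fracture-location codes for successive tensile specimens:
    # 1 = fracture near a grip, 0 = fracture within the gauge section.
    x = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1])

    n1, n2 = int(x.sum()), int((1 - x).sum())
    runs = 1 + int((np.diff(x) != 0).sum())

    # Wald-Wolfowitz normal approximation for the number of runs.
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mu) / np.sqrt(var)
    p = 2 * stats.norm.sf(abs(z))
    print(f"runs = {runs}, expected = {mu:.2f}, z = {z:.2f}, p = {p:.3f}")
    ```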

  7. Your Chi-Square Test Is Statistically Significant: Now What?

    Science.gov (United States)

    Sharpe, Donald

    2015-01-01

    Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow up a statistically significant chi-square test result in order to determine its source. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
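
    Of the four follow-up approaches named above, calculating residuals is the easiest to show compactly. The sketch below runs a chi-square test on a hypothetical table and then prints adjusted standardized residuals, whose values beyond roughly ±2 point to the cells driving significance.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    obs = np.array([[30, 10],   # hypothetical 3x2 contingency table
                    [25, 25],
                    [12, 38]])
    chi2, p, dof, expected = chi2_contingency(obs)

    # Adjusted standardized residuals (Haberman): (O - E) / sqrt(E(1-pr)(1-pc)).
    n = obs.sum()
    pr = obs.sum(axis=1, keepdims=True) / n
    pc = obs.sum(axis=0, keepdims=True) / n
    resid = (obs - expected) / np.sqrt(expected * (1 - pr) * (1 - pc))
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
    print(np.round(resid, 2))
    ```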

  8. Statistical literacy for clinical practitioners

    CERN Document Server

    Holmes, William H

    2014-01-01

    This textbook on statistics is written for students in medicine, epidemiology, and public health. It builds on the important role evidence-based medicine now plays in the clinical practice of physicians, physician assistants and allied health practitioners. By bringing research design and statistics to the fore, this book can integrate these skills into the curricula of professional programs. Students, particularly practitioners-in-training, will learn statistical skills that are required of today’s clinicians. Practice problems at the end of each chapter and downloadable data sets provided by the authors ensure readers get practical experience that they can then apply to their own work.  Topics covered include:   Functions of Statistics in Clinical Research Common Study Designs Describing Distributions of Categorical and Quantitative Variables Confidence Intervals and Hypothesis Testing Documenting Relationships in Categorical and Quantitative Data Assessing Screening and Diagnostic Tests Comparing Mean...

  9. Properties of permutation-based gene tests and controlling type 1 error using a summary statistic based gene test.

    Science.gov (United States)

    Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph

    2013-11-07

    The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the
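
    As a rough illustration of testing a region from summary statistics alone, the sketch below combines marker Z-statistics with an LD (correlation) matrix through a quadratic form, which is chi-square distributed under the null when z ~ N(0, R). This is one standard construction, not necessarily the paper's exact scaling; all numbers are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical per-marker Z-statistics from univariate regressions
    # and an estimated marker correlation (LD) matrix R.
    z = np.array([2.1, 1.8, 0.4, -0.9, 2.5])
    R = np.array([[1.0, 0.6, 0.2, 0.1, 0.0],
                  [0.6, 1.0, 0.3, 0.1, 0.0],
                  [0.2, 0.3, 1.0, 0.4, 0.1],
                  [0.1, 0.1, 0.4, 1.0, 0.5],
                  [0.0, 0.0, 0.1, 0.5, 1.0]])

    # Under H0, z ~ N(0, R), so z' R^{-1} z follows a chi-square distribution
    # with one degree of freedom per marker.
    q = float(z @ np.linalg.solve(R, z))
    p = stats.chi2.sf(q, df=z.size)
    print(f"Q = {q:.2f}, p = {p:.4f}")
    ```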

  10. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik [Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of)

    2015-12-15

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles.
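
    The Weibull quantities in this record are easy to reproduce in outline. The sketch below fits a two-parameter Weibull to synthetic cycles-to-failure data for an "existing" and an "improved" valve and derives the B10 life (the 10th percentile of the fitted distribution); none of the numbers come from the study.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    before = stats.weibull_min.rvs(2.0, scale=40_000, size=20, random_state=rng)
    after = stats.weibull_min.rvs(2.0, scale=60_000, size=20, random_state=rng)

    for name, data in (("existing", before), ("improved", after)):
        shape, _, scale = stats.weibull_min.fit(data, floc=0)  # 2-parameter fit
        b10 = scale * (-np.log(0.9)) ** (1.0 / shape)          # 10th-percentile life
        print(f"{name}: shape = {shape:.2f}, scale = {scale:.0f}, B10 = {b10:.0f}")
    ```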

  11. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    International Nuclear Information System (INIS)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik

    2015-01-01

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles

  12. Evaluating statistical tests on OLAP cubes to compare degree of disease.

    Science.gov (United States)

    Ordonez, Carlos; Chen, Zhibo

    2009-09-01

    Statistical tests represent an important technique used to formulate and validate hypotheses on a dataset. They are particularly useful in the medical domain, where hypotheses link disease with medical measurements, risk factors, and treatment. In this paper, we propose to compute parametric statistical tests treating patient records as elements in a multidimensional cube. We introduce a technique that combines dimension lattice traversal and statistical tests to discover significant differences in the degree of disease within pairs of patient groups. In order to understand a cause-effect relationship, we focus on patient group pairs differing in one dimension. We introduce several optimizations to prune the search space, to discover significant group pairs, and to summarize results. We present experiments showing important medical findings and evaluating scalability with medical datasets.
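
    A toy version of the core step, under invented data: group patient records by one cube dimension, then t-test pairs of groups that differ in exactly one other dimension. A real implementation would traverse the full dimension lattice and prune it as the paper describes.

    ```python
    import pandas as pd
    from scipy import stats

    # Hypothetical patient records; "score" stands in for degree of disease.
    df = pd.DataFrame({
        "gender": ["F", "F", "M", "M", "F", "M", "F", "M"] * 5,
        "smoker": ["Y", "N", "Y", "N", "N", "Y", "Y", "N"] * 5,
        "score": [float(i % 7 + 1) for i in range(40)],
    })

    # Compare group pairs differing only in smoker status, within each gender.
    for gender, sub in df.groupby("gender"):
        a = sub.loc[sub.smoker == "Y", "score"]
        b = sub.loc[sub.smoker == "N", "score"]
        t, p = stats.ttest_ind(a, b)
        print(f"gender = {gender}: t = {t:.2f}, p = {p:.3f}")
    ```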

  13. Statistical test for the distribution of galaxies on plates

    International Nuclear Information System (INIS)

    Garcia Lambas, D.

    1985-01-01

    A statistical test for the distribution of galaxies on plates is presented. We apply the test to synthetic astronomical plates obtained by means of numerical simulation (Garcia Lambas and Sersic 1983) with three different models for the three-dimensional distribution; comparison with an observational plate suggests the presence of filamentary structure. (author)

  14. Statistical modeling of static strengths of nuclear graphites with relevance to structural design

    International Nuclear Information System (INIS)

    Arai, Taketoshi

    1992-02-01

    The use of graphite materials for structural members poses the problem of how to take into account the statistical properties of static strength, especially tensile fracture stresses, in component structural design. The present study provides comprehensive examinations of the statistical data base and modeling of nuclear graphites. First, the report provides individual samples and their analyses on the strengths of IG-110 and PGX graphites for HTTR components. Statistical characteristics of other HTGR graphites are also exemplified from the literature. Most statistical distributions of individual samples are found to be approximately normal. The goodness of fit to normal distributions is more satisfactory with larger sample sizes. Molded and extruded graphites, however, possess a variety of statistical properties depending on samples from different within-log locations and/or different orientations. Second, the previous statistical models, including the Weibull theory, are assessed from the viewpoint of applicability to design procedures. This leads to the conclusion that the Weibull theory and its modified versions are satisfactory only for limited parts of tensile fracture behavior; they are not consistent with the whole set of observations. Only normal statistics are justifiable as practical approaches to discuss specified minimum ultimate strengths as statistical confidence limits for individual samples. Third, the assessment of various statistical models emphasizes the need to develop advanced analytical models which should involve modeling of the microstructural features of actual graphite materials. Improvements of other structural design methodologies are also presented. (author)

  15. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    Science.gov (United States)

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference signal (noise signal) and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitiveness of symptom parameters (SPs) for condition diagnosis. In this way, good SPs that have high sensitiveness for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method.

  16. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    Directory of Open Access Journals (Sweden)

    Ke Li

    2016-01-01

    Full Text Available A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference signal (noise signal) and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitiveness of symptom parameters (SPs) for condition diagnosis. In this way, good SPs that have high sensitiveness for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method.

  17. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    Science.gov (United States)

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference signal (noise signal) and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitiveness of symptom parameters (SPs) for condition diagnosis. In this way, good SPs that have high sensitiveness for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  18. Reliability Verification of DBE Environment Simulation Test Facility by using Statistics Method

    International Nuclear Information System (INIS)

    Jang, Kyung Nam; Kim, Jong Soeg; Jeong, Sun Chul; Kyung Heum

    2011-01-01

    In a nuclear power plant, all safety-related equipment, including cables, under harsh environments should undergo equipment qualification (EQ) according to IEEE Std 323. There are three types of qualification methods: type testing, operating experience, and analysis. In order to environmentally qualify safety-related equipment using the type testing method, rather than the analysis or operating experience methods, a representative sample of the equipment, including interfaces, should be subjected to a series of tests. Among these tests, the Design Basis Events (DBE) environment simulation test is the most important. The DBE simulation test is performed in a DBE simulation test chamber according to the postulated DBE conditions, including a specified high-energy line break (HELB), loss of coolant accident (LOCA), main steam line break (MSLB), etc., after thermal and radiation aging. Because most DBE conditions involve 100% humidity, high temperature steam should be used to trace the temperature and pressure of the DBE condition. During the DBE simulation test, if high temperature steam under high pressure is injected into the DBE test chamber, the temperature and pressure in the test chamber rapidly increase over the target temperature. Therefore, the temperature and pressure in the test chamber keep fluctuating during the DBE simulation test to meet the target temperature and pressure. We should ensure the fairness and accuracy of the test results by confirming the performance of the DBE environment simulation test facility. In this paper, in order to verify the reliability of the DBE environment simulation test facility, a statistical method is used.

  19. Filtering a statistically exactly solvable test model for turbulent tracers from partial observations

    International Nuclear Information System (INIS)

    Gershgorin, B.; Majda, A.J.

    2011-01-01

    A statistically exactly solvable model for passive tracers is introduced as a test model for the authors' Nonlinear Extended Kalman Filter (NEKF) as well as other filtering algorithms. The model involves a Gaussian velocity field and a passive tracer governed by the advection-diffusion equation with an imposed mean gradient. The model has direct relevance to engineering problems such as the spread of pollutants in the air or contaminants in the water as well as climate change problems concerning the transport of greenhouse gases such as carbon dioxide with strongly intermittent probability distributions consistent with the actual observations of the atmosphere. One of the attractive properties of the model is the existence of the exact statistical solution. In particular, this unique feature of the model provides an opportunity to design and test fast and efficient algorithms for real-time data assimilation based on rigorous mathematical theory for a turbulence model problem with many active spatiotemporal scales. Here, we extensively study the performance of the NEKF which uses the exact first and second order nonlinear statistics without any approximations due to linearization. The role of partial and sparse observations, the frequency of observations and the observation noise strength in recovering the true signal, its spectrum, and fat tail probability distribution are the central issues discussed here. The results of our study provide useful guidelines for filtering realistic turbulent systems with passive tracers through partial observations.

  20. Appropriate statistical methods are required to assess diagnostic tests for replacement, add-on, and triage

    NARCIS (Netherlands)

    Hayen, Andrew; Macaskill, Petra; Irwig, Les; Bossuyt, Patrick

    2010-01-01

    To explain which measures of accuracy and which statistical methods should be used in studies to assess the value of a new binary test as a replacement test, an add-on test, or a triage test. Selection and explanation of statistical methods, illustrated with examples. Statistical methods for

  1. Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: a primer and applications.

    Science.gov (United States)

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E

    2014-04-01

    This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between case variance to total variance (between case plus within case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  2. Simulation Experiments in Practice: Statistical Design and Regression Analysis

    OpenAIRE

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independen...
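
    For readers unfamiliar with DOE, the sketch below replaces one-factor-at-a-time experimentation with a 2³ full factorial design and fits a first-order regression metamodel to a stand-in simulation response; the response function is invented for illustration.

    ```python
    import numpy as np
    from itertools import product

    # 2^3 full factorial design in coded units (-1/+1).
    design = np.array(list(product([-1, 1], repeat=3)), dtype=float)

    rng = np.random.default_rng(11)
    y = (10 + 3 * design[:, 0] - 2 * design[:, 1] + 0.5 * design[:, 2]
         + rng.normal(0, 0.3, len(design)))     # stand-in for simulation output

    # First-order metamodel: y ~ b0 + b1*x1 + b2*x2 + b3*x3.
    X = np.column_stack([np.ones(len(design)), design])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("estimated intercept and main effects:", np.round(coef, 2))
    ```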

  3. THE ATKINSON INDEX, THE MORAN STATISTIC, AND TESTING EXPONENTIALITY

    OpenAIRE

    Nao, Mimoto; Ricardas, Zitikis; Department of Statistics and Probability, Michigan State University; Department of Statistical and Actuarial Sciences, University of Western Ontario

    2008-01-01

    Constructing tests for exponentiality has been an active and fruitful research area, with numerous applications in engineering, biology and other sciences concerned with life-time data. In the present paper, we construct and investigate powerful tests for exponentiality based on two well known quantities: the Atkinson index and the Moran statistic. We provide an extensive study of the performance of the tests and compare them with those already available in the literature.

  4. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. Statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic

  5. Statistical methods for evaluating the attainment of cleanup standards

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, R.O.; Simpson, J.C.

    1992-12-01

    This document is the third volume in a series of volumes sponsored by the US Environmental Protection Agency (EPA), Statistical Policy Branch, that provide statistical methods for evaluating the attainment of cleanup standards at Superfund sites. Volume 1 (USEPA 1989a) provides sampling designs and tests for evaluating attainment of risk-based standards for soils and solid media. Volume 2 (USEPA 1992) provides designs and tests for evaluating attainment of risk-based standards for groundwater. The purpose of this third volume is to provide statistical procedures for designing sampling programs and conducting statistical tests to determine whether pollution parameters in remediated soils and solid media at Superfund sites attain site-specific reference-based standards. This document is written for individuals who may not have extensive training or experience with statistical methods. The intended audience includes EPA regional remedial project managers, Superfund-site potentially responsible parties, state environmental protection agencies, and contractors for these groups.

  6. Test design requirements: Canister-scale heater test

    International Nuclear Information System (INIS)

    Schauer, M.I.; Craig, P.A.; Stickney, R.G.

    1986-03-01

    This document establishes the Test Design Requirements for the design of a canister scale heater test to be performed in the Exploratory Shaft test facility. The purpose of the test is to obtain thermomechanical rock mass response data for use in validation of the numerical models. The canister scale heater test is a full scale simulation of a high-level nuclear waste container in a prototypic emplacement borehole. Electric heaters are used to simulate the heat loads expected in an actual waste container. This document presents an overview of the test including objectives and justification for the test. A description of the test as it is presently envisioned is included. Discussions on Quality Assurance and Safety are also included in the document. 12 refs., 1 fig

  7. Statistical Tutorial | Center for Cancer Research

    Science.gov (United States)

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. ST is designed as a follow-up to Statistical Analysis of Research Data (SARD) held in April 2018. The tutorial will apply the general principles of statistical analysis of research data, including descriptive statistics, z- and t-tests of means and mean

  8. 688,112 statistical results: Content mining psychology articles for statistical test results

    OpenAIRE

    Hartgerink, C.H.J.

    2016-01-01

    In this data deposit, I describe a dataset that is the result of content mining 167,318 published articles for statistical test results reported according to the standards prescribed by the American Psychological Association (APA). Articles published by the APA, Springer, Sage, and Taylor & Francis were included (mining from Wiley and Elsevier was actively blocked). As a result of this content mining, 688,112 results from 50,845 articles were extracted. In order to provide a comprehensive set...

  9. New filterability and compressibility test cell design for nuclear products

    Energy Technology Data Exchange (ETDEWEB)

    Féraud, J.P. [CEA Marcoule, DTEC/SGCS/LGCI, BP 17171, 30207 Bagnols-sur-Cèze (France); Bourcier, D., E-mail: damien.bourcier@cea.fr [CEA Marcoule, DTEC/SGCS/LGCI, BP 17171, 30207 Bagnols-sur-Cèze (France); Ode, D. [CEA Marcoule, DTEC/SGCS/LGCI, BP 17171, 30207 Bagnols-sur-Cèze (France); Puel, F. [Université Lyon 1, Villeurbanne (France); CNRS, UMR5007, Laboratoire d‘Automatique et de Génie des Procédés (LAGEP), CPE-Lyon, 43 bd du 11 Novembre 1918, 69100 Villeurbanne (France)

    2013-12-15

    Highlights: • Test easily usable without tools in a glove box. • The test minimizes the slurry volume necessary for this type of study. • The test characterizes the flow resistance in a porous medium during its formation. • The test is performed at four pressure levels to determine the compressibility. • The technical design ensures reproducible flow resistance measurements. -- Abstract: Filterability and compressibility tests are often carried out at laboratory scale to obtain data required to scale up solid/liquid separation processes. Current technologies, applied with a constant pressure drop, enable specific resistance and cake formation rate measurement in accordance with a modified Darcy's law. The new test cell design described in this paper is easily usable without tools in a glove box and minimizes the slurry volume necessary for this type of study. This is an advantage for investigating toxic and hazardous products such as radioactive materials. Uranium oxalate precipitate slurries were used to test and validate this new cell. In order to reduce the test cell volume, a statistical approach was applied to 8 results obtained with cylindrical test cells of 1.8 cm and 3 cm in diameter. Wall effects can therefore be ignored despite the small filtration cell diameter, allowing tests to be performed with only about one-tenth of the slurry volume of a standard commercial cell. The significant reduction in the size of this experimental device does not alter the consistency of filtration data which may be used in the design of industrial equipment.

  10. New filterability and compressibility test cell design for nuclear products

    International Nuclear Information System (INIS)

    Féraud, J.P.; Bourcier, D.; Ode, D.; Puel, F.

    2013-01-01

    Highlights: • Test easily usable without tools in a glove box. • The test minimizes the slurry volume necessary for this type of study. • The test characterizes the flow resistance in a porous medium during its formation. • The test is performed at four pressure levels to determine the compressibility. • The technical design ensures reproducible flow resistance measurements. -- Abstract: Filterability and compressibility tests are often carried out at laboratory scale to obtain data required to scale up solid/liquid separation processes. Current technologies, applied with a constant pressure drop, enable specific resistance and cake formation rate measurement in accordance with a modified Darcy's law. The new test cell design described in this paper is easily usable without tools in a glove box and minimizes the slurry volume necessary for this type of study. This is an advantage for investigating toxic and hazardous products such as radioactive materials. Uranium oxalate precipitate slurries were used to test and validate this new cell. In order to reduce the test cell volume, a statistical approach was applied to 8 results obtained with cylindrical test cells of 1.8 cm and 3 cm in diameter. Wall effects can therefore be ignored despite the small filtration cell diameter, allowing tests to be performed with only about one-tenth of the slurry volume of a standard commercial cell. The significant reduction in the size of this experimental device does not alter the consistency of filtration data which may be used in the design of industrial equipment

  11. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    Science.gov (United States)

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing the noninferiority of a new treatment relative to a standard (control) treatment is discussed for ordinal categorical data. A measure of treatment effect is used, and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed in which the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.

  12. Testing statistical isotropy in cosmic microwave background polarization maps

    Science.gov (United States)

    Rath, Pranati K.; Samal, Pramoda Kumar; Panda, Srikanta; Mishra, Debesh D.; Aluri, Pavan K.

    2018-04-01

    We apply our symmetry-based Power tensor technique to test the conformity of PLANCK polarization maps with statistical isotropy. On a wide range of angular scales (l = 40-150), our preliminary analysis detects many statistically anisotropic multipoles in the foreground-cleaned full-sky PLANCK polarization maps, viz., COMMANDER and NILC. We also study the effect of residual foregrounds that may still be present in the Galactic plane, using both the common UPB77 polarization mask and the polarization masks specific to each component separation method. However, some of the statistically anisotropic modes still persist, significantly so in the NILC map. We further probed the data for any coherent alignments across multipoles in several bins from the chosen multipole range.

  13. Statistical refinements for data analysis of mollusc reproduction tests: an example with Lymnaea stagnalis

    DEFF Research Database (Denmark)

    Holbech, Henrik

    ... was twofold. First, we refined the statistical analyses of reproduction data, accounting for mortality throughout the test period. The variable "number of clutches/eggs produced per individual-day" was used for ECx modelling, as classically done in epidemiology, in order to account for the time-contribution of each individual to the measured response. Furthermore, the combination of a Gamma-Poisson stochastic part with a Weibull concentration-response model allowed accounting for the inter-replicate variability. Second, we checked for the possibility of optimizing the initial experimental design through...

  14. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.

    Science.gov (United States)

    Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping

    2015-06-07

    Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in the oncology trials to reduce the number of patients placed on ineffective experimental therapies. Recently Koyama and Chen (2008) discussed how to conduct proper inference for such studies because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies when the actual second stage sample sizes differ from planned ones. We consider an alternative inference method based on likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the resulting estimate based on our method appears to have certain advantage in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverages. We also illustrated the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan. Reported P-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
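
    For context, the sketch below computes the exact operating characteristics of a Simon two-stage design by summing binomial probabilities; it illustrates the sampling plan whose adaptiveness the authors argue inference must respect, not their likelihood-ratio ordering itself. The design constants are the commonly cited optimal design for p0 = 0.10 versus p1 = 0.30.

    ```python
    from scipy.stats import binom

    def reject_prob(p, n1, r1, n, r):
        """P(declare promising) under Simon's two-stage design: continue past
        stage 1 if X1 > r1 of n1 respond; reject H0 if total responses > r of n."""
        return sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
                   for x1 in range(r1 + 1, n1 + 1))

    # Optimal design for p0 = 0.10, p1 = 0.30 (Simon 1989): r1/n1 = 1/10, r/n = 5/29.
    print("type I error:", round(reject_prob(0.10, 10, 1, 29, 5), 3))
    print("power:       ", round(reject_prob(0.30, 10, 1, 29, 5), 3))
    ```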

  15. Engineering test facility design center

    International Nuclear Information System (INIS)

    Anon.

    1980-01-01

    The vehicle by which the fusion program would move into the engineering testing phase of fusion power development is designated the Engineering Test Facility (ETF). The ETF would provide a test bed for reactor components in the fusion environment. In order to initiate preliminary planning for the ETF decision, the Office of Fusion Energy established the ETF Design Center activity to prepare the design of the ETF. This section describes the status of this design

  16. Kepler Planet Detection Metrics: Statistical Bootstrap Test

    Science.gov (United States)

    Jenkins, Jon M.; Burke, Christopher J.

    2016-01-01

    This document describes the data produced by the Statistical Bootstrap Test over the final three Threshold Crossing Event (TCE) deliveries to NExScI: SOC 9.1 (Q1-Q16) (Tenenbaum et al. 2014), SOC 9.2 (Q1-Q17), also known as DR24 (Seader et al. 2015), and SOC 9.3 (Q1-Q17), also known as DR25 (Twicken et al. 2016). The last few years have seen significant improvements in the SOC science data processing pipeline, leading to higher quality light curves and more sensitive transit searches. The statistical bootstrap analysis results presented here and the numerical results archived at NASA's Exoplanet Science Institute (NExScI) bear witness to these software improvements. This document attempts to introduce and describe the main features and differences between these three data sets as a consequence of the software changes.

  17. Designing and testing prototypes

    NARCIS (Netherlands)

    Vereijken, P.; Wijnands, F.; Stol, W.

    1995-01-01

    This second progress report focuses on designing a theoretical prototype by linking parameters to methods and designing the methods in this context until they are ready for initial testing. The report focuses also on testing and improving the prototype in general and the methods in particular until

  18. The Relationship between Test Anxiety and Academic Performance of Students in Vital Statistics Course

    Directory of Open Access Journals (Sweden)

    Shirin Iranfar

    2013-12-01

    Full Text Available Introduction: Test anxiety is a common phenomenon among students and is one of the problems of the educational system. The present study was conducted to investigate test anxiety in the vital statistics course and its association with the academic performance of students at Kermanshah University of Medical Sciences. This study was descriptive-analytical, and the study sample included students of the nursing and midwifery, paramedicine, and health faculties who had taken the vital statistics course; they were selected through the census method. The Sarason questionnaire was used to assess test anxiety. Data were analyzed by descriptive and inferential statistics. The findings indicated no significant correlation between test anxiety and the score in the vital statistics course.

  19. STANFORD-OHWAKI-KOHS TACTILE BLOCK DESIGN INTELLIGENCE TEST FOR THE BLIND. PART ONE-FINAL REPORT.

    Science.gov (United States)

    DAUTERMAN, WILLIAM L.; SUINN, RICHARD M.

    THIS TEST WAS DEVELOPED TO MEASURE THE INTELLIGENCE OF BLIND ADOLESCENTS AND ADULTS. SIX HUNDRED AND THIRTY BLIND SUBJECTS 14 YEARS OF AGE AND OLDER WERE USED IN REFINING AND STANDARDIZING THE NONVERBAL, PERFORMANCE OHWAKI-KOHS BLOCK DESIGN TEST FOR USE BY BLIND INDIVIDUALS IN THE UNITED STATES. RESULTS INDICATED STATISTICALLY SIGNIFICANT…

  20. Formal Functional Test Designs: Bridging the Gap Between Test Requirements and Test Specifications

    Science.gov (United States)

    Hops, Jonathan

    1993-01-01

    This presentation describes the testing life cycle, the purpose of the test design phase, and test design methods and gives an example application. Also included is a description of Test Representation Language (TRL), a summary of the language, and an example of an application of TRL. A sample test requirement and sample test design are included.

  1. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is

  2. Common pitfalls in statistical analysis: The perils of multiple testing

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2016-01-01

    Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
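
    As a quick illustration of the corrections discussed, the sketch below adjusts a hypothetical set of end-point p-values with the Bonferroni, Holm, and Benjamini-Hochberg procedures using statsmodels.

    ```python
    from statsmodels.stats.multitest import multipletests

    # Hypothetical p-values from testing five end-points on one dataset.
    pvals = [0.012, 0.030, 0.041, 0.20, 0.65]

    for method in ("bonferroni", "holm", "fdr_bh"):
        reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
        print(f"{method}: adjusted = {[round(p, 3) for p in p_adj]}, reject = {list(reject)}")
    ```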

  3. Using statistics to understand the environment

    CERN Document Server

    Cook, Penny A

    2000-01-01

    Using Statistics to Understand the Environment covers all the basic tests required for environmental practicals and projects and points the way to the more advanced techniques that may be needed in more complex research designs. Following an introduction to project design, the book covers methods to describe data, to examine differences between samples, and to identify relationships and associations between variables.Featuring: worked examples covering a wide range of environmental topics, drawings and icons, chapter summaries, a glossary of statistical terms and a further reading section, this book focuses on the needs of the researcher rather than on the mathematics behind the tests.

  4. Testing statistical self-similarity in the topology of river networks

    Science.gov (United States)

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2010-01-01

    Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN) that combines self-similarity and randomness has been introduced to replicate important topological features observed in real river networks. We investigate if the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distribution of interior and exterior generators are tested to be statistically different and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.

  5. Six Sigma Quality Management System and Design of Risk-based Statistical Quality Control.

    Science.gov (United States)

    Westgard, James O; Westgard, Sten A

    2017-03-01

    Six sigma concepts provide a quality management system (QMS) with many useful tools for managing quality in medical laboratories. This Six Sigma QMS is driven by the quality required for the intended use of a test. The most useful form for this quality requirement is the allowable total error. Calculation of a sigma-metric provides the best predictor of risk for an analytical examination process, as well as a design parameter for selecting the statistical quality control (SQC) procedure necessary to detect medically important errors. Simple point estimates of sigma at medical decision concentrations are sufficient for laboratory applications. Copyright © 2016 Elsevier Inc. All rights reserved.
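
    The sigma-metric calculation at the heart of this approach is a one-liner. The sketch below uses the widely quoted Westgard form, sigma = (TEa − |bias|) / CV, with all terms in percent at a medical decision concentration; the assay numbers are hypothetical.

    ```python
    def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
        """Sigma-metric = (allowable total error - |bias|) / CV, all in percent."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Hypothetical assay: TEa = 10%, bias = 1.5%, CV = 1.8%.
    print(f"sigma = {sigma_metric(10.0, 1.5, 1.8):.1f}")  # ~4.7 -> moderate SQC rules
    ```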

  6. Operational statistical analysis of the results of computer-based testing of students

    Directory of Open Access Journals (Sweden)

    Виктор Иванович Нардюжев

    2018-12-01

    Full Text Available The article is devoted to the issues of statistical analysis of the results of computer-based testing for the evaluation of students' educational achievements. These issues are relevant because computer-based testing in Russian universities has become an important method for evaluating students' educational achievements and the quality of the modern educational process. The use of modern methods and programs for the statistical analysis of computer-based testing results and the assessment of the quality of the developed tests is a pressing problem for every university teacher. The article shows how the authors solve this problem using their own program, "StatInfo". For several years the program has been successfully applied within a credit system of education at such technological stages as loading computer-based testing protocols into a database, formation of queries, and generation of reports, lists, and matrices of answers for the statistical analysis of the quality of test items. The methodology, experience, and some results of its usage by university teachers are described in the article. Related topics of test development, models, algorithms, technologies, and software for large-scale computer-based testing have been discussed by the authors in their previous publications, which are presented in the reference list.

  7. An Improved Rank Correlation Effect Size Statistic for Single-Case Designs: Baseline Corrected Tau.

    Science.gov (United States)

    Tarlow, Kevin R

    2017-07-01

    Measuring treatment effects when an individual's pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or due to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: Its values are inflated and not bound between -1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
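
    A simplified sketch of the general recipe, with invented data (the published procedure additionally pretests whether baseline trend correction is warranted before applying it): fit a robust Theil-Sen trend to the baseline, subtract its extrapolation from the whole series, then take Kendall's tau between phase membership and the corrected scores.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical single-case series: 6 baseline (A) and 8 treatment (B) points.
    baseline = np.array([3.0, 4.0, 4.0, 5.0, 5.0, 6.0])     # improving trend
    treatment = np.array([7.0, 8.0, 8.0, 9.0, 9.0, 10.0, 10.0, 11.0])

    # Robust (Theil-Sen) trend fitted to the baseline phase only.
    slope, intercept, _, _ = stats.theilslopes(baseline, np.arange(len(baseline)))

    # Subtract the extrapolated baseline trend from the full series.
    t_all = np.arange(len(baseline) + len(treatment))
    corrected = np.concatenate([baseline, treatment]) - (intercept + slope * t_all)

    # Effect size: Kendall's tau between phase (0 = A, 1 = B) and corrected scores.
    phase = np.concatenate([np.zeros(len(baseline)), np.ones(len(treatment))])
    tau, p = stats.kendalltau(phase, corrected)
    print(f"baseline-corrected tau = {tau:.2f} (p = {p:.3f})")
    ```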

  8. PedGenie: meta genetic association testing in mixed family and case-control designs

    Directory of Open Access Journals (Sweden)

    Allen-Brady Kristina

    2007-11-01

    Full Text Available Abstract Background- PedGenie software, introduced in 2006, includes genetic association testing of cases and controls that may be independent or related (nuclear families or extended pedigrees, or mixtures thereof) using Monte Carlo significance testing. Our aim is to demonstrate that PedGenie, a unique and flexible analysis tool freely available in Genie 2.4 software, is significantly enhanced by incorporating meta statistics for detecting genetic association with disease using data across multiple study groups. Methods- Meta statistics (chi-squared tests, odds ratios, and confidence intervals) were calculated using formal Cochran-Mantel-Haenszel techniques. Simulated data from unrelated individuals and individuals in families were used to illustrate that meta tests and their empirically-derived p-values and confidence intervals are accurate, precise, and for independent designs match those provided by standard statistical software. Results- PedGenie yields accurate Monte Carlo p-values for meta analysis of data across multiple studies, based on validation testing using pedigree, nuclear family, and case-control data simulated under both the null and alternative hypotheses of a genotype-phenotype association. Conclusion- PedGenie allows valid combined analysis of data from mixtures of pedigree-based and case-control resources. Added meta capabilities provide new avenues for association analysis, including pedigree resources from large consortia and multi-center studies.

  9. Statistical tests for the Gaussian nature of primordial fluctuations through CBR experiments

    International Nuclear Information System (INIS)

    Luo, X.

    1994-01-01

    Information about the physical processes that generate the primordial fluctuations in the early Universe can be gained by testing the Gaussian nature of the fluctuations through cosmic microwave background radiation (CBR) temperature anisotropy experiments. One of the crucial aspects of density perturbations that are produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects left over from an early cosmic phase transition tend to be non-Gaussian. To carry out this test, sophisticated statistical tools are required. In this paper, we will discuss several such statistical tools, including multivariate skewness and kurtosis, Euler-Poincare characteristics, the three-point temperature correlation function, and Hotelling's T² statistic defined through bispectral estimates of a one-dimensional data set. The effect of noise present in the current data is discussed in detail and the COBE 53 GHz data set is analyzed. Our analysis shows that, on the large angular scales to which COBE is sensitive, the statistics are probably Gaussian. On the small angular scales, the importance of Hotelling's T² statistic is stressed, and the minimum sample size required to test Gaussianity is estimated. Although the current data set available from various experiments at half-degree scales is still too small, improvement of the data set by roughly a factor of 2 will be enough to test Gaussianity statistically. On the arc-minute scale, we analyze the recent RING data through bispectral analysis, and the result indicates possible deviation from Gaussianity. Effects of point sources are also discussed. It is pointed out that the Gaussianity problem can be resolved in the near future by ground-based or balloon-borne experiments
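
    As a small taste of the moment-based end of this toolbox, the sketch below applies skewness, kurtosis, and omnibus normality tests to synthetic "map" values contaminated by a non-Gaussian component; real CBR analyses must additionally model instrument noise and sky coverage, which this ignores.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic pixels: Gaussian sky plus a weak non-Gaussian contaminant.
    rng = np.random.default_rng(5)
    pixels = rng.normal(0.0, 1.0, 2000) + 0.1 * rng.exponential(1.0, 2000)

    print("skewness test:", stats.skewtest(pixels))
    print("kurtosis test:", stats.kurtosistest(pixels))
    print("omnibus test: ", stats.normaltest(pixels))  # D'Agostino-Pearson
    ```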

  10. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
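
    A condensed sketch of the procedure with the Euclidean distance, on invented data: compute the distance between the two normalized summary histograms, then bootstrap the null distribution of that distance by resampling both groups from the pooled measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    bins = np.linspace(0.0, 10.0, 21)

    def norm_hist(values):
        h, _ = np.histogram(values, bins=bins)
        return h / h.sum()

    # Hypothetical footprint measurements for two cloud-object size categories.
    small = rng.gamma(2.0, 1.0, 800)
    large = rng.gamma(2.3, 1.0, 600)
    observed = np.linalg.norm(norm_hist(small) - norm_hist(large))

    # Bootstrap under H0 (no difference): resample both groups from pooled data.
    pooled = np.concatenate([small, large])
    boot = np.empty(2000)
    for i in range(boot.size):
        a = rng.choice(pooled, size=small.size, replace=True)
        b = rng.choice(pooled, size=large.size, replace=True)
        boot[i] = np.linalg.norm(norm_hist(a) - norm_hist(b))

    p_value = float((boot >= observed).mean())
    print(f"Euclidean distance = {observed:.4f}, bootstrap p = {p_value:.3f}")
    ```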

  11. Towards improved behavioural testing in aquatic toxicology: Acclimation and observation times are important factors when designing behavioural tests with fish.

    Science.gov (United States)

    Melvin, Steven D; Petit, Marie A; Duvignacq, Marion C; Sumpter, John P

    2017-08-01

    The quality and reproducibility of science has recently come under scrutiny, with criticisms spanning disciplines. In aquatic toxicology, behavioural tests are currently an area of controversy since inconsistent findings have been highlighted and attributed to poor quality science. The problem likely relates to limitations to our understanding of basic behavioural patterns, which can influence our ability to design statistically robust experiments yielding ecologically relevant data. The present study takes a first step towards understanding baseline behaviours in fish, including how basic choices in experimental design might influence behavioural outcomes and interpretations in aquatic toxicology. Specifically, we explored how fish acclimate to behavioural arenas and how different lengths of observation time impact estimates of basic swimming parameters (i.e., average, maximum and angular velocity). We performed a semi-quantitative literature review to place our findings in the context of the published literature describing behavioural tests with fish. Our results demonstrate that fish fundamentally change their swimming behaviour over time, and that acclimation and observational timeframes may therefore have implications for influencing both the ecological relevance and statistical robustness of behavioural toxicity tests. Our review identified 165 studies describing behavioural responses in fish exposed to various stressors, and revealed that the majority of publications documenting fish behavioural responses report extremely brief acclimation times and observational durations, which helps explain inconsistencies identified across studies. We recommend that researchers applying behavioural tests with fish, and other species, apply a similar framework to better understand baseline behaviours and the implications of design choices for influencing study outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Two-Sample Statistics for Testing the Equality of Survival Functions Against Improper Semi-parametric Accelerated Failure Time Alternatives: An Application to the Analysis of a Breast Cancer Clinical Trial

    Science.gov (United States)

    BROËT, PHILIPPE; TSODIKOV, ALEXANDER; DE RYCKE, YANN; MOREAU, THIERRY

    2010-01-01

    This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests. PMID:15293627

  13. Two-sample statistics for testing the equality of survival functions against improper semi-parametric accelerated failure time alternatives: an application to the analysis of a breast cancer clinical trial.

    Science.gov (United States)

    Broët, Philippe; Tsodikov, Alexander; De Rycke, Yann; Moreau, Thierry

    2004-06-01

    This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests.
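
    The paper's score statistics come from a time-dependent Cox model and are not packaged in common libraries; as a hedged baseline for the same two-sample question, the standard log-rank comparison is readily available, e.g. in lifelines (all data below are synthetic):

    ```python
    import numpy as np
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(1)
    # Hypothetical survival times (months) and event indicators for two arms.
    t_control = rng.exponential(24, 80)
    t_treated = rng.exponential(30, 80)
    obs_control = (rng.random(80) < 0.7).astype(int)  # 1 = event, 0 = censored
    obs_treated = (rng.random(80) < 0.7).astype(int)

    res = logrank_test(t_control, t_treated,
                       event_observed_A=obs_control,
                       event_observed_B=obs_treated)
    print(res.test_statistic, res.p_value)
    ```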

  14. Designing optimal degradation tests via multi-objective genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Cipollone, Maurizio

    2003-01-01

    The experimental determination of the failure time probability distribution of highly reliable components, such as those used in nuclear and aerospace applications, is intrinsically difficult due to the lack, or scarce significance, of failure data that can be collected during the relatively short test periods. A possibility to overcome this difficulty is to resort to so-called degradation tests, in which measurements of components' degradation are used to infer the failure time distribution. To design such tests, parameters like the number of tests to be run, their frequency and their duration must be set so as to obtain an accurate estimate of the distribution statistics under the existing budget limitations. The resulting optimisation problem is non-linear. In this work, we propose a method, based on multi-objective genetic algorithms, for determining the values of the test parameters which optimise both the accuracy in the estimate of the failure time distribution percentiles and the testing costs. The method has been validated on a degradation model from the literature
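
    The authors' genetic algorithm is not reproduced here; the sketch below only illustrates the underlying trade-off, filtering randomly sampled test plans down to a Pareto front under toy cost and estimation-error models (every function and constant is a made-up placeholder):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def cost(n_units, n_meas, duration):
        # Hypothetical testing-cost model: per-unit, per-measurement, per-hour.
        return 500 * n_units + 20 * n_units * n_meas + 2 * duration

    def est_error(n_units, n_meas, duration):
        # Hypothetical proxy for percentile-estimate error: shrinks with
        # more units, more measurements, and longer observation.
        return 1.0 / np.sqrt(n_units * n_meas * duration / 100)

    # Sample candidate designs and keep the non-dominated (Pareto) set.
    designs = [(int(rng.integers(2, 30)), int(rng.integers(2, 20)),
                float(rng.uniform(100, 2000))) for _ in range(500)]
    scored = [(cost(*d), est_error(*d), d) for d in designs]
    pareto = [s for s in scored
              if not any(o[0] <= s[0] and o[1] <= s[1] and o != s
                         for o in scored)]
    for c, e, d in sorted(pareto)[:5]:
        print(f"cost={c:8.0f}  error={e:.3f}  design={d}")
    ```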

  15. Statistical Redundancy Testing for Improved Gene Selection in Cancer Classification Using Microarray Data

    Directory of Open Access Journals (Sweden)

    J. Sunil Rao

    2007-01-01

    Full Text Available In gene selection for cancer classification using microarray data, we define an eigenvalue-ratio statistic to measure a gene's contribution to the joint discriminability when this gene is included in a set of genes. Based on this eigenvalue-ratio statistic, we define a novel hypothesis test for gene statistical redundancy and propose two gene selection methods. Simulation studies illustrate the agreement between statistical redundancy testing and gene selection methods. Real data examples show that the proposed gene selection methods can select a compact gene subset which can not only be used to build high-quality cancer classifiers but also shows biological relevance.

  16. Statistics for experimentalists

    CERN Document Server

    Cooper, B E

    2014-01-01

    Statistics for Experimentalists aims to provide experimental scientists with a working knowledge of statistical methods and search approaches to the analysis of data. The book first elaborates on probability and continuous probability distributions. Discussions focus on properties of continuous random variables and normal variables, independence of two random variables, central moments of a continuous distribution, prediction from a normal distribution, binomial probabilities, and multiplication of probabilities and independence. The text then examines estimation and tests of significance. Topics include estimators and estimates, expected values, minimum variance linear unbiased estimators, sufficient estimators, methods of maximum likelihood and least squares, and the test of significance method. The manuscript ponders on distribution-free tests, Poisson process and counting problems, correlation and function fitting, balanced incomplete randomized block designs and the analysis of covariance, and experiment...

  17. DWPF Sample Vial Insert Study-Statistical Analysis of DWPF Mock-Up Test Data

    International Nuclear Information System (INIS)

    Harris, S.P.

    1997-01-01

    This report is prepared as part of Technical/QA Task Plan WSRC-RP-97-351, which was issued in response to Technical Task Request HLW/DWPF/TTR-970132 submitted by DWPF. Presented in this report is a statistical analysis of DWPF Mock-up test data for evaluation of two new analytical methods which use insert samples from the existing Hydragard™ sampler. The first is a new hydrofluoric acid based method called the Cold Chemical Method (Cold Chem) and the second is a modified fusion method. Both new methods use the existing Hydragard™ sampler to collect a smaller insert sample from the process sampling system. The insert testing methodology applies to the DWPF Slurry Mix Evaporator (SME) and the Melter Feed Tank (MFT) samples. Samples in small 3 ml containers (inserts) are analyzed by either the Cold Chem method or a modified fusion method. The current analytical method uses a Hydragard™ sample station to obtain nearly full 15 ml peanut vials. The samples are prepared for Inductively Coupled Plasma (ICP) analysis by a multi-step process of drying, vitrification, grinding and finally dissolution by either mixed acid or fusion. In contrast, the insert sample is placed directly in the dissolution vessel, thus eliminating the drying, vitrification and grinding operations for the Cold Chem method. Although the modified fusion still requires drying and calcine conversion, the process is rapid due to the decreased sample size and the fact that no vitrification step is required. A slurry feed simulant material was acquired from the TNX pilot facility from the test run designated as PX-7. The Mock-up test data were gathered on the basis of a statistical design presented in SRT-SCS-97004 (Rev. 0). Simulant PX-7 samples were taken in the DWPF Analytical Cell Mock-up Facility using 3 ml inserts and 15 ml peanut vials. A number of the insert samples were analyzed by Cold Chem and compared with full peanut vial samples analyzed by the current methods. The remaining inserts were analyzed by

  18. Scientific, statistical, practical, and regulatory considerations in design space development.

    Science.gov (United States)

    Debevec, Veronika; Srčič, Stanko; Horvat, Matej

    2018-03-01

    The quality by design (QbD) paradigm guides the pharmaceutical industry towards improved understanding of products and processes, and at the same time facilitates a high degree of manufacturing and regulatory flexibility throughout the establishment of the design space. This review article presents scientific, statistical and regulatory considerations in design space development. All key development milestones, starting with planning, selection of factors, experimental execution, data analysis, model development and assessment, verification, and validation, and ending with design space submission, are presented and discussed. The focus is especially on frequently ignored topics, like management of factors and CQAs that will not be included in experimental design, evaluation of risk of failure on design space edges, or modeling scale-up strategy. Moreover, development of a design space that is independent of manufacturing scale is proposed as the preferred approach.

  19. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design

    Energy Technology Data Exchange (ETDEWEB)

    Wurtz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kaplan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-28

    Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully-realized statistical classifiers rely on a comprehensive set of tools for designing, building, and implementing. PSD advances rely on improvements to the implemented algorithm, which can draw on conventional statistical-classifier and machine-learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully-designed and operational classifier framework that can be used to discover opportunities for improving PSD classifier projects. This paper recommends reporting the PSD classifier's receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.

  20. A general statistical test for correlations in a finite-length time series.

    Science.gov (United States)

    Hanson, Jeffery A; Yang, Haw

    2008-06-07

    The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
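
    A short illustration of the two ingredients, assuming an i.i.d. series: the FFT-based autocorrelation estimate and the familiar ±1.96/√N white-noise band used to flag significant correlations (the band is the standard large-N approximation, not the paper's exact variance expressions):

    ```python
    import numpy as np

    def autocorr_fft(x):
        """Biased sample autocorrelation via the Wiener-Khinchin theorem."""
        x = np.asarray(x, float) - np.mean(x)
        n = len(x)
        f = np.fft.rfft(x, 2 * n)           # zero-pad to avoid circular wrap
        acf = np.fft.irfft(f * np.conj(f))[:n]
        return acf / acf[0]

    rng = np.random.default_rng(3)
    x = rng.normal(size=1000)               # i.i.d. series: no true correlation
    r = autocorr_fft(x)
    band = 1.96 / np.sqrt(len(x))           # approx. 95% band for white noise
    print("lags outside band:", np.sum(np.abs(r[1:50]) > band))
    ```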

  1. Statistical testing of association between menstruation and migraine.

    Science.gov (United States)

    Barra, Mathias; Dahl, Fredrik A; Vetvik, Kjersti G

    2015-02-01

    To repair and refine a previously proposed method for statistical analysis of association between migraine and menstruation. Menstrually related migraine (MRM) affects about 20% of female migraineurs in the general population. The exact pathophysiological link from menstruation to migraine is hypothesized to be through fluctuations in female reproductive hormones, but the exact mechanisms remain unknown. Therefore, the main diagnostic criterion today is concurrency of migraine attacks with menstruation. Methods aiming to exclude spurious associations are needed, so that further research into these mechanisms can be performed on a population with a true association. The statistical method is based on a simple two-parameter null model of MRM (which allows for simulation modeling), and Fisher's exact test (with mid-p correction) applied to standard 2 × 2 contingency tables derived from the patients' headache diaries. Our method is a corrected version of a previously published flawed framework. To the best of our knowledge, no other published methods for establishing a menstruation-migraine association by statistical means exist today. The probabilistic methodology shows good performance when subjected to receiver operator characteristic curve analysis. Quick reference cutoff values for the clinical setting were tabulated for assessing association given a patient's headache history. In this paper, we correct a proposed method for establishing association between menstruation and migraine by statistical methods. We conclude that the proposed standard of 3-cycle observations prior to setting an MRM diagnosis should be extended with at least one perimenstrual window to obtain sufficient information for statistical processing. © 2014 American Headache Society.
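
    For the 2 × 2 diary tables, the mid-p correction replaces P(X ≥ x) by P(X > x) + ½P(X = x) under the hypergeometric null; a one-sided sketch with scipy (the counts are invented):

    ```python
    from scipy.stats import hypergeom

    def fisher_midp_one_sided(table):
        """One-sided mid-p value for a 2x2 table [[a, b], [c, d]]:
        P(X > a) + 0.5 * P(X = a) under the hypergeometric null."""
        (a, b), (c, d) = table
        n = a + b + c + d
        rv = hypergeom(n, a + b, a + c)   # population, successes, draws
        return rv.sf(a) + 0.5 * rv.pmf(a)

    # Hypothetical diary counts: migraine days in/out of perimenstrual windows.
    print(fisher_midp_one_sided([[9, 21], [14, 166]]))
    ```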

  2. Near-exact distributions for the block equicorrelation and equivariance likelihood ratio test statistic

    Science.gov (United States)

    Coelho, Carlos A.; Marques, Filipe J.

    2013-09-01

    In this paper the authors combine the equicorrelation and equivariance test introduced by Wilks [13] with the likelihood ratio test (l.r.t.) for independence of groups of variables to obtain the l.r.t. of block equicorrelation and equivariance. This test or its single block version may find applications in many areas as in psychology, education, medicine, genetics and they are important "in many tests of multivariate analysis, e.g. in MANOVA, Profile Analysis, Growth Curve analysis, etc" [12, 9]. By decomposing the overall hypothesis into the hypotheses of independence of groups of variables and the hypothesis of equicorrelation and equivariance we are able to obtain the expressions for the overall l.r.t. statistic and its moments. From these we obtain a suitable factorization of the characteristic function (c.f.) of the logarithm of the l.r.t. statistic, which enables us to develop highly manageable and precise near-exact distributions for the test statistic.

  3. A generalization of Friedman's rank statistic

    NARCIS (Netherlands)

    Kroon, de J.; Laan, van der P.

    1983-01-01

    In this paper a very natural generalization of the two-way analysis of variance rank statistic of Friedman is given. The general distribution-free test procedure based on this statistic for the effect of J treatments in a random block design can be applied in general two-way layouts without

  4. Statistics II essentials

    CERN Document Server

    Milewski, Emil G

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Statistics II discusses sampling theory, statistical inference, independent and dependent variables, correlation theory, experimental design, count data, chi-square test, and time series

  5. DWPF Sample Vial Insert Study-Statistical Analysis of DWPF Mock-Up Test Data

    Energy Technology Data Exchange (ETDEWEB)

    Harris, S.P. [Westinghouse Savannah River Company, AIKEN, SC (United States)

    1997-09-18

    This report is prepared as part of Technical/QA Task Plan WSRC-RP-97-351, which was issued in response to Technical Task Request HLW/DWPF/TTR-970132 submitted by DWPF. Presented in this report is a statistical analysis of DWPF Mock-up test data for evaluation of two new analytical methods which use insert samples from the existing Hydragard™ sampler. The first is a new hydrofluoric acid based method called the Cold Chemical Method (Cold Chem) and the second is a modified fusion method. Either new DWPF analytical method could result in a two- to three-fold improvement in sample analysis time. Both new methods use the existing Hydragard™ sampler to collect a smaller insert sample from the process sampling system. The insert testing methodology applies to the DWPF Slurry Mix Evaporator (SME) and the Melter Feed Tank (MFT) samples. The insert sample is named after the initial trials, which placed the container inside the sample (peanut) vials. Samples in small 3 ml containers (inserts) are analyzed by either the Cold Chem method or a modified fusion method. The current analytical method uses a Hydragard™ sample station to obtain nearly full 15 ml peanut vials. The samples are prepared for Inductively Coupled Plasma (ICP) analysis by a multi-step process of drying, vitrification, grinding and finally dissolution by either mixed acid or fusion. In contrast, the insert sample is placed directly in the dissolution vessel, thus eliminating the drying, vitrification and grinding operations for the Cold Chem method. Although the modified fusion still requires drying and calcine conversion, the process is rapid due to the decreased sample size and the fact that no vitrification step is required. A slurry feed simulant material was acquired from the TNX pilot facility from the test run designated as PX-7. The Mock-up test data were gathered on the basis of a statistical design presented in SRT-SCS-97004 (Rev. 0). Simulant PX-7 samples were taken in the DWPF Analytical Cell Mock-up Facility

  6. Comment on the asymptotics of a distribution-free goodness of fit test statistic.

    Science.gov (United States)

    Browne, Michael W; Shapiro, Alexander

    2015-03-01

    In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them.

  7. Statistical process control in nursing research.

    Science.gov (United States)

    Polit, Denise F; Chaboyer, Wendy

    2012-02-01

    In intervention studies in which randomization to groups is not possible, researchers typically use quasi-experimental designs. Time series designs are strong quasi-experimental designs but are seldom used, perhaps because of technical and analytic hurdles. Statistical process control (SPC) is an alternative analytic approach to testing hypotheses about intervention effects using data collected over time. SPC, like traditional statistical methods, is a tool for understanding variation and involves the construction of control charts that distinguish between normal, random fluctuations (common cause variation), and statistically significant special cause variation that can result from an innovation. The purpose of this article is to provide an overview of SPC and to illustrate its use in a study of a nursing practice improvement intervention. Copyright © 2011 Wiley Periodicals, Inc.
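
    As an illustration of the control-chart construction, a minimal Shewhart individuals chart with 3-sigma limits; sigma is estimated from the average moving range (MRbar/1.128), and the fall-rate numbers are fabricated for the example:

    ```python
    import numpy as np

    def individuals_chart(x):
        """Shewhart individuals chart: center line and 3-sigma limits, with
        sigma estimated from the average moving range (MRbar / 1.128)."""
        x = np.asarray(x, float)
        center = x.mean()
        mr = np.abs(np.diff(x))
        sigma = mr.mean() / 1.128          # d2 constant for subgroups of 2
        ucl, lcl = center + 3 * sigma, center - 3 * sigma
        special = np.where((x > ucl) | (x < lcl))[0]  # special-cause points
        return center, lcl, ucl, special

    # Hypothetical weekly fall rates before/after a practice change.
    rates = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 2.6, 2.4, 2.7, 2.5]
    print(individuals_chart(rates))
    ```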

  8. The Reliability of Single Subject Statistics for Biofeedback Studies.

    Science.gov (United States)

    Bremner, Frederick J.; And Others

    To test the usefulness of single subject statistical designs for biofeedback, three experiments were conducted comparing biofeedback to meditation, and to a compound stimulus recognition task. In a statistical sense, this experimental design is best described as one experiment with two replications. The apparatus for each of the three experiments…

  9. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    Science.gov (United States)

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc

  10. Statistical evaluation of design-error related nuclear reactor accidents

    International Nuclear Information System (INIS)

    Ott, K.O.; Marchaterre, J.F.

    1981-01-01

    In this paper, general methodology for the statistical evaluation of design-error related accidents is proposed that can be applied to a variety of systems that evolve during the development of large-scale technologies. The evaluation aims at an estimate of the combined "residual" frequency of yet unknown types of accidents "lurking" in a certain technological system. A special categorization into incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of U.S. nuclear power reactor technology, considering serious accidents (category 2 events) that involved, in the accident progression, a particular design inadequacy. 9 refs

  11. Nonparametric statistics for social and behavioral sciences

    CERN Document Server

    Kraska-Miller, M

    2013-01-01

    Introduction to Research in Social and Behavioral Sciences; Basic Principles of Research; Planning for Research; Types of Research Designs; Sampling Procedures; Validity and Reliability of Measurement Instruments; Steps of the Research Process; Introduction to Nonparametric Statistics; Data Analysis; Overview of Nonparametric Statistics and Parametric Statistics; Overview of Parametric Statistics; Overview of Nonparametric Statistics; Importance of Nonparametric Methods; Measurement Instruments; Analysis of Data to Determine Association and Agreement; Pearson Chi-Square Test of Association and Independence; Contingency

  12. Statistical Requirements For Pass-Fail Testing Of Contraband Detection Systems

    International Nuclear Information System (INIS)

    Gilliam, David M.

    2011-01-01

    Contraband detection systems for homeland security applications are typically tested for probability of detection (PD) and probability of false alarm (PFA) using pass-fail testing protocols. Test protocols usually require specified values for PD and PFA to be demonstrated at a specified level of statistical confidence CL. Based on a recent more theoretical treatment of this subject [1], this summary reviews the definition of CL and provides formulas and spreadsheet functions for constructing tables of general test requirements and for determining the minimum number of tests required. The formulas and tables in this article may be generally applied to many other applications of pass-fail testing, in addition to testing of contraband detection systems.
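
    The classic zero-failure ("success run") requirement follows from demanding that passing the test be improbable unless PD meets the target: with f allowed misses, the smallest n satisfying P(X ≤ f | n, 1 − PD) ≤ 1 − CL. A small sketch (the formula is standard; the function name is ours):

    ```python
    from scipy.stats import binom

    def min_tests(pd_required, confidence, failures_allowed=0):
        """Smallest n such that observing <= failures_allowed misses
        demonstrates PD >= pd_required at the given confidence level."""
        n = failures_allowed + 1
        # Number of misses is Binomial(n, 1 - PD); require the pass
        # probability at the boundary PD to be at most 1 - CL.
        while binom.cdf(failures_allowed, n, 1 - pd_required) > 1 - confidence:
            n += 1
        return n

    print(min_tests(0.90, 0.95))      # 29 zero-failure trials
    print(min_tests(0.90, 0.95, 1))   # 46 trials allowing one miss
    ```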

  13. Program Helps Design Tests Of Developmental Software

    Science.gov (United States)

    Hops, Jonathan

    1994-01-01

    Computer program called "A Formal Test Representation Language and Tool for Functional Test Designs" (TRL) provides automatic software tool and formal language used to implement category-partition method and produce specification of test cases in testing phase of development of software. Category-partition method useful in defining inputs, outputs, and purpose of test-design phase of development; combines benefits of choosing normal cases with error-exposing properties. Traceability maintained quite easily by creating test design for each objective in test plan. Effort to transform test cases into procedures simplified by use of automatic software tool to create cases based on test design. Method enables rapid elimination of undesired test cases from consideration and facilitates review of test designs by peer groups. Written in C language.

  14. P-Value, a true test of statistical significance? a cautionary note ...

    African Journals Online (AJOL)

    While it was not the intention of the founders of significance testing and hypothesis testing to have the two ideas intertwined as if they were complementary, the inconvenient marriage of the two practices into one coherent, convenient, incontrovertible and misinterpreted practice has dotted our standard statistics textbooks and ...

  15. A Statistical Toolkit for Data Analysis

    International Nuclear Information System (INIS)

    Donadio, S.; Guatelli, S.; Mascialino, B.; Pfeiffer, A.; Pia, M.G.; Ribon, A.; Viarengo, P.

    2006-01-01

    The present project aims to develop an open-source and object-oriented software Toolkit for statistical data analysis. Its statistical testing component contains a variety of Goodness-of-Fit tests, from Chi-squared to Kolmogorov-Smirnov, to lesser-known but generally much more powerful tests such as Anderson-Darling, Goodman, Fisz-Cramer-von Mises, Kuiper, Tiku. Thanks to the component-based design and the use of standard abstract interfaces for data analysis, this tool can be used by other data analysis systems or integrated in experimental software frameworks. This Toolkit has been released and is downloadable from the web. In this paper we describe the statistical details of the algorithms, the computational features of the Toolkit and the code validation
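
    The Toolkit itself is a C++ library; for orientation, several of the same goodness-of-fit tests are also available in scipy.stats (Kuiper and some of the others are not), shown here on a synthetic sample:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    sample = rng.normal(loc=5.0, scale=2.0, size=300)

    # Kolmogorov-Smirnov against a fully specified normal distribution.
    print(stats.kstest(sample, "norm", args=(5.0, 2.0)))

    # Anderson-Darling (more weight in the tails); parameters estimated.
    print(stats.anderson(sample, dist="norm"))

    # Chi-squared on binned counts against expected bin probabilities.
    edges = np.linspace(sample.min(), sample.max(), 11)
    obs, _ = np.histogram(sample, edges)
    cdf = stats.norm(5.0, 2.0).cdf(edges)
    exp = np.diff(cdf) / (cdf[-1] - cdf[0]) * obs.sum()  # renormalized bins
    print(stats.chisquare(obs, exp))
    ```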

  16. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
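
    For the canonical case built from the five components listed above, an a priori sample-size calculation for a two-sample t-test in statsmodels, assuming a medium standardized effect (d = 0.5), alpha = 0.05, and 80% power:

    ```python
    from statsmodels.stats.power import TTestIndPower

    # Solve for the per-group n that yields the target power.
    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                       power=0.8, alternative="two-sided")
    print(round(n_per_group))   # about 64 per group
    ```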

  17. Statistical approach for collaborative tests, reference material certification procedures

    International Nuclear Information System (INIS)

    Fangmeyer, H.; Haemers, L.; Larisse, J.

    1977-01-01

    The first part introduces the different aspects of organizing and executing intercomparison tests of chemical or physical quantities. This is followed by a description of a statistical procedure for handling the data collected in a circular analysis. Finally, an example demonstrates how the tool can be applied and what conclusions can be drawn from the results obtained

  18. A test statistic in the complex Wishart distribution and its application to change detection in polarimetric SAR data

    DEFF Research Database (Denmark)

    Conradsen, Knut; Nielsen, Allan Aasbjerg; Schou, Jesper

    2003-01-01

    Based on this distribution, a test statistic for equality of two such matrices and an associated asymptotic probability for obtaining a smaller value of the test statistic are derived and applied successfully to change detection in polarimetric SAR data. In a case study, EMISAR L-band data from April 17 […] When applied to HH, VV, or HV data alone, the derived test statistic reduces to the well-known gamma likelihood-ratio test statistic. The derived test statistic and the associated significance value can also be applied as a line or edge detector in fully polarimetric SAR data.

  19. MITG test assembly design and fabrication

    International Nuclear Information System (INIS)

    Schock, A.

    1983-01-01

    The design, analysis, and evaluation of the Modular Isotopic Thermoelectric Generator (MITG), described in an earlier paper, led to a program to build and test prototypical modules of that generator. Each test module duplicates the thermoelectric converters, thermal insulation, housing and radiator fins of a typical generator slice, and simulates its isotope heat source module by means of an electrical heater encased in a prototypical graphite box. Once the approx. 20-watt MITG module has been developed, it can be assembled in appropriate numbers to form a generator design yielding the desired power output. The present paper describes the design and fabrication of the MITG test assembly, which confirmed the fabricability of the multicouples and interleaved multifoil insulation called for by the design. Test plans, procedures, instrumentation, results, and post-test analyses, as well as revised designs, fabrication procedures, and performance estimates, are described in subsequent papers in these proceedings

  20. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  1. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Full Text Available Background: In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results: All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion: The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.

  2. Test Information Targeting Strategies for Adaptive Multistage Testing Designs.

    Science.gov (United States)

    Luecht, Richard M.; Burgin, William

    Adaptive multistage testlet (MST) designs appear to be gaining popularity for many large-scale computer-based testing programs. These adaptive MST designs use a modularized configuration of preconstructed testlets and embedded score-routing schemes to prepackage different forms of an adaptive test. The conditional information targeting (CIT)…

  3. Testing of a "smart-pebble" for measuring particle transport statistics

    Science.gov (United States)

    Kitsikoudis, Vasileios; Avgeris, Loukas; Valyrakis, Manousos

    2017-04-01

    This paper presents preliminary results from novel experiments aiming to assess coarse sediment transport statistics for a range of transport conditions, via the use of an innovative "smart-pebble" device. This device is a waterproof sphere, 7 cm in diameter, equipped with a number of sensors that provide information about the velocity, acceleration and positioning of the "smart-pebble" within the flow field. A series of specifically designed experiments are carried out to monitor the entrainment of a "smart-pebble" for fully developed, uniform, turbulent flow conditions over a hydraulically rough bed. Specifically, the bed surface is configured into three sections, each consisting of well-packed glass beads of slightly increasing size in the downstream direction. The first section has a streamwise length of L1=150 cm and a bead size of D1=15 mm, the second section has a length of L2=85 cm and a bead size of D2=22 mm, and the third bed section has a length of L3=55 cm and a bead size of D3=25.4 mm. Two cameras monitor the area of interest to provide additional information regarding the "smart-pebble" movement. Three-dimensional flow measurements are obtained with the aid of an acoustic Doppler velocimeter along a measurement grid to assess the flow forcing field. A wide range of flow rates near and above the threshold of entrainment is tested, while using four distinct densities for the "smart-pebble", which can affect its transport speed and total momentum. The acquired data are analyzed to derive Lagrangian transport statistics, and the implications of such an experiment for the transport of particles by rolling are discussed. The flow conditions for the initiation of motion, particle accelerations and equilibrium particle velocities (translating into transport rates), and statistics of particle impact and motion can be extracted from the acquired data and further compared to develop meaningful insights for sediment transport

  4. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    Science.gov (United States)

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as paired t-test, Wilcoxon signed rank test, and significance analysis of microarray (SAM) under certain non-normal distributions. The asymptotic distribution of the test statistic, and the p-value function are discussed. The application of proposed method is shown using a real-life data set. © The Author(s) 2011.

  5. Statistical experimental design for saltstone mixtures

    International Nuclear Information System (INIS)

    Harris, S.P.; Postles, R.L.

    1992-01-01

    The authors used a mixture experimental design for determining a window of operability for a process at the U.S. Department of Energy, Savannah River Site, Defense Waste Processing Facility (DWPF). The high-level radioactive waste at the Savannah River Site is stored in large underground carbon steel tanks. The waste consists of a supernate layer and a sludge layer. Cesium-137 will be removed from the supernate by precipitation and filtration. After further processing, the supernate layer will be fixed as a grout for disposal in concrete vaults. The remaining precipitate will be processed at the DWPF with treated waste tank sludge and glass-making chemicals into borosilicate glass. The leach-rate properties of the grout formed from various mixes of solidified supernate were of interest; leach coefficients for NO3 and chromium were used as the measure of leach rate. Various mixes of cement, Ca(OH)2, salt, slag, and fly ash were used. These constituents comprise the whole mix. Thus, a mixture experimental design was used. The regression procedure (PROC REG) in SAS was used to produce analysis of variance (ANOVA) statistics. In addition, detailed model diagnostics are readily available for identifying suspicious observations. For convenience, trilinear contour (TLC) plots, a standard graphics tool for examining mixture response surfaces, of the fitted model were produced using ECHIP

  6. A field test of three LQAS designs to assess the prevalence of acute malnutrition.

    Science.gov (United States)

    Deitchler, Megan; Valadez, Joseph J; Egge, Kari; Fernandez, Soledad; Hennigan, Mary

    2007-08-01

    The conventional method for assessing the prevalence of Global Acute Malnutrition (GAM) in emergency settings is the 30 x 30 cluster-survey. This study describes alternative approaches: three Lot Quality Assurance Sampling (LQAS) designs to assess GAM. The LQAS designs were field-tested and their results compared with those from a 30 x 30 cluster-survey. Computer simulations confirmed that small clusters instead of a simple random sample could be used for LQAS assessments of GAM. Three LQAS designs were developed (33 x 6, 67 x 3, Sequential design) to assess GAM thresholds of 10, 15 and 20%. The designs were field-tested simultaneously with a 30 x 30 cluster-survey in Siraro, Ethiopia during June 2003. Using a nested study design, anthropometric, morbidity and vaccination data were collected on all children 6-59 months in sampled households. Hypothesis tests about GAM thresholds were conducted for each LQAS design. Point estimates were obtained for the 30 x 30 cluster-survey and the 33 x 6 and 67 x 3 LQAS designs. Hypothesis tests showed GAM <10% for the 33 x 6 design and GAM >=10% for the 67 x 3 and Sequential designs. Point estimates for the 33 x 6 and 67 x 3 designs were similar to those of the 30 x 30 cluster-survey for GAM (6.7%, CI = 3.2-10.2%; 8.2%, CI = 4.3-12.1%; 7.4%, CI = 4.8-9.9%) and all other indicators. The CIs for the LQAS designs were only slightly wider than the CIs for the 30 x 30 cluster-survey; yet the LQAS designs required substantially less time to administer. The LQAS designs provide statistically appropriate alternatives to the more time-consuming 30 x 30 cluster-survey. However, additional field-testing is needed using independent samples rather than a nested study design.
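
    The operating characteristics of an LQAS decision rule reduce to two binomial tail probabilities; a sketch in which the sample size, decision threshold, and GAM levels are chosen purely for illustration rather than taken from the study:

    ```python
    from scipy.stats import binom

    def lqas_errors(n, d, p_low, p_high):
        """Operating characteristics of the LQAS rule 'classify GAM as high
        if more than d of n sampled children are malnourished'.
        alpha: risk of a high call when true prevalence is p_low;
        beta: risk of a low call when true prevalence is p_high."""
        alpha = binom.sf(d, n, p_low)      # P(X > d | p_low)
        beta = binom.cdf(d, n, p_high)     # P(X <= d | p_high)
        return alpha, beta

    # E.g., 33 clusters x 6 children = 198, decision threshold d = 25,
    # discriminating a 10% GAM threshold from 15%.
    print(lqas_errors(198, 25, 0.10, 0.15))
    ```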

  7. Statistical tests for power-law cross-correlated processes

    Science.gov (United States)

    Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene

    2011-12-01

    For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlation analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically verified the Cauchy inequality -1 ≤ ρDCCA(T,n) ≤ 1. Here we derive -1 ≤ ρDCCA(T,n) ≤ 1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine—and for nonoverlapping windows we derive—that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
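
    A compact implementation of ρDCCA(T,n) with overlapping windows and linear detrending, following the usual DCCA construction (integrated profiles, windowed detrending, covariance of residuals); the synthetic series share a common component, so the coefficient comes out positive:

    ```python
    import numpy as np

    def dcca_coefficient(x, y, n):
        """Detrended cross-correlation coefficient rho_DCCA(T, n):
        integrate both series, linearly detrend in overlapping windows of
        size n, and take the ratio F2_xy / sqrt(F2_xx * F2_yy)."""
        X = np.cumsum(x - np.mean(x))
        Y = np.cumsum(y - np.mean(y))
        t = np.arange(n)
        f2x = f2y = f2xy = 0.0
        for s in range(len(X) - n + 1):
            xs, ys = X[s:s + n], Y[s:s + n]
            rx = xs - np.polyval(np.polyfit(t, xs, 1), t)  # residuals
            ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
            f2x += np.mean(rx * rx)
            f2y += np.mean(ry * ry)
            f2xy += np.mean(rx * ry)
        return f2xy / np.sqrt(f2x * f2y)

    rng = np.random.default_rng(5)
    common = rng.normal(size=2000)            # shared component
    x = common + rng.normal(size=2000)
    y = common + rng.normal(size=2000)
    print(dcca_coefficient(x, y, n=50))       # positive, inside (-1, 1)
    ```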

  8. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is placed on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels as well as methods for data preprocessing are covered.

  9. Statistical correlation of structural mode shapes from test measurements and NASTRAN analytical values

    Science.gov (United States)

    Purves, L.; Strang, R. F.; Dube, M. P.; Alea, P.; Ferragut, N.; Hershfeld, D.

    1983-01-01

    The software and procedures of a system of programs used to generate a report of the statistical correlation between NASTRAN modal analysis results and physical test results from modal surveys are described. Topics discussed include: a mathematical description of statistical correlation, a user's guide for generating a statistical correlation report, a programmer's guide describing the organization and functions of individual programs leading to a statistical correlation report, and a set of examples including complete listings of programs, and input and output data.

  10. Design of a physical skills test battery for selecting talented taekwondo athletes

    Directory of Open Access Journals (Sweden)

    Mohammed Mustafa Bakr

    2016-07-01

    Full Text Available The aim of this study was to design a battery of physical skill tests for the selection of talented young taekwondo athletes within the Egyptian national project, which may have practical importance for overseeing the selection of young athletes as well as for planning the training process to reach high performance levels. The study was applied to a sample of 750 individuals from clubs and youth centers in 14 governorates of the Arab Republic of Egypt, with a mean age of 11.11 ± 1.06 years, height of 143.50 ± 11.09 cm and weight of 42.53 ± 10.74 kg. Tests were conducted in the period from 17/11/2012 to 29/12/2012 and covered the following physical attributes: power, hip flexibility, agility, kinetic speed at trunk level, kinetic speed at face level, and performance endurance. Validity and reliability coefficients were computed for the tests used in the study. The validity coefficients were statistically significant at the 0.05 level (two-sided) between the high-level and low-level groups, ranging from 0.682 to 0.932, which indicates that the tests measure what they were designed to measure. The reliability correlation coefficients were statistically significant at the 0.01 and 0.05 levels (two-sided), ranging from 0.964 to 0.998, pointing to the stability of these tests. The researcher used the descriptive survey method; weight was measured with a medical balance and height with a tape measure to the nearest 0.5 cm. Statistical analysis in SPSS was used to compute means, standard deviations, correlations, factor analysis, six-sigma scores and percentile scores. The results yielded standard levels for the physical skill test battery for selecting taekwondo talent, with agility and kinetic speed among the extracted factors. The study concluded with a battery of physical skill tests for

  11. Design of systems on a chip design and test

    CERN Document Server

    Reis, Ricardo; Jess, Jochen AG

    2007-01-01

    Addresses the design challenges associated with recent generations of semiconductor technology. This book includes contributions along three different but complementary axes: core design, computer-aided design tools and test methods. A collection of chapters deals with the heterogeneity aspect of core designs.

  12. Determination of Geometrical REVs Based on Volumetric Fracture Intensity and Statistical Tests

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2018-05-01

    Full Text Available This paper presents a method to estimate a representative element volume (REV) of a fractured rock mass based on the volumetric fracture intensity P32 and statistical tests. A 150 m × 80 m × 50 m 3D fracture network model was generated based on field data collected at the Maji dam site by using the rectangular window sampling method. The volumetric fracture intensity P32 of each cube was calculated by varying the cube location in the generated 3D fracture network model and varying the cube side length from 1 to 20 m, and the distribution of the P32 values was described. The size effect and spatial effect of the fractured rock mass were studied; the P32 values from the same cube sizes and different locations were significantly different, and the fluctuation in P32 values clearly decreases as the cube side length increases. In this paper, a new method that comprehensively considers the anisotropy of rock masses, the simplicity of calculation and the differences between methods was proposed to estimate the geometrical REV size. The geometrical REV size of the fractured rock mass was determined based on the volumetric fracture intensity P32 and two statistical test methods, namely, the likelihood ratio test and the Wald–Wolfowitz runs test. The results of the two statistical tests were substantially different; critical cube sizes of 13 m and 12 m were estimated by the Wald–Wolfowitz runs test and the likelihood ratio test, respectively. Because the different test methods emphasize different considerations and impact factors, a size that both tests accept was adopted, and the larger cube size, 13 m, was selected as the geometrical REV size of the fractured rock mass at the Maji dam site in China.
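
    For reference, the Wald–Wolfowitz runs test used above can be computed from the exact run-count mean and variance; a self-contained sketch applied to a synthetic sequence, dichotomized at its median:

    ```python
    import numpy as np
    from scipy.stats import norm

    def runs_test(x):
        """Wald-Wolfowitz runs test for randomness, dichotomizing the
        sequence about its median and dropping ties.
        Returns (z, two-sided p-value)."""
        x = np.asarray(x, float)
        med = np.median(x)
        s = (x > med)[x != med]                   # above/below indicator
        n1, n2 = int(s.sum()), int((~s).sum())
        runs = 1 + int(np.sum(s[1:] != s[:-1]))
        mu = 2.0 * n1 * n2 / (n1 + n2) + 1.0
        var = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)
               / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
        z = (runs - mu) / np.sqrt(var)
        return z, 2 * norm.sf(abs(z))

    rng = np.random.default_rng(6)
    print(runs_test(rng.normal(size=200)))        # random sequence: large p
    ```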

  13. Using Relative Statistics and Approximate Disease Prevalence to Compare Screening Tests.

    Science.gov (United States)

    Samuelson, Frank; Abbey, Craig

    2016-11-01

    Schatzkin et al. and other authors demonstrated that the ratios of some conditional statistics, such as the true positive fraction, are equal to the ratios of unconditional statistics, such as disease detection rates, and therefore we can calculate these ratios between two screening tests on the same population even if negative test patients are not followed with a reference procedure and the true and false negative rates are unknown. We demonstrate that this same property applies to an expected utility metric. We also demonstrate that while simple estimates of relative specificities and relative areas under ROC curves (AUC) do depend on the unknown negative rates, we can write these ratios in terms of disease prevalence, and the dependence of these ratios on a posited prevalence is often weak, particularly if that prevalence is small or the performance of the two screening tests is similar. Therefore we can estimate relative specificity or AUC with little loss of accuracy, if we use an approximate value of disease prevalence.
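
    A small sketch of the point being made: the sensitivity ratio needs only observed detection rates, while relative specificity requires a posited prevalence, whose influence is weak when prevalence is small. The formulas are a standard derivation, not taken from the paper, and all rates below are hypothetical:

    ```python
    def relative_screening_stats(pos_rate_1, det_rate_1,
                                 pos_rate_2, det_rate_2, prevalence):
        """Relative sensitivity and relative specificity of two screening
        tests on the same population. det_rate_i: detected cases per person
        screened; pos_rate_i: positive calls per person screened."""
        rel_sens = det_rate_1 / det_rate_2              # equals TPF1 / TPF2
        # False positive fraction = (positives - true positives) / non-diseased.
        spec_1 = 1 - (pos_rate_1 - det_rate_1) / (1 - prevalence)
        spec_2 = 1 - (pos_rate_2 - det_rate_2) / (1 - prevalence)
        return rel_sens, spec_1 / spec_2

    # Sweep the posited prevalence to show the weak dependence.
    for p in (0.002, 0.005, 0.01, 0.05):
        print(p, relative_screening_stats(0.10, 0.004, 0.08, 0.003, p))
    ```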

  14. Probability and statistics with integrated software routines

    CERN Document Server

    Deep, Ronald

    2005-01-01

    Probability & Statistics with Integrated Software Routines is a calculus-based treatment of probability concurrent with and integrated with statistics through interactive, tailored software applications designed to enhance the phenomena of probability and statistics. The software programs make the book unique. The book comes with a CD containing the interactive software leading to the Statistical Genie. The student can issue commands repeatedly while making parameter changes to observe the effects. Computer programming is an excellent skill for problem solvers, involving design, prototyping, data gathering, testing, redesign, validating, etc, all wrapped up in the scientific method. See also: CD to accompany Probability and Stats with Integrated Software Routines (0123694698). Incorporates more than 1,000 engaging problems with answers; includes more than 300 solved examples; uses varied problem-solving methods.

  15. Statistical test data selection for reliability evaluation of process computer software

    International Nuclear Information System (INIS)

    Volkmann, K.P.; Hoermann, H.; Ehrenberger, W.

    1976-01-01

    The paper presents a concept for converting knowledge about the characteristics of process states into practicable procedures for the statistical selection of test cases in testing process computer software. Process states are defined as vectors whose components consist of values of input variables lying in discrete positions or within given limits. Two approaches for test data selection, based on knowledge about cases of demand, are outlined, referring to a purely probabilistic method and to the mathematics of stratified sampling. (orig.)
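
    A toy version of the stratified-sampling idea, drawing test vectors from strata weighted by demand probability; the strata, weights, and variable ranges are invented for illustration:

    ```python
    import random

    random.seed(7)

    # Hypothetical operational profile: strata of process states with their
    # demand probabilities and value ranges for two input variables.
    strata = {
        "startup":   {"weight": 0.15, "temp": (20, 80),   "flow": (0, 10)},
        "steady":    {"weight": 0.70, "temp": (250, 300), "flow": (40, 60)},
        "transient": {"weight": 0.15, "temp": (80, 250),  "flow": (10, 40)},
    }

    def draw_test_cases(n_total):
        """Allocate test cases to strata proportionally to demand
        probability, then sample input vectors uniformly within each."""
        cases = []
        for name, s in strata.items():
            n = round(n_total * s["weight"])
            for _ in range(n):
                cases.append((name,
                              random.uniform(*s["temp"]),
                              random.uniform(*s["flow"])))
        return cases

    for case in draw_test_cases(20)[:5]:
        print(case)
    ```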

  16. Experimental statistics

    CERN Document Server

    Natrella, Mary Gibbons

    1963-01-01

    Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations

  17. Evaluation of undergraduate nursing students' attitudes towards statistics courses, before and after a course in applied statistics.

    Science.gov (United States)

    Hagen, Brad; Awosoga, Olu; Kellett, Peter; Dei, Samuel Ofori

    2013-09-01

    Undergraduate nursing students must often take a course in statistics, yet there is scant research to inform teaching pedagogy. The objectives of this study were to assess nursing students' overall attitudes towards statistics courses - including (among other things) overall fear and anxiety, preferred learning and teaching styles, and the perceived utility and benefit of taking a statistics course - before and after taking a mandatory course in applied statistics. The authors used a pre-experimental research design (a one-group pre-test/post-test research design), by administering a survey to nursing students at the beginning and end of the course. The study was conducted at a University in Western Canada that offers an undergraduate Bachelor of Nursing degree. Participants included 104 nursing students, in the third year of a four-year nursing program, taking a course in statistics. Although students only reported moderate anxiety towards statistics, student anxiety about statistics had dropped by approximately 40% by the end of the course. Students also reported a considerable and positive change in their attitudes towards learning in groups by the end of the course, a potential reflection of the team-based learning that was used. Students identified preferred learning and teaching approaches, including the use of real-life examples, visual teaching aids, clear explanations, timely feedback, and a well-paced course. Students also identified preferred instructor characteristics, such as patience, approachability, in-depth knowledge of statistics, and a sense of humor. Unfortunately, students only indicated moderate agreement with the idea that statistics would be useful and relevant to their careers, even by the end of the course. Our findings validate anecdotal reports on statistics teaching pedagogy, although more research is clearly needed, particularly on how to increase students' perceptions of the benefit and utility of statistics courses for their nursing

  18. Neutron absorber qualification and acceptance testing from the designer's perspective

    International Nuclear Information System (INIS)

    Bracey, W.; Chiocca, R.

    2004-01-01

    Starting in the mid 1990's, the USNRC began to require less than 100% credit for the 10B present in fixed neutron absorbers in spent fuel transport packages. The current practice in the US is to use only 75% of the specified 10B in criticality safety calculations unless extensive acceptance testing demonstrates both the presence of the 10B and the uniformity of its distribution. In practice, the NRC has accepted no more than 90% credit for 10B in recent years, while other national competent authorities continue to accept 100%. More recently, with the introduction of new neutron absorber materials, particularly aluminum/boron carbide metal matrix composites, the NRC has also expressed expectations for qualification testing, based in large part on Transnuclear's successful application to use a new composite material in the TN-68 storage/transport cask. The difficulty is that adding more boron than is really necessary to a metal has some negative effects on the material, reducing the ductility and the thermal conductivity, and increasing the cost. Excessive testing requirements can have the undesired effect of keeping superior materials out of spent fuel package designs, without a corresponding justification based on public safety. In European countries, and especially in France, 100% credit has been accepted up to now, with materials controls specified in the Safety Analysis Report (SAR): manufacturing process approved by qualification testing; materials manufacturing controlled under a quality assurance system; during fabrication, acceptance testing directly on products or on representative samples; acceptance criteria taking into account a statistical uncertainty corresponding to 3σ. The original and current bases for the reduced 10B credit, the design requirements for neutron absorber materials, and the experience of Transnuclear and Cogema Logistics with neutron absorber testing are examined. Guidelines for qualification and acceptance testing and process controls

  19. A Note on Three Statistical Tests in the Logistic Regression DIF Procedure

    Science.gov (United States)

    Paek, Insu

    2012-01-01

    Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…
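
    For uniform DIF, the likelihood ratio test compares logistic models with and without the group term, while the Wald test uses the group coefficient's z-value; a synthetic sketch with statsmodels (data and effect sizes are made up):

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(8)
    n = 1000
    ability = rng.normal(size=n)                # matching variable (total score)
    group = rng.integers(0, 2, n)               # 0 = reference, 1 = focal
    logit = -0.2 + 1.1 * ability + 0.5 * group  # uniform DIF built in
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X_full = sm.add_constant(np.column_stack([ability, group]))
    X_null = sm.add_constant(ability)
    full = sm.Logit(y, X_full).fit(disp=0)
    null = sm.Logit(y, X_null).fit(disp=0)

    lr = 2 * (full.llf - null.llf)              # likelihood ratio test, 1 df
    print("LR:", lr, "p =", chi2.sf(lr, df=1))
    print("Wald z for group:", full.tvalues[2], "p =", full.pvalues[2])
    ```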

  20. Engineering Trade-off Considerations Regarding Design-for-Security, Design-for-Verification, and Design-for-Test

    Science.gov (United States)

    Berg, Melanie; Label, Kenneth

    2018-01-01

    The United States government has identified that application specific integrated circuit (ASIC) and field programmable gate array (FPGA) hardware are at risk from a variety of adversary attacks. This finding affects system security and trust. Consequently, processes are being developed for system mitigation and countermeasure application. The scope of this tutorial pertains to potential vulnerabilities and countermeasures within the ASIC/FPGA design cycle. The presentation demonstrates how design practices can affect the risk of the adversary being able to change circuitry, steal intellectual property, or listen to data operations. An important portion of the design cycle is assuring the design is working as specified or as expected. This is accomplished by exhaustive testing of the target design. However, it has been shown that well-established schemes for test coverage enhancement (design-for-verification (DFV) and design-for-test (DFT)) can create conduits for adversary accessibility. As a result, it is essential to perform a trade-off between robust test coverage and reliable design implementation. The goal of this tutorial is to explain the evolution of design practices; review adversary accessibility points due to DFV and DFT circuitry insertion (back-door circuitry); and describe common engineering trade-off considerations for test versus adversary threats.

  1. Tips and Tricks for Successful Application of Statistical Methods to Biological Data.

    Science.gov (United States)

    Schlenker, Evelyn

    2016-01-01

    This chapter discusses experimental design and the use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, as well as normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing use of parametric tests. Alternatively, with skewed data nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premise and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
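    As a worked illustration of the chapter's point that relative risk ratios and odds ratios use the same 2 × 2 table but differ in premise and calculation, consider the following sketch (our numbers, not the chapter's):

        # Relative risk and odds ratio from one 2x2 table:
        # rows = exposed/unexposed, columns = diseased/healthy.
        a, b = 30, 70   # exposed:   30 diseased, 70 healthy
        c, d = 10, 90   # unexposed: 10 diseased, 90 healthy

        risk_exposed = a / (a + b)                      # 0.30
        risk_unexposed = c / (c + d)                    # 0.10
        relative_risk = risk_exposed / risk_unexposed   # 3.0
        odds_ratio = (a / b) / (c / d)                  # (30/70)/(10/90) ~ 3.86

        print(f"RR = {relative_risk:.2f}, OR = {odds_ratio:.2f}")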

  2. Instructional Theory for Teaching Statistics.

    Science.gov (United States)

    Atwood, Jan R.; Dinham, Sarah M.

    Metatheoretical analysis of Ausubel's Theory of Meaningful Verbal Learning and Gagne's Theory of Instruction using the Dickoff and James paradigm produced two instructional systems for basic statistics. The systems were tested with a pretest-posttest control group design utilizing students enrolled in an introductory-level graduate statistics…

  3. Item calibration in incomplete testing designs

    Directory of Open Access Journals (Sweden)

    Norman D. Verhelst

    2011-01-01

    Full Text Available This study discusses the justifiability of item parameter estimation in incomplete testing designs in item response theory. Marginal maximum likelihood (MML) as well as conditional maximum likelihood (CML) procedures are considered in three commonly used incomplete designs: random incomplete, multistage testing and targeted testing designs. Mislevy and Sheehan (1989) have shown that in incomplete designs the justifiability of MML can be deduced from Rubin's (1976) general theory on inference in the presence of missing data. Their results are recapitulated and extended for more situations. In this study it is shown that for CML estimation the justification must be established in an alternative way, by considering the neglected part of the complete likelihood. The problems with incomplete designs are not generally recognized in practical situations. This is due to the stochastic nature of the incomplete designs, which is not taken into account in standard computer algorithms. For that reason, incorrect uses of standard MML and CML algorithms are discussed.

  4. The Impact of Statistical Leakage Models on Design Yield Estimation

    Directory of Open Access Journals (Sweden)

    Rouwaida Kanj

    2011-01-01

    Full Text Available Device mismatch and process variation models play a key role in determining the functionality and yield of sub-100 nm designs. Average characteristics are often of interest, such as the average leakage current or the average read delay. However, detecting rare functional fails is critical for memory design, and designers often seek techniques that enable accurate modeling of such events. Extremely leaky devices can inflict functionality fails. The plurality of leaky devices on a bitline increases the dimensionality of the yield estimation problem. Simplified models are possible by adopting approximations to the underlying sum of lognormals. The implications of such approximations on tail probabilities may in turn bias the yield estimate. We review different closed form approximations and compare them against the CDF matching method, which is shown to be the most effective method for accurate statistical leakage modeling.
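    One classic closed-form approximation in this line of work is Fenton-Wilkinson moment matching, which replaces a sum of lognormals by a single lognormal with the same first two moments. The abstract does not name the specific approximations reviewed, and the preferred CDF-matching method is not reproduced here; this is an illustrative sketch assuming independent terms.

        # Fenton-Wilkinson approximation for a sum of independent lognormals.
        import numpy as np

        def fenton_wilkinson(mus, sigmas):
            """Return (mu_z, sigma_z) of the lognormal matching the first two
            moments of sum_i exp(N(mu_i, sigma_i^2)), terms assumed independent."""
            mus, sigmas = np.asarray(mus), np.asarray(sigmas)
            m1 = np.sum(np.exp(mus + sigmas**2 / 2))                             # E[sum]
            v = np.sum(np.exp(2 * mus + sigmas**2) * (np.exp(sigmas**2) - 1))    # Var[sum]
            sigma_z2 = np.log(1 + v / m1**2)
            mu_z = np.log(m1) - sigma_z2 / 2
            return mu_z, np.sqrt(sigma_z2)

        mu_z, sigma_z = fenton_wilkinson([0.0, 0.1, -0.2], [0.8, 0.9, 1.0])
        print(mu_z, sigma_z)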

  5. Comparison of Statistical Methods for Detector Testing Programs

    Energy Technology Data Exchange (ETDEWEB)

    Rennie, John Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Abhold, Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-14

    A typical goal for any detector testing program is to ascertain not only the performance of the detector systems under test, but also the confidence that systems accepted using that testing program’s acceptance criteria will exceed a minimum acceptable performance (which is usually expressed as the minimum acceptable success probability, p). A similar problem often arises in statistics, where we would like to ascertain the fraction, p, of a population of items that possess a property that may take one of two possible values. Typically, the problem is approached by drawing a fixed sample of size n, with the number of items out of n that possess the desired property, x, being termed successes. The sample mean gives an estimate of the population mean p ≈ x/n, although usually it is desirable to accompany such an estimate with a statement concerning the range within which p may fall and the confidence associated with that range. Procedures for establishing such ranges and confidence limits are described in detail by Clopper, Brown, and Agresti for two-sided symmetric confidence intervals.
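    For reference, a runnable sketch of the exact (Clopper-Pearson) two-sided interval referred to above, in its standard beta-quantile form; the example counts are invented.

        # Exact (Clopper-Pearson) confidence interval for a success probability p.
        from scipy.stats import beta

        def clopper_pearson(x, n, conf=0.95):
            """Two-sided exact interval for p given x successes in n trials."""
            alpha = 1.0 - conf
            lower = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
            upper = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
            return lower, upper

        print(clopper_pearson(48, 50))  # e.g. 48 detections in 50 trials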

  6. Jsub(Ic)-testing of A-533 B - statistical evaluation of some different testing techniques

    International Nuclear Information System (INIS)

    Nilsson, F.

    1978-01-01

    The purpose of the present study was to compare statistically some different methods for the evaluation of the fracture toughness of the nuclear reactor material A-533 B. Since linear elastic fracture mechanics is not applicable to this material at the temperature of interest (275 °C), the so-called Jsub(Ic) testing method was employed. Two main difficulties are inherent in this type of testing. The first is to determine the quantity J as a function of the deflection of the three-point bend specimens used. Three different techniques were used, the first two based on the experimentally observed input of energy to the specimen and the third employing finite element calculations. The second main problem is to determine the point when crack growth begins. For this, two methods were used, a direct electrical method and the indirect R-curve method. A total of forty specimens were tested at two laboratories. No statistically significant differences were found between the results from the two laboratories. The three methods of calculating J yielded somewhat different results, although the discrepancy was small. Also the two methods of determining the growth initiation point yielded consistent results. The R-curve method, however, exhibited a larger uncertainty as measured by the standard deviation. The resulting Jsub(Ic) value also agreed well with earlier presented results. The relative standard deviation was of the order of 25%, which is quite small for this type of experiment. (author)
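    The first difficulty, evaluating J from the measured energy input, can be illustrated with the textbook deep-crack estimate J = 2U/(B·b0) for three-point bend specimens. This is a generic sketch under that formula, not necessarily the exact procedure of the paper, and the load-deflection data are invented.

        # Energy-based J estimate for a deeply cracked three-point bend specimen.
        import numpy as np

        deflection_m = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0]) * 1e-3   # m
        load_N = np.array([0.0, 4.0, 7.0, 8.5, 9.0, 9.2]) * 1e3          # N

        # Area under the load-deflection curve (trapezoidal rule) = energy input U.
        U = np.sum((load_N[1:] + load_N[:-1]) / 2 * np.diff(deflection_m))
        B, b0 = 25e-3, 25e-3        # thickness and uncracked ligament, m
        J = 2 * U / (B * b0)        # deep-crack estimate, J/m^2
        print(f"J = {J / 1e3:.1f} kJ/m^2")   # ~21.2 kJ/m^2 for this invented curve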

  7. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for
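    A toy illustration of the combination the tutorial describes: run the simulation model at the points of a 2^2 full factorial design and fit a first-order regression metamodel to the responses. The simple response function below stands in for a System Dynamics model.

        # 2^2 factorial design plus least-squares regression metamodel.
        import numpy as np
        from itertools import product

        rng = np.random.default_rng(0)
        design = np.array(list(product([-1.0, 1.0], repeat=2)))   # coded units

        def simulate(x1, x2):
            """Stand-in for a simulation run: true response 10 + 3*x1 - 2*x2 + noise."""
            return 10 + 3 * x1 - 2 * x2 + rng.normal(0, 0.1)

        y = np.array([simulate(x1, x2) for x1, x2 in design])
        X = np.column_stack([np.ones(len(design)), design])       # intercept + main effects
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("estimated effects:", beta)                          # ~ [10, 3, -2]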

  8. A3 Subscale Diffuser Test Article Design

    Science.gov (United States)

    Saunders, G. P.

    2009-01-01

    This paper gives a detailed description of the design of the A3 Subscale Diffuser Test (SDT) Article Design. The subscale diffuser is a geometrically accurate scale model of the A3 altitude rocket facility. It was designed and built to support the SDT risk mitigation project located at the E3 facility at Stennis Space Center, MS (SSC) supporting the design and construction of the A3 facility at SSC. The subscale test article is outfitted with a large array of instrumentation to support the design verification of the A3 facility. The mechanical design of the subscale diffuser and test instrumentation are described here

  9. Statistical evaluation of cleanup: How should it be done?

    International Nuclear Information System (INIS)

    Gilbert, R.O.

    1993-02-01

    This paper discusses statistical issues that must be addressed when conducting statistical tests for the purpose of evaluating if a site has been remediated to guideline values or standards. The importance of using the Data Quality Objectives (DQO) process to plan and design the sampling plan is emphasized. Other topics discussed are: (1) accounting for the uncertainty of cleanup standards when conducting statistical tests, (2) determining the number of samples and measurements needed to attain specified DQOs, (3) considering whether the appropriate testing philosophy in a given situation is "guilty until proven innocent" or "innocent until proven guilty" when selecting a statistical test for evaluating the attainment of standards, (4) conducting tests using data sets that contain measurements that have been reported by the laboratory as less than the minimum detectable activity, and (5) selecting statistical tests that are appropriate for risk-based or background-based standards. A recent draft report by Berger that provides guidance on sampling plans and data analyses for final status surveys at US Nuclear Regulatory Commission licensed facilities serves as a focal point for discussion.
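    Point (3) can be made concrete with a small sketch: under the "guilty until proven innocent" philosophy, the null hypothesis is that the site exceeds the standard, and remediation is declared successful only when the data reject it. The standard and measurements below are invented.

        # One-sided t test against a cleanup standard (H0: mean >= standard).
        import numpy as np
        from scipy import stats

        standard = 1.0   # cleanup standard, arbitrary units
        measurements = np.array([0.74, 0.81, 0.65, 0.90, 0.78, 0.72, 0.85, 0.69])

        res = stats.ttest_1samp(measurements, popmean=standard, alternative="less")
        print(f"t = {res.statistic:.2f}, one-sided p = {res.pvalue:.4f}")
        # A small p rejects H0, i.e. the site is shown to be below the standard.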

  10. Observations in the statistical analysis of NBG-18 nuclear graphite strength tests

    International Nuclear Information System (INIS)

    Hindley, Michael P.; Mitchell, Mark N.; Blaine, Deborah C.; Groenwold, Albert A.

    2012-01-01

    Highlights: Statistical analysis of NBG-18 nuclear graphite strength tests. A Weibull distribution and a normal distribution are tested for all data. A bimodal distribution in the CS data is confirmed. The CS data set has the lowest variance. A combined data set is formed and has a Weibull distribution. - Abstract: The purpose of this paper is to report on the selection of a statistical distribution chosen to represent the experimental material strength of NBG-18 nuclear graphite. Three large sets of samples were tested during the material characterisation of the Pebble Bed Modular Reactor and Core Structure Ceramics materials. These sets of samples are tensile strength, flexural strength and compressive strength (CS) measurements. A relevant statistical fit is determined and the goodness of fit is also evaluated for each data set. The data sets are also normalised for ease of comparison, and combined into one representative data set. The validity of this approach is demonstrated. A second failure mode distribution is found in the CS test data. Identifying this failure mode supports similar observations made in the past. The success of fitting the Weibull distribution through the normalised data sets allows us to improve the basis for the estimates of the variability. This could also imply that the variability in the graphite strength for the different strength measures is based on the same flaw distribution and is thus a property of the material.
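    A brief sketch of the kind of distribution fitting and goodness-of-fit comparison described, applied to synthetic strength data (the NBG-18 measurements themselves are not reproduced here):

        # Fit a 2-parameter Weibull and a normal to synthetic strength data,
        # then compare goodness of fit with Kolmogorov-Smirnov tests.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        strength = rng.weibull(8.0, size=300) * 25.0          # synthetic CS-like data, MPa

        shape, loc, scale = stats.weibull_min.fit(strength, floc=0)
        ks_w = stats.kstest(strength, "weibull_min", args=(shape, loc, scale))
        ks_n = stats.kstest(strength, "norm", args=(strength.mean(), strength.std()))
        print(f"Weibull shape = {shape:.1f}, scale = {scale:.1f}")
        print(f"KS p-values: Weibull = {ks_w.pvalue:.3f}, normal = {ks_n.pvalue:.3f}")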

  11. Statistical controversies in clinical research: requiem for the 3 + 3 design for phase I trials.

    Science.gov (United States)

    Paoletti, X; Ezzalfani, M; Le Tourneau, C

    2015-09-01

    More than 95% of published phase I trials have used the 3 + 3 design to identify the dose to be recommended for phase II trials. However, the statistical community agrees on the limitations of the 3 + 3 design compared with model-based approaches. Moreover, the mechanisms of action of targeted agents strongly challenge the hypothesis that the maximum tolerated dose constitutes the optimal dose, and more outcomes including clinical and biological activity increasingly need to be taken into account to identify the optimal dose. We review key elements from clinical publications and from the statistical literature to show that the 3 + 3 design lacks the necessary flexibility to address the challenges of targeted agents. The design issues raised by expansion cohorts, new definitions of dose-limiting toxicity and trials of combinations are not easily addressed by the 3 + 3 design or its extensions. Alternative statistical proposals have been developed to make a better use of the complex data generated by phase I trials. Their applications require a close collaboration between all actors of early phase clinical trials.
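    For readers unfamiliar with the design under critique, the following compact simulation sketches the textbook 3 + 3 escalation rule; variants of the cohort logic exist, and the dose-toxicity probabilities are illustrative.

        # Simulate the classic 3+3 dose-escalation rule.
        import numpy as np

        def three_plus_three(true_tox, rng):
            """Return the MTD index under the 3+3 rule, or -1 if dose 0 is too toxic."""
            dose = 0
            while True:
                dlt = rng.binomial(3, true_tox[dose])          # first cohort of 3
                if dlt == 1:
                    dlt += rng.binomial(3, true_tox[dose])     # expand to 6
                if dlt <= 1:                                   # 0/3 or <=1/6: escalate
                    if dose == len(true_tox) - 1:
                        return dose
                    dose += 1
                else:                                          # >=2 DLTs: MTD is next lower dose
                    return dose - 1

        rng = np.random.default_rng(7)
        tox = [0.05, 0.10, 0.20, 0.35, 0.50]
        mtds = [three_plus_three(tox, rng) for _ in range(10_000)]
        # Distribution of the selected dose (index 0 = "below lowest dose").
        print(np.bincount(np.array(mtds) + 1, minlength=6) / 10_000)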

  12. Design and Testing of Digital Microfluidic Biochips

    CERN Document Server

    Zhao, Yang

    2013-01-01

    This book provides a comprehensive methodology for automated design, test and diagnosis, and use of robust, low-cost, and manufacturable digital microfluidic systems. It focuses on the development of a comprehensive CAD optimization framework for digital microfluidic biochips that unifies different design problems. With the increase in system complexity and integration levels, biochip designers can utilize the design methods described in this book to evaluate different design alternatives, and carry out design-space exploration to obtain the best design point. Describes practical design automation tools that address different design problems (e.g., synthesis, droplet routing, control-pin mapping, testing and diagnosis, and error recovery) in a unified manner; Applies test pattern generation and error-recovery techniques for digital microfluidics-based biochips; Uses real bioassays as evaluation examples, e.g., multiplexed in vitro human physiological fluids diagnostics, PCR, protein crystallization.  

  13. The Statistic Test on Influence of Surface Treatment to Fatigue Lifetime with Limited Data

    OpenAIRE

    Suhartono, Agus

    2009-01-01

    Justifications of the influence of two or more parameters on fatigue strength are sometimes problematic due to the scattered nature of fatigue data. Statistical tests can facilitate the evaluation of whether the changes in material characteristics resulting from specific parameters of interest are significant. The statistical tests were applied to fatigue data of AISI 1045 steel specimens. The specimens consisted of as-received specimens and shot-peened specimens with 15 and 16 Almen intensity as ...

  14. Conducting tests for statistically significant differences using forest inventory data

    Science.gov (United States)

    James A. Westfall; Scott A. Pugh; John W. Coulston

    2013-01-01

    Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...
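    A minimal sketch of such a comparison under the usual large-sample assumptions: a two-sided z test for the difference of two design-based estimates using their published standard errors. The numbers are invented, and the authors' own procedure may differ (e.g., accounting for covariance between estimates).

        # z test for the difference between two survey estimates with known SEs.
        from math import sqrt
        from scipy.stats import norm

        vol1, se1 = 152.3, 6.1   # cubic volume estimate and SE, region 1
        vol2, se2 = 139.8, 5.4   # region 2

        z = (vol1 - vol2) / sqrt(se1**2 + se2**2)
        p = 2 * norm.sf(abs(z))
        print(f"z = {z:.2f}, p = {p:.3f}")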

  15. Design, Automation, and Test in Europe

    DEFF Research Database (Denmark)

    The Design, Automation, and Test in Europe (DATE) conference celebrated in 2007 its tenth anniversary. As a tribute to the chip and system-level design and design technology community, this book presents a compilation of the three most influential papers of each year. This provides an excellent... Systems in CMOS and Beyond; Physical Design and Validation; Test and Verification. The winners of the prestigious EDAA Lifetime Achievement Award as well as other recognized experts in their field wrote an introduction to each section, summarizing the history in their domain and indicating how...

  16. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    Science.gov (United States)

    Kim, Yuneung; Lim, Johan; Park, DoHwan

    2015-11-01

    In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of the Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b).
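    The core idea, defining tau through counts of concordant and discordant pairs, can be sketched as follows for exactly observed data; in the paper's modification those counts are replaced by score-based expected counts under interval censoring, which is not implemented here.

        # Kendall's tau from concordant/discordant pair counts (exact data).
        import itertools

        def kendall_tau(x, y):
            conc = disc = 0.0
            for (x1, y1), (x2, y2) in itertools.combinations(zip(x, y), 2):
                s = (x1 - x2) * (y1 - y2)
                conc += s > 0      # concordant pair
                disc += s < 0      # discordant pair
            n_pairs = len(x) * (len(x) - 1) / 2
            return (conc - disc) / n_pairs

        print(kendall_tau([1, 2, 3, 4], [1, 3, 2, 4]))   # 0.667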

  17. White Noise Assumptions Revisited : Regression Models and Statistical Designs for Simulation Practice

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2006-01-01

    Classic linear regression models and their concomitant statistical designs assume a univariate response and white noise. By definition, white noise is normally, independently, and identically distributed with zero mean. This survey tries to answer the following questions: (i) How realistic are these

  18. WE-E-201-01: Use and Abuse of Common Statistics in Radiological Physics

    Energy Technology Data Exchange (ETDEWEB)

    Labby, Z. [University of Wisconsin (United States)

    2015-06-15

    Physicists are often expected to have a solid grounding in experimental design and statistical analysis, sometimes filling in when biostatisticians or other experts are not available for consultation. Unfortunately, graduate education on these topics is seldom emphasized and few opportunities for continuing education exist. Clinical physicists incorporate new technology and methods into their practice based on published literature. A poor understanding of experimental design and analysis could result in inappropriate use of new techniques. Clinical physicists also improve current practice through quality initiatives that require sound experimental design and analysis. Academic physicists with a poor understanding of design and analysis may produce ambiguous (or misleading) results. This can result in unnecessary rewrites, publication rejection, and experimental redesign (wasting time, money, and effort). This symposium will provide a practical review of error and uncertainty, common study designs, and statistical tests. Instruction will primarily focus on practical implementation through examples and answer questions such as: where would you typically apply the test/design and where is the test/design typically misapplied (i.e., common pitfalls)? An analysis of error and uncertainty will also be explored using biological studies and associated modeling as a specific use case. Learning Objectives: 1. Understand common experimental testing and clinical trial designs, what questions they can answer, and how to interpret the results. 2. Determine where specific statistical tests are appropriate and identify common pitfalls. 3. Understand how uncertainty and error are addressed in biological testing and associated biological modeling.

  19. WE-E-201-01: Use and Abuse of Common Statistics in Radiological Physics

    International Nuclear Information System (INIS)

    Labby, Z.

    2015-01-01

    Physicists are often expected to have a solid grounding in experimental design and statistical analysis, sometimes filling in when biostatisticians or other experts are not available for consultation. Unfortunately, graduate education on these topics is seldom emphasized and few opportunities for continuing education exist. Clinical physicists incorporate new technology and methods into their practice based on published literature. A poor understanding of experimental design and analysis could result in inappropriate use of new techniques. Clinical physicists also improve current practice through quality initiatives that require sound experimental design and analysis. Academic physicists with a poor understanding of design and analysis may produce ambiguous (or misleading) results. This can result in unnecessary rewrites, publication rejection, and experimental redesign (wasting time, money, and effort). This symposium will provide a practical review of error and uncertainty, common study designs, and statistical tests. Instruction will primarily focus on practical implementation through examples and answer questions such as: where would you typically apply the test/design and where is the test/design typically misapplied (i.e., common pitfalls)? An analysis of error and uncertainty will also be explored using biological studies and associated modeling as a specific use case. Learning Objectives: 1. Understand common experimental testing and clinical trial designs, what questions they can answer, and how to interpret the results. 2. Determine where specific statistical tests are appropriate and identify common pitfalls. 3. Understand how uncertainty and error are addressed in biological testing and associated biological modeling.

  20. Design criteria and fabrication in-pile test section of HANARO fuel test loop

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J. Y.

    1997-10-01

    A steady-state fuel test loop will be installed in HANARO to support the development and improvement of advanced fuels and materials through irradiation tests. The objective of this study is to determine the design criteria and technical specifications of the in-pile test section and to specify its manufacturing requirements. The HANARO fuel test loop was designed to accommodate CANDU and PWR fuel testing, and the in-pile section will be manufactured and installed in HANARO. The design criteria and technical specifications of the in-pile test section can be used in the design of fuels and materials for irradiation testing in the in-pile section (IPS) of the HANARO fuel test loop. These results will serve as guidance for the planning and programming of irradiation testing. (author). 12 refs., tabs., figs.

  1. Directions for new developments on statistical design and analysis of small population group trials.

    Science.gov (United States)

    Hilgers, Ralf-Dieter; Roes, Kit; Stallard, Nigel

    2016-06-14

    Most statistical design and analysis methods for clinical trials have been developed and evaluated where at least several hundred patients could be recruited. These methods may not be suitable to evaluate therapies if the sample size is unavoidably small, which is usually termed small populations. The specific sample size cut-off where the standard methods fail needs to be investigated. In this paper, the authors present their view on new developments for the design and analysis of clinical trials in small population groups, where conventional statistical methods may be inappropriate, e.g., because of lack of power or poor adherence to asymptotic approximations due to sample size restrictions. Following the EMA/CHMP guideline on clinical trials in small populations, we consider directions for new developments in the area of statistical methodology for design and analysis of small population clinical trials. We relate the findings to the research activities of three projects, Asterix, IDeAl, and InSPiRe, which have received funding since 2013 within the FP7-HEALTH-2013-INNOVATION-1 framework of the EU. As not all aspects of the wide research area of small population clinical trials can be addressed, we focus on areas where we feel advances are needed and feasible. The general framework of the EMA/CHMP guideline on small population clinical trials stimulates a number of research areas. These serve as the basis for the three projects, Asterix, IDeAl, and InSPiRe, which use various approaches to develop new statistical methodology for design and analysis of small population clinical trials. Small population clinical trials refer to trials with a limited number of patients. Small populations may result from rare diseases or specific subtypes of more common diseases. New statistical methodology needs to be tailored to these specific situations. The main results from the three projects will constitute a useful toolbox for improved design and analysis of small

  2. Common pitfalls in statistical analysis: Understanding the properties of diagnostic tests - Part 1.

    Science.gov (United States)

    Ranganathan, Priya; Aggarwal, Rakesh

    2018-01-01

    In this article in our series on common pitfalls in statistical analysis, we look at some of the attributes of diagnostic tests (i.e., tests which are used to determine whether an individual does or does not have disease). The next article in this series will focus on further issues related to diagnostic tests.
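    As a compact reminder of the attributes such articles cover, here is a worked sketch (our numbers) of the basic test properties computed from a 2 × 2 table of test result against true disease status; note how the predictive values, unlike sensitivity and specificity, depend on prevalence.

        # Diagnostic test properties from a 2x2 table.
        tp, fp = 90, 50     # test positive: with disease / without disease
        fn, tn = 10, 850    # test negative: with disease / without disease

        sensitivity = tp / (tp + fn)   # 0.90: P(test+ | disease)
        specificity = tn / (tn + fp)   # 0.94: P(test- | no disease)
        ppv = tp / (tp + fp)           # 0.64: P(disease | test+), prevalence-dependent
        npv = tn / (tn + fn)           # 0.99: P(no disease | test-)
        print(sensitivity, specificity, ppv, npv)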

  3. Testing University Rankings Statistically: Why this Perhaps is not such a Good Idea after All. Some Reflections on Statistical Power, Effect Size, Random Sampling and Imaginary Populations

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2012-01-01

    In this paper we discuss and question the use of statistical significance tests in relation to university rankings as recently suggested. We outline the assumptions behind and interpretations of statistical significance tests and relate this to examples from the recent SCImago Institutions Rankin...

  4. A Comparison of Several Statistical Tests of Reciprocity of Self-Disclosure.

    Science.gov (United States)

    Dindia, Kathryn

    1988-01-01

    Reports the results of a study that used several statistical tests of reciprocity of self-disclosure. Finds little evidence for reciprocity of self-disclosure, and concludes that either reciprocity is an illusion, or that different or more sophisticated methods are needed to detect it. (MS)

  5. MITG post-test analysis and design improvements

    International Nuclear Information System (INIS)

    Schock, A.

    1983-01-01

    The design, performance analysis, and key attributes of the Modular Isotopic Thermoelectric Generator (MITG) were described in a 1981 IECEC paper; and the design, fabrication, and testing of prototypical MITG test assemblies were described in preceding papers in these proceedings. Each test assembly simulated a typical modular slice of the flight generator. The present paper describes a detailed thermal-stress analysis, which identified the causes of stress-related problems observed during the tests. It then describes how additional analyses were used to evaluate design changes to alleviate those problems. Additional design improvements are discussed in the next paper in these proceedings, which also describes revised fabrication procedures and updated performance estimates for the generator

  6. Introduction to statistics using interactive MM*Stat elements

    CERN Document Server

    Härdle, Wolfgang Karl; Rönz, Bernd

    2015-01-01

    MM*Stat, together with its enhanced online version with interactive examples, offers a flexible tool that facilitates the teaching of basic statistics. It covers all the topics found in introductory descriptive statistics courses, including simple linear regression and time series analysis, the fundamentals of inferential statistics (probability theory, random sampling and estimation theory), and inferential statistics itself (confidence intervals, testing). MM*Stat is also designed to help students rework class material independently and to promote comprehension with the help of additional examples. Each chapter starts with the necessary theoretical background, which is followed by a variety of examples. The core examples are based on the content of the respective chapter, while the advanced examples, designed to deepen students’ knowledge, also draw on information and material from previous chapters. The enhanced online version helps students grasp the complexity and the practical relevance of statistical...

  7. Testing the statistical isotropy of large scale structure with multipole vectors

    International Nuclear Information System (INIS)

    Zunckel, Caroline; Huterer, Dragan; Starkman, Glenn D.

    2011-01-01

    A fundamental assumption in cosmology is that of statistical isotropy - that the Universe, on average, looks the same in every direction in the sky. Statistical isotropy has recently been tested stringently using cosmic microwave background data, leading to intriguing results on large angular scales. Here we apply some of the same techniques used in the cosmic microwave background to the distribution of galaxies on the sky. Using the multipole vector approach, where each multipole in the harmonic decomposition of galaxy density field is described by unit vectors and an amplitude, we lay out the basic formalism of how to reconstruct the multipole vectors and their statistics out of galaxy survey catalogs. We apply the algorithm to synthetic galaxy maps, and study the sensitivity of the multipole vector reconstruction accuracy to the density, depth, sky coverage, and pixelization of galaxy catalog maps.

  8. Basic statistical tools in research and data analysis

    Directory of Open Access Journals (Sweden)

    Zulfiqar Ali

    2016-01-01

    Full Text Available Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretation and reporting of the research findings. The statistical analysis gives meaning to the meaningless numbers, thereby breathing life into a lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of the sample size estimation, power analysis and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.

  9. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  10. NET test blanket design and remote maintenance

    International Nuclear Information System (INIS)

    Holloway, C.; Hubert, P.

    1991-01-01

    The NET machine has three horizontal ports reserved for testing tritium breeding blanket designs during the physics phase and possibly five during the technology phase. The design of the ports and test blankets is modular to accept a range of blanket options, provide radiation shielding and allow routine replacement. Radiation levels during replacement or maintenance require that all operations be carried out remotely. The paper describes the problems overcome in providing a port design which includes attachment to the vacuum vessel with double vacuum seals, an integrated cooled first wall and support guides for the test blanket module. The method selected to remotely replace the test module whilst controlling the spread of contamination is also addressed. The paper concludes that the provision of a test blanket facility based on the NET machine design is feasible. (orig.)

  11. TRANSIT TIMING OBSERVATIONS FROM KEPLER. VI. POTENTIALLY INTERESTING CANDIDATE SYSTEMS FROM FOURIER-BASED STATISTICAL TESTS

    International Nuclear Information System (INIS)

    Steffen, Jason H.; Ford, Eric B.; Rowe, Jason F.; Borucki, William J.; Bryson, Steve; Caldwell, Douglas A.; Jenkins, Jon M.; Koch, David G.; Sanderfer, Dwight T.; Seader, Shawn; Twicken, Joseph D.; Fabrycky, Daniel C.; Holman, Matthew J.; Welsh, William F.; Batalha, Natalie M.; Ciardi, David R.; Kjeldsen, Hans; Prša, Andrej

    2012-01-01

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.

  12. Transit timing observations from Kepler. VI. Potentially interesting candidate systems from fourier-based statistical tests

    DEFF Research Database (Denmark)

    Steffen, J.H.; Ford, E.B.; Rowe, J.F.

    2012-01-01

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.

  13. Price limits and stock market efficiency: Evidence from rolling bicorrelation test statistic

    International Nuclear Information System (INIS)

    Lim, Kian-Ping; Brooks, Robert D.

    2009-01-01

    Using the rolling bicorrelation test statistic, the present paper compares the efficiency of stock markets from China, Korea and Taiwan in selected sub-periods with different price limits regimes. The statistical results do not support the claims that restrictive price limits and price limits per se are jeopardizing market efficiency. However, the evidence does not imply that price limits have no effect on the price discovery process but rather suggests that market efficiency is not merely determined by price limits.
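    A hedged sketch of the windowed (Hinich-type) bicorrelation statistic that such rolling tests are built on: for standardized returns z_t it accumulates squared third-order sample correlations C(r, s) over lag pairs. Normalizations differ across papers, so treat this as illustrative rather than the authors' exact statistic.

        # Portmanteau bicorrelation statistic over one window of returns.
        import numpy as np

        def bicorrelation_stat(x, max_lag=5):
            z = (x - x.mean()) / x.std()
            n = len(z)
            h = 0.0
            for s in range(2, max_lag + 1):
                for r in range(1, s):
                    c = np.mean(z[: n - s] * z[r : n - s + r] * z[s:])
                    h += (n - s) * c * c
            return h   # approx. chi-square with L(L-1)/2 df under the null

        rng = np.random.default_rng(3)
        print(bicorrelation_stat(rng.normal(size=500)))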

  14. Learning Axes and Bridging Tools in a Technology-Based Design for Statistics

    Science.gov (United States)

    Abrahamson, Dor; Wilensky, Uri

    2007-01-01

    We introduce a design-based research framework, "learning axes and bridging tools," and demonstrate its application in the preparation and study of an implementation of a middle-school experimental computer-based unit on probability and statistics, "ProbLab" (Probability Laboratory, Abrahamson and Wilensky 2002 [Abrahamson, D., & Wilensky, U.…

  15. A testing procedure for wind turbine generators based on the power grid statistical model

    DEFF Research Database (Denmark)

    Farajzadehbibalan, Saber; Ramezani, Mohammad Hossein; Nielsen, Peter

    2017-01-01

    In this study, a comprehensive test procedure is developed to test wind turbine generators with a hardware-in-the-loop setup. The procedure employs the statistical model of the power grid considering the restrictions of the test facility and system dynamics. Given the model in the latent space

  16. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Science.gov (United States)

    Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical

  17. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Directory of Open Access Journals (Sweden)

    David C Pavlacky

    Full Text Available Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous

  18. Person Fit Based on Statistical Process Control in an Adaptive Testing Environment. Research Report 98-13.

    Science.gov (United States)

    van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.

    Person-fit research in the context of paper-and-pencil tests is reviewed, and some specific problems regarding person fit in the context of computerized adaptive testing (CAT) are discussed. Some new methods are proposed to investigate person fit in a CAT environment. These statistics are based on Statistical Process Control (SPC) theory. A…
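    One way SPC ideas translate to person fit in CAT is a CUSUM of standardized item-score residuals, flagging an examinee when either cumulative sum drifts past a bound. The following is our illustrative sketch, not one of the report's proposed statistics; the reference value k and bound h are arbitrary.

        # CUSUM-style person-fit check over a sequence of scored CAT items.
        import numpy as np

        def cusum_person_fit(scores, p_correct, k=0.5, h=3.0):
            """scores: 0/1 responses; p_correct: model-implied success probabilities."""
            resid = (scores - p_correct) / np.sqrt(p_correct * (1 - p_correct))
            c_plus = c_minus = 0.0
            for r in resid:
                c_plus = max(0.0, c_plus + r - k)
                c_minus = min(0.0, c_minus + r + k)
                if c_plus > h or c_minus < -h:
                    return False    # flag: responses inconsistent with the model
            return True

        print(cusum_person_fit(np.array([1, 1, 0, 1, 0, 1]), np.full(6, 0.6)))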

  19. Outcomes Definitions and Statistical Tests in Oncology Studies: A Systematic Review of the Reporting Consistency.

    Science.gov (United States)

    Rivoirard, Romain; Duplay, Vianney; Oriol, Mathieu; Tinquaut, Fabien; Chauvin, Franck; Magne, Nicolas; Bourmaud, Aurelie

    2016-01-01

    Quality of reporting for Randomized Clinical Trials (RCTs) in oncology has been analyzed in several systematic reviews, but, in this setting, there is a paucity of data on outcome definitions and the consistency of reporting of statistical tests in RCTs and Observational Studies (OBS). The objective of this review was to describe those two reporting aspects for OBS and RCTs in oncology. From a list of 19 medical journals, three were retained for analysis after a random selection: British Medical Journal (BMJ), Annals of Oncology (AoO) and British Journal of Cancer (BJC). All original articles published between March 2009 and March 2014 were screened. Only studies whose main outcome was accompanied by a corresponding statistical test were included in the analysis. Studies based on censored data were excluded. The primary outcome was to assess the quality of reporting of the description of the primary outcome measure in RCTs and of the variables of interest in OBS. A logistic regression was performed to identify covariates of studies potentially associated with concordance of tests between the Methods and Results sections. 826 studies were included in the review, and 698 were OBS. Variables were described in the Methods section for all OBS studies, and the primary endpoint was clearly detailed in the Methods section for 109 RCTs (85.2%). 295 OBS (42.2%) and 43 RCTs (33.6%) had perfect agreement between the statistical tests reported in the Methods and Results sections. In multivariable analysis, the variable "number of included patients in study" was associated with test consistency: the aOR (adjusted odds ratio) for the third group compared to the first group was 0.52 [0.31-0.89] (P value = 0.009). Variables in OBS and primary endpoints in RCTs are reported and described with high frequency. However, consistency of statistical tests between the Methods and Results sections of OBS was not always observed. Therefore, we encourage authors and peer reviewers to verify the consistency of statistical tests in oncology studies.

  20. Association testing for next-generation sequencing data using score statistics

    DEFF Research Database (Denmark)

    Skotte, Line; Korneliussen, Thorfinn Sand; Albrechtsen, Anders

    2012-01-01

    ... Methods that take the uncertainty of genotype calls into account have been proposed; most require numerical optimization, which for large-scale data is not always computationally feasible. We show that using a score statistic for the joint likelihood of observed phenotypes and observed sequencing data provides an attractive approach to association testing for next-generation sequencing data. The joint model accounts for the genotype classification uncertainty via the posterior probabilities of the genotypes given the observed sequencing data, which gives the approach higher power than methods based on called genotypes. This strategy remains computationally feasible due to the use of score statistics. As part of the joint likelihood, we model the distribution of the phenotypes using a generalized linear model framework, which works for both quantitative and discrete phenotypes. Thus, the method presented here is applicable to case-control studies...
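    The computational appeal of the score statistic can be seen in a stripped-down sketch: replace the unobserved genotype by its posterior mean (dosage) and test a single parameter using only a fit under the null. This simplified quantitative-trait version with no covariates is ours; the paper's GLM framework and genotype posteriors are richer.

        # Score test for association using genotype dosages (quantitative trait).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        n = 1000
        dosage = rng.uniform(0, 2, size=n)          # posterior mean genotypes from reads
        y = 0.2 * dosage + rng.normal(size=n)       # phenotype with a true effect

        resid = y - y.mean()                         # residuals under the null model
        u = np.sum(resid * dosage)                   # score for the genetic effect
        v = np.var(y) * np.sum((dosage - dosage.mean()) ** 2)   # variance estimate
        score_stat = u * u / v                       # chi-square, 1 df under the null
        print(f"p = {stats.chi2.sf(score_stat, df=1):.3g}")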

  1. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…
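    The flavor of a "what if" analysis can be shown in a few lines: hold the observed effect size fixed and recompute the two-sample t test at hypothetical sample sizes to see where significance would be reached. The effect size and sample sizes below are illustrative, not taken from the paper.

        # "What if" analysis: p-values at hypothetical n, with effect size fixed.
        import numpy as np
        from scipy import stats

        d = 0.30                                      # fixed Cohen's d from the study
        for n_per_group in (20, 40, 60, 88, 120):
            t = d * np.sqrt(n_per_group / 2)          # t for two equal groups, given d
            p = 2 * stats.t.sf(abs(t), df=2 * n_per_group - 2)
            print(f"n/group = {n_per_group:3d}  p = {p:.3f}")   # crosses .05 near n = 88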

  2. Designing Tasks to Examine Mathematical Knowledge for Teaching Statistics for Primary Teachers

    Science.gov (United States)

    Siswono, T. Y. E.; Kohar, A. W.; Hartono, S.

    2018-01-01

    Mathematical knowledge for teaching (MKT) is viewed as the fuel for orchestrating the teaching and learning process. Understanding MKT, especially for primary teachers, makes it possible to predict the success of an instructional goal and to analyze its weaknesses and possible improvements. To explore what teachers think about subject matter, pedagogical terms, and appropriate curriculum, a task is needed that can identify the teachers' MKT, including subject matter knowledge (SMK) and pedagogical content knowledge (PCK). This study aims to design an appropriate task for exploring primary teachers' MKT for statistics in primary school. We designed six tasks to examine 40 primary teachers' MKT, each respectively representing the categories of SMK (common content knowledge (CCK) and specialised content knowledge (SCK)) and PCK (knowledge of content and students (KCS), knowledge of content and teaching (KCT), and knowledge of content and curriculum (KCC)). While MKT has received much attention from a number of scholars, we hypothesize knowledge of content and culture (KCCl) as an additional domain of MKT. Thus, we added one more task examining how the primary teachers used their knowledge of content (KC) with regard to MKT in statistics. Some examples of the teachers' responses to the tasks are discussed and some refinements of the MKT tasks in statistics for primary teachers are suggested.

  3. FRAMES Software System: Linking to the Statistical Package R

    Energy Technology Data Exchange (ETDEWEB)

    Castleton, Karl J.; Whelan, Gene; Hoopes, Bonnie L.

    2006-12-11

    This document provides requirements, design, data-file specifications, test plan, and Quality Assurance/Quality Control protocol for the linkage between the statistical package R and the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) Versions 1.x and 2.0. The requirements identify the attributes of the system. The design describes how the system will be structured to meet those requirements. The specification presents the specific modifications to FRAMES to meet the requirements and design. The test plan confirms that the basic functionality listed in the requirements (black box testing) actually functions as designed, and QA/QC confirms that the software meets the client’s needs.

  4. Design of nuclear fuel cells by means of a statistical analysis and a sensibility study

    International Nuclear Information System (INIS)

    Jauregui C, V.; Castillo M, J. A.; Ortiz S, J. J.; Montes T, J. L.; Perusquia del C, R.

    2013-10-01

    This work contains the results of the statistical analysis performed to study the performance of nuclear fuel cells, considering the frequencies of selection of the fuel rods used in their design. The selection of the rods used for the cell designs is of three types: in the first, the plotted frequencies resemble a normal distribution; in the second, the frequency graph is of an inverted chi-square (χ²) type; and in the last, the rods are chosen at random. The heuristic techniques used for the cell designs were neural networks, ant colonies, and a hybrid of scatter search and path relinking. The statistical analysis of the cell designs considered the local power peaking factor and the neutron infinite multiplication factor (k∞) of the cell. On the other hand, the performance of the designed cells was analyzed by verifying the positions of the rods containing gadolinium. The results show that it is possible to design nuclear fuel cells with good performance when considering the frequency of the rods used in their design. (Author)

  5. Overview of the IFMIF test cell design

    International Nuclear Information System (INIS)

    Moeslang, A.; Daum, E.; Jitsukawa, S.; Noda, K.; Viola, R.

    1996-01-01

    The Conceptual Design Activity (CDA) for the International Fusion Materials Irradiation Facility (IFMIF) has entered its second and final year, and an outline design has been developed. Initial evaluations of the potential of this high flux, high intensity D-Li source have shown that the main materials testing needs can be fulfilled. According to these needs, Vertical Test Assemblies will accommodate test modules for the high flux (0.5 liter, 20 dpa/a, 250-1000 C), the medium flux (6 liter, 1-20 dpa/a, 250-1000 C), the low flux (7.5 liter, 0.1-1 dpa/a), and the very low flux (> 100 liter, 0.01-0.1 dpa/a) regions. Detailed test matrices have been defined for the high and medium flux regions, showing that on the basis of small specimen test technologies, a database for an engineering design of an advanced fusion reactor (DEMO) can be established for a variety of structural materials and ceramic breeders. The design concepts for the Test Cell, including test assemblies, remote handling equipment and Hot Cell Facilities with capacity for investigating all irradiation specimens at the IFMIF site are described

  6. Optimum design of automobile seat using statistical design support system; Tokeiteki sekkei shien system no jidoshayo seat eno tekiyo

    Energy Technology Data Exchange (ETDEWEB)

    Kashiwamura, T [NHK Spring Co. Ltd., Yokohama (Japan); Shiratori, M; Yu, Q; Koda, I [Yokohama National University, Yokohama (Japan)

    1997-10-01

    The authors proposed a new practical optimum design method called the statistical design support system, which consists of five steps: effectivity analysis, reanalysis, evaluation of dispersion, optimization, and evaluation of structural reliability. In this study, the authors applied the present system to the analysis and optimum design of an automobile seat frame subjected to crushing. This study showed that the present method can be applied to complex nonlinear problems involving large deformation and material nonlinearity as well as impact. It was shown that the optimum design of the seat frame was solved easily using the present system. 6 refs., 5 figs., 5 tabs.

  7. Statistical experimental design approach in coal briquetting

    Energy Technology Data Exchange (ETDEWEB)

    B. Salopek; S. Pfaff; R. Rajic

    2003-07-01

    The influence of pressure, temperature, humidity and granulation of the coal upon the resistance to pressure and the water absorption of the briquettes has been tested, with the aim of examining how each of the two dependent variables changes depending on the values assumed by any of the four independent variables, and which of the mentioned independent variables significantly influences the dependent ones. A full factorial design with 16 experiments and a central composite design with 27 experiments have been applied. The influence of the independent variables upon the dependent ones has been examined by applying the analysis of variance. The influence values of the individual factors and their interactions upon the dependent variables have been stated, as well as the coefficients of the curvilinear equation. 2 refs., 2 figs., 5 tabs.

  8. Engine testing the design, building, modification and use of powertrain test facilities

    CERN Document Server

    MARTYR, A J

    2012-01-01

    Engine Testing is a unique, well-organized and comprehensive collection of the different aspects of engine and vehicle testing equipment and infrastructure for anyone involved in facility design and management, physical testing and the maintenance, upgrading and trouble shooting of testing equipment. Designed so that its chapters can all stand alone to be read in sequence or out of order as needed, Engine Testing is also an ideal resource for automotive engineers required to perform testing functions whose jobs do not involve engine testing on a regular basis. This recognized standard refer

  9. Mechanical testing - designers need: a view at component design and operations stages

    International Nuclear Information System (INIS)

    Shrivastava, S.K.

    2007-01-01

    Mechanical design of any component requires knowledge of the values of various material properties which the designer(s) make(s) use of in designing the component. In the design of nuclear power plant components, this assumes even greater importance in view of the degree of precision and accuracy with which the values of various properties are required. This in turn demands high accuracy in testing machines and measuring methods. In this paper, an attempt has been made to bring out how, even from the conventional tension test, the designer today looks for the availability of the engineering stress-strain diagram, preferably through digitally acquired data points during the test, from which he can derive values of the Ramberg-Osgood parameters for use in fracture mechanics based analysis. An attempt has also been made to provide an account of some of the important fracture mechanics related tests which have evolved in the last two decades, and the designer's need for the evolution of simple test techniques to measure many more fracture mechanics related parameters as well as to cater to constraints such as the shape and size of material available from the components. Nuclear power plants have been primarily kept in view, and ASME Section III NB, ASME Section XI and relevant ASTM Standards have been taken as standard references. Further, pressure retaining materials of pressure vessels/Reactor Pressure Vessels have been kept in view. (author)

  10. A case study on the design and development of minigames for research methods and statistics

    Directory of Open Access Journals (Sweden)

    P. Van Rosmalen

    2014-08-01

    Full Text Available Research methodology involves logical reasoning and critical thinking skills which are core competences in developing a more sophisticated understanding of the world. Acquiring expertise in research methods and statistics is not easy and poses a significant challenge for many students. The subject material is challenging because it is highly abstract and complex and requires the coordination of different but inter-related knowledge and skills that are all necessary to develop a coherent and usable skills base in this area. Additionally, while many students embrace research methods enthusiastically, others find the area dry, abstract and boring. In this paper we discuss the design and the first evaluation of a set of mini-games to practice research methods. Games are considered to be engaging and allow students to test out scenarios which provide concrete examples in a way that they typically only do once they are out in the field. The design of a game is a complex task. First, we describe how we used cognitive task analysis to identify the knowledge and competences required to develop a comprehensive and usable understanding of research methods. Next, we describe the games designed and how 4C-ID, an instructional design model, was used to underpin the games with a sound instructional design basis. Finally, the evaluation approach is discussed and how the findings of the first evaluation phase were used to improve the games.

  11. Water NSTF Design, Instrumentation, and Test Planning

    Energy Technology Data Exchange (ETDEWEB)

    Lisowski, Darius D.; Gerardi, Craig D.; Hu, Rui; Kilsdonk, Dennis J.; Bremer, Nathan C.; Lomperski, Stephen W.; Kraus, Adam R.; Bucknor, Matthew D.; Lv, Qiuping; Farmer, Mitchell T.

    2017-08-01

    The following report serves as a formal introduction to the water-based Natural convection Shutdown heat removal Test Facility (NSTF) program at Argonne. Since 2005, this US Department of Energy (DOE) sponsored program has conducted large scale experimental testing to generate high-quality and traceable validation data for guiding design decisions of the Reactor Cavity Cooling System (RCCS) concept for advanced reactor designs. The most recent facility iteration, and focus of this report, is the operation of a 1/2 scale model of a water-RCCS concept. Several features of the NSTF prototype align with the conceptual design that has been publicly released for the AREVA 625 MWt SC-HTGR. The design of the NSTF also retains all aspects common to a fundamental boiling water thermosiphon, and thus is well poised to provide necessary experimental data to advance basic understanding of natural circulation phenomena and contribute to computer code validation. Overall, the NSTF program operates to support the DOE vision of aiding US vendors in design choices of future reactor concepts, advancing the maturity of codes for licensing, and ultimately developing safe and reliable reactor technologies. In this report, the top-level program objectives, testing requirements, and unique considerations for the water cooled test assembly are discussed, and presented in sufficient depth to support defining the program’s overall scope and purpose. The proposed 6-year testing program is then introduced, with an outline of the specific strategy and testing plan for facility operations. The proposed testing plan has been developed to meet the top-level objective of conducting high-quality test operations that span a broad range of single- and two-phase operating conditions. Details of characterization, baseline test cases, accident scenarios, and parametric variations are provided, including discussions of later-stage test cases that examine the influence of geometric

  12. Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design

    Science.gov (United States)

    Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff

    2016-01-01

    Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strengths and weaknesses in recent large-scale implementations, there is no simple answer to the question of which design is better because different…

  13. Prototype spent-fuel canister design, analysis, and test

    International Nuclear Information System (INIS)

    Leisher, W.B.; Eakes, R.G.; Duffey, T.A.

    1982-03-01

    Sandia National Laboratories was asked by the US Energy Research and Development Administration (now US Department of Energy) to design the spent fuel shipping cask system for the Clinch River Breeder Reactor Plant (CRBRP). As a part of this task, a canister which holds liquid sodium and the spent fuel assembly was designed, analyzed, and tested. The canister body survived the regulatory Type-B 9.1-m (30-ft) drop test with no apparent leakage. However, the commercially available metal seal used in this design leaked after the tests. This report describes the design approach, analysis, and prototype canister testing. Recommended work for completing the design, when funding is available, is included

  14. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    NARCIS (Netherlands)

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is evident that Fisher’s statistic is more sensitive to smaller p-values than to larger ones, and a single small p-value may overrule the other p-values
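
    As an illustration of the sensitivity this abstract criticizes, here is a minimal sketch of Fisher's method; the p-value sets are invented for the demonstration (SciPy also provides this test as scipy.stats.combine_pvalues).

    ```python
    import numpy as np
    from scipy import stats

    def fisher_combined(pvals):
        """Fisher's statistic -2*sum(ln p_i); chi-squared with 2k df under H0."""
        pvals = np.asarray(pvals, dtype=float)
        stat = -2.0 * np.sum(np.log(pvals))
        return stat, stats.chi2.sf(stat, df=2 * len(pvals))

    # A single tiny p-value overrules four unremarkable ones ...
    print(fisher_combined([1e-6, 0.6, 0.7, 0.8, 0.9]))  # combined p ~ 0.001
    # ... while five moderately small p-values may not reach significance
    print(fisher_combined([0.2, 0.2, 0.2, 0.2, 0.2]))   # combined p ~ 0.10
    ```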

  15. Mathematical statistics

    CERN Document Server

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  16. SOCR: Statistics Online Computational Resource

    Directory of Open Access Journals (Sweden)

    Ivo D. Dinov

    2006-10-01

    Full Text Available The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

  17. Functional Testing of Wireless Sensor Node Designs

    DEFF Research Database (Denmark)

    Virk, Kashif M.; Madsen, Jan

    2007-01-01

    Wireless sensor networks are networked embedded computer systems with stringent power, performance, cost and form-factor requirements along with numerous other constraints related to their pervasiveness and ubiquitousness. Therefore, only a systematic design methodology coupled with an efficient...... test approach can enable their conformance to design and deployment specifications. We discuss off-line, hierarchical, functional testing of complete wireless sensor nodes containing configurable logic through a combination of FPGA-based board test and Software-Based Self-Test (SBST) techniques...

  18. Statistical molecular design of balanced compound libraries for QSAR modeling.

    Science.gov (United States)

    Linusson, A; Elofsson, M; Andersson, I E; Dahlgren, M K

    2010-01-01

    A fundamental step in preclinical drug development is the computation of quantitative structure-activity relationship (QSAR) models, i.e. models that link chemical features of compounds with activities towards a target macromolecule associated with the initiation or progression of a disease. QSAR models are computed by combining information on the physicochemical and structural features of a library of congeneric compounds, typically assembled from two or more building blocks, and biological data from one or more in vitro assays. Since the models provide information on features affecting the compounds' biological activity they can be used as guides for further optimization. However, in order for a QSAR model to be relevant to the targeted disease, and drug development in general, the compound library used must contain molecules with balanced variation of the features spanning the chemical space believed to be important for interaction with the biological target. In addition, the assays used must be robust and deliver high quality data that are directly related to the function of the biological target and the associated disease state. In this review, we discuss and exemplify the concept of statistical molecular design (SMD) in the selection of building blocks and final synthetic targets (i.e. compounds to synthesize) to generate information-rich, balanced libraries for biological testing and computation of QSAR models.

  19. A Unified Statistical Rain-Attenuation Model for Communication Link Fade Predictions and Optimal Stochastic Fade Control Design Using a Location-Dependent Rain-Statistic Database

    Science.gov (United States)

    Manning, Robert M.

    1990-01-01

    A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics database is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.

  20. Statistical calculation of hot channel factors

    International Nuclear Information System (INIS)

    Farhadi, K.

    2007-01-01

    It is a conventional practice in the design of nuclear reactors to introduce hot channel factors to allow for spatial variations of power generation and flow distribution. Consequently, it is not enough to be able to calculate the nominal temperature distributions of the fuel element, cladding, coolant, and central fuel. Indeed, one must be able to calculate the probability that the imposed temperature or heat flux limits in the entire core are not exceeded. In this paper, statistical methods are used to calculate hot channel factors for the particular case of a heterogeneous Material Testing Reactor (MTR), and the results obtained from different statistical methods are compared. It is shown that among the statistical methods available, the semi-statistical method is the most reliable one.
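
    The contrast between the combination rules compared in such studies can be sketched briefly: a fully multiplicative rule treats every subfactor as a simultaneous worst case, a fully statistical rule combines independent uncertainties in quadrature, and the semi-statistical method is a compromise that multiplies the systematic subfactors while combining the random ones in quadrature. The subfactor values below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical hot channel subfactors (each is 1 + a relative uncertainty)
    f = np.array([1.10, 1.08, 1.05, 1.12])

    f_mult = np.prod(f)                           # worst-case product, ~1.40
    f_stat = 1.0 + np.sqrt(np.sum((f - 1.0)**2))  # quadrature combination, ~1.18

    print(f"multiplicative {f_mult:.3f}, statistical {f_stat:.3f}")
    ```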

  1. Descriptive and inferential statistical methods used in burns research.

    Science.gov (United States)

    Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars

    2010-05-01

    Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11 (22%) were randomised controlled trials, 18 (35%) were cohort studies, 11 (22%) were case control studies and 11 (22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49 (96%) articles. Data dispersion was calculated by standard deviation in 30 (59%). Standard error of the mean was quoted in 19 (37%). The statistical software product was named in 33 (65%). Of the 49 articles that used inferential statistics, the tests were named in 47 (96%). The six most common tests used (Student's t-test (53%), analysis of variance/covariance (33%), chi-squared test (27%), Wilcoxon and Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43 (88%) and the exact significance levels were reported in 28 (57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals
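
    The six tests found to dominate the journal's articles are all one-line calls in standard statistical software; a sketch with invented data (SciPy assumed available):

    ```python
    from scipy import stats

    # Hypothetical continuous outcomes for two burn-care groups
    group_a = [12.1, 14.3, 11.8, 15.2, 13.7, 12.9]
    group_b = [16.4, 15.1, 17.8, 14.9, 16.2, 18.0]

    t, p_t = stats.ttest_ind(group_a, group_b)     # Student's t-test
    u, p_u = stats.mannwhitneyu(group_a, group_b)  # Mann-Whitney U test

    # Hypothetical 2x2 counts, e.g. infection vs no infection by treatment
    table = [[8, 22], [17, 13]]
    chi2, p_c, dof, _ = stats.chi2_contingency(table)  # chi-squared test
    odds, p_f = stats.fisher_exact(table)              # Fisher's exact test

    print(p_t, p_u, p_c, p_f)
    ```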

  2. IEEE Std 101-1987: IEEE guide for the statistical analysis of thermal life test data

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This revision of IEEE Std 101-1972 describes statistical analyses for data from thermally accelerated aging tests. It explains the basis and use of statistical calculations for an engineer or scientist. Accelerated test procedures usually call for a number of specimens to be aged at each of several temperatures appreciably above normal operating temperatures. High temperatures are chosen to produce specimen failures (according to specified failure criteria) in typically one week to one year. The test objective is to determine the dependence of median life on temperature from the data, and to estimate, by extrapolation, the median life to be expected at service temperature. This guide presents methods for analyzing such data and for comparing test data on different materials
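
    The core calculation the guide supports is an Arrhenius regression: log median life against reciprocal absolute temperature, extrapolated down to the service temperature. A minimal sketch with hypothetical lives and temperatures; the guide itself additionally prescribes confidence limits and comparison procedures that this sketch omits.

    ```python
    import numpy as np

    # Hypothetical median lives (hours) at accelerated aging temperatures (deg C)
    temp_C = np.array([180.0, 200.0, 220.0])
    life_h = np.array([8760.0, 2190.0, 650.0])

    x = 1.0 / (temp_C + 273.15)   # Arrhenius abscissa, 1/K
    y = np.log10(life_h)

    b, a = np.polyfit(x, y, 1)    # log10(life) = a + b/T
    T_service = 120.0 + 273.15
    print(f"extrapolated median life at 120 C: {10 ** (a + b / T_service):,.0f} h")
    ```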

  3. RAMI strategies in the IFMIF Test Facilities design

    Energy Technology Data Exchange (ETDEWEB)

    Abal, Javier, E-mail: javier.abal@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Dies, Javier [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Arroyo, José Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, 28040 Madrid (Spain); Bargalló, Enric [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Casal, Natalia; García, Ángela [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, 28040 Madrid (Spain); Martínez, Gonzalo; Tapia, Carlos; De Blas, Alfredo [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC) Barcelona-Tech, Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, 28040 Madrid (Spain)

    2013-10-15

    Highlights: • We have implemented fault tolerant design strategies so that the strong availability requirements are met. • The evolution to the present design of the signal and cooling lines inside the TTC has also been compared. • The RAMI analyses have demonstrated a strong capability in being a complementary tool in the design of IFMIF Test Facilities. -- Abstract: In this paper, a RAMI analysis of the different stages in Test Facilities (TF) design is described. The comparison between the availability results has been a milestone not only to evaluate the major unavailability contributors in the updates but also to implement fault tolerant design strategies when possible. These strategies encompass a wide range of design activities: from the definition of degraded modes of operation in the Test Facilities to specific modifications in the test modules in order to guarantee their fail safe operation.

  4. RAMI strategies in the IFMIF Test Facilities design

    International Nuclear Information System (INIS)

    Abal, Javier; Dies, Javier; Arroyo, José Manuel; Bargalló, Enric; Casal, Natalia; García, Ángela; Martínez, Gonzalo; Tapia, Carlos; De Blas, Alfredo; Mollá, Joaquín; Ibarra, Ángel

    2013-01-01

    Highlights: • We have implemented fault tolerant design strategies so that the strong availability requirements are met. • The evolution to the present design of the signal and cooling lines inside the TTC has also been compared. • The RAMI analyses have demonstrated a strong capability in being a complementary tool in the design of IFMIF Test Facilities. -- Abstract: In this paper, a RAMI analysis of the different stages in Test Facilities (TF) design is described. The comparison between the availability results has been a milestone not only to evaluate the major unavailability contributors in the updates but also to implement fault tolerant design strategies when possible. These strategies encompass a wide range of design activities: from the definition of degraded modes of operation in the Test Facilities to specific modifications in the test modules in order to guarantee their fail safe operation

  5. Application of statistical methods to the testing of nuclear counting assemblies

    International Nuclear Information System (INIS)

    Gilbert, J.P.; Friedling, G.

    1965-01-01

    This report describes the application of the hypothesis test theory to the control of the 'statistical purity' and of the stability of the counting batteries used for measurements on activation detectors in research reactors. The principles involved and the experimental results obtained at Cadarache on batteries operating with the reactors PEGGY and AZUR are given. (authors) [fr

  6. Benefits of statistical molecular design, covariance analysis, and reference models in QSAR: a case study on acetylcholinesterase

    Science.gov (United States)

    Andersson, C. David; Hillgren, J. Mikael; Lindgren, Cecilia; Qian, Weixing; Akfur, Christine; Berg, Lotta; Ekström, Fredrik; Linusson, Anna

    2015-03-01

    Scientific disciplines such as medicinal and environmental chemistry, pharmacology, and toxicology deal with questions related to the effects small organic compounds exert on biological targets and to the compounds' physicochemical properties responsible for these effects. A common strategy in this endeavor is to establish structure-activity relationships (SARs). The aim of this work was to illustrate the benefits of performing a statistical molecular design (SMD) and a proper statistical analysis of the molecules' properties before SAR and quantitative structure-activity relationship (QSAR) analysis. Our SMD followed by synthesis yielded a set of inhibitors of the enzyme acetylcholinesterase (AChE) that had very few inherent dependencies between the substructures in the molecules. If such dependencies exist, they cause severe errors in SAR interpretation and predictions by QSAR models, and leave a set of molecules less suitable for future decision-making. In our study, the SAR and QSAR models could show which molecular substructures and physicochemical features were advantageous for AChE inhibition. Finally, the QSAR model was used for the prediction of AChE inhibition by an external prediction set of molecules. The accuracy of these predictions was assessed by statistical significance tests and by comparisons to simple but relevant reference models.

  7. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most

  8. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    Science.gov (United States)

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently assign newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
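
    The paper's randomization rates come out of a Bayesian computation, but the underlying idea can be sketched with its classical frequentist analogue, Neyman allocation: to minimize the variance of a difference-in-proportions statistic, assign patients to each arm in proportion to that arm's outcome standard deviation. A simplified sketch under that assumption, not the paper's actual algorithm:

    ```python
    import numpy as np

    def optimal_allocation(p1, p2):
        """Fraction of patients assigned to arm 1 that minimizes the variance
        of the estimated difference in response rates (Neyman allocation)."""
        s1 = np.sqrt(p1 * (1 - p1))
        s2 = np.sqrt(p2 * (1 - p2))
        return s1 / (s1 + s2)

    # Hypothetical interim response-rate estimates
    print(optimal_allocation(0.5, 0.1))  # ~0.63 of patients go to arm 1
    ```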

  9. Innovative design of ironing board based on Kansei Engineering and usability test

    Directory of Open Access Journals (Sweden)

    Zhafira Nadia

    2018-01-01

    Full Text Available The laundry business is a popular service business in Indonesia that mostly involves muscular activity in lifting and ironing. This causes workers to suffer muscular fatigue and injury, especially in ironing, which is a static and monotonous activity. Moreover, the ironing board designs available on the market are not easily adjusted to the standing or sitting position of workers. Hence, this paper aimed to design an innovative, adjustable ironing board that minimizes musculoskeletal complaints. A preliminary study conducted with NBM questionnaires found that fatigue was experienced mostly in the back, right upper arm, and calves. The product was designed by Kansei Engineering to satisfy consumers' feelings and assessed with the USE questionnaire to evaluate its usability. An orthogonal array and conjoint analysis were used to incorporate all attribute potentials in developing a new single concept design, and statistical analysis was performed to test the hypotheses. Eighty respondents participated in this study. The result of this study is an innovative ironing board design that was found, at the 5% significance level, to meet consumers' needs of being comfortable, secure, attractive, sturdy, durable, multifunctional and practical.

  10. Hypothesis Designs for Three-Hypothesis Test Problems

    OpenAIRE

    Yan Li; Xiaolong Pu

    2010-01-01

    As a helpful guide for applications, the alternative hypotheses of the three-hypothesis test problems are designed under the required error probabilities and average sample number in this paper. The asymptotic formulas and the proposed numerical quadrature formulas are adopted, respectively, to obtain the hypothesis designs and the corresponding sequential test schemes under the Koopman-Darmois distributions. The example of the normal mean test shows that our methods are qu...

  11. Design standards for experimental and field studies to evaluate diagnostic accuracy of tests for infectious diseases in aquatic animals.

    Science.gov (United States)

    Laurin, E; Thakur, K K; Gardner, I A; Hick, P; Moody, N J G; Crane, M S J; Ernst, I

    2018-05-01

    Design and reporting quality of diagnostic accuracy studies (DAS) are important metrics for assessing utility of tests used in animal and human health. Following standards for designing DAS will assist in appropriate test selection for specific testing purposes and minimize the risk of reporting biased sensitivity and specificity estimates. To examine the benefits of recommending standards, design information from published DAS literature was assessed for 10 finfish, seven mollusc, nine crustacean and two amphibian diseases listed in the 2017 OIE Manual of Diagnostic Tests for Aquatic Animals. Of the 56 DAS identified, 41 were based on field testing, eight on experimental challenge studies and seven on both. Also, we adapted human and terrestrial-animal standards and guidelines for DAS structure for use in aquatic animal diagnostic research. Through this process, we identified and addressed important metrics for consideration at the design phase: study purpose, targeted disease state, selection of appropriate samples and specimens, laboratory analytical methods, statistical methods and data interpretation. These recommended design standards for DAS are presented as a checklist including risk-of-failure points and actions to mitigate bias at each critical step. Adherence to standards when designing DAS will also facilitate future systematic review and meta-analyses of DAS research literature. © 2018 John Wiley & Sons Ltd.

  12. EID - prototype design and user test 2004

    International Nuclear Information System (INIS)

    Welch, Robin; Friberg, Maarten; Nystad, Espen; Teigen, Arild; Veland, Oeystein

    2005-08-01

    programme is to gain insight into how this methodology can contribute to the design of operator displays in the nuclear industry. To do this, it was decided to design a limited number of displays on the FRESH simulator and conduct a user test to examine whether operators would be able to use and accept this type of design. The FRESH EID displays intend to show information and relationships in a graphical form that would require substantially more mental resources to utilize if using the conventional displays. This HWR presents the background for EID, the analysis process, the displays that have been designed, the user test and the outcome of the user test. This first attempt at developing and evaluating an EID has provided both valuable practical lessons learned and promising results for further work. (Author)

  13. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    OpenAIRE

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is evident that Fisher’s statistic is more sensitive to smaller p-values than to larger ones, and a single small p-value may overrule the other p-values and decide the test result. This is, in some cases, viewed as a flaw. In order to overcome this flaw and improve the power of the test, the joint tail probability of a set of p-values is proposed as a ...
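
    A Monte Carlo sketch of one plausible reading of the proposed statistic (the paper's exact definition may differ): the joint probability that ordered uniforms fall below the ordered observed p-values, which uses all p-values jointly rather than letting one tiny value dominate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def joint_tail_prob(pvals, n_sim=200_000):
        """Estimate P(U(1) <= p(1), ..., U(k) <= p(k)) for ordered uniforms
        U(1) <= ... <= U(k); small values argue against the global null."""
        p = np.sort(np.asarray(pvals, dtype=float))
        u = np.sort(rng.random((n_sim, p.size)), axis=1)
        return np.all(u <= p, axis=1).mean()

    print(joint_tail_prob([0.01, 0.04, 0.30, 0.60]))
    print(joint_tail_prob([0.20, 0.30, 0.50, 0.80]))
    ```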

  14. DECADE design and testing status

    International Nuclear Information System (INIS)

    Sincerny, P.; Childers, K.; Goyer, J.; Kortbawi, D.; Roth, I.; Stallings, C.; Dempsey, J.; Schlitt, L.

    1996-01-01

    DECADE is a very high power generator that will be built at Arnold Engineering Development Center in Tullahoma, Tennessee, by the Defense Special Weapons Agency. The full facility consists of 16 modules. Two full power modules (DM1 and DM2) have been built to verify the DECADE design at Physics International. Each module consists of a 570 kJ Marx generator that pulse charges a water transfer capacitor. The transfer capacitor discharges into a water output line through an array of 6 parallel triggered gas switches. The water output line then pulse charges the inductive store/opening switch pulse compression stage. When the opening switch opens, the inductive store discharges into an electron beam bremsstrahlung diode load. The initial testing of the DM1 module has been completed. The electrical design of the module will be presented, including circuit modeling results. The test data from a module will be compared to the modeling results. This test data, including reproducibility and timing accuracy, was used to predict the output from the full system of 16 modules and a subset of 4 unit modules (DECADE Quad). One option under consideration is to deploy a DECADE Quad at the testing center in Tennessee prior to the full 16 module system. The design sketch of the DECADE Quad and the predicted output based on the modeling and the demonstrated module performance is given. (author). 11 figs., 11 refs

  15. DECADE design and testing status

    Energy Technology Data Exchange (ETDEWEB)

    Sincerny, P; Childers, K; Goyer, J; Kortbawi, D; Roth, I; Stallings, C; Dempsey, J [Physics International Company, San Leandro, CA (United States); Schlitt, L [Leland Schlitt Consulting Services, Livermore, CA (United States)

    1997-12-31

    DECADE is a very high power generator that will be built at Arnold Engineering Development Center in Tullahoma, Tennessee, by the Defense Special Weapons Agency. The full facility consists of 16 modules. Two full power modules (DM1 and DM2) have been built to verify the DECADE design at Physics International. Each module consists of a 570 kJ Marx generator that pulse charges a water transfer capacitor. The transfer capacitor discharges into a water output line through an array of 6 parallel triggered gas switches. The water output line then pulse charges the inductive store/opening switch pulse compression stage. When the opening switch opens, the inductive store discharges into an electron beam bremsstrahlung diode load. The initial testing of the DM1 module has been completed. The electrical design of the module will be presented, including circuit modeling results. The test data from a module will be compared to the modeling results. This test data, including reproducibility and timing accuracy, was used to predict the output from the full system of 16 modules and a subset of 4 unit modules (DECADE Quad). One option under consideration is to deploy a DECADE Quad at the testing center in Tennessee prior to the full 16 module system. The design sketch of the DECADE Quad and the predicted output based on the modeling and the demonstrated module performance is given. (author). 11 figs., 11 refs.

  16. Design requirements for the supercritical water oxidation test bed

    International Nuclear Information System (INIS)

    Svoboda, J.M.; Valentich, D.J.

    1994-05-01

    This report describes the design requirements for the supercritical water oxidation (SCWO) test bed that will be located at the Idaho National Engineering Laboratory (INEL). The test bed will process a maximum of 50 gph of waste plus the required volume of cooling water. The test bed will evaluate the performance of a number of SCWO reactor designs. The goal of the project is to select a reactor that can be scaled up for use in a full-size waste treatment facility to process US Department of Energy mixed wastes. EG&G Idaho, Inc. will design and construct the SCWO test bed at the Water Reactor Research Test Facility (WRRTF), located in the northern region of the INEL. Private industry partners will develop and provide SCWO reactors to interface with the test bed. A number of reactor designs will be tested, including a transpiring wall, tube, and vessel-type reactor. The initial SCWO reactor evaluated will be a transpiring wall design. This design requirements report identifies parameters needed to proceed with preliminary and final design work for the SCWO test bed. A flow sheet and Process and Instrumentation Diagrams define the overall process and conditions of service and delineate equipment, piping, and instrumentation sizes and configuration. Codes and standards that govern the safe engineering and design of systems, and guidance that locates and interfaces test bed hardware, are provided. Detailed technical requirements are addressed for design of piping, valves, instrumentation and control, vessels, tanks, pumps, electrical systems, and structural steel. The approach for conducting the preliminary and final designs, and environmental and quality issues influencing the design, are provided

  17. Accuracy statistics in predicting Independent Activities of Daily Living (IADL) capacity with comprehensive and brief neuropsychological test batteries.

    Science.gov (United States)

    Karzmark, Peter; Deutsch, Gayle K

    2018-01-01

    This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological test battery and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics that included measures of sensitivity, specificity, positive and negative predictive power and the positive likelihood ratio were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age >55; n = 38). Standardized neuropsychological assessments were administered to all participants and comprised the Halstead-Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. A comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery, for although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
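
    The accuracy statistics named in the abstract are simple functions of a 2x2 table of battery prediction against observed IADL capacity; the counts below are invented for illustration.

    ```python
    def accuracy_stats(tp, fp, fn, tn):
        """Accuracy statistics for a binary prediction of IADL capacity."""
        sens = tp / (tp + fn)       # sensitivity
        spec = tn / (tn + fp)       # specificity
        ppv = tp / (tp + fp)        # positive predictive power
        npv = tn / (tn + fn)        # negative predictive power
        lr_pos = sens / (1 - spec)  # positive likelihood ratio
        return sens, spec, ppv, npv, lr_pos

    # Hypothetical classification counts
    print(accuracy_stats(tp=40, fp=10, fn=8, tn=59))
    ```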

  18. Statistical auditing and randomness test of lotto k/N-type games

    Science.gov (United States)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.

    2008-11-01

    One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
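
    A sketch of such an audit: under the hypergeometric model each number appears with probability k/N per draw, so appearance counts over M historical draws can be checked against their expected values with a chi-squared goodness-of-fit test. The game parameters are hypothetical and the "historical" archive is simulated here; a real audit would substitute the published draw record (and, as the paper suggests, would also exploit the covariances between counts).

    ```python
    import numpy as np
    from scipy import stats

    N, k = 49, 6                  # a hypothetical lotto 6/49 game
    M = 1040                      # e.g. two draws a week for ten years
    rng = np.random.default_rng(0)

    # Simulated draw archive; 0-based labels stand in for the numbers 1..N
    draws = np.array([rng.choice(N, size=k, replace=False) for _ in range(M)])

    observed = np.bincount(draws.ravel(), minlength=N)
    expected = np.full(N, M * k / N)

    chi2, p = stats.chisquare(observed, expected)
    print(f"chi2 = {chi2:.1f}, p = {p:.3f}")  # a small p flags inconsistency
    ```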

  19. IMPLEMENTATION AND VALIDATION OF STATISTICAL TESTS IN RESEARCH'S SOFTWARE HELPING DATA COLLECTION AND PROTOCOLS ANALYSIS IN SURGERY.

    Science.gov (United States)

    Kuretzki, Carlos Henrique; Campos, Antônio Carlos Ligocki; Malafaia, Osvaldo; Soares, Sandramara Scandelari Kusano de Paula; Tenório, Sérgio Bernardo; Timi, Jorge Rufino Ribas

    2016-03-01

    Information technology is often applied in healthcare. With regard to scientific research, the SINPE(c) - Integrated Electronic Protocols was created as a tool to support researchers, offering clinical data standardization. Until now, SINPE(c) lacked statistical tests performed by automatic analysis. The aim was to add to SINPE(c) features for the automatic execution of the main statistical methods used in medicine. The study was divided into four topics: checking users' interest in the implementation of the tests; surveying the frequency of their use in healthcare; carrying out the implementation; and validating the results with researchers and their protocols. It was applied to a group of users of this software working on their stricto sensu master's and doctoral theses in one postgraduate program in surgery. To assess the reliability of the statistics, the data obtained automatically by SINPE(c) were compared with those produced manually by a statistician experienced in this type of study. There was interest in the use of automatic statistical tests, with good acceptance. The chi-squared, Mann-Whitney, Fisher exact and Student's t tests were considered the tests most frequently used by participants in medical studies. These methods were implemented and subsequently validated as expected. The automatic statistical analysis incorporated into SINPE(c) was shown to be reliable and equal to that done manually, validating its use as a tool for medical research.

  20. Development and testing of new upper-limb prosthetic devices: research designs for usability testing.

    Science.gov (United States)

    Resnik, Linda

    2011-01-01

    The purposes of this article are to describe usability testing and introduce designs and methods of usability testing research as it relates to upper-limb prosthetics. This article defines usability, describes usability research, discusses research approaches to and designs for usability testing, and highlights a variety of methodological considerations, including sampling, sample size requirements, and usability metrics. Usability testing is compared with other types of study designs used in prosthetic research.

  1. An omnibus likelihood test statistic and its factorization for change detection in time series of polarimetric SAR data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Skriver, Henning

    2016-01-01

    Based on an omnibus likelihood ratio test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution with an associated p-value and a factorization of this test statistic, change analysis in a short sequence of multilook, polarimetric SAR data...... in the covariance matrix representation is carried out. The omnibus test statistic and its factorization detect if and when change(s) occur. The technique is demonstrated on airborne EMISAR L-band data but may be applied to Sentinel-1, Cosmo-SkyMed, TerraSAR-X, ALOS and RadarSat-2 or other dual- and quad...
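
    A sketch of the omnibus statistic under the stated model, using the form ln Q = n(pk ln k + Σ ln|X_i| − k ln|Σ X_i|) for k sample covariance matrices with n looks each; the finite-sample correction factor and the factorization into sequential change-point tests described by the authors are omitted here.

    ```python
    import numpy as np
    from scipy import stats

    def omnibus_test(cov_mats, n_looks):
        """Omnibus LRT for equality of k (complex) Wishart covariance matrices.
        -2 ln Q is asymptotically chi-squared with (k-1)*p^2 df for complex data
        (no finite-sample correction factor is applied in this sketch)."""
        k = len(cov_mats)
        p = cov_mats[0].shape[0]
        X = [n_looks * np.asarray(C) for C in cov_mats]
        lnQ = n_looks * (p * k * np.log(k)
                         + sum(np.log(np.linalg.det(Xi).real) for Xi in X)
                         - k * np.log(np.linalg.det(sum(X)).real))
        df = (k - 1) * p * p
        return -2.0 * lnQ, stats.chi2.sf(-2.0 * lnQ, df)

    # Sanity check: identical matrices give a statistic of exactly zero
    C = np.array([[2.0, 0.5], [0.5, 1.0]])
    print(omnibus_test([C, C, C], n_looks=13))
    ```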

  2. Change detection in a time series of polarimetric SAR data by an omnibus test statistic and its factorization

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Skriver, Henning

    2016-01-01

    Based on an omnibus likelihood ratio test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution with an associated p-value and a factorization of this test statistic, change analysis in a short sequence of multilook, polarimetric SAR data...... in the covariance matrix representation is carried out. The omnibus test statistic and its factorization detect if and when change(s) occur. The technique is demonstrated on airborne EMISAR L-band data but may be applied to Sentinel-1, Cosmo-SkyMed, TerraSAR-X, ALOS and RadarSat-2 or other dual- and quad...

  3. Neutron absorber qualification and acceptance testing from the designer's perspective

    Energy Technology Data Exchange (ETDEWEB)

    Bracey, W. [Transnuclear, Inc, Hawthorne, NY (United States); Chiocca, R. [Cogema Logistics, St. Quentin en Yvelines (France)

    2004-07-01

    Starting in the mid 1990s, the USNRC began to require less than 100% credit for the 10B present in fixed neutron absorbers in spent fuel transport packages. The current practice in the US is to use only 75% of the specified 10B in criticality safety calculations unless extensive acceptance testing demonstrates both the presence of the 10B and the uniformity of its distribution. In practice, the NRC has accepted no more than 90% credit for 10B in recent years, while other national competent authorities continue to accept 100%. More recently, with the introduction of new neutron absorber materials, particularly aluminum/boron carbide metal matrix composites, the NRC has also expressed expectations for qualification testing, based in large part on Transnuclear's successful application to use a new composite material in the TN-68 storage/transport cask. The difficulty is that adding more boron than is really necessary to a metal has some negative effects on the material, reducing the ductility and the thermal conductivity, and increasing the cost. Excessive testing requirements can have the undesired effect of keeping superior materials out of spent fuel package designs, without a corresponding justification based on public safety. In European countries, and especially in France, 100% credit has been accepted up to now, with materials controls specified in the Safety Analysis Report (SAR): the manufacturing process approved by qualification testing; materials manufacturing controlled under a quality assurance system; during fabrication, acceptance testing directly on products or on representative samples; and acceptance criteria taking into account a statistical uncertainty corresponding to 3σ. The original and current bases for the reduced 10B credit, the design requirements for neutron absorber materials, and the experience of Transnuclear and Cogema Logistics with neutron absorber testing are examined. Guidelines for qualification and acceptance testing and

  4. Towards evidence-based computational statistics: lessons from clinical research on the role and design of real-data benchmark studies.

    Science.gov (United States)

    Boulesteix, Anne-Laure; Wilson, Rory; Hapfelmeier, Alexander

    2017-09-09

    The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly "evidence-based". Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of "evidence-based" statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. We suggest that benchmark studies-a method of assessment of statistical methods using real-world datasets-might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.

  5. Detail design of test loop for FIV in fuel bundle and preliminary test

    Energy Technology Data Exchange (ETDEWEB)

    Sim, Woo Gunl; Lee, Wan Young; Kim, Sung Won [Hannam University, Taejeon (Korea)

    2002-04-01

    It is urgent to develop an analytical model for the structural/mechanical integrity of the fuel rod. In general, it is not easy to develop a purely analytical model; occasionally, experimental results have been utilized for the model. For this reason, a proper test loop must be designed. With the optimized test loop, the dynamic behaviour of the rod will be evaluated, and the critical flow velocity, at which the rod loses stability, will be measured for the design of the rod. To verify the integrity of the fuel rod, it is required to evaluate the dynamic behaviour and the critical flow velocity with the test loop. The test results will be utilized in the design of the rod. Generally, the rod undergoes ground vibration due to turbulence over a wide range of flow velocities, and the amplitude of vibration becomes larger through resonance in the velocity range where vortex shedding occurs. The rod loses stability at the critical flow velocity through fluid-elastic instability. For the purpose of the present work, the conceptual design of the test loop, it is necessary (1) to understand the mechanism of flow-induced vibration and the related experimental coefficients, (2) to evaluate the existing test loops for improving the loop with design parameters and (3) to decide the design specifications of the major equipment of the loop. 35 refs., 14 figs., 4 tabs. (Author)
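
    For the fluid-elastic instability mentioned above, the critical cross-flow velocity is commonly estimated with a Connors-type relation, V_c = K · f_n · D · sqrt(m·δ/(ρ·D²)); one purpose of such a loop is precisely to measure the instability constant K for the actual bundle geometry. A sketch with hypothetical rod parameters:

    ```python
    import numpy as np

    def connors_critical_velocity(K, f_n, D, m, delta, rho):
        """Connors-type critical velocity for fluid-elastic instability (SI):
        K is the instability constant (geometry dependent, measured in the
        loop), f_n the natural frequency (Hz), D the rod diameter (m), m the
        mass per unit length (kg/m), delta the logarithmic decrement of
        damping, and rho the fluid density (kg/m^3)."""
        return K * f_n * D * np.sqrt(m * delta / (rho * D**2))

    # Hypothetical fuel-rod values
    print(connors_critical_velocity(K=3.0, f_n=40.0, D=0.0095,
                                    m=0.25, delta=0.03, rho=998.0))
    ```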

  6. A Psychological Measurement of Student Testing Design Preferences.

    Science.gov (United States)

    Shukla, P. K.; Bruno, James

    An analytical technique from the field of market research called conjoint analysis was applied to a psychological measurement of student testing design preferences. Past concerns with testing design are reviewed, and a newer approach to testing is identified--the modified confidence weighted-admissible probability measurement (MCW-APM) test…

  7. Design and Testing of an Educational Water Tunnel

    Science.gov (United States)

    Kosaraju, Srinivas

    2017-11-01

    A new water tunnel is designed and tested for educational and research purposes at Northern Arizona University. The university currently owns an educational wind tunnel with a test section of 12in X 12in X 24in. However, due to the limited size of the test section and range of Reynolds numbers, its application is currently limited to very few experiments. In an effort to expand the educational and research capabilities, a student team was tasked with designing, building and testing a water tunnel as a Capstone Senior Design project. The water tunnel is designed to have a test section of 8in X 8in X 36in. and be able to test up to Re = 50E3. Multiple numerical models are used to optimize the flow field inside the test section before building the physical apparatus. The water tunnel is designed to accommodate multiple experiments for drag and lift studies. The built-in dye system can deliver up to three different colors to study the streamlines and vortex shedding from the surfaces. During the first phase, a low discharge pump is used to achieve Re = 4E3 to test laminar flows. In the second phase, a high discharge pump will be used to achieve the targeted Re = 50E3 to study turbulent flows.

  8. The Healy Clean Coal Project: Design verification tests

    International Nuclear Information System (INIS)

    Guidetti, R.H.; Sheppard, D.B.; Ubhayakar, S.K.; Weede, J.J.; McCrohan, D.V.; Rosendahl, S.M.

    1993-01-01

    As part of the Healy Clean Coal Project, TRW Inc., the supplier of the advanced slagging coal combustors, has successfully completed design verification tests on the major components of the combustion system at its Southern California test facility. These tests, which included the firing of a full-scale precombustor with a new non-storage direct coal feed system, supported the design of the Healy combustion system and its auxiliaries performed under Phase 1 of the project. Two 350 million BTU/hr combustion systems have been designed and are now ready for fabrication and erection, as part of Phase 2 of the project. These systems, along with a back-end Spray Dryer Absorber system, designed and supplied by Joy Technologies, will be integrated with a Foster Wheeler boiler for the 50 MWe power plant at Healy, Alaska. This paper describes the design verification tests and the current status of the project

  9. Testing for Statistical Discrimination based on Gender

    DEFF Research Database (Denmark)

    Lesner, Rune Vammen

    This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure but increases in job transitions and that the fraction of women in high-ranking positions within a firm does not affect the level of statistical discrimination by gender.

  10. Computer-Aided Test Flow in Core-Based Design

    NARCIS (Netherlands)

    Zivkovic, V.; Tangelder, R.J.W.T.; Kerkhoff, Hans G.

    2000-01-01

    This paper deals with test-pattern generation and fault coverage determination in core-based design. The basic core-test strategy that one has to apply in core-based design is stated in this work. A Computer-Aided Test (CAT) flow is proposed, resulting in accurate fault coverage of

  11. Computer processing of 14C data; statistical tests and corrections of data

    International Nuclear Information System (INIS)

    Obelic, B.; Planinic, J.

    1977-01-01

    The described computer program calculates the age of samples and performs statistical tests and corrections of the data. Data are obtained from a proportional counter that measures anticoincidence pulses over 20-minute intervals. After every ninth interval the counter records the total number of counts per interval. Input data are punched on cards. The output list contains the input data schedule and the following results: the mean CPM value, the correction of CPM to normal pressure and temperature (NTP), the sample age calculated from 14C half-lives of 5570 and 5730 years, the age correction for NTP, dendrochronological corrections and the relative radiocarbon concentration. All results are given with one standard deviation. An input data test (Chauvenet's criterion), a gas purity test, a standard deviation test and a test of the data processor are also included in the program. (author)
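
    The age calculation such a program performs reduces to the radioactive decay law, and the Chauvenet input-data test screens the interval counts for outliers. A sketch with invented count rates (the NTP and dendrochronological corrections of the report are not reproduced here):

    ```python
    import numpy as np
    from scipy import stats

    def c14_age(cpm_sample, cpm_modern, half_life=5570.0):
        """Radiocarbon age (years) from net count rates via the decay law."""
        return half_life / np.log(2) * np.log(cpm_modern / cpm_sample)

    print(f"{c14_age(6.10, 13.56):.0f} yr (5570-yr half-life)")
    print(f"{c14_age(6.10, 13.56, half_life=5730.0):.0f} yr (5730-yr half-life)")

    def chauvenet_reject(counts):
        """Chauvenet's criterion: flag counts whose expected number of
        occurrences in a sample of this size falls below 0.5."""
        x = np.asarray(counts, dtype=float)
        z = np.abs(x - x.mean()) / x.std(ddof=1)
        return 2 * stats.norm.sf(z) * x.size < 0.5
    ```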

  12. Basis and Statistical Design of the Passive HIV-1 Antibody Mediated Prevention (AMP) Test-of-Concept Efficacy Trials.

    Science.gov (United States)

    Gilbert, Peter B; Juraska, Michal; deCamp, Allan C; Karuna, Shelly; Edupuganti, Srilatha; Mgodi, Nyaradzo; Donnell, Deborah J; Bentley, Carter; Sista, Nirupama; Andrew, Philip; Isaacs, Abby; Huang, Yunda; Zhang, Lily; Capparelli, Edmund; Kochar, Nidhi; Wang, Jing; Eshleman, Susan H; Mayer, Kenneth H; Magaret, Craig A; Hural, John; Kublin, James G; Gray, Glenda; Montefiori, David C; Gomez, Margarita M; Burns, David N; McElrath, Julie; Ledgerwood, Julie; Graham, Barney S; Mascola, John R; Cohen, Myron; Corey, Lawrence

    2017-01-01

    Anti-HIV-1 broadly neutralizing antibodies (bnAbs) have been developed as potential agents for prevention of HIV-1 infection. The HIV Vaccine Trials Network and the HIV Prevention Trials Network are conducting the Antibody Mediated Prevention (AMP) trials to assess whether, and how, intravenous infusion of the anti-CD4 binding site bnAb, VRC01, prevents HIV-1 infection. These are the first test-of-concept studies to assess HIV-1 bnAb prevention efficacy in humans. The AMP trials are two parallel phase 2b HIV-1 prevention efficacy trials conducted in two cohorts: 2700 HIV-uninfected men and transgender persons who have sex with men in the United States, Peru, Brazil, and Switzerland; and 1500 HIV-uninfected sexually active women in seven countries in sub-Saharan Africa. Participants are randomized 1:1:1 to receive an intravenous infusion of 10 mg/kg VRC01, 30 mg/kg VRC01, or a control preparation every 8 weeks for a total of 10 infusions. Each trial is designed (1) to assess overall prevention efficacy (PE) pooled over the two VRC01 dose groups vs. control and (2) to assess VRC01 dose and laboratory markers as correlates of protection (CoPs) against overall and genotype- and phenotype-specific infection. Each AMP trial is designed to have 90% power to detect PE > 0% if PE is ≥ 60%. The AMP trials are also designed to identify VRC01 properties (i.e., concentration and effector functions) that correlate with protection and to provide insight into mechanistic CoPs. CoPs are assessed using data from breakthrough HIV-1 infections, including genetic sequences and sensitivities to VRC01-mediated neutralization and Fc effector functions. The AMP trials test whether VRC01 can prevent HIV-1 infection in two study populations. If affirmative, they will provide information for estimating the optimal dosage of VRC01 (or subsequent derivatives) and identify threshold levels of neutralization and Fc effector functions associated with high-level protection, setting a benchmark

  13. Severe Accident Test Station Design Document

    Energy Technology Data Exchange (ETDEWEB)

    Snead, Mary A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Yan, Yong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Howell, Michael [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Keiser, James R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Terrani, Kurt A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-09-01

    The purpose of the ORNL severe accident test station (SATS) is to provide a platform for evaluation of advanced fuels under projected beyond design basis accident (BDBA) conditions. The SATS delivers the capability to map the behavior of advanced fuels concepts under accident scenarios across various temperature and pressure profiles, steam and steam-hydrogen gas mixtures, and thermal shock. The overall facility will include parallel capabilities for examination of fuels and irradiated materials (in-cell) and non-irradiated materials (out-of-cell) at BDBA conditions as well as design basis accident (DBA) or loss of coolant accident (LOCA) conditions. Also, a supporting analytical infrastructure to provide the data-needs for the fuel-modeling components of the Fuel Cycle Research and Development (FCRD) program will be put in place in a parallel manner. This design report contains the information for the first, second and third phases of design and construction of the SATS. The first phase consisted of the design and construction of an out-of-cell BDBA module intended for examination of non-irradiated materials. The second phase of this work was to construct the BDBA in-cell module to test irradiated fuels and materials as well as the module for DBA (i.e. LOCA) testing out-of-cell. The third phase was to build the in-cell DBA module. The details of the design constraints and requirements for the in-cell facility have been closely captured during the deployment of the out-of-cell SATS modules to ensure effective future implementation of the in-cell modules.

  14. Severe Accident Test Station Design Document

    International Nuclear Information System (INIS)

    Snead, Mary A.; Yan, Yong; Howell, Michael; Keiser, James R.; Terrani, Kurt A.

    2015-01-01

    The purpose of the ORNL severe accident test station (SATS) is to provide a platform for evaluation of advanced fuels under projected beyond design basis accident (BDBA) conditions. The SATS delivers the capability to map the behavior of advanced fuels concepts under accident scenarios across various temperature and pressure profiles, steam and steam-hydrogen gas mixtures, and thermal shock. The overall facility will include parallel capabilities for examination of fuels and irradiated materials (in-cell) and non-irradiated materials (out-of-cell) at BDBA conditions as well as design basis accident (DBA) or loss of coolant accident (LOCA) conditions. Also, a supporting analytical infrastructure to provide the data-needs for the fuel-modeling components of the Fuel Cycle Research and Development (FCRD) program will be put in place in a parallel manner. This design report contains the information for the first, second and third phases of design and construction of the SATS. The first phase consisted of the design and construction of an out-of-cell BDBA module intended for examination of non-irradiated materials. The second phase of this work was to construct the BDBA in-cell module to test irradiated fuels and materials as well as the module for DBA (i.e. LOCA) testing out-of-cell. The third phase was to build the in-cell DBA module. The details of the design constraints and requirements for the in-cell facility have been closely captured during the deployment of the out-of-cell SATS modules to ensure effective future implementation of the in-cell modules.

  15. A Note on Comparing the Power of Test Statistics at Low Significance Levels.

    Science.gov (United States)

    Morris, Nathan; Elston, Robert

    2011-01-01

    It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
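
    To make the point concrete, the following minimal Python sketch (not from the note; the noncentrality value of 10 is an arbitrary assumption) compares the power of 1-d.f. and 2-d.f. chi-square tests at a traditional and a genome-wide alpha level:

        from scipy import stats

        def chi2_power(df, ncp, alpha):
            # power = P(noncentral chi-square exceeds the central critical value)
            crit = stats.chi2.ppf(1 - alpha, df)
            return stats.ncx2.sf(crit, df, ncp)

        for alpha in (0.05, 5e-8):
            p1 = chi2_power(df=1, ncp=10.0, alpha=alpha)
            p2 = chi2_power(df=2, ncp=10.0, alpha=alpha)
            print(f"alpha={alpha:g}: power(1 df)={p1:.4f}, power(2 df)={p2:.4f}")

    Running this shows that the gap between the two tests changes markedly with the alpha level, which is the phenomenon the note warns about.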

  16. Cosmological Non-Gaussian Signature Detection: Comparing Performance of Different Statistical Tests

    Directory of Open Access Journals (Sweden)

    O. Forni

    2005-09-01

    Full Text Available Currently, it appears that the best method for non-Gaussianity detection in the cosmic microwave background (CMB) consists in calculating the kurtosis of the wavelet coefficients. We know that wavelet-kurtosis outperforms other methods such as the bispectrum, the genus, ridgelet-kurtosis, and curvelet-kurtosis on an empirical basis, but relatively few studies have compared other transform-based statistics, such as extreme values, or more recent tools such as higher criticism (HC), or proposed “best possible” choices for such statistics. In this paper, we consider two models for transform-domain coefficients: (a) a power-law model, which seems suited to the wavelet coefficients of simulated cosmic strings, and (b) a sparse mixture model, which seems suitable for the curvelet coefficients of filamentary structure. For model (a), if power-law behavior holds with finite 8th moment, excess kurtosis is an asymptotically optimal detector, but if the 8th moment is not finite, a test based on extreme values is asymptotically optimal. For model (b), if the transform coefficients are very sparse, a recent test, higher criticism, is an optimal detector, but if they are dense, kurtosis is an optimal detector. Empirical wavelet coefficients of simulated cosmic strings have power-law character with infinite 8th moment, while curvelet coefficients of the simulated cosmic strings are not very sparse. In all cases, excess kurtosis seems to be an effective test in moderate-resolution imagery.

  17. Implementation Support of Security Design Patterns Using Test Templates

    Directory of Open Access Journals (Sweden)

    Masatoshi Yoshizawa

    2016-06-01

    Full Text Available Security patterns are intended to support software developers as the patterns encapsulate security expert knowledge. However, these patterns may be inappropriately applied because most developers are not security experts, leading to threats and vulnerabilities. Here we propose a support method for security design patterns in the implementation phase of software development. Our method creates a test template from a security design pattern, consisting of an “aspect test template” to observe the internal processing and a “test case template”. Providing design information to a tool generates a concrete test from the test template. Because our test template is reusable, it can easily perform a test to validate a security design pattern. In an experiment involving four students majoring in information sciences, we confirm that our method can realize an effective test, verify pattern applications, and support pattern implementation.

  18. Testing Genetic Pleiotropy with GWAS Summary Statistics for Marginal and Conditional Analyses.

    Science.gov (United States)

    Deng, Yangqing; Pan, Wei

    2017-12-01

    There is growing interest in testing genetic pleiotropy, which is when a single genetic variant influences multiple traits. Several methods have been proposed; however, these methods have some limitations. First, all the proposed methods are based on the use of individual-level genotype and phenotype data; in contrast, for logistical, and other, reasons, summary statistics of univariate SNP-trait associations are typically only available based on meta- or mega-analyzed large genome-wide association study (GWAS) data. Second, existing tests are based on marginal pleiotropy, which cannot distinguish between direct and indirect associations of a single genetic variant with multiple traits due to correlations among the traits. Hence, it is useful to consider conditional analysis, in which a subset of traits is adjusted for another subset of traits. For example, in spite of substantial lowering of low-density lipoprotein cholesterol (LDL) with statin therapy, some patients still maintain high residual cardiovascular risk, and, for these patients, it might be helpful to reduce their triglyceride (TG) level. For this purpose, in order to identify new therapeutic targets, it would be useful to identify genetic variants with pleiotropic effects on LDL and TG after adjusting the latter for LDL; otherwise, a pleiotropic effect of a genetic variant detected by a marginal model could simply be due to its association with LDL only, given the well-known correlation between the two types of lipids. Here, we develop a new pleiotropy testing procedure based only on GWAS summary statistics that can be applied for both marginal analysis and conditional analysis. Although the main technical development is based on published union-intersection testing methods, care is needed in specifying conditional models to avoid invalid statistical estimation and inference. In addition to the previously used likelihood ratio test, we also propose using generalized estimating equations under the
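
    As a rough illustration of the marginal, summary-statistics-only setting (a generic intersection-union test sketch, not the authors' full procedure), pleiotropy of one SNP across several traits is declared only if the SNP is associated with every trait, so the combined p-value is the maximum of the per-trait p-values:

        import numpy as np
        from scipy import stats

        def iut_pleiotropy(z_scores, alpha=5e-8):
            # intersection-union test on per-trait GWAS summary z-scores:
            # reject "no effect on at least one trait" only when every
            # trait shows association
            pvals = 2 * stats.norm.sf(np.abs(np.asarray(z_scores)))
            p_max = pvals.max()
            return p_max, p_max < alpha

        print(iut_pleiotropy([6.1, -5.8]))   # hypothetical z-scores, e.g. LDL and TG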

  19. Towards evidence-based computational statistics: lessons from clinical research on the role and design of real-data benchmark studies

    Directory of Open Access Journals (Sweden)

    Anne-Laure Boulesteix

    2017-09-01

    Full Text Available Abstract Background The goal of medical research is to develop interventions that are in some sense superior, with respect to patient outcome, to interventions currently in use. Similarly, the goal of research in methodological computational statistics is to develop data analysis tools that are themselves superior to the existing tools. The methodology of the evaluation of medical interventions continues to be discussed extensively in the literature and it is now well accepted that medicine should be at least partly “evidence-based”. Although we statisticians are convinced of the importance of unbiased, well-thought-out study designs and evidence-based approaches in the context of clinical research, we tend to ignore these principles when designing our own studies for evaluating statistical methods in the context of our methodological research. Main message In this paper, we draw an analogy between clinical trials and real-data-based benchmarking experiments in methodological statistical science, with datasets playing the role of patients and methods playing the role of medical interventions. Through this analogy, we suggest directions for improvement in the design and interpretation of studies which use real data to evaluate statistical methods, in particular with respect to dataset inclusion criteria and the reduction of various forms of bias. More generally, we discuss the concept of “evidence-based” statistical research, its limitations and its impact on the design and interpretation of real-data-based benchmark experiments. Conclusion We suggest that benchmark studies—a method of assessment of statistical methods using real-world datasets—might benefit from adopting (some) concepts from evidence-based medicine towards the goal of more evidence-based statistical research.

  20. Evaluation of the Wishart test statistics for polarimetric SAR data

    DEFF Research Database (Denmark)

    Skriver, Henning; Nielsen, Allan Aasbjerg; Conradsen, Knut

    2003-01-01

    A test statistic for equality of two covariance matrices following the complex Wishart distribution has previously been used in new algorithms for change detection, edge detection and segmentation in polarimetric SAR images. Previously, the results for change detection and edge detection have been quantitatively evaluated. This paper deals with the evaluation of segmentation. A segmentation performance measure originally developed for single-channel SAR images has been extended to polarimetric SAR images, and used to evaluate segmentation for a merge-using-moment algorithm for polarimetric SAR data.
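
    For orientation, a minimal sketch of the equal-looks form of this likelihood-ratio statistic (the finite-sample correction factor is omitted here, and the input matrices are assumed to be Hermitian positive definite):

        import numpy as np

        def wishart_test_stat(X, Y, n, p=3):
            # ln Q for equality of two complex Wishart covariance matrices,
            # each estimated from n looks; X and Y are p x p sample covariances
            ln_x = np.linalg.slogdet(X)[1]
            ln_y = np.linalg.slogdet(Y)[1]
            ln_xy = np.linalg.slogdet(X + Y)[1]
            ln_q = n * (2 * p * np.log(2.0) + ln_x + ln_y - 2 * ln_xy)
            # -2 ln Q is approximately chi-squared with p*p degrees of freedom
            # under the null (ignoring the small-sample correction)
            return -2 * ln_q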

  1. Computer-Aided Test Flow in Core-Based Design

    OpenAIRE

    Zivkovic, V.; Tangelder, R.J.W.T.; Kerkhoff, Hans G.

    2000-01-01

    This paper deals with test-pattern generation and fault coverage determination in core-based design. The basic core-test strategy that one has to apply in core-based design is stated in this work. A Computer-Aided Test (CAT) flow is proposed, resulting in accurate fault coverage of embedded cores. The CAT flow is applied to a few cores within the Philips Core Test Pilot IC project.

  2. Design and Construction of a Hydroturbine Test Facility

    Science.gov (United States)

    Ayli, Ece; Kavurmaci, Berat; Cetinturk, Huseyin; Kaplan, Alper; Celebioglu, Kutay; Aradag, Selin; Tascioglu, Yigit; ETU Hydro Research Center Team

    2014-11-01

    Hydropower is one of the clean, renewable, flexible and efficient energy resources. Many developing countries invest in this cost-effective energy source. Hydroturbines for hydroelectric power plants are tailor-made: each turbine is designed and constructed according to the properties, namely the head and flow rate values, of the specific water source. Therefore, a center (ETU Hydro-Center for Hydro Energy Research) for the design, manufacturing and performance testing of hydraulic turbines has been established at TOBB University of Economics and Technology to promote research in this area. CFD-aided hydraulic and structural design, geometry optimization, manufacturing and performance testing of hydraulic turbines are the areas of expertise of this center. In this paper, the technical details of the design and construction of this one-of-a-kind test facility in Turkey are explained. All the necessary IEC (International Electrotechnical Commission) standards are met, since the test facility will act as a certified test center for hydraulic turbines.

  3. Design of a fusion engineering test facility

    International Nuclear Information System (INIS)

    Sager, P.H.

    1980-01-01

    The Fusion Engineering Test Facility (ETF) is being designed to provide engineering testing capability in a program leading to the demonstration of fusion as a viable energy option. It will combine power-reactor-type components and subsystems into an integrated tokamak system and provide a test bed for blanket modules in a fusion environment. Because of the uncertainties in impurity control, two basic designs are being developed: a design with a bundle divertor (Design 1) and one with a poloidal divertor (Design 2). The two designs are similar where possible, the latter having somewhat larger toroidal field (TF) coils to accommodate removal of the larger torus sectors required for the single-null poloidal divertor. Both designs have a major radius of 5.4 m, a minor radius of 1.3 m, and a D-shaped plasma with an elongation of 1.6. Ten TF coils are incorporated in both designs, producing a toroidal field of 5.5 T on-axis. The ohmic heating and equilibrium field (EF) coils supply sufficient volt-seconds to produce a flat-top burn of 100 s and a duty cycle of 135 s, including a startup of 12 s, a burn termination of 10 s, and a pumpdown of 13 s. The total fusion power during burn is 750 MW, giving a neutron wall loading of 1.5 MW/m². In Design 1, all of the poloidal field (PF) coils except the fast-response EF coils are located outside the TF coils and are superconducting. The fast-response coils are located inside the TF coil bore near the torus and are normal-conducting so that they can be easily replaced. In Design 2, all of the PF coils are located outside the TF coils and are superconducting. Ignition is achieved with 60 MW of neutral beam injection at 150 keV. Five megawatts of radio frequency heating (electron cyclotron resonance heating) is used to assist in the startup and limit the breakdown requirement to 25 V

  4. Analysis of statistical misconception in terms of statistical reasoning

    Science.gov (United States)

    Maryati, I.; Priatna, N.

    2018-05-01

    Reasoning skill is needed by everyone in the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, interpret, and draw conclusions from information. This skill can be developed at various levels of education. However, the skill remains low because many people, students included, assume that statistics is just counting and using formulas. Students also still have a negative attitude toward courses related to research. The purpose of this research is to analyze students’ misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students’ misconceptions on statistical reasoning skill. The sample of this research was 32 students of the mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If a minimum value of 65 is taken as the standard for achieving course competence, the students’ mean values are below the standard. The misconception results indicate which subtopics should be given particular attention. Based on the assessment results, it was found that students’ misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, 3) determining the concept to be used in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  5. Partial discharge testing: a progress report. Statistical evaluation of PD data

    International Nuclear Information System (INIS)

    Warren, V.; Allan, J.

    2005-01-01

    It has long been known that comparing the partial discharge results obtained from a single machine is a valuable tool enabling companies to observe the gradual deterioration of a machine stator winding and thus plan appropriate maintenance for the machine. In 1998, at the annual Iris Rotating Machines Conference (IRMC), a paper was presented that compared thousands of PD test results to establish the criteria for comparing results from different machines and the expected PD levels. At subsequent annual Iris conferences, using similar analytical procedures, papers were presented that supported the previous criteria and: in 1999, established sensor location as an additional criterion; in 2000, evaluated the effect of insulation type and age on PD activity; in 2001, evaluated the effect of manufacturer on PD activity; in 2002, evaluated the effect of operating pressure for hydrogen-cooled machines; in 2003, evaluated the effect of insulation type and setting Trac alarms; in 2004, re-evaluated the effect of manufacturer on PD activity. Before going further in database analysis procedures, it would be prudent to statistically evaluate the anecdotal evidence observed to date. The goal was to determine which variables of machine conditions greatly influenced the PD results and which didn't. Therefore, this year's paper looks at the impact of operating voltage, machine type and winding type on the test results for air-cooled machines. Because of resource constraints, only data collected through 2003 was used; however, as before, it is still standardized for frequency bandwidth and pruned to include only full-load-hot (FLH) results collected for one sensor on operating machines. All questionable data, or data from off-line testing or unusual machine conditions was excluded, leaving 6824 results. Calibration of on-line PD test results is impractical; therefore, only results obtained using the same method of data collection and noise separation techniques are compared. For

  6. Reducing statistics anxiety and enhancing statistics learning achievement: effectiveness of a one-minute strategy.

    Science.gov (United States)

    Chiou, Chei-Chang; Wang, Yu-Min; Lee, Li-Tze

    2014-08-01

    Statistical knowledge is widely used in academia; however, statistics teachers struggle with the issue of how to reduce students' statistics anxiety and enhance students' statistics learning. This study assesses the effectiveness of a "one-minute paper strategy" in reducing students' statistics-related anxiety and in improving students' statistics-related achievement. Participants were 77 undergraduates from two classes enrolled in applied statistics courses. An experiment was implemented according to a pretest/posttest comparison group design. The quasi-experimental design showed that the one-minute paper strategy significantly reduced students' statistics anxiety and improved students' statistics learning achievement. The strategy was a better instructional tool than the textbook exercise for reducing students' statistics anxiety and improving students' statistics achievement.

  7. To test photon statistics by atomic beam deflection

    International Nuclear Information System (INIS)

    Wang Yuzhu; Chen Yudan; Huang Weigang; Liu Liang

    1985-02-01

    There exists a simple relation between the photon statistics in resonance fluorescence and the statistics of the momentum transferred to an atom by a plane travelling wave [Cook, R.J., Opt. Commun., 35, 347(1980)]. Using atomic beam deflection by light pressure, we have observed sub-Poissonian statistics in resonance fluorescence of two-level atoms. (author)
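
    A common numerical summary of such counting statistics (a generic sketch, not taken from the paper) is the Mandel Q parameter, which is negative for sub-Poissonian light:

        import numpy as np

        counts = np.random.default_rng(0).poisson(4.0, size=10_000)  # hypothetical photon counts
        q = counts.var() / counts.mean() - 1.0
        print(f"Mandel Q = {q:.3f}")  # Q < 0: sub-Poissonian; Q = 0: Poissonian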

  8. Integral Design workshops: organization, structure and testing

    OpenAIRE

    Zeiler, W Wim; Savanovic, P Perica

    2010-01-01

    The purpose of this paper is to achieve an understanding of design activities in the context of building design. The starting point is an overview of design research and design methodology. From the insights gained by this analysis of design in this specific context, we present an 'organization structure and design' workshop approach for collaborative multi-discipline design management. The workshop set-ups used to implement and test the approach are presented, as well as the experiences ...

  9. Statistical analysis of thermal conductivity of nanofluid containing ...

    Indian Academy of Sciences (India)

    Thermal conductivity measurements of nanofluids were analysed via two-factor completely randomized design and comparison of data means is carried out with Duncan's multiple-range test. Statistical analysis of experimental data show that temperature and weight fraction have a reasonable impact on the thermal ...

  10. Creating Ribo-T: (Design, Build, Test)n.

    Science.gov (United States)

    Carlson, Erik D

    2015-11-20

    Engineering biology is especially challenging given our relatively poor ability to rationally design within life's complex design landscape. Thus, moving through the engineering "design, build, test" cycle multiple times accumulates system knowledge and hopefully yields a successful design. Here I discuss the engineering process behind our recently published work creating a ribosome with tethered subunits, Ribo-T.

  11. High-throughput optimization by statistical designs: example with rat liver slices cryopreservation.

    Science.gov (United States)

    Martin, H; Bournique, B; Blanchi, B; Lerche-Langrand, C

    2003-08-01

    The purpose of this study was to optimize cryopreservation conditions of rat liver slices in a high-throughput format, with a focus on reproducibility. A statistical design of 32 experiments was performed and intracellular lactate dehydrogenase (LDHi) activity and antipyrine (AP) metabolism were evaluated as biomarkers. At freezing, modified University of Wisconsin solution was better than Williams' E medium, and pure dimethyl sulfoxide was better than a cryoprotectant mixture. The best cryoprotectant concentrations were 10% for LDHi and 20% for AP metabolism. Fetal calf serum could be used at 50 or 80%, and incubation of slices with the cryoprotectant could last 10 or 20 min. At thawing, 42 degrees C was better than 22 degrees C. After thawing, 1 h was better than 3 h of preculture. Cryopreservation increased the interslice variability of the biomarkers. After cryopreservation, LDHi and AP metabolism levels were up to 84 and 80% of fresh values. However, these high levels were not reproducibly achieved. Two factors involved in the day-to-day variability of LDHi were identified: the incubation time with the cryoprotectant and the preculture time. In conclusion, the statistical design was very efficient to quickly determine optimized conditions by simultaneously measuring the role of numerous factors. The cryopreservation procedure developed appears suitable for qualitative metabolic profiling studies.

  12. Statistical Shape Analysis of the Human Ear Canal with Application to In-the-Ear Hearing Aid Design

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold

    2004-01-01

    This thesis is about the statistical shape analysis of the human ear canal with application to the mechanical design of in-the-ear hearing aids. Initially, it is described how a statistical shape model of the human ear canal is built based on a training set of laser-scanned ear impressions. A thin...

  13. A statistical method for testing epidemiological results, as applied to the Hanford worker population

    International Nuclear Information System (INIS)

    Brodsky, A.

    1979-01-01

    Some recent reports of Mancuso, Stewart and Kneale claim findings of radiation-produced cancer in the Hanford worker population. These claims are based on statistical computations that use small differences in accumulated exposures between groups dying of cancer and groups dying of other causes; actual mortality and longevity were not reported. This paper presents a statistical method for evaluation of actual mortality and longevity longitudinally over time, as applied in a primary analysis of the mortality experience of the Hanford worker population. Although available, this method was not utilized in the Mancuso-Stewart-Kneale paper. The author's preliminary longitudinal analysis shows that the gross mortality experience of persons employed at Hanford during the 1943-70 interval did not differ significantly from that of certain controls, when both employees and controls were selected from families with two or more offspring and comparisons were matched by age, sex, race and year of entry into employment. This result is consistent with findings reported by Sanders (Health Phys. vol. 35, 521-538, 1978). The method utilizes an approximate chi-square (1 D.F.) statistic for testing population subgroup comparisons, as well as the cumulation of chi-squares (1 D.F.) for testing the overall result of a particular type of comparison. The method is available for computer testing of the Hanford mortality data, and could also be adapted to morbidity or other population studies. (author)
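
    The cumulation step described has a simple form: independent 1-d.f. chi-square statistics from the matched subgroup comparisons sum to a chi-square whose degrees of freedom equal the number of comparisons. A minimal sketch with hypothetical values:

        from scipy import stats

        chi_values = [2.1, 0.4, 3.3, 1.7]          # one 1-d.f. statistic per comparison
        total = sum(chi_values)
        p_overall = stats.chi2.sf(total, df=len(chi_values))
        print(f"overall chi-square = {total:.2f}, p = {p_overall:.3f}")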

  14. Proficiency Testing for Determination of Water Content in Toluene of Chemical Reagents by the Iterative Robust Statistics Technique

    Science.gov (United States)

    Wang, Hao; Wang, Qunwei; He, Ming

    2018-05-01

    In order to investigate and improve the level of detection technology for water content in liquid chemical reagents in domestic laboratories, the proficiency testing provider PT0031 (CNAS) organized a proficiency testing program for water content in toluene; 48 laboratories from 18 provinces/cities/municipalities took part in the PT. This paper introduces the implementation process of the proficiency testing for determination of water content in toluene, including sample preparation and homogeneity and stability testing, presents the statistical results obtained with the iterative robust statistics technique and their analysis, summarizes and analyzes the different test standards widely used in the laboratories, and puts forward technical suggestions for improving the quality of water content testing. Satisfactory results were obtained by 43 laboratories, amounting to 89.6% of the total participating laboratories.
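
    The iterative robust technique referred to is presumably of the ISO 13528 Algorithm A type; a minimal sketch (constants per that standard, data hypothetical):

        import numpy as np

        def robust_mean_sd(x, n_iter=50):
            # iterative robust mean and standard deviation (Algorithm A style):
            # winsorize at mean +/- 1.5*sd, then update both estimates
            x = np.asarray(x, dtype=float)
            mu = np.median(x)
            s = 1.483 * np.median(np.abs(x - mu))
            for _ in range(n_iter):
                w = np.clip(x, mu - 1.5 * s, mu + 1.5 * s)
                mu = w.mean()
                s = 1.134 * w.std(ddof=1)
            return mu, s

        print(robust_mean_sd([0.021, 0.022, 0.020, 0.023, 0.035]))  # % water, hypothetical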

  15. Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.

    Science.gov (United States)

    Deegear, James

    This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…

  16. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
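
    As a small illustration of estimation with quantified uncertainty in the Bayesian style (a generic sketch, not the authors' own example), a posterior credible interval for a proportion:

        from scipy import stats

        # 7 successes in 24 trials (hypothetical) with a uniform Beta(1, 1) prior
        posterior = stats.beta(1 + 7, 1 + 17)
        lo, hi = posterior.ppf([0.025, 0.975])   # 95% equal-tailed credible interval
        print(f"posterior mean = {posterior.mean():.3f}, 95% CrI = ({lo:.3f}, {hi:.3f})")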

  17. Engineering design of vertical test stand cryostat

    International Nuclear Information System (INIS)

    Suhane, S.K.; Sharma, N.K.; Raghavendra, S.; Joshi, S.C.; Das, S.; Kush, P.K.; Sahni, V.C.; Gupta, P.D.; Sylvester, C.; Rabehl, R.; Ozelis, J.

    2011-01-01

    Under the Indian Institutions and Fermilab collaboration, Raja Ramanna Centre for Advanced Technology and Fermi National Accelerator Laboratory are jointly developing 2K Vertical Test Stand (VTS) cryostats for testing SCRF cavities at 2K. The VTS cryostat has been designed with a large testing aperture of 86.36 cm for testing of 325 MHz spoke resonators, 650 MHz and 1.3 GHz multi-cell SCRF cavities for Fermilab's Project-X. Units will be installed at Fermilab and RRCAT and used to test cavities for Project-X. A VTS cryostat comprises a liquid helium (LHe) vessel with an internal magnetic shield, a top insert plate equipped with a cavity support stand and radiation shield, a liquid nitrogen (LN2) shield, and a vacuum vessel with an external magnetic shield. The engineering design and analysis of the VTS cryostat have been carried out using the ASME B&PV Code and finite element analysis. The internal and external magnetic shields were designed to limit the magnetic field inside the LHe vessel at the cavity surface. Analysis of the LN2 shield has been performed to check the effectiveness of LN2 cooling and for compliance with ASME piping code allowable stresses.

  18. A statistical characterization method for damping material properties and its application to structural-acoustic system design

    International Nuclear Information System (INIS)

    Jung, Byung C.; Lee, Doo Ho; Youn, Byeng D.; Lee, Soo Bum

    2011-01-01

    The performance of surface damping treatments may vary once the surface is exposed to a wide range of temperatures, because the performance of viscoelastic damping material is highly dependent on operational temperature. In addition, experimental data for dynamic responses of viscoelastic material are inherently random, which makes it difficult to design a robust damping layout. In this paper, a statistical modeling procedure with a statistical calibration method is suggested for the variability characterization of viscoelastic damping material in constrained-layer damping structures. First, the viscoelastic material property is decomposed into two sources: (I) a random complex modulus due to operational temperature variability, and (II) experimental/model errors in the complex modulus. Next, the variability in the damping material property is obtained using the statistical calibration method by solving an unconstrained optimization problem with a likelihood function metric. Two case studies are considered to show the influence of the material variability on the acoustic performances in the structural-acoustic systems. It is shown that the variability of the damping material is propagated to that of the acoustic performances in the systems. Finally, robust and reliable damping layout designs of the two case studies are obtained through reliability-based design optimization (RBDO) amidst severe variability in operational temperature and the damping material.

  19. Examining publication bias—a simulation-based evaluation of statistical tests on publication bias

    Directory of Open Access Journals (Sweden)

    Andreas Schneck

    2017-11-01

    Full Text Available Background Publication bias is a form of scientific misconduct. It threatens the validity of research results and the credibility of science. Although several tests on publication bias exist, no in-depth evaluations are available that examine which test performs best for different research settings. Methods Four tests on publication bias, Egger’s test (FAT), p-uniform, the test of excess significance (TES), as well as the caliper test, were evaluated in a Monte Carlo simulation. Two different types of publication bias and its degree (0%, 50%, 100%) were simulated. The type of publication bias was defined either as file-drawer, meaning the repeated analysis of new datasets, or p-hacking, meaning the inclusion of covariates in order to obtain a significant result. In addition, the underlying effect (β = 0, 0.5, 1, 1.5), effect heterogeneity, the number of observations in the simulated primary studies (N = 100, 500), and the number of observations for the publication bias tests (K = 100, 1,000) were varied. Results All tests evaluated were able to identify publication bias both in the file-drawer and p-hacking condition. The false positive rates were, with the exception of the 15%- and 20%-caliper test, unbiased. The FAT had the largest statistical power in the file-drawer conditions, whereas under p-hacking the TES was, except under effect heterogeneity, slightly better. The caliper tests were, however, inferior to the other tests under effect homogeneity and had a decent statistical power only in conditions with 1,000 primary studies. Discussion The FAT is recommended as a test for publication bias in standard meta-analyses with no or only small effect heterogeneity. If two-sided publication bias is suspected as well as under p-hacking the TES is the first alternative to the FAT. The 5%-caliper test is recommended under conditions of effect heterogeneity and a large number of primary studies, which may be found if publication bias is examined in a
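
    For reference, a minimal sketch of the funnel-plot asymmetry test (FAT) in its usual Egger regression form, applied to hypothetical per-study effects and standard errors:

        import numpy as np
        import statsmodels.api as sm

        def egger_fat(effects, ses):
            # regress standardized effects on precision; a nonzero intercept
            # signals small-study effects / funnel plot asymmetry
            z = np.asarray(effects) / np.asarray(ses)
            X = sm.add_constant(1.0 / np.asarray(ses))
            fit = sm.OLS(z, X).fit()
            return fit.params[0], fit.pvalues[0]   # intercept and its p-value

        rng = np.random.default_rng(0)
        ses = rng.uniform(0.05, 0.5, size=100)
        effects = rng.normal(0.3, ses)             # unbiased studies, true effect 0.3
        print(egger_fat(effects, ses))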

  20. Study design and statistical analysis of data in human population studies with the micronucleus assay.

    Science.gov (United States)

    Ceppi, Marcello; Gallo, Fabio; Bonassi, Stefano

    2011-01-01

    The most common study design performed in population studies based on the micronucleus (MN) assay is the cross-sectional study, which is largely performed to evaluate the DNA damaging effects of exposure to genotoxic agents in the workplace, in the environment, as well as from diet or lifestyle factors. Sample size is still a critical issue in the design of MN studies since most recent studies considering gene-environment interaction often require a sample size of several hundred subjects, which is in many cases difficult to achieve. Uncontrolled confounding is another major threat to the validity of causal inference. The most popular confounders considered in population studies using MN are age, gender and smoking habit. Extensive attention is given to the assessment of effect modification, given the increasing inclusion of biomarkers of genetic susceptibility in the study design. Selected issues concerning the statistical treatment of data have been addressed in this mini-review, starting from data description, which is a critical step of statistical analysis, since it makes it possible to detect errors in the dataset to be analysed and to check the validity of assumptions required for more complex analyses. Basic issues dealing with statistical analysis of biomarkers are extensively evaluated, including methods to explore the dose-response relationship among two continuous variables and inferential analysis. A critical approach to the use of parametric and non-parametric methods is presented, before addressing the issue of the most suitable multivariate models to fit MN data. In the last decade, the quality of statistical analysis of MN data has certainly evolved, although even nowadays only a small number of studies apply the Poisson model, which is the most suitable method for the analysis of MN data.
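
    A minimal sketch of the Poisson model the authors recommend, fitted to hypothetical MN data with typical confounders and the number of scored cells as an offset:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.DataFrame({
            "mn":     [3, 5, 2, 8, 6, 1, 9, 4],        # micronuclei observed (hypothetical)
            "cells":  [1000] * 8,                       # cells scored per subject
            "age":    [25, 40, 31, 55, 47, 22, 60, 38],
            "smoker": [0, 1, 0, 1, 1, 0, 1, 0],
        })
        fit = smf.glm("mn ~ age + smoker", data=df,
                      family=sm.families.Poisson(),
                      offset=np.log(df["cells"])).fit()
        print(fit.summary())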

  1. A Statistical Primer: Understanding Descriptive and Inferential Statistics

    OpenAIRE

    Gillian Byrne

    2007-01-01

    As libraries and librarians move more towards evidence‐based decision making, the data being generated in libraries is growing. Understanding the basics of statistical analysis is crucial for evidence‐based practice (EBP), in order to correctly design and analyze research as well as to evaluate the research of others. This article covers the fundamentals of descriptive and inferential statistics, from hypothesis construction to sampling to common statistical techniques including chi‐square, co...

  2. Honeycomb technology materials, design, manufacturing, applications and testing

    CERN Document Server

    Bitzer, Tom

    1997-01-01

    Honeycomb Technology is a guide to honeycomb cores and honeycomb sandwich panels, from the manufacturing methods by which they are produced, to the different types of design, applications for usage and methods of testing the materials. It explains the different types of honeycomb cores available and provides tabulated data of their properties. The author has been involved in the testing and design of honeycomb cores and sandwich panels for nearly 30 years. Honeycomb Technology reflects this by emphasizing a 'hands-on' approach and discusses procedures for designing sandwich panels, explaining the necessary equations. Also included is a section on how to design honeycomb energy absorbers and one full chapter discussing honeycomb core and sandwich panel testing. Honeycomb Technology will be of interest to engineers in the aircraft, aerospace and building industries. It will also be of great use to engineering students interested in basic sandwich panel design.

  3. Testing a statistical method of global mean paleotemperature estimations in a long climate simulation

    Energy Technology Data Exchange (ETDEWEB)

    Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2001-07-01

    Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level-pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods the global mean temperature of the last 600 years has been recently estimated. In this work this method of reconstruction is tested using data from a very long simulation with a climate model. This testing makes it possible to estimate the errors of the estimations as a function of the number of proxy data and the time scale at which the estimations are probably reliable. (orig.)

  4. Pivotal statistics for testing subsets of structural parameters in the IV Regression Model

    NARCIS (Netherlands)

    Kleibergen, F.R.

    2000-01-01

    We construct a novel statistic to test hypotheses on subsets of the structural parameters in an Instrumental Variables (IV) regression model. We derive the chi-squared limiting distribution of the statistic and show that it has a degrees-of-freedom parameter that is equal to the number of structural

  5. Design and test of a high pressure centrifugal compressor

    International Nuclear Information System (INIS)

    Choi, Jae Ho; Han, Chak Heui; Paeng, Ki Seok; Chen, Seung Bae; Kim, Yong Ryun

    2005-01-01

    This paper presents the aerodynamic design, flow analysis and performance testing of a pressure ratio 4:1 centrifugal compressor for a gas turbine engine. The compressor is made up of a centrifugal impeller and a two-stage diffuser consisting of radial and axial types. The impeller has a 45 degree backswept angle, and the design running tip clearance is 5% of the impeller exit height. Three-dimensional numerical analysis is performed to analyze the flows in the impeller, diffuser and deswirler, considering the impeller tip clearance. A test module and rig facilities for the compressor stage performance test are designed and fabricated. The overall compressor stage performance as well as the static pressure fields on the impeller and diffuser are measured. Two diffusers of wedge and airfoil types are tested with the impeller. The calculation and test results show that the flow fields downstream of the deswirler at the design and off-design points are highly nonuniform, and that the airfoil diffuser has better aerodynamic characteristics than the wedge diffuser

  6. Statistical Diversions

    Science.gov (United States)

    Petocz, Peter; Sowey, Eric

    2008-01-01

    In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…

  7. Heteroscedastic Test Statistics for One-Way Analysis of Variance: The Trimmed Means and Hall's Transformation Conjunction

    Science.gov (United States)

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2005-01-01

    To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…
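
    The building blocks of the proposed statistic are readily illustrated (a sketch combining trimmed means with a Welch-type heteroscedastic test on hypothetical heavy-tailed samples; Hall's transformation itself is omitted):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        a = rng.standard_t(df=3, size=40) + 0.5    # nonnormal, shifted group
        b = rng.standard_t(df=3, size=35)
        print(stats.trim_mean(a, 0.2), stats.trim_mean(b, 0.2))   # 20% trimmed means
        print(stats.ttest_ind(a, b, equal_var=False))  # Welch: unequal variances allowed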

  8. Statistical Methods for Environmental Pollution Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, Richard O. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    1987-01-01

    The application of statistics to environmental pollution monitoring studies requires a knowledge of statistical analysis methods particularly well suited to pollution data. This book fills that need by providing sampling plans, statistical tests, parameter estimation techniques, and references to pertinent publications. Most of the statistical techniques are relatively simple, and examples, exercises, and case studies are provided to illustrate procedures. The book is logically divided into three parts. Chapters 1, 2, and 3 are introductory chapters. Chapters 4 through 10 discuss field sampling designs and Chapters 11 through 18 deal with a broad range of statistical analysis procedures. Some statistical techniques given here are not commonly seen in statistics books. For example, see methods for handling correlated data (Sections 4.5 and 11.12), for detecting hot spots (Chapter 10), and for estimating a confidence interval for the mean of a lognormal distribution (Section 13.2). Also, Appendix B lists a computer code that estimates and tests for trends over time at one or more monitoring stations using nonparametric methods (Chapters 16 and 17). Unfortunately, some important topics could not be included because of their complexity and the need to limit the length of the book. For example, only brief mention could be made of time series analysis using Box-Jenkins methods and of kriging techniques for estimating spatial and spatial-time patterns of pollution, although multiple references on these topics are provided. Also, no discussion of methods for assessing risks from environmental pollution could be included.
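
    As an example of the nonparametric trend tests covered in Chapters 16 and 17, a minimal Mann-Kendall sketch (no-ties variance formula; data hypothetical):

        import numpy as np
        from scipy import stats

        def mann_kendall(x):
            x = np.asarray(x, dtype=float)
            n = len(x)
            s = sum(np.sign(x[j] - x[i])
                    for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S, no ties
            z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)
            return s, 2 * stats.norm.sf(abs(z))        # two-sided p-value

        print(mann_kendall([2.1, 2.3, 2.2, 2.6, 2.8, 3.0, 2.9, 3.3]))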

  9. E-4 Test Facility Design Status

    Science.gov (United States)

    Ryan, Harry; Canady, Randy; Sewell, Dale; Rahman, Shamim; Gilbrech, Rick

    2001-01-01

    Combined-cycle propulsion technology is a strong candidate for meeting NASA space transportation goals. Extensive ground testing of integrated air-breathing/rocket systems (e.g., components, subsystems and engine systems) across all propulsion operational modes (e.g., ramjet, scramjet) will be needed to demonstrate this propulsion technology. Ground testing will occur at various test centers based on each center's expertise. Testing at the NASA John C. Stennis Space Center will be primarily concentrated on combined-cycle power pack and engine systems at sea-level conditions at a dedicated test facility, E-4. This paper highlights the status of the SSC E-4 Test Facility design.

  10. Designing a Software Test Automation Framework

    Directory of Open Access Journals (Sweden)

    Sabina AMARICAI

    2014-01-01

    Full Text Available Testing is an art and science that should ultimately lead to lower cost businesses through increasing control and reducing risk. Testing specialists should thoroughly understand the system or application from both the technical and the business perspective, and then design, build and implement the minimum-cost, maximum-coverage validation framework. Test Automation is an important ingredient for testing large scale applications. In this paper we discuss several test automation frameworks, their advantages and disadvantages. We also propose a custom automation framework model that is suited for applications with very complex business requirements and numerous interfaces.

  11. Mathematics and Statistics Research Department progress report for period ending June 30, 1976

    International Nuclear Information System (INIS)

    Gosslee, D.G.; Shelton, B.K.; Ward, R.C.; Wilson, D.G.

    1976-10-01

    Brief summaries of work done in mathematics and related fields are presented. Research in mathematics and statistics concerned statistical estimation, statistical testing, experiment design, probability, continuum mechanics, functional integration, matrices and other operators, and mathematical software. More applied studies were conducted in the areas of analytical chemistry, biological research, chemistry and physics research, energy research, environmental research, health physics research, materials research, reactor and thermonuclear research, sampling inspection, quality control, and life testing, and uranium resource evaluation research. Additional sections deal with educational activities, presentation of research results, and professional activities. 7 figures, 9 tables

  12. Reliability assessment for safety critical systems by statistical random testing

    International Nuclear Information System (INIS)

    Mills, S.E.

    1995-11-01

    In this report we present an overview of reliability assessment for software and focus on some basic aspects of assessing reliability for safety critical systems by statistical random testing. We also discuss possible deviations from some essential assumptions on which the general methodology is based. These deviations appear quite likely in practical applications. We present and discuss possible remedies and adjustments and then apply this methodology to a portion of the SDS1 software. We also indicate shortcomings of the methodology and possible avenues to follow to address these problems. (author). 128 refs., 11 tabs., 31 figs

  13. Reliability assessment for safety critical systems by statistical random testing

    Energy Technology Data Exchange (ETDEWEB)

    Mills, S E [Carleton Univ., Ottawa, ON (Canada). Statistical Consulting Centre

    1995-11-01

    In this report we present an overview of reliability assessment for software and focus on some basic aspects of assessing reliability for safety critical systems by statistical random testing. We also discuss possible deviations from some essential assumptions on which the general methodology is based. These deviations appear quite likely in practical applications. We present and discuss possible remedies and adjustments and then apply this methodology to a portion of the SDS1 software. We also indicate shortcomings of the methodology and possible avenues to follow to address these problems. (author). 128 refs., 11 tabs., 31 figs.
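
    A basic quantity in statistical random testing is the number of failure-free tests needed to support a reliability claim; under the standard binomial argument (a sketch, not the report's full methodology):

        import math

        def tests_required(p_max, confidence=0.99):
            # failure-free random tests needed to conclude, at the given
            # confidence, that the failure probability per demand is below p_max
            return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_max))

        print(tests_required(1e-3))   # roughly 4.6e3 failure-free tests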

  14. Testing for Statistical Discrimination based on Gender

    OpenAIRE

    Lesner, Rune Vammen

    2016-01-01

    This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure...

  15. Sources of Safety Data and Statistical Strategies for Design and Analysis: Clinical Trials.

    Science.gov (United States)

    Zink, Richard C; Marchenko, Olga; Sanchez-Kam, Matilde; Ma, Haijun; Jiang, Qi

    2018-03-01

    There has been an increased emphasis on the proactive and comprehensive evaluation of safety endpoints to ensure patient well-being throughout the medical product life cycle. In fact, depending on the severity of the underlying disease, it is important to plan for a comprehensive safety evaluation at the start of any development program. Statisticians should be intimately involved in this process and contribute their expertise to study design, safety data collection, analysis, reporting (including data visualization), and interpretation. In this manuscript, we review the challenges associated with the analysis of safety endpoints and describe the safety data that are available to influence the design and analysis of premarket clinical trials. We share our recommendations for the statistical and graphical methodologies necessary to appropriately analyze, report, and interpret safety outcomes, and we discuss the advantages and disadvantages of safety data obtained from clinical trials compared to other sources. Clinical trials are an important source of safety data that contribute to the totality of safety information available to generate evidence for regulators, sponsors, payers, physicians, and patients. This work is a result of the efforts of the American Statistical Association Biopharmaceutical Section Safety Working Group.

  16. Organic biowastes blend selection for composting industrial eggshell by-product: experimental and statistical mixture design.

    Science.gov (United States)

    Soares, Micaela A R; Andrade, Sandra R; Martins, Rui C; Quina, Margarida J; Quinta-Ferreira, Rosa M

    2012-01-01

    Composting is one of the technologies recommended for pre-treating industrial eggshells (ES) before their application in soils, for calcium recycling. However, due to the high inorganic content of ES, a mixture of biodegradable materials is required to assure a successful procedure. In this study, an adequate organic blend composition containing potato peel (PP), grass clippings (GC) and wheat straw (WS) was determined by applying the simplex-centroid mixture design method to achieve a desired moisture content, carbon:nitrogen ratio and free air space for effective composting of ES. A blend of 56% PP, 37% GC and 7% WS was selected and tested in a self-heating reactor, where 10% (w/w) of ES was incorporated. After 29 days of reactor operation, a dry matter reduction of 46% was achieved and thermophilic temperatures were maintained during 15 days, indicating that the blend selected by the statistical approach was adequate for composting of ES.
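
    The simplex-centroid design used here has a simple combinatorial structure: one run at the centroid of every non-empty subset of the q mixture components. A generic generator (a sketch; it does not reproduce the published blend proportions above):

        import numpy as np
        from itertools import combinations

        def simplex_centroid(q=3):
            # 2**q - 1 runs: pure components, binary blends, ..., overall centroid
            runs = []
            for r in range(1, q + 1):
                for subset in combinations(range(q), r):
                    point = np.zeros(q)
                    point[list(subset)] = 1.0 / r
                    runs.append(point)
            return np.array(runs)

        print(simplex_centroid(3))   # rows sum to 1: e.g. PP/GC/WS proportions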

  17. Statistical methods in epidemiology. VII. An overview of the chi2 test for 2 x 2 contingency table analysis.

    Science.gov (United States)

    Rigby, A S

    2001-11-10

    The odds ratio is an appropriate method of analysis for data in 2 x 2 contingency tables. However, other methods of analysis exist. One such method is based on the chi2 test of goodness-of-fit. Key players in the development of statistical theory include Pearson, Fisher and Yates. Data are presented in the form of 2 x 2 contingency tables and a method of analysis based on the chi2 test is introduced. There are many variations of the basic test statistic, one of which is the chi2 test with Yates' continuity correction. The usefulness (or not) of Yates' continuity correction is discussed. Problems of interpretation when the method is applied to k x m tables are highlighted. Some properties of the chi2 test are illustrated by taking examples from the author's teaching experiences. Journal editors should be encouraged to give both observed and expected cell frequencies so that better information comes out of the chi2 test statistic.
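
    A minimal sketch of the basic test and its Yates-corrected variant on a hypothetical 2 x 2 table, printing the expected frequencies as the author recommends:

        import numpy as np
        from scipy.stats import chi2_contingency

        table = np.array([[20, 10],
                          [12, 18]])               # hypothetical 2 x 2 counts
        for correction in (False, True):
            chi2, p, dof, expected = chi2_contingency(table, correction=correction)
            print(f"Yates correction={correction}: chi2={chi2:.3f}, p={p:.4f}")
        print(expected)                             # expected cell frequencies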

  18. Design of a materials testing experiment for the INTOR

    International Nuclear Information System (INIS)

    Vogel, M.A.; Opperman, E.K.

    1981-01-01

    The United States, Japan, USSR and the European community are jointly participating in the design of an International Tokamak Reactor called INTOR. In support of the US contribution to the INTOR design, the features of an experiment for bulk neutron irradiation damage studies were developed. It is anticipated that materials testing will be an important part of the programmatic mission of INTOR, and consequently the requirements for materials testing in INTOR must be identified early in the reactor design to ensure compatibility. The design features of the experiment, called a Channel Test, are given in this paper. The major components of the channel test are the water-cooled heat sink (channel module) and the specimen capsule. The temperature within each of the 153 specimen capsules is predetermined by engineering the thermal barrier between the specimen capsule and heat sink. Individual capsules can be independently accessed and are designed to operate at a predetermined temperature within the range of 50 to 700°C. The total irradiation volume within a single channel test is 45 liters. Features of the channel test that result in experimental versatility and simplified remote access and handling are discussed.

  19. Optical design and testing: introduction.

    Science.gov (United States)

    Liang, Chao-Wen; Koshel, John; Sasian, Jose; Breault, Robert; Wang, Yongtian; Fang, Yi Chin

    2014-10-10

    Optical design and testing has numerous applications in industrial, military, consumer, and medical settings. Assembling a complete imaging or nonimage optical system may require the integration of optics, mechatronics, lighting technology, optimization, ray tracing, aberration analysis, image processing, tolerance compensation, and display rendering. This issue features original research ranging from the optical design of image and nonimage optical stimuli for human perception, optics applications, bio-optics applications, 3D display, solar energy system, opto-mechatronics to novel imaging or nonimage modalities in visible and infrared spectral imaging, modulation transfer function measurement, and innovative interferometry.

  20. Statistical Inference at Work: Statistical Process Control as an Example

    Science.gov (United States)

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
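
    A minimal sketch of the SPC reasoning discussed (a Shewhart X-bar chart with conventional 3-sigma limits; the data are hypothetical, and the chart sigma is estimated directly from the subgroup means rather than from within-subgroup ranges, a simplification):

        import numpy as np

        rng = np.random.default_rng(0)
        samples = rng.normal(10.0, 0.5, size=(25, 5))  # 25 subgroups of 5 measurements
        means = samples.mean(axis=1)
        center = means.mean()
        sigma_hat = means.std(ddof=1)                   # simplified chart sigma
        lcl, ucl = center - 3 * sigma_hat, center + 3 * sigma_hat
        print(f"LCL={lcl:.3f}, CL={center:.3f}, UCL={ucl:.3f}")
        print("out-of-control subgroups:", np.where((means < lcl) | (means > ucl))[0])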

  1. The test-negative design for estimating influenza vaccine effectiveness.

    Science.gov (United States)

    Jackson, Michael L; Nelson, Jennifer C

    2013-04-19

    The test-negative design has emerged in recent years as the preferred method for estimating influenza vaccine effectiveness (VE) in observational studies. However, the methodologic basis of this design has not been formally developed. In this paper we develop the rationale and underlying assumptions of the test-negative study. Under the test-negative design for influenza VE, study subjects are all persons who seek care for an acute respiratory illness (ARI). All subjects are tested for influenza infection. Influenza VE is estimated from the ratio of the odds of vaccination among subjects testing positive for influenza to the odds of vaccination among subjects testing negative. With the assumptions that (a) the distribution of non-influenza causes of ARI does not vary by influenza vaccination status, and (b) VE does not vary by health care-seeking behavior, the VE estimate from the sample can be generalized to the full source population that gave rise to the study sample. Based on our derivation of this design, we show that test-negative studies of influenza VE can produce biased VE estimates if they include persons seeking care for ARI when influenza is not circulating or do not adjust for calendar time. The test-negative design is less susceptible to bias due to misclassification of infection and to confounding by health care-seeking behavior, relative to traditional case-control or cohort studies. The cost of the test-negative design is the additional, difficult-to-test assumptions that incidence of non-influenza respiratory infections is similar between vaccinated and unvaccinated groups within any stratum of care-seeking behavior, and that influenza VE does not vary across care-seeking strata. Copyright © 2013 Elsevier Ltd. All rights reserved.
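
    The VE computation described reduces to one odds ratio; a minimal sketch with hypothetical counts:

        # hypothetical test-negative study: ARI patients tested for influenza
        vacc_pos, vacc_neg = 40, 160        # vaccinated: test-positive / test-negative
        unvacc_pos, unvacc_neg = 100, 150   # unvaccinated
        odds_ratio = (vacc_pos / vacc_neg) / (unvacc_pos / unvacc_neg)
        ve = 1.0 - odds_ratio               # VE = 1 - OR of vaccination (pos vs neg)
        print(f"OR = {odds_ratio:.2f}, VE = {ve:.0%}")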

  2. Statistical analysis of weld bead geometry in Ti6Al4V laser cladding. Comparison of central composite design and five step full factorial test plan

    Energy Technology Data Exchange (ETDEWEB)

    Marko, Angelina [Fraunhofer Institute for Production Systems and Design Technology IPK, Berlin (Germany); Graf, Benjamin; Rethmeier, Michael [Fraunhofer Institute for Production Systems and Design Technology IPK, Berlin (Germany). Dept. for Joining and Coating Technology

    2017-11-01

    The process of laser cladding has become more important in recent years because of its broad application for cladding, repair and additive manufacturing. In the field of mechanical engineering, one use is the repair of turbine blades. For high quality and reliability of the repaired components, it is necessary to adjust the weld bead geometry to the specific repair task. The bead geometry influences the metallurgical bonding and the degree of dilution as well as the formation of defects like pores or cracks. Therefore, it is important to know the effects of the different parameters on the weld bead. A valuable tool to meet this industrial challenge is the design of experiments (DoE). In this context, the user can choose among a large number of test plans. A greater gain in information is expected from a larger test range. In order to confirm this expectation, a five-step full factorial test plan is compared to a central composite design in this paper. Moreover, the limits of the experimental range are indicated and restrictions can be derived. As the results show, the essential effects are detected with a full factorial test plan as well as with a central composite design; only the effect strength could not always be specified unambiguously. On this account, and in consideration of cost efficiency, the use of the central composite design is recommended in industrial applications.

  3. Statistical analysis of weld bead geometry in Ti6Al4V laser cladding. Comparison of central composite design and five step full factorial test plan

    International Nuclear Information System (INIS)

    Marko, Angelina; Graf, Benjamin; Rethmeier, Michael

    2017-01-01

    The process of laser cladding has become more important in recent years because of its broad application to cladding, repair and additive manufacturing. In the field of mechanical engineering, one use is the repair of turbine blades. For high quality and reliability of the repaired components, it is necessary to adjust the weld bead geometry to the specific repair task. The bead geometry influences the metallurgical bonding and the degree of dilution as well as the formation of defects like pores or cracks. Therefore, it is important to know the effects of the different parameters on the weld bead. A valuable tool to meet this industrial challenge is the design of experiments (DoE). In this context, the user can choose between a large number of test plans, and a larger test range is expected to yield more information. To assess this expectation, a five-step full factorial test plan is compared to a central composite design in this paper. Moreover, the limits of the experimental range are indicated and restrictions can be derived. As the results show, the essential effects are detected with a full factorial test plan as well as with a central composite design; only the effect strength could not always be specified unambiguously. On this account, and in consideration of cost efficiency, the use of the central composite design is recommended for industrial applications.

  4. Statistically designed optimisation of enzyme catalysed starch removal from potato pulp

    DEFF Research Database (Denmark)

    Thomassen, Lise Vestergaard; Meyer, Anne S.

    2010-01-01

    to obtain dietary fibers is usually accomplished via a three-step, sequential enzymatic treatment procedure using a heat-stable alpha-amylase, protease, and amyloglucosidase. Statistically designed experiments were performed to investigate the influence of enzyme dose, amount of dry matter, incubation time, and temperature on the amount of starch released from the potato pulp. The data demonstrated that all the starch could be released from potato pulp in one step when 8% (w/w) dry potato pulp was treated with 0.2% (v/w) (enzyme/substrate (E/S)) of a thermostable Bacillus licheniformis alpha-amylase (Termamyl(R) SC...

  5. Design and simulations for RFPI system test jig

    International Nuclear Information System (INIS)

    Keshwani, Rajesh; Khole, Shailesh; Sujo, C.I.; Shukla, Hitesh; Afaash, M.; Bharade, Sandeep; Joshi, Gopal

    2015-01-01

    An RFPI system for condition monitoring and protection of the RF systems of LEHIPA has been designed. The intention of this system is to protect and monitor different high-power RF components. The system consists of a variety of analog and digital cards to process, acquire and monitor signals from RF sensors, photomultiplier tubes (PMT), field emission probes, photo sensors, etc. The system can also accept and process digital inputs or contacts and analog input signals from other subsystems, indicating their status. It comprises analog signal processing electronics, condition monitoring, data acquisition and fault reporting features. Since the RFPI system must process signals and take action within a prescribed time limit, testing the system is a critical issue. In order to carry out stand-alone field testing of such an elaborate electronic system, a variety of carefully designed electronic test circuits with suitable rangeability is required. These include fast current sources, fast amplifiers, fast rectifiers, etc. This paper elaborates on the specifications, design approach, circuit design and simulations for the various circuits used in the test jig.

  6. Study on the NaOH/metakaolin ratio and crystallization time for zeolite A synthesis from kaolin using statistical design

    Energy Technology Data Exchange (ETDEWEB)

    Silva Filho, Severino Higino da; Bieseki, Lindiane; Pergher, Sibele Berenice Castella, E-mail: sibelepergher@gmail.com [Universidade Federal do Rio Grande do Norte (LABPEMOL/UFRN), Natal, RN (Brazil). Lab. de Peneiras Moleculares; Maia, Ana Aurea B.; Angelica, Romulo Simoes [Universidade Federal do Para (UFPA), Belem, PA (Brazil); Treichel, Helen [Universidade Federal da Fronteira Sul (UFFS), Erechim, RS (Brazil)

    2017-05-15

    The NaOH/metakaolin ratio and crystallization time were studied for the synthesis of zeolite NaA from a sample of kaolin from a Capim mine. The tests were carried out using a statistical design with axial points and replication of the central point. The samples obtained were characterized by X-ray diffraction (XRD), scanning electron microscopy and chemical analysis using an electron probe microanalyzer (EPMA). The results showed that there is a relationship between the amount of NaOH added and the crystallization time. The tests carried out using the lowest NaOH/metakaolin ratio (0.5) and the shortest time (4 h) produced a non-crystalline material. On the other hand, increasing the NaOH/metakaolin ratio and the crystallization time led to the formation of a NaA phase with a high degree of structural order, but with a sodalite phase present as an impurity. (author)

  7. Investigating salt frost scaling by using statistical methods

    DEFF Research Database (Denmark)

    Hasholt, Marianne Tange; Clemmensen, Line Katrine Harder

    2010-01-01

    A large data set comprising data for 118 concrete mixes on mix design, air void structure, and the outcome of freeze/thaw testing according to SS 13 72 44 has been analysed by use of statistical methods. The results show that with regard to mix composition, the most important parameter...

  8. Tests and Confidence Intervals for an Extended Variance Component Using the Modified Likelihood Ratio Statistic

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet

    2005-01-01

    The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.

  9. Statistical power analysis a simple and general model for traditional and modern hypothesis tests

    CERN Document Server

    Murphy, Kevin R; Wolach, Allen

    2014-01-01

    Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and g

  10. A new statistic for the analysis of circular data in gamma-ray astronomy

    Science.gov (United States)

    Protheroe, R. J.

    1985-01-01

    A new statistic is proposed for the analysis of circular data. The statistic is designed specifically for situations where a test of uniformity is required which is powerful against alternatives in which a small fraction of the observations is grouped in a small range of directions, or phases.
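
    As context (not part of the record), the classical baseline for such uniformity tests is the Rayleigh test, which is powerful against a single broad concentration of phases; statistics like the one proposed here target the complementary case of a small, tightly grouped fraction. A minimal sketch with invented phase data:

        import numpy as np

        def rayleigh_test(angles):
            n = len(angles)
            r_bar = abs(np.exp(1j * np.asarray(angles)).mean())  # mean resultant length
            z = n * r_bar ** 2
            return z, np.exp(-z)          # first-order approximation to the p-value

        rng = np.random.default_rng(6)
        phases = rng.uniform(0.0, 2.0 * np.pi, 200)   # uniform phases: large p expected
        print(rayleigh_test(phases))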

  11. Statistical Analysis of Compressive and Flexural Test Results on the Sustainable Adobe Reinforced with Steel Wire Mesh

    Science.gov (United States)

    Jokhio, Gul A.; Syed Mohsin, Sharifah M.; Gul, Yasmeen

    2018-04-01

    It has been established that adobe, in addition to being sustainable and economical, provides better indoor air quality than modern synthetic materials, without extensive energy expenditure. The material, however, suffers from weak structural behaviour when subjected to adverse loading conditions. A wide range of mechanical properties has been reported in the literature, owing to a lack of research and standardization. The present paper presents the statistical analysis of the results that were obtained through compressive and flexural tests on adobe samples. Adobe specimens with and without wire mesh reinforcement were tested and the results were reported. It has been found that the compressive strength of adobe increases by about 43% after adding a single layer of wire mesh reinforcement, and this increase is statistically significant. The flexural response of adobe also shows improvement with the addition of wire mesh reinforcement; however, the statistical significance of that improvement cannot be established.
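
    As a hedged illustration of the kind of significance check reported above, a two-sample t-test can compare plain and mesh-reinforced specimens; the strength values below are invented, not the study's data:

        from scipy import stats

        plain      = [1.21, 1.35, 1.18, 1.42, 1.30, 1.25]   # hypothetical strengths, MPa
        reinforced = [1.78, 1.95, 1.70, 1.88, 1.82, 1.91]   # hypothetical strengths, MPa

        t, p = stats.ttest_ind(plain, reinforced, equal_var=False)   # Welch's t-test
        gain = sum(reinforced) / sum(plain) - 1                      # relative mean gain
        print(f"t = {t:.2f}, p = {p:.4f}, mean gain = {100 * gain:.0f}%")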

  12. Design and analysis of experiments with SAS

    CERN Document Server

    Lawson, John

    2010-01-01

    Introduction: Statistics and Data Collection; Beginnings of Statistically Planned Experiments; Definitions and Preliminaries; Purposes of Experimental Design; Types of Experimental Designs; Planning Experiments; Performing the Experiments; Use of SAS Software. Completely Randomized Designs with One Factor: Introduction; Replication and Randomization; A Historical Example; Linear Model for Completely Randomized Design (CRD); Verifying Assumptions of the Linear Model; Analysis Strategies When Assumptions Are Violated; Determining the Number of Replicates; Comparison of Treatments after the F-Test. Factorial Designs

  13. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

    A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform also requires that researchers interpret CIs without recourse to NHST.

  14. Mathematics and Statistics Research Department progress report for period ending June 30, 1975

    International Nuclear Information System (INIS)

    Coveyou, R.R.; Gosslee, D.G.; Wilson, D.G.

    1975-10-01

    Brief reports on mathematical and statistical research and consulting and collaboration are given for the following areas: statistical estimation, statistical testing, experimental design, probability, energy systems modeling, continuum mechanics, matrices and other operators, numerical analysis, biomathematics and biostatistics, analytical chemistry, biology and medicine, health physics research, management, materials research, physics research, and programming. Information on seminars, publications, etc., is also included. (10 figures, 4 tables)

  15. Design, modeling and testing of data converters

    CERN Document Server

    Kiaei, Sayfe; Xu, Fang

    2014-01-01

    This book presents a scientific discussion of state-of-the-art techniques and designs for the modeling, testing and performance analysis of data converters. The focus is on sustainable data conversion. Sustainability has become a public issue that industries and users cannot ignore. Devising environmentally friendly solutions for the design, modeling and testing of data converters is nowadays a requirement that researchers and practitioners must consider in their activities. This book presents the outcome of the IWADC workshop 2011, held in Orvieto, Italy.

  16. Why the null matters: statistical tests, random walks and evolution.

    Science.gov (United States)

    Sheets, H D; Mitchell, C E

    2001-01-01

    A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test', in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer-generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise-immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests shows that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
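
    The runs test mentioned above is easy to sketch. The following illustration (assumptions and data are ours, not the paper's) applies a Wald-Wolfowitz runs test to the step directions of two simulated series: a random walk, for which the z-score should be near zero, and a mean-reverting ("stabilizing") series, whose alternating steps produce an excess of runs:

        import numpy as np
        from math import erf, sqrt

        def runs_test(x):
            steps = np.sign(np.diff(x))
            steps = steps[steps != 0]                 # ignore zero-length steps
            n_pos = int((steps > 0).sum())
            n_neg = int((steps < 0).sum())
            n = n_pos + n_neg
            runs = 1 + int((steps[1:] != steps[:-1]).sum())
            mu = 2.0 * n_pos * n_neg / n + 1.0        # expected runs under randomness
            var = (mu - 1.0) * (mu - 2.0) / (n - 1.0)
            z = (runs - mu) / sqrt(var)
            p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))  # two-sided
            return z, p

        rng = np.random.default_rng(0)
        walk = np.cumsum(rng.normal(size=300))        # random walk
        revert = np.zeros(300)                        # mean-reverting series
        for t in range(1, 300):
            revert[t] = 0.3 * revert[t - 1] + rng.normal()
        print(runs_test(walk), runs_test(revert))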

  17. Assessment of noise in a digital image using the join-count statistic and the Moran test

    International Nuclear Information System (INIS)

    Kehshih Chuang; Huang, H.K.

    1992-01-01

    It is assumed that data bits of a pixel in digital images can be divided into signal and noise bits. The signal bits occupy the most significant part of the pixel. The signal parts of each pixel are correlated while the noise parts are uncorrelated. Two statistical methods, the Moran test and the join-count statistic, are used to examine the noise parts. Images from computerized tomography, magnetic resonance and computed radiography are used for the evaluation of the noise bits. A residual image is formed by subtracting the original image from its smoothed version. The noise level in the residual image is then identical to that in the original image. Both statistical tests are then performed on the bit planes of the residual image. Results show that most digital images contain only 8-9 bits of correlated information. Both methods are easy to implement and fast to perform. (author)
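
    The Moran statistic used here can be sketched compactly for a 2-D array such as a bit plane of the residual image. The implementation below (our illustration, with rook adjacency and invented data) returns values near zero for uncorrelated noise bits and clearly positive values for correlated structure:

        import numpy as np

        def morans_i(img):
            z = img.astype(float) - img.mean()        # centred pixel values
            num = 2.0 * ((z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum())
            w = 2.0 * (z[:, :-1].size + z[:-1, :].size)   # number of neighbour pairs
            return (z.size / w) * num / (z ** 2).sum()

        rng = np.random.default_rng(1)
        bits = rng.integers(0, 2, size=(64, 64)).astype(float)    # uncorrelated bits
        smooth = (bits + np.roll(bits, 1, 0) + np.roll(bits, 1, 1)) / 3.0  # correlated
        print(morans_i(bits), morans_i(smooth))       # near 0 vs. clearly positive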

  18. An Evaluation of the Use of Statistical Procedures in Soil Science

    Directory of Open Access Journals (Sweden)

    Laene de Fátima Tavares

    2016-01-01

    Experimental statistical procedures used in almost all scientific papers are fundamental for clearer interpretation of the results of experiments conducted in agrarian sciences. However, incorrect use of these procedures can lead the researcher to incorrect or incomplete conclusions. Therefore, the aim of this study was to evaluate the characteristics of the experiments and the quality of the use of statistical procedures in soil science in order to promote better use of statistical procedures. For that purpose, 200 articles, published between 2010 and 2014, involving only experimentation and studies by sampling in the soil areas of fertility, chemistry, physics, biology, and use and management, were randomly selected. A questionnaire containing 28 questions was used to assess the characteristics of the experiments, the statistical procedures used, and the quality of selection and use of these procedures. Most of the articles evaluated presented data from studies conducted under field conditions, and 27% of all papers involved studies by sampling. Most studies did not mention testing to verify the normality and homoscedasticity of the data, and most used the Tukey test for mean comparisons. Among studies with a factorial structure of the treatments, many ignored this structure and compared the data as if there were no factorial structure, or performed the decomposition of the interaction without showing or mentioning its significance. Almost none of the papers with split-block factorial designs considered the factorial structure, or they treated it as a split-plot design. Among the articles that performed regression analysis, only a few tested non-polynomial fit models, and none reported verification of the lack of fit in the regressions. The articles evaluated thus reflected poor, and in some cases wrong, practice in experimental design and in the selection of procedures for statistical analysis.

  19. Statistical testing of the full-range leadership theory in nursing.

    Science.gov (United States)

    Kanste, Outi; Kääriäinen, Maria; Kyngäs, Helvi

    2009-12-01

    The aim of this study is to test statistically the structure of the full-range leadership theory in nursing. The data were gathered by postal questionnaires from nurses and nurse leaders working in healthcare organizations in Finland. A follow-up study was performed 1 year later. The sample consisted of 601 nurses and nurse leaders, and the follow-up study had 78 respondents. The theory was tested through structural equation modelling, standard regression analysis and two-way ANOVA. Rewarding transformational leadership seems to promote, and passive laissez-faire leadership to reduce, willingness to exert extra effort, perceptions of leader effectiveness and satisfaction with the leader. Active management-by-exception seems to reduce willingness to exert extra effort and perception of leader effectiveness. Rewarding transformational leadership remained a strong explanatory factor of all outcome variables measured 1 year later. The data supported the main structure of the full-range leadership theory, lending support to the universal nature of the theory.

  20. Exploiting the full power of temporal gene expression profiling through a new statistical test: Application to the analysis of muscular dystrophy data

    Directory of Open Access Journals (Sweden)

    Turk Rolf

    2006-04-01

    Background: The identification of biologically interesting genes in a temporal expression profiling dataset is challenging and complicated by high levels of experimental noise. Most statistical methods used in the literature do not fully exploit the temporal ordering in the dataset and are not suited to the case where temporal profiles are measured for a number of different biological conditions. We present a statistical test that makes explicit use of the temporal order in the data by fitting polynomial functions to the temporal profile of each gene and for each biological condition. A Hotelling T2-statistic is derived to detect the genes for which the parameters of these polynomials are significantly different from each other. Results: We validate the temporal Hotelling T2-test on muscular gene expression data from four mouse strains which were profiled at different ages: dystrophin-, beta-sarcoglycan- and gamma-sarcoglycan-deficient mice, and wild-type mice. The first three are animal models for different muscular dystrophies. Extensive biological validation shows that the method is capable of finding genes with temporal profiles significantly different across the four strains, as well as identifying potential biomarkers for each form of the disease. The added value of the temporal test compared to an identical test which does not make use of temporal ordering is demonstrated via a simulation study, and through confirmation of the expression profiles from selected genes by quantitative PCR experiments. The proposed method maximises the detection of the biologically interesting genes, whilst minimising false detections. Conclusion: The temporal Hotelling T2-test is capable of finding relatively small and robust sets of genes that display different temporal profiles between the conditions of interest. The test is simple, it can be used on gene expression data generated from any experimental design and for any number of conditions, and it
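
    The core of the test can be sketched for two conditions. In this illustration (the data shapes, the polynomial degree and the values are our assumptions, not the paper's exact setup), a polynomial is fitted to each replicate's temporal profile and the two sets of coefficient vectors are compared with a two-sample Hotelling T2 statistic:

        import numpy as np

        def fit_coeffs(times, profiles, degree=2):
            # one polynomial coefficient vector per replicate profile
            return np.array([np.polyfit(times, y, degree) for y in profiles])

        def hotelling_t2(A, B):
            n1, n2 = len(A), len(B)
            d = A.mean(axis=0) - B.mean(axis=0)
            S = ((n1 - 1) * np.cov(A, rowvar=False) +
                 (n2 - 1) * np.cov(B, rowvar=False)) / (n1 + n2 - 2)   # pooled cov
            return (n1 * n2 / (n1 + n2)) * d @ np.linalg.solve(S, d)

        rng = np.random.default_rng(2)
        t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])               # ages, illustrative
        wild = rng.normal(0.0, 0.3, (6, 5)) + 0.10 * t         # near-linear profiles
        mutant = rng.normal(0.0, 0.3, (6, 5)) + 0.05 * t ** 2  # curved profiles
        print(hotelling_t2(fit_coeffs(t, wild), fit_coeffs(t, mutant)))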

  1. Statistical analysis of non-homogeneous Poisson processes. Statistical processing of a particle multidetector

    International Nuclear Information System (INIS)

    Lacombe, J.P.

    1985-12-01

    The first part of this thesis is a statistical study of non-homogeneous and spatial Poisson processes. A Neyman-Pearson type test concerning the intensity measure of these processes is defined. Conditions are given under which the consistency of the test is assured, and others under which the test statistics are asymptotically normal. Then some techniques for the statistical processing of Poisson fields, and their application to the study of a particle multidetector, are given. Quality tests for the device are proposed, together with signal extraction methods.

  2. SMART wind turbine rotor. Design and field test

    Energy Technology Data Exchange (ETDEWEB)

    Berg, Jonathan Charles; Resor, Brian Ray; Paquette, Joshua A.; White, Jonathan Randall

    2014-01-01

    The Wind Energy Technologies department at Sandia National Laboratories has developed and field tested a wind turbine rotor with integrated trailing-edge flaps designed for active control of rotor aerodynamics. The SMART Rotor project was funded by the Wind and Water Power Technologies Office of the U.S. Department of Energy (DOE) and was conducted to demonstrate active rotor control and evaluate simulation tools available for active control research. This report documents the design, fabrication, and testing of the SMART Rotor. This report begins with an overview of active control research at Sandia and the objectives of this project. The SMART blade, based on the DOE / SNL 9-meter CX-100 blade design, is then documented including all modifications necessary to integrate the trailing edge flaps, sensors incorporated into the system, and the fabrication processes that were utilized. Finally the test site and test campaign are described.

  3. A practical model-based statistical approach for generating functional test cases: application in the automotive industry

    OpenAIRE

    Awédikian , Roy; Yannou , Bernard

    2012-01-01

    With the growing complexity of industrial software applications, industry is looking for efficient and practical methods to validate software. This paper develops a model-based statistical testing approach that automatically generates online and offline test cases for embedded software. It discusses an integrated framework that combines solutions for three major software testing research questions: (i) how to select test inputs; (ii) how to predict the expected...

  4. Evaluation of statistical designs in phase I expansion cohorts: the Dana-Farber/Harvard Cancer Center experience.

    Science.gov (United States)

    Dahlberg, Suzanne E; Shapiro, Geoffrey I; Clark, Jeffrey W; Johnson, Bruce E

    2014-07-01

    Phase I trials have traditionally been designed to assess toxicity and establish phase II doses with dose-finding studies and expansion cohorts, but they frequently exceed the traditional sample size to further assess endpoints in specific patient subsets. The scientific objectives of phase I expansion cohorts and their evolving role in the current era of targeted therapies have yet to be systematically examined. Adult therapeutic phase I trials opened within Dana-Farber/Harvard Cancer Center (DF/HCC) from 1988 to 2012 were identified for sample size details. Statistical designs and study objectives of those submitted in 2011 were reviewed for expansion cohort details. Five hundred twenty-two adult therapeutic phase I trials were identified during the 25 years. The average sample size of a phase I study increased from 33.8 patients to 73.1 patients over that time. The proportion of trials with planned enrollment of 50 or fewer patients dropped from 93.0% during the period 1988 to 1992 to 46.0% between 2008 and 2012; at the same time, the proportions of trials enrolling 51 to 100 patients and more than 100 patients increased from 5.3% and 1.8%, respectively, to 40.5% and 13.5% (χ² test, two-sided P < .001). Sixteen of the 60 trials (26.7%) in 2011 enrolled patients to three or more sub-cohorts in the expansion phase. Sixty percent of studies provided no statistical justification of the sample size, although 91.7% of trials stated response as an objective. Our data suggest that phase I studies have changed dramatically in size and scientific scope within the last decade. Additional studies addressing the implications of this trend on research processes, ethical concerns, and resource burden are needed. © The Author 2014. Published by Oxford University Press. All rights reserved.

  5. Tribological behaviour and statistical experimental design of sintered iron-copper based composites

    Science.gov (United States)

    Popescu, Ileana Nicoleta; Ghiţă, Constantin; Bratu, Vasile; Palacios Navarro, Guillermo

    2013-11-01

    Sintered iron-copper based composites for automotive brake pads have a complex composition and should have good physical, mechanical and tribological characteristics. In this paper, we obtained friction composites by the powder metallurgy (P/M) technique and characterized them from microstructural and tribological points of view. The morphology of the raw powders was determined by SEM, and the surfaces of the obtained sintered friction materials were analyzed by ESEM, with EDS elemental and compo-image analyses. One lot of samples was tested on a "pin-on-disc" type wear machine under dry sliding conditions, at applied loads between 3.5 × 10⁻¹ and 11.5 × 10⁻¹ MPa and relative speeds at the braking point between 12.5 and 16.9 m/s, at constant temperature. The other lot of samples was tested on an inertial test stand according to a methodology simulating the real conditions of dry friction, at a contact pressure of 2.5-3 MPa and at 300-1200 rpm. The most important characteristics required of sintered friction materials are a high and stable friction coefficient during braking and, for high durability in service, low wear, high corrosion resistance, high thermal conductivity, mechanical resistance and thermal stability at elevated temperature. Because of the importance of the tribological characteristics (wear rate and friction coefficient) of sintered iron-copper based composites, we predicted the tribological behaviour through statistical analysis. For the first lot of samples, the response variables Yi (the wear rate and friction coefficient) were correlated with x1 and x2 (the coded values of the applied load and of the relative speed at the braking point, respectively) using a linear factorial design approach. We obtained brake friction materials with improved wear resistance characteristics and high, stable friction coefficients. It has been shown, through the experimental data and the obtained linear regression equations, that the wear rate of the sintered composites increases
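
    The linear factorial model referred to above has the form Y = b0 + b1*x1 + b2*x2 + b12*x1*x2 in coded units. A minimal least-squares sketch (with invented wear-rate responses, not the paper's data):

        import numpy as np

        x1 = np.array([-1.0, -1.0, 1.0, 1.0])     # coded applied load
        x2 = np.array([-1.0, 1.0, -1.0, 1.0])     # coded relative speed
        y = np.array([2.1, 2.9, 3.4, 4.8])        # hypothetical wear-rate responses

        X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(dict(zip(["b0", "b1", "b2", "b12"], np.round(b, 3))))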

  6. Solid breeder test blanket module design and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ying, A. [University of California Los Angeles, Los Angeles, CA 90095-1597 (United States)]. E-mail: ying@fusion.ucla.edu; Abdou, M. [University of California Los Angeles, Los Angeles, CA 90095-1597 (United States); Calderoni, P. [University of California Los Angeles, Los Angeles, CA 90095-1597 (United States); Sharafat, S. [University of California Los Angeles, Los Angeles, CA 90095-1597 (United States); Youssef, M. [University of California Los Angeles, Los Angeles, CA 90095-1597 (United States); An, Z. [University of California Los Angeles, Los Angeles, CA 90095-1597 (United States); Abou-Sena, A. [University of California Los Angeles, Los Angeles, CA 90095-1597 (United States); Kim, E. [University of California Los Angeles, Los Angeles, CA 90095-1597 (United States); Reyes, S. [LANL, Livermore, CA (United States); Willms, S. [LANL, Los Alamos, NM (United States); Kurtz, R. [PNNL, Richland, WA (United States)

    2006-02-15

    This paper presents the design and analysis for the US ITER solid breeder blanket test articles. Objectives of solid breeder blanket testing during the first phase of the ITER operation focus on exploration of fusion break-in phenomena and configuration scoping. Specific emphasis is placed on first wall structural response, evaluation of neutronic parameters, assessment of thermomechanical behavior and characterization of tritium release. The tests will be conducted with three unit cell arrays/sub-modules. The development approach includes: (1) design the unit cell/sub-module for low temperature operations and (2) refer to a reactor blanket design and use engineering scaling to reproduce key parameters under ITER wall loading conditions, so that phenomena under investigation can be measured at a reactor-like level.

  7. Which statistics should tropical biologists learn?

    Directory of Open Access Journals (Sweden)

    Natalia Loaiza Velásquez

    2011-09-01

    Tropical biologists study the richest and most endangered biodiversity on the planet, and in these times of climate change and mega-extinctions, the need for efficient, good quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection, mediocre or bad experimental design, and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, during one year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student's T Test, Linear Regression, Pearson's Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon's Diversity Index, Tukey's Test, Cluster Analysis, Spearman's Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well designed one-semester course should be enough for their basic requirements. Rev. Biol. Trop. 59 (3): 983-992. Epub 2011 September 01.

  8. NASA reliability preferred practices for design and test

    Science.gov (United States)

    1991-01-01

    Given here is a manual that was produced to communicate within the aerospace community design practices that have contributed to NASA mission success. The information represents the best technical advice that NASA has to offer on reliability design and test practices. Topics covered include reliability practices, including design criteria, test procedures, and analytical techniques that have been applied to previous space flight programs; and reliability guidelines, including techniques currently applied to space flight projects, where sufficient information exists to certify that the technique will contribute to mission success.

  9. Developing Statistical Knowledge for Teaching during Design-Based Research

    Science.gov (United States)

    Groth, Randall E.

    2017-01-01

    Statistical knowledge for teaching is not precisely equivalent to statistics subject matter knowledge. Teachers must know how to make statistics understandable to others as well as understand the subject matter themselves. This dual demand on teachers calls for the development of viable teacher education models. This paper offers one such model,…

  10. Perceived Statistical Knowledge Level and Self-Reported Statistical Practice Among Academic Psychologists

    Directory of Open Access Journals (Sweden)

    Laura Badenes-Ribera

    2018-06-01

    Introduction: Publications arguing against the null hypothesis significance testing (NHST) procedure and in favor of good statistical practices have increased. The most frequently mentioned alternatives to NHST are effect size statistics (ES), confidence intervals (CIs), and meta-analyses. A recent survey conducted in Spain found that academic psychologists have poor knowledge about effect size statistics, confidence intervals, and graphic displays for meta-analyses, which might lead to a misinterpretation of the results. In addition, it also found that, although the use of ES is becoming generalized, the same is not true for CIs. Finally, academics with greater knowledge about ES statistics presented a profile closer to good statistical practice and research design. Our main purpose was to analyze the extension of these results to a different geographical area through a replication study. Methods: For this purpose, we elaborated an on-line survey that included the same items as the original research, and we asked academic psychologists to indicate their level of knowledge about ES, CIs, and meta-analyses, and how they use them. The sample consisted of 159 Italian academic psychologists (54.09% women; mean age 47.65 years). The mean number of years in the position of professor was 12.90 (SD = 10.21). Results: As in the original research, the results showed that, although the use of effect size estimates is becoming generalized, an under-reporting of CIs for ES persists. The most frequently mentioned ES statistics were Cohen's d and R²/η², which can have outliers or show non-normality or violate statistical assumptions. In addition, academics showed poor knowledge about meta-analytic displays (e.g., forest plot and funnel plot) and quality checklists for studies. Finally, academics with higher-level knowledge about ES statistics seem to have a profile closer to good statistical practices. Conclusions: Changing statistical practice is not

  11. Statistical energy as a tool for binning-free, multivariate goodness-of-fit tests, two-sample comparison and unfolding

    International Nuclear Information System (INIS)

    Aslan, B.; Zech, G.

    2005-01-01

    We introduce the novel concept of statistical energy as a statistical tool. We define statistical energy of statistical distributions in a similar way as for electric charge distributions. Charges of opposite sign are in a state of minimum energy if they are equally distributed. This property is used to check whether two samples belong to the same parent distribution, to define goodness-of-fit tests and to unfold distributions distorted by measurement. The approach is binning-free and especially powerful in multidimensional applications
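
    The two-sample use of this idea can be sketched as follows. Opposite charges are assigned to the two samples, so the total energy is minimal when they share a parent distribution, and a large observed energy signals a difference. The distance function R(r) = -ln r is one choice discussed in this literature; the permutation p-value and the toy data below are our illustrative assumptions:

        import numpy as np

        def _within(u, R):
            d = np.abs(u[:, None] - u[None, :])
            iu = np.triu_indices(len(u), k=1)      # each unordered pair once
            return R(d[iu]).sum()

        def energy_statistic(a, b, R=lambda r: -np.log(r + 1e-12)):
            n, m = len(a), len(b)
            between = R(np.abs(a[:, None] - b[None, :])).sum()
            return _within(a, R) / n**2 + _within(b, R) / m**2 - between / (n * m)

        def perm_pvalue(a, b, n_perm=500, seed=0):
            rng = np.random.default_rng(seed)
            obs = energy_statistic(a, b)
            pooled = np.concatenate([a, b])
            hits = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)
                if energy_statistic(pooled[:len(a)], pooled[len(a):]) >= obs:
                    hits += 1
            return (hits + 1) / (n_perm + 1)

        rng = np.random.default_rng(3)
        x = rng.normal(0.0, 1.0, 50)
        y = rng.normal(0.8, 1.0, 50)               # shifted sample
        print(perm_pvalue(x, y))                   # small p expected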

  12. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    , sample extraction, and analytical methods to be used in the INL-2 study. For each of the five test events, the specified floor of the INL building will be contaminated with BG using a point-release device located in the room specified in the experimental design. Then quality control (QC), reference material coupon (RMC), judgmental, and probabilistic samples will be collected according to the sampling plan for each test event. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples were selected with a random aspect and in sufficient numbers to provide desired confidence for detecting contamination or clearing uncontaminated (or decontaminated) areas. Following sample collection for a given test event, the INL building will be decontaminated. For possibly contaminated areas, the numbers of probabilistic samples were chosen to provide 95% confidence of detecting contaminated areas of specified sizes. For rooms that may be uncontaminated following a contamination event, or for whole floors after decontamination, the numbers of judgmental and probabilistic samples were chosen using the CJR approach. The numbers of samples were chosen to support making X%/Y% clearance statements with X = 95% or 99% and Y = 96% or 97%. The experimental and sampling design also provides for making X%/Y% clearance statements using only probabilistic samples. For each test event, the numbers of characterization and clearance samples were selected within limits based on operational considerations while still maintaining high confidence for detection and clearance aspects. The sampling design for all five test events contains 2085 samples, with 1142 after contamination and 943 after decontamination. These numbers include QC, RMC, judgmental, and probabilistic samples. The experimental and sampling design specified in this report provides a good statistical foundation for achieving the objectives of the INL-2 study.

  13. A default Bayesian hypothesis test for ANOVA designs

    NARCIS (Netherlands)

    Wetzels, R.; Grasman, R.P.P.P.; Wagenmakers, E.J.

    2012-01-01

    This article presents a Bayesian hypothesis test for analysis of variance (ANOVA) designs. The test is an application of standard Bayesian methods for variable selection in regression models. We illustrate the effect of various g-priors on the ANOVA hypothesis test. The Bayesian test for ANOVA

  14. A Bayesian Optimal Design for Sequential Accelerated Degradation Testing

    Directory of Open Access Journals (Sweden)

    Xiaoyang Li

    2017-07-01

    When optimizing an accelerated degradation testing (ADT) plan, the initial values of unknown model parameters must be pre-specified. However, it is usually difficult to obtain the exact values, since many uncertainties are embedded in these parameters. Bayesian ADT optimal design has been presented to address this problem by using prior distributions to capture these uncertainties. Nevertheless, when the difference between a prior distribution and the actual situation is large, the existing Bayesian optimal design might cause over-testing or under-testing issues; for example, an ADT implemented according to the optimal plan may consume too many testing resources, or too few accelerated degradation data may be obtained during the ADT. To overcome these obstacles, a Bayesian sequential step-down-stress ADT design is proposed in this article. During the sequential ADT, the test under the highest stress level is first conducted, based on the initial prior information, to quickly generate degradation data. Then, the data collected under the higher stress levels are employed to construct the prior distributions for the test design under lower stress levels by using Bayesian inference. In the process of optimization, the inverse Gaussian (IG) process is assumed to describe the degradation paths, and Bayesian D-optimality is selected as the optimization objective. A case study on an electrical connector's ADT plan is provided to illustrate the application of the proposed Bayesian sequential ADT design method. Compared with the results from a typical static Bayesian ADT plan, the proposed design could guarantee more stable and precise estimations of different reliability measures.
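
    For orientation, the IG process assumed above can be simulated directly: the increment over [t, t + dt] is drawn from an inverse Gaussian distribution with mean mu*dLambda and shape lam*dLambda^2. The linear mean function and the parameter values in this sketch are our illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(4)
        mu, lam = 0.2, 5.0                    # illustrative IG process parameters
        t = np.linspace(0.0, 50.0, 101)
        d_lambda = np.diff(t)                 # increments of Lambda(t) = t
        paths = np.cumsum(
            rng.wald(mu * d_lambda, lam * d_lambda ** 2, size=(20, len(d_lambda))),
            axis=1,
        )                                     # 20 simulated degradation paths
        print(paths[:, -1].mean())            # sample mean degradation at t = 50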

  15. Design of a Realistic Test Simulator For a Built-In Self Test Environment

    Directory of Open Access Journals (Sweden)

    A. Ahmad

    2010-12-01

    This paper presents a realistic test approach suitable for Design for Testability (DFT) and Built-In Self Test (BIST) environments. The approach culminates in a test simulator capable of meeting a required testing goal for the System Under Test (SUT). The simulator uses fault diagnostics with a fault-grading procedure to provide the tests. The tool is developed on a common PC platform and requires no special software, which makes it low-cost and economical. The tool is well suited to determining realistic test sequences for a targeted testing goal for any SUT. It incorporates a flexible Graphical User Interface (GUI) and can be operated without any special programming skill. The tool has been debugged and tested against the results of many benchmark circuits. Further, it can be utilized for educational purposes in courses such as fault-tolerant computing, fault diagnosis, digital electronics, and safe, reliable, testable digital logic design.

  16. Diffractive optics: design, fabrication, and test

    National Research Council Canada - National Science Library

    O'Shea, Donald C

    2004-01-01

    This book provides the reader with the broad range of materials that were discussed in a series of short courses presented at Georgia Tech on the design, fabrication, and testing of diffractive optical elements (DOEs...

  17. The J-2X Fuel Turbopump - Design, Development, and Test

    Science.gov (United States)

    Tellier, James G.; Hawkins, Lakiesha V.; Shinguchi, Brian H.; Marsh, Matthew W.

    2011-01-01

    Pratt and Whitney Rocketdyne (PWR), a NASA subcontractor, is executing the design, development, test, and evaluation (DDT&E) of a 294,000-pound-thrust liquid oxygen/liquid hydrogen rocket engine initially intended for the Upper Stage (US) and Earth Departure Stage (EDS) of the Constellation Program Ares-I Crew Launch Vehicle (CLV). A key element of the design approach was to base the new J-2X engine on the heritage J-2S engine, with the intent of uprating the engine and incorporating SSME and RS-68 lessons learned. The J-2S engine was a design upgrade of the flight-proven J-2 configuration used to put American astronauts on the moon. The J-2S Fuel Turbopump (FTP) was the first Rocketdyne-designed liquid hydrogen centrifugal pump and provided many of the early lessons learned for the Space Shuttle Main Engine High Pressure Fuel Turbopumps. This paper discusses the design trades and analyses performed for the current J-2X FTP to increase turbine life, increase structural margins, facilitate component fabrication, expedite turbopump assembly, and increase rotordynamic stability margins. Risk-mitigation tests, including inducer water tests, whirligig turbine blade tests, turbine air rig tests, and workhorse gas generator tests, characterized operating environments, drove design modifications, or identified performance impacts. Engineering design, fabrication, analysis, and assembly activities support FTP readiness for the first J-2X engine test scheduled for July 2011.

  18. Design of horizontal test cryostat for testing two 650 MHz cavities: cryogenic considerations

    Science.gov (United States)

    Khare, P.; Gilankar, S.; Kush, P. K.; Lakshminarayanan, A.; Choubey, R.; Ghosh, R.; Jain, A.; Patel, H.; Gupta, P. D.; Hocker, A.; Ozelis, J. P.; Geynisman, M.; Reid, C.; Poloubotko, V.; Mitchell, D.; Peterson, T. J.; Nicol, T. H.

    2017-02-01

    A horizontal test cryostat has been designed for testing two 650 MHz "dressed" superconducting radio frequency (SCRF) cavities in a single testing cycle at the Raja Ramanna Centre for Advanced Technology, India (RRCAT), in collaboration with Fermi National Accelerator Laboratory, USA (FNAL). This cryostat will facilitate testing of two 5-cell 650 MHz SCRF cavities, in CW or pulsed regime, for upcoming high-intensity superconducting proton accelerator projects in both countries. Two such HTS facilities are planned: one at RRCAT for the Indian Spallation Neutron Source (ISNS) project, which is on the horizon, and the other at FNAL, USA. A test cryostat, part of horizontal test stand-2 (HTS-2), will be set up at RRCAT for the Indian project. In order to maximize the utility of this facility, it can also be used to test two dressed 9-cell 1.3 GHz cavities and other similarly-sized devices. The facility assumes, as an input, the availability of liquid nitrogen at 80 K and liquid helium at 4.5 K and 2 K, with a refrigeration capacity of approximately 50 W at 2 K. Design work on the cryostat has been completed and the procurement process is now in progress. This paper discusses salient features of the cryostat. It also describes design calculations and ANSYS analyses for the cooldown of subsystems such as the cavity support system and the liquid-nitrogen-cooled thermal radiation shield of the horizontal test cryostat.

  19. Testing Universal Design of a Public Media Website with Diverse Users.

    Science.gov (United States)

    Chen, Weiqin; Kessel, Siri; Sanderson, Norun C; Tatara, Naoe

    2016-01-01

    Testing with users can identify more issues than other testing methods. Many researchers have argued for the importance of user testing in Universal Design. However, testing Universal Design with diverse users poses many challenges. In this paper we will share our experience with testing the Universal Design of a public media website with real users. We discuss the challenges faced and lessons learned in the process.

  20. The extended statistical analysis of toxicity tests using standardised effect sizes (SESs): a comparison of nine published papers.

    Science.gov (United States)

    Festing, Michael F W

    2014-01-01

    The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretations of the results, as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data pose problems due to the large number of statistical tests involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. Authors' conclusions appear to be reached somewhat subjectively from the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than p = 0.05. However, by using standardised effect sizes (SESs), a range of graphical methods and an overall assessment of the mean absolute response can be made. The approach is an extension, not a replacement, of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare its findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of the seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.
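
    As a hedged sketch of the SES computation underlying the graphical methods above: for each biomarker, SES = (treated mean - control mean) / pooled SD. The biomarker values below are invented for illustration:

        import numpy as np

        def ses(control, treated):
            c, t = np.asarray(control, float), np.asarray(treated, float)
            n1, n2 = len(c), len(t)
            pooled_sd = np.sqrt(((n1 - 1) * c.var(ddof=1) +
                                 (n2 - 1) * t.var(ddof=1)) / (n1 + n2 - 2))
            return (t.mean() - c.mean()) / pooled_sd

        biomarkers = {                        # hypothetical sub-acute test data
            "ALT": ([31, 28, 35, 30], [44, 40, 47, 42]),
            "urea": ([5.1, 4.8, 5.4, 5.0], [5.2, 4.9, 5.3, 5.1]),
        }
        for name, (ctrl, trt) in biomarkers.items():
            print(f"{name}: SES = {ses(ctrl, trt):+.2f}")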

  1. The extended statistical analysis of toxicity tests using standardised effect sizes (SESs): a comparison of nine published papers.

    Directory of Open Access Journals (Sweden)

    Michael F W Festing

    The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretations of the results, as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data pose problems due to the large number of statistical tests involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. Authors' conclusions appear to be reached somewhat subjectively by the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than p = 0.05. However, by using standardised effect sizes (SESs), a range of graphical methods and an overall assessment of the mean absolute response can be made. The approach is an extension, not a replacement, of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare its findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of the seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.

  2. Design and implementation of a modular program system for the carrying-through of statistical analyses

    International Nuclear Information System (INIS)

    Beck, W.

    1984-01-01

    The complexity of computer programs for the solution of scientific and technical problems gives rise to many questions. Typical questions concern the strengths and weaknesses of computer programs, the propagation of uncertainties among the input data, the sensitivity of the output data to the input data, and the substitution of complex models by simpler ones which provide equivalent results in certain ranges. These questions have a general practical meaning, and answers in principle may be found by statistical methods based on the Monte Carlo method. In this report, suitable statistical methods are chosen, described and evaluated. They are implemented in the modular program system STAR, which is a component of the program system RSYST. The design of STAR takes into account users with different knowledge of data processing and statistics, the variety of statistical methods and of generating and evaluating procedures, the processing of large data sets in complex structures, the coupling to other components of RSYST and to programs outside RSYST, and the requirement that the system can be easily modified and enlarged. Four examples are given which demonstrate the application of STAR. (orig.)

  3. STATISTICAL EVALUATION OF EXAMINATION TESTS IN MATHEMATICS FOR ECONOMISTS

    Directory of Open Access Journals (Sweden)

    KASPŘÍKOVÁ, Nikola

    2012-12-01

    Examination results are rather important for many students with regard to their future professional development. Results of exams should be carefully inspected by teachers to help improve the design and evaluation of tests and the education process in general. An analysis of examination papers in mathematics, taken by students of a basic mathematics course at the University of Economics in Prague, is reported. The first issue addressed is the identification of significant dependencies between performance in particular problem areas covered in the test, and also between particular items and the total score in the test or the ability level as a latent trait. The assessment is first performed with the Spearman correlation coefficient; the items in the test are then evaluated within the Item Response Theory framework. The second analytical task addressed is a search for groups of students who are similar with respect to performance in the test. Cluster analysis is performed using the partitioning around medoids method, and the final model selection is made according to average silhouette width. The results of the clustering, which may also be considered in connection with setting the minimum score for passing the exam, show that two groups of students can be identified. The group which may be called "well-performers" is the more clearly defined one.
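
    The first step described above (item-total dependencies via the Spearman coefficient) is easy to illustrate; the score matrix below is invented, with rows as students and columns as test problems:

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(5)
        scores = rng.integers(0, 11, size=(40, 5))     # hypothetical item scores, 0-10
        total = scores.sum(axis=1)

        for j in range(scores.shape[1]):
            rho, p = spearmanr(scores[:, j], total)
            print(f"item {j + 1}: rho = {rho:+.2f} (p = {p:.3f})")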

  4. Testing effects in mixed- versus pure-list designs.

    Science.gov (United States)

    Rowland, Christopher A; Littrell-Baez, Megan K; Sensenig, Amanda E; DeLosh, Edward L

    2014-08-01

    In the present study, we investigated the role of list composition in the testing effect. Across three experiments, participants learned items through study and initial testing or study and restudy. List composition was manipulated, such that tested and restudied items appeared either intermixed in the same lists (mixed lists) or in separate lists (pure lists). In Experiment 1, half of the participants received mixed lists and half received pure lists. In Experiment 2, all participants were given both mixed and pure lists. Experiment 3 followed Erlebacher's (Psychological Bulletin, 84, 212-219, 1977) method, such that mixed lists, pure tested lists, and pure restudied lists were given to independent groups. Across all three experiments, the final recall results revealed significant testing effects for both mixed and pure lists, with no reliable difference in the magnitude of the testing advantage across list designs. This finding suggests that the testing effect is not subject to a key boundary condition-list design-that impacts other memory phenomena, including the generation effect.

  5. Beryllium satellite thrust cone design, manufacture and test

    International Nuclear Information System (INIS)

    Schneiter, H.; Chandler, D.

    1977-01-01

    Pre-formed beryllium sheet material has been used in the design, manufacturing and test of a satellite thrust cone structure. Adhesive bonding was used for attachment of aluminium flanges and conical segment lap strips. Difficulties in beryllium structure design such as incompatibilities with aluminium and handling problems are discussed. Testing to optimize beryllium-beryllium and beryllium-aluminium adhesive bonds is described. The completed thrust cone assembly has been subjected to static load testing and the results are presented. A summary of the relative merits of the use of beryllium in satellite structures is given with recommendations for future users. (author)

  6. Development of computer-assisted instruction application for statistical data analysis android platform as learning resource

    Science.gov (United States)

    Hendikawati, P.; Arifudin, R.; Zahid, M. Z.

    2018-03-01

    This study aims to design an Android application for statistical data analysis that can be accessed through mobile devices, making it easier for users to access. The application covers various topics in basic statistics along with parametric statistical data analysis. The output of the application is parametric statistical data analysis that can be used by students, lecturers, and other users who need the results of statistical calculations quickly and in an easily understood form. The Android application is developed using the Java programming language. The server-side programming language is PHP with the CodeIgniter framework, and the database used is MySQL. The system development methodology used is the waterfall methodology, with stages of analysis, design, coding, testing, implementation, and system maintenance. This statistical data analysis application is expected to support statistics lectures and to make it easier for students to understand statistical analysis on mobile devices.

  7. A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.

    Science.gov (United States)

    Liu, Tung; Stone, Courtenay C.

    1999-01-01

    Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…

  8. Performance demonstration tests for eddy current inspection of steam generator tubing

    International Nuclear Information System (INIS)

    Kurtz, R.J.; Heasler, P.G.; Anderson, C.M.

    1996-05-01

    This report describes the methodology and results for development of performance demonstration tests for eddy current (ET) inspection of steam generator tubes. Statistical test design principles were used to develop the performance demonstration tests. Thresholds on ET system inspection performance were selected to ensure that field inspection systems would have a high probability of detecting and correctly sizing tube degradation. The technical basis for the ET system performance thresholds is presented in detail. Statistical test design calculations for probability of detection and flaw sizing tests are described. A recommended performance demonstration test based on the design calculations is presented. A computer program for grading the probability of detection portion of the performance demonstration test is given.

  9. Performance demonstration tests for eddy current inspection of steam generator tubing

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, R.J.; Heasler, P.G.; Anderson, C.M.

    1996-05-01

    This report describes the methodology and results for development of performance demonstration tests for eddy current (ET) inspection of steam generator tubes. Statistical test design principles were used to develop the performance demonstration tests. Thresholds on ET system inspection performance were selected to ensure that field inspection systems would have a high probability of detecting and correctly sizing tube degradation. The technical basis for the ET system performance thresholds is presented in detail. Statistical test design calculations for probability of detection and flaw sizing tests are described. A recommended performance demonstration test based on the design calculations is presented. A computer program for grading the probability of detection portion of the performance demonstration test is given.

  10. HistFitter software framework for statistical data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Baak, M. [CERN, Geneva (Switzerland); Besjes, G.J. [Radboud University Nijmegen, Nijmegen (Netherlands); Nikhef, Amsterdam (Netherlands); Cote, D. [University of Texas, Arlington (United States); Koutsman, A. [TRIUMF, Vancouver (Canada); Lorenz, J. [Ludwig-Maximilians-Universitaet Muenchen, Munich (Germany); Excellence Cluster Universe, Garching (Germany); Short, D. [University of Oxford, Oxford (United Kingdom)

    2015-04-15

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  11. HistFitter software framework for statistical data analysis

    International Nuclear Information System (INIS)

    Baak, M.; Besjes, G.J.; Cote, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  12. Role of statistics in characterizing nuclear waste package behavior

    International Nuclear Information System (INIS)

    Bowen, W.M.

    1984-11-01

    The characterization of nuclear waste package behavior is primarily based on the outcome of laboratory tests, where components of a proposed waste package are either individually or simultaneously subjected to simulated repository conditions. At each step of a testing method, both controllable and uncontrollable factors contribute to the overall uncertainty in the final outcome of the test. If not dealt with correctly, these sources of uncertainty could obscure or distort important information that might otherwise be gleaned from the test data. This could result in misleading or erroneous conclusions about the behavior characteristic being studied. It could also preclude estimation of the individual contributions of the major sources of uncertainty to the overall uncertainty. Statistically designed experiments and sampling plans, followed by correctly applied statistical analysis and estimation methods will yield the most information possible for the time and resources spent on experimentation, and they can eliminate the above concerns. Conclusions reached on the basis of such information will be sound and defensible. This presentation is intended to emphasize the importance of correctly applied, theoretically sound statistical methodology in characterizing nuclear waste package behavior. 8 references, 1 table

  13. Role of statistics in characterizing nuclear waste package behavior

    International Nuclear Information System (INIS)

    Bowen, W.M.

    1984-01-01

    The characterization of nuclear waste package behavior is primarily based on the outcome of laboratory tests, where components of a proposed waste package are either individually or simultaneously subjected to simulated repository conditions. At each step of a testing method, both controllable and uncontrollable factors contribute to the overall uncertainty in the final outcome of the test. If not dealt with correctly, these sources of uncertainty could obscure or distort important information that might otherwise be gleaned from the test data. This could result in misleading or erroneous conclusions about the behavior characteristic being studied. It could also preclude estimation of the individual contributions of the major sources of uncertainty to the overall uncertainty. Statistically designed experiments and sampling plans, followed by correctly applied statistical analysis and estimation methods, will yield the most information possible for the time and resources spent on experimentation, and they can eliminate the above concerns. Conclusions reached on the basis of such information will be sound and defensible. This presentation is intended to emphasize the importance of correctly applied, theoretically sound statistical methodology in characterizing nuclear waste package behavior.

  14. Computer-Aided Test Flow in Core-Based Design

    NARCIS (Netherlands)

    Zivkovic, V.; Tangelder, R.J.W.T.; Kerkhoff, Hans G.

    2000-01-01

    This paper deals with efficient test-pattern generation in a core-based design. A consistent Computer-Aided Test (CAT) flow is proposed based on the required core-test strategy. It generates a test-pattern set for the embedded cores with high fault coverage and low DfT area overhead. The CAT

  15. Designing and implementing test automation frameworks with QTP

    CERN Document Server

    Bhargava, Ashish

    2013-01-01

    A tutorial-based approach, showing basic coding and designing techniques to build test automation frameworks. If you are a beginner, an automation engineer, an aspiring test automation engineer, a manual tester, a test lead or a test architect who wants to learn, create, and maintain test automation frameworks, this book will accelerate your ability to develop and adapt the framework.

  16. Design and Optimization of a Hypersonic Test Facility for Sub-Scale Testing

    National Research Council Canada - National Science Library

    O'Kresik, Stephen

    2003-01-01

    ... to 7.5 with a maximum system mass flow rate variation from 3 to 45 lbm/s. Additionally, a dynamic design process was outlined to assist other designers in producing similar test stands. Finally, a software analysis package was developed to analyze proposed changes in the support system architecture.

  17. Using GIFTS on the Cray-1 for the large coil test facility test: stand design analysis

    International Nuclear Information System (INIS)

    Baudry, T.V.; Gray, W.H.

    1981-06-01

    The GIFTS finite element program has been used extensively throughout the Large Coil Test Facility (LCTF) test stand design analysis. Effective use has been made of GIFTS both as a preprocessor to other finite element programs and as a complete structural analysis package. The LCTF test stand design involved stress analysis ranging from simple textbook-type problems to very complicated three-dimensional structural problems. Two areas of the design analysis are discussed

  18. Baseline Statistics of Linked Statistical Data

    NARCIS (Netherlands)

    Scharnhorst, Andrea; Meroño-Peñuela, Albert; Guéret, Christophe

    2014-01-01

    We are surrounded by an ever increasing ocean of information; everybody will agree to that. We build sophisticated strategies to govern this information: designing data models, developing infrastructures for data sharing, building tools for data analysis. Statistical datasets curated by National

  19. Mathematics and Statistics Research Department progress report for period ending June 30, 1977

    International Nuclear Information System (INIS)

    Lever, W.E.; Shepherd, D.E.; Ward, R.C.; Wilson, D.G.

    1977-09-01

    Brief descriptions are given of work done in mathematical and statistical research (moving-boundary problems; numerical analysis; continuum mechanics; matrices and other operators; experiment design; statistical testing; multivariate, multipopulation classification; statistical estimation) and statistical and mathematical collaboration (analytical chemistry, biological research, chemistry and physics research, energy research, engineering technology research, environmental sciences research, health physics research, materials research, sampling inspection and quality control, uranium resource evaluation research). Most of the descriptions are a page or less in length. Educational activities, publications, seminar titles, etc., are also included.

  20. CFAssay: statistical analysis of the colony formation assay

    International Nuclear Information System (INIS)

    Braselmann, Herbert; Michna, Agata; Heß, Julia; Unger, Kristian

    2015-01-01

    Colony formation assay is the gold standard to determine cell reproductive death after treatment with ionizing radiation, applied for different cell lines or in combination with other treatment modalities. Associated linear-quadratic cell survival curves can be calculated with different methods. For easy code exchange and methodological standardisation among collaborating laboratories, a software package CFAssay for R (R Core Team, R: A Language and Environment for Statistical Computing, 2014) was established to perform thorough statistical analysis of linear-quadratic cell survival curves after treatment with ionizing radiation and of two-way designs of experiments with chemical treatments only. CFAssay offers maximum likelihood and related methods by default, and the least squares or weighted least squares method can be optionally chosen. A test for comparison of cell survival curves and an ANOVA test for experimental two-way designs are provided. For the two presented examples, estimated parameters do not differ much between maximum likelihood and least squares. However, the dispersion parameter of the quasi-likelihood method is much more sensitive to statistical variation in the data than the multiple R² coefficient of determination from the least squares method. The dispersion parameter for goodness of fit and different plot functions in CFAssay help to evaluate experimental data quality. As open source software, interlaboratory code sharing between users is facilitated.
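
    As a rough illustration of the model underlying such analyses, the sketch below fits a linear-quadratic survival curve, ln S(D) = -(αD + βD²), by ordinary least squares. This is not the CFAssay API; the dose and surviving-fraction values are hypothetical.

```python
import numpy as np

dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])          # Gy (hypothetical)
surv = np.array([1.0, 0.75, 0.48, 0.18, 0.05, 0.012])    # surviving fractions (hypothetical)

# Regress ln(S) on D and D^2 with no intercept, since S(0) = 1 by construction.
X = np.column_stack([dose, dose**2])
coef, *_ = np.linalg.lstsq(X, np.log(surv), rcond=None)
alpha, beta = -coef
print(f"alpha = {alpha:.3f} 1/Gy, beta = {beta:.4f} 1/Gy^2")
```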

  1. A review of mammalian carcinogenicity study design and potential effects of alternate test procedures on the safety evaluation of food ingredients.

    Science.gov (United States)

    Hayes, A W; Dayan, A D; Hall, W C; Kodell, R L; Williams, G M; Waddell, W D; Slesinski, R S; Kruger, C L

    2011-06-01

    Extensive experience in conducting long term cancer bioassays has been gained over the past 50 years of animal testing on drugs, pesticides, industrial chemicals, food additives and consumer products. Testing protocols for the conduct of carcinogenicity studies in rodents have been developed in Guidelines promulgated by regulatory agencies, including the US EPA (Environmental Protection Agency), the US FDA (Food and Drug Administration), the OECD (Organization for Economic Co-operation and Development) for the EU member states, and the MAFF (Ministry of Agriculture, Forestry and Fisheries) and MHW (Ministry of Health and Welfare) in Japan. The basis of critical elements of the study design that lead to an accepted identification of the carcinogenic hazard of substances in food and beverages is the focus of this review. The approaches used by entities well-known for carcinogenicity testing and/or guideline development are discussed. Particular focus is placed on comparison of testing programs used by the US National Toxicology Program (NTP) and advocated in OECD guidelines to the testing programs of the European Ramazzini Foundation (ERF), an organization with numerous published carcinogenicity studies. This focus allows for a good comparison of differences in approaches to carcinogenicity testing and allows for a critical consideration of elements important to appropriate carcinogenicity study designs and practices. OECD protocols serve as good standard models for carcinogenicity testing protocol design. Additionally, the detailed design of any protocol should include attention to the rationale for inclusion of particular elements, including the impact of those elements on study interpretations. Appropriate interpretation of study results is dependent on rigorous evaluation of the study design and conduct, including differences from standard practices. Important considerations are differences in the strain of animal used, diet and housing practices, rigorousness

  2. Design, fracture control, fabrication, and testing of pressurized space-vehicle structures

    Science.gov (United States)

    Babel, H. W.; Christensen, R. H.; Dixon, H. H.

    1974-01-01

    The relationship between analysis, design, fabrication, and testing of thin shells is illustrated by Saturn S-IVB, Thor, Delta, and other single-use and reusable large-size cryogenic aluminum tankage. The analyses and design to meet the design requirements are reviewed and include consideration of fracture control, general instability, and other failure modes. The effect of research and development testing on the structure is indicated. It is shown how fabrication and nondestructive and acceptance testing constrain the design. Finally, qualification testing is reviewed to illustrate the extent of testing used to develop the Saturn S-IVB.

  3. Design of Multilayer Insulation for the Multipurpose Hydrogen Test Bed

    Science.gov (United States)

    Marlow, Weston A.

    2011-01-01

    Multilayer insulation (MLI) is a critical component for future, long term space missions. These missions will require the storage of cryogenic fuels for extended periods of time with little to no boil-off and MLI is vital due to its exceptional radiation shielding properties. Several MLI test articles were designed and fabricated which explored methods of assembling and connecting blankets, yielding results for evaluation. Insight gained, along with previous design experience, will be used in the design of the replacement blanket for the Multipurpose Hydrogen Test Bed (MHTB), which is slated for upcoming tests. Future design considerations are discussed which include mechanical testing to determine robustness of such a system, as well as cryostat testing of samples to give insight to the loss of thermal performance of sewn panels in comparison to the highly efficient, albeit laborious application of the original MHTB blanket.

  4. Proceedings of Design, Automation and Test in Europe (DATE07)

    DEFF Research Database (Denmark)

    Welcome to the DATE 07 Conference Proceedings. DATE combines the world’s leading electronic systems design conference and Europe's leading international exhibition for electronic design, automation and test, from system level hardware and software implementation right down to integrated circuit … with 78 sessions covering the latest in system design and embedded software, IC design methodologies and EDA tool developments. One of the main strengths of the conference is a wide but high-quality coverage of design, design automation and test topics, from the system level (including PCB and FPGA …) to the integrated circuit level. In addition, for the third year a special embedded software track is offered to allow for the increasing importance of software in embedded systems. Compared with previous years, submissions in design, test and embedded software have grown significantly, showing a clear trend toward …

  5. Do Test Design and Uses Influence Test Preparation? Testing a Model of Washback with Structural Equation Modeling

    Science.gov (United States)

    Xie, Qin; Andrews, Stephen

    2013-01-01

    This study introduces Expectancy-value motivation theory to explain the paths of influences from perceptions of test design and uses to test preparation as a special case of washback on learning. Based on this theory, two conceptual models were proposed and tested via Structural Equation Modeling. Data collection involved over 870 test takers of…

  6. Review of research designs and statistical methods employed in dental postgraduate dissertations.

    Science.gov (United States)

    Shirahatti, Ravi V; Hegde-Shetiya, Sahana

    2015-01-01

    There is a need to evaluate the quality of postgraduate dissertations in dentistry submitted to the university in light of international reporting standards. We conducted this review with the objective of documenting the use of sampling methods, measurement standardization, blinding, methods to eliminate bias, appropriate use of statistical tests, and appropriate data presentation in postgraduate dental research, and to suggest and recommend modifications. The public access database of dissertations from Rajiv Gandhi University of Health Sciences was reviewed. Three hundred and thirty-three eligible dissertations underwent preliminary evaluation, followed by detailed evaluation of 10% of randomly selected dissertations. The dissertations were assessed based on international reporting guidelines such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE), Consolidated Standards of Reporting Trials (CONSORT), and other scholarly resources. The data were compiled using MS Excel and SPSS 10.0. Numbers and percentages were used for describing the data. The "in vitro" studies were the most common type of research (39%), followed by observational (32%) and experimental studies (29%). The disciplines conservative dentistry (92%) and prosthodontics (75%) reported high numbers of in vitro research. The disciplines oral surgery (80%) and periodontics (67%) had conducted experimental studies as a major share of their research. Lacunae in the studies included observational studies not following random sampling (70%), experimental studies not following random allocation (75%), not mentioning blinding, confounding variables and calibration of measurements, misrepresenting data through inappropriate presentation, errors in reporting probability values, and not reporting confidence intervals. A few studies showed grossly inappropriate choices of statistical tests, and many studies needed additional tests. Overall observations indicated the need to

  7. Statistical Analysis of the Polarimetric Cloud Analysis and Seeding Test (POLCAST) Field Projects

    Science.gov (United States)

    Ekness, Jamie Lynn

    The North Dakota farming industry brings in more than $4.1 billion annually in cash receipts. Unfortunately, agriculture sales vary significantly from year to year, which is due in large part to weather events such as hail storms and droughts. One method to mitigate drought is to use hygroscopic seeding to increase the precipitation efficiency of clouds. The North Dakota Atmospheric Research Board (NDARB) sponsored the Polarimetric Cloud Analysis and Seeding Test (POLCAST) research project to determine the effectiveness of hygroscopic seeding in North Dakota. The POLCAST field projects obtained airborne and radar observations while conducting randomized cloud seeding. The Thunderstorm Identification Tracking and Nowcasting (TITAN) program is used to analyze radar data (33 usable cases) in determining differences in the duration of the storm, rain rate and total rain amount between seeded and non-seeded clouds. The single ratio of seeded to non-seeded cases is 1.56 (0.28 mm/0.18 mm), a 56% increase in the average hourly rainfall during the first 60 minutes after target selection. A seeding effect is indicated, with the lifetime of the storms increasing by 41% between seeded and non-seeded clouds for the first 60 minutes past the seeding decision. A double ratio statistic, a comparison of the radar-derived rain amount of the last 40 minutes of a case (seed/non-seed) to that of the first 20 minutes (seed/non-seed), is used to account for the natural variability of the cloud system and gives a double ratio of 1.85. The Mann-Whitney test on the double ratio of seeded to non-seeded cases (33 cases) gives a significance (p-value) of 0.063. Bootstrapping analysis of the POLCAST set indicates that 50 cases would provide statistically significant results based on the Mann-Whitney test of the double ratio. All the statistical analyses conducted on the POLCAST data set show that hygroscopic seeding in North Dakota does increase precipitation. While an additional POLCAST field
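
    A minimal sketch of the double ratio computation and the Mann-Whitney comparison described above, using hypothetical rain amounts rather than the POLCAST data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# columns: rain in the first 20 min and in the last 40 min (mm); rows: cases (hypothetical)
seeded = rng.gamma(2.0, 0.15, size=(17, 2))
unseeded = rng.gamma(2.0, 0.12, size=(16, 2))

# Double ratio: (seed/non-seed) for late rain relative to (seed/non-seed) for early rain
double_ratio = (seeded[:, 1].mean() / unseeded[:, 1].mean()) / \
               (seeded[:, 0].mean() / unseeded[:, 0].mean())

# Mann-Whitney test on per-case late/early growth ratios
growth_seed = seeded[:, 1] / seeded[:, 0]
growth_none = unseeded[:, 1] / unseeded[:, 0]
stat, p = mannwhitneyu(growth_seed, growth_none, alternative="greater")
print(f"double ratio = {double_ratio:.2f}, Mann-Whitney p = {p:.3f}")
```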

  8. A Fracture Mechanical Testing and Design Strategy for FRC Structures

    DEFF Research Database (Denmark)

    Stang, Henrik; Olesen, John Forbes

    1999-01-01

    A unified testing and design strategy for fibre reinforced concrete structures is summarised. The strategy is based on fracture mechanical concepts. Emphasis is placed on material characterisation and testing specifications.

  9. IFMIF target and test cell - design and integration

    International Nuclear Information System (INIS)

    Heinzel, V.

    2007-01-01

    The International Fusion Material Irradiation Facility (IFMIF) aims at the qualification of appropriate materials for a Demonstration Fusion Power Plant (DEMO) to a fluence of up to 150 dpa (displacements per atom) at a DEMO-typical neutron spectrum. It comprises two accelerators, each providing a deuteron beam with 125 mA and 40 MeV. The deuterons strike a lithium target and create neutrons via stripping reactions. The neutrons are mainly forward-directed into the High-Flux Test Module (HFTM). The Medium-Flux Test Modules (MFTM) and the Low-Flux Test Modules (LFTM) are arranged behind it in beam direction. In the HFTM a damage rate in steel of more than 20 dpa/fpy (displacements per atom per full power year) will be provided in a volume of 0.5 litre. The neutron spectrum produces helium and tritium in steel, as in the first wall of a DEMO reactor. The Medium-Flux Test Modules are designed for in-situ creep-fatigue and tritium release tests. The test modules are cooled with helium. The target is a lithium jet with a free surface towards the deuteron beams. The jet follows a concavely curved, so-called back wall. Centrifugal forces increase the static pressure, which prevents lithium boiling at the beam tube pressure despite the power release of 10 MW due to the deuteron beams. The Target and Test Cell (TTC) houses the target and the test modules as well as the lithium supply tubes and a quench tank into which the lithium splashes after the target. The lithium-containing components have a temperature of 250 to 350 °C. Nuclear reactions, mainly in beam direction, contribute to heat releases in TTC components. The TTC is filled with a noble gas at almost atmospheric pressure. Natural convection transfers heat to the walls but also mitigates temperature peaks. The Forschungszentrum Karlsruhe (FZK) has developed or validated tools for: - The extended Monte Carlo Code McDeLicious for calculations of the neutron source term, dpa rates in the material specimens, activation

  10. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement for NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis

  11. On detection and assessment of statistical significance of Genomic Islands

    Directory of Open Access Journals (Sweden)

    Chaudhuri Probal

    2008-04-01

    Full Text Available Abstract Background: Many of the available methods for detecting Genomic Islands (GIs) in prokaryotic genomes use markers such as transposons, proximal tRNAs, flanking repeats etc., or they use other supervised techniques requiring training datasets. Most of these methods are primarily based on the biases in GC content or codon and amino acid usage of the islands. However, these methods either do not use any formal statistical test of significance or use statistical tests for which the critical values and the P-values are not adequately justified. We propose a method, which is unsupervised in nature and uses Monte-Carlo statistical tests based on randomly selected segments of a chromosome. Such tests are supported by precise statistical distribution theory, and consequently, the resulting P-values are quite reliable for making the decision. Results: Our algorithm (named Design-Island, an acronym for Detection of Statistically Significant Genomic Island) runs in two phases. Some 'putative GIs' are identified in the first phase, and those are refined into smaller segments containing horizontally acquired genes in the refinement phase. This method is applied to the Salmonella typhi CT18 genome, leading to the discovery of several new pathogenicity, antibiotic resistance and metabolic islands that were missed by earlier methods. Many of these islands contain mobile genetic elements like phage-mediated genes, transposons, integrase and IS elements, confirming their horizontal acquisition. Conclusion: The proposed method is based on statistical tests supported by precise distribution theory and reliable P-values, along with a technique for visualizing statistically significant islands. The performance of our method is better than many other well known methods in terms of their sensitivity and accuracy, and in terms of specificity, it is comparable to other methods.
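
    The Monte-Carlo test idea can be sketched as follows: the GC content of a candidate segment is compared against an empirical null distribution built from equally long segments drawn at random positions of the same chromosome. This is a toy illustration, not the Design-Island implementation; the sequence and coordinates are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
genome = "".join(rng.choice(list("ACGT"), size=100_000, p=[0.3, 0.2, 0.2, 0.3]))

def gc_content(seq: str) -> float:
    return (seq.count("G") + seq.count("C")) / len(seq)

start, length = 40_000, 5_000        # candidate island (hypothetical coordinates)
observed = gc_content(genome[start:start + length])

# Null distribution: GC content of equally long segments at random positions
null = np.array([
    gc_content(genome[s:s + length])
    for s in rng.integers(0, len(genome) - length, size=2_000)
])
# Empirical one-sided P-value with the +1 correction usual for Monte Carlo tests
p_value = (1 + np.sum(null >= observed)) / (1 + len(null))
print(f"GC = {observed:.3f}, Monte Carlo P = {p_value:.4f}")
```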

  12. Method for determining appropriate statistical models of the random cyclic stress amplitudes of a stainless pipe weld metal

    International Nuclear Information System (INIS)

    Wang Jinnuo; Zhao Yongxiang; Wang Shaohua

    2001-01-01

    It is revealed by a strain-controlled fatigue test that there is a significant scatter in the cyclic stress-strain responses of a nuclear engineering material, 1Cr18Ni9Ti stainless steel pipe-weld metal. This implies that the existing deterministic analysis might be non-conservative. Taking this scatter into account, a method for determining appropriate statistical models of material cyclic stress amplitudes is presented, by considering the total fit, consistency with fatigue physics, and safety of design of seven commonly used distributions fitted to the test data. The seven distributions are Weibull (two- and three-parameter), normal, lognormal, extreme minimum value, extreme maximum value, and exponential. In the method, statistical parameters of the distributions are evaluated by a linear regression technique. Statistical tests are made by a transformation from the t-distribution function to the Pearson statistical parameter, i.e. the linear relationship coefficient. The total fit is assessed by a parameter, the so-called fitted relationship coefficient of the empirical and theoretical failure probabilities. The consistency with fatigue physics is analyzed via the hazard rate curves of the distributions. The safety of design is measured by examining the change of predicted errors in the tail regions of the distributions.
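
    A minimal sketch of this kind of distribution screening, with hypothetical stress amplitudes: each candidate family is fitted by maximum likelihood and the fits are ranked by the probability-plot correlation coefficient, loosely analogous to the linear relationship coefficient used above (this is not the authors' procedure).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
amplitudes = 40.0 + 180.0 * rng.weibull(2.2, size=60)  # MPa, hypothetical

candidates = {
    "Weibull (3-par)": stats.weibull_min,
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "extreme minimum": stats.gumbel_l,
    "extreme maximum": stats.gumbel_r,
    "exponential": stats.expon,
}
for name, dist in candidates.items():
    params = dist.fit(amplitudes)  # maximum-likelihood fit (shape(s), loc, scale)
    # correlation between ordered data and fitted theoretical quantiles
    _, (slope, intercept, r) = stats.probplot(amplitudes, sparams=params, dist=dist)
    print(f"{name:16s} r = {r:.4f}")
```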

  13. Using Cochran's Z Statistic to Test the Kernel-Smoothed Item Response Function Differences between Focal and Reference Groups

    Science.gov (United States)

    Zheng, Yinggan; Gierl, Mark J.; Cui, Ying

    2010-01-01

    This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…
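
    A toy sketch of the kernel-smoothing side of this approach: Nadaraya-Watson estimates of an item response function for a reference and a focal group, whose pointwise difference would feed a DIF statistic. The data are simulated, and this is not the authors' Cochran's Z implementation.

```python
import numpy as np

def kernel_smooth(theta, y, grid, h=0.4):
    """Nadaraya-Watson estimate of P(correct | ability) on a grid."""
    w = np.exp(-0.5 * ((grid[:, None] - theta[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(4)
theta_r, theta_f = rng.normal(0, 1, 500), rng.normal(0, 1, 300)  # ability estimates
y_r = (rng.random(500) < 1 / (1 + np.exp(-(theta_r - 0.2)))).astype(float)
y_f = (rng.random(300) < 1 / (1 + np.exp(-(theta_f - 0.6)))).astype(float)

grid = np.linspace(-2, 2, 21)
p_r = kernel_smooth(theta_r, y_r, grid)
p_f = kernel_smooth(theta_f, y_f, grid)
print(np.round(p_r - p_f, 3))  # positive values suggest DIF against the focal group
```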

  14. Engineering test facility design definition

    Science.gov (United States)

    Bercaw, R. W.; Seikel, G. R.

    1980-01-01

    The Engineering Test Facility (ETF) is the major focus of the Department of Energy (DOE) Magnetohydrodynamics (MHD) Program to facilitate commercialization and to demonstrate the commercial operability of MHD/steam electric power. The ETF will be a fully integrated commercial prototype MHD power plant with a nominal output of 200 MWe. Performance of this plant is expected to meet or surpass existing utility standards for fuel, maintenance, and operating costs; plant availability; load following; safety; and durability. It is expected to meet all applicable environmental regulations. The current design concept conforming to the general definition is described, along with the basis for its selection and the process which will be followed in further defining and updating the conceptual design.

  15. Nursing students' attitudes toward statistics: Effect of a biostatistics course and association with examination performance.

    Science.gov (United States)

    Kiekkas, Panagiotis; Panagiotarou, Aliki; Malja, Alvaro; Tahirai, Daniela; Zykai, Rountina; Bakalis, Nick; Stefanopoulos, Nikolaos

    2015-12-01

    Although statistical knowledge and skills are necessary for promoting evidence-based practice, health sciences students have expressed anxiety about statistics courses, which may hinder their learning of statistical concepts. To evaluate the effects of a biostatistics course on nursing students' attitudes toward statistics and to explore the association between these attitudes and their performance in the course examination. One-group quasi-experimental pre-test/post-test design. Undergraduate nursing students of the fifth or higher semester of studies, who attended a biostatistics course. Participants were asked to complete the pre-test and post-test forms of The Survey of Attitudes Toward Statistics (SATS)-36 scale at the beginning and end of the course respectively. Pre-test and post-test scale scores were compared, while correlations between post-test scores and participants' examination performance were estimated. Among 156 participants, post-test scores of the overall SATS-36 scale and of the Affect, Cognitive Competence, Interest and Effort components were significantly higher than pre-test ones, indicating that the course was followed by more positive attitudes toward statistics. Among 104 students who participated in the examination, higher post-test scores of the overall SATS-36 scale and of the Affect, Difficulty, Interest and Effort components were significantly but weakly correlated with higher examination performance. Students' attitudes toward statistics can be improved through appropriate biostatistics courses, while positive attitudes contribute to higher course achievements and possibly to improved statistical skills in later professional life. Copyright © 2015 Elsevier Ltd. All rights reserved.
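
    The pre-test/post-test comparison and the attitude-performance correlation described above can be sketched as follows, with hypothetical scale scores and exam marks rather than the study's data:

```python
import numpy as np
from scipy.stats import ttest_rel, pearsonr

rng = np.random.default_rng(8)
pre = rng.normal(4.2, 0.8, 156)             # SATS-36 scale scores before the course (hypothetical)
post = pre + rng.normal(0.3, 0.5, 156)      # attitudes improve on average after the course

t, p = ttest_rel(post, pre)                 # paired comparison of post vs pre
print(f"paired t = {t:.2f}, p = {p:.4f}")

# Correlation of post-test attitudes with exam marks for the 104 examinees
exam = 50 + 8 * (post[:104] - post.mean()) + rng.normal(0, 10, 104)
r, p_r = pearsonr(post[:104], exam)         # expected: weak positive association
print(f"r = {r:.2f}, p = {p_r:.4f}")
```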

  16. R for statistics

    CERN Document Server

    Cornillon, Pierre-Andre; Husson, Francois; Jegou, Nicolas; Josse, Julie; Kloareg, Maela; Matzner-Lober, Eric; Rouviere, Laurent

    2012-01-01

    An Overview of R: Main Concepts; Installing R; Work Session; Help; R Objects; Functions; Packages; Exercises. Preparing Data: Reading Data from File; Exporting Results; Manipulating Variables; Manipulating Individuals; Concatenating Data Tables; Cross-Tabulation; Exercises. R Graphics: Conventional Graphical Functions; Graphical Functions with lattice; Exercises. Making Programs with R: Control Flows; Predefined Functions; Creating a Function; Exercises. Statistical Methods: Introduction to the Statistical Methods; A Quick Start with R; Installing R; Opening and Closing R; The Command Prompt; Attribution, Objects, and Function; Selection; Other Rcmdr Package; Importing (or Inputting) Data; Graphs; Statistical Analysis; Hypothesis Test; Confidence Intervals for a Mean; Chi-Square Test of Independence; Comparison of Two Means; Testing Conformity of a Proportion; Comparing Several Proportions; The Power of a Test; Regression; Simple Linear Regression; Multiple Linear Regression; Partial Least Squares (PLS) Regression; Analysis of Variance and Covariance; One-Way Analysis of Variance; Multi-Way Analysis of Varian...

  17. Consensus on Intermediate Scale Salt Field Test Design

    Energy Technology Data Exchange (ETDEWEB)

    Kuhlman, Kristopher L; Mills, Melissa Marie; Matteo, Edward N

    2017-03-01

    This report summarizes the first stage in a collaborative effort by Sandia, Los Alamos, and Lawrence Berkeley National Laboratories to design a small-diameter borehole heater test in salt at the Waste Isolation Pilot Plant (WIPP) for the US Department of Energy Office of Nuclear Energy (DOE-NE). The intention is to complete test design during the remainder of fiscal year 2017 (FY17), and the implementation of the test will begin in FY18. This document is the result of regular meetings between the three national labs and the DOE-NE, and is intended to represent a consensus of these meetings and discussions.

  18. Design of Test Parts to Characterize Micro Additive Manufacturing Processes

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Mischkot, Michael

    2015-01-01

    The minimum feature size and obtainable tolerances of additive manufacturing processes are linked to the smallest volumetric elements (voxels) that can be created. This work presents the iterative design of a test part to investigate the resolution of AM processes with voxel sizes at the micro scale. Each design iteration reduces the test part size, increases the number of test features, improves functionality, and decreases coupling in the part. The final design is a set of three test parts that are easy to orient and measure, and that provide useful information about micro additive manufacturing processes.

  19. Designing Solutions by a Student Centred Approach: Integration of Chemical Process Simulation with Statistical Tools to Improve Distillation Systems

    Directory of Open Access Journals (Sweden)

    Isabel M. Joao

    2017-09-01

    Full Text Available Projects thematically focused on simulation and statistical techniques for designing and optimizing chemical processes can be helpful in chemical engineering education in order to meet the needs of engineers. We argue for the relevance of such projects to promote a student-centred approach and boost higher-order thinking skills. This paper addresses the use of Aspen HYSYS by Portuguese chemical engineering master students to model distillation systems, together with statistical experimental design techniques, in order to optimize the systems, highlighting the value of applying problem-specific knowledge, simulation tools and sound statistical techniques. The paper summarizes the work developed by the students in order to model steady-state processes, model dynamic processes and optimize the distillation systems, emphasizing the benefits of the simulation tools and statistical techniques in helping the students learn how to learn. Students strengthened their domain-specific knowledge and became motivated to rethink and improve chemical processes in their future chemical engineering profession. We discuss the main advantages of the methodology from the students’ and teachers’ perspectives.

  20. Statistical monitoring of linear antenna arrays

    KAUST Repository

    Harrou, Fouzi

    2016-11-03

    The paper concerns the problem of monitoring linear antenna arrays using the generalized likelihood ratio (GLR) test. When an abnormal event (fault) affects an array of antenna elements, the radiation pattern changes and significant deviation from the desired design performance specifications can result. In this paper, the detection of faults is addressed from a statistical point of view as a fault detection problem. Specifically, a statistical method resting on the GLR principle is used to detect potential faults in linear arrays. To assess the strength of the GLR-based monitoring scheme, three case studies involving different types of faults were performed. Simulation results clearly show the effectiveness of the GLR-based fault-detection method for monitoring the performance of linear antenna arrays.
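
    A minimal sketch of a GLR test of the kind used above, for the simplest case of a mean shift in a Gaussian residual sequence with known variance (not the paper's array model): under H0 the mean is zero, and twice the log likelihood ratio reduces to n·x̄²/σ², compared against a chi-square(1) threshold.

```python
import numpy as np
from scipy.stats import chi2

def glr_mean_shift(x, sigma):
    """2 * log likelihood ratio for H0: mean = 0 vs H1: mean unknown, sigma known."""
    n, xbar = len(x), np.mean(x)
    return n * xbar**2 / sigma**2

rng = np.random.default_rng(5)
healthy = rng.normal(0.0, 1.0, 100)
faulty = rng.normal(0.8, 1.0, 100)   # an element fault shifts the residual mean

threshold = chi2.ppf(0.99, df=1)     # 1% false-alarm rate
for name, x in [("healthy", healthy), ("faulty", faulty)]:
    g = glr_mean_shift(x, sigma=1.0)
    print(f"{name}: GLR = {g:6.1f}, alarm = {g > threshold}")
```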

  1. High Temperature Gas-Cooled Test Reactor Point Design: Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Sterbentz, James William [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bayless, Paul David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Nelson, Lee Orville [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gougar, Hans David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kinsey, J. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-03-01

    A point design has been developed for a 200-MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched uranium oxycarbide fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technology readiness level, licensing approach, and costs of the test reactor point design.

  2. High Temperature Gas-Cooled Test Reactor Point Design: Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Sterbentz, James William [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bayless, Paul David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Nelson, Lee Orville [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gougar, Hans David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-01-01

    A point design has been developed for a 200-MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched uranium oxycarbide fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technology readiness level, licensing approach, and costs of the test reactor point design.

  3. The use and misuse of statistical methodologies in pharmacology research.

    Science.gov (United States)

    Marino, Michael J

    2014-01-01

    Descriptive, exploratory, and inferential statistics are necessary components of hypothesis-driven biomedical research. Despite the ubiquitous need for these tools, the emphasis on statistical methods in pharmacology has become dominated by inferential methods, often chosen more by the availability of user-friendly software than by any understanding of the data set or the critical assumptions of the statistical tests. Such frank misuse of statistical methodology and the quest to reach the mystical α < 0.05 point to deficiencies in statistical training. Perhaps more critically, a poor understanding of statistical tools limits the conclusions that may be drawn from a study by divorcing the investigator from their own data. The net result is a decrease in quality and confidence in research findings, fueling recent controversies over the reproducibility of high-profile findings and effects that appear to diminish over time. The recent development of "omics" approaches, leading to the production of massive higher-dimensional data sets, has amplified these issues, making it clear that new approaches are needed to appropriately and effectively mine this type of data. Unfortunately, statistical education in the field has not kept pace. This commentary provides a foundation for an intuitive understanding of statistics that fosters an exploratory approach and an appreciation for the assumptions of various statistical tests, which hopefully will increase the correct use of statistics, the application of exploratory data analysis, and the use of statistical study design, with the goal of increasing reproducibility and confidence in the literature. Copyright © 2013. Published by Elsevier Inc.

  4. Overview of the IFMIF test facility design in IFMIF/EVEDA phase

    International Nuclear Information System (INIS)

    Tian, Kuo; Abou-Sena, Ali; Arbeiter, Frederik; García, Ángela; Gouat, Philippe; Heidinger, Roland; Heinzel, Volker; Ibarra, Ángel; Leysen, Willem; Mas, Avelino; Mittwollen, Martin; Möslang, Anton; Theile, Jürgen; Yamamoto, Michiyoshi; Yokomine, Takehiko

    2015-01-01

    Highlights: • This paper summarizes the current design status of the IFMIF EVEDA test facility. • The principal functions of the test facility and key components are described. • Brief specifications of the systems and key components are addressed. - Abstract: The test facility (TF) is one of the three major facilities of the International Fusion Material Irradiation Facility (IFMIF). Engineering designs of TF main systems and key components have been initiated and developed in the IFMIF EVEDA (Engineering Validation and Engineering Design Activities) phase since 2007. The related work covers the designs of a test cell, which is the meeting point of the TF with the accelerator facility and the lithium facility; a series of test modules for experiments under different irradiation conditions; an access cell to accommodate remote handling systems; four test module handling cells for test module processing and assembling; and test facility ancillary systems for engineering support on energy, media, and control infrastructure. This paper summarizes the principal functions, brief specifications, and current design status of the above mentioned IFMIF TF systems and key components.

  5. OPTIMUM DESIGN OF EXPERIMENTS FOR ACCELERATED RELIABILITY TESTING

    Directory of Open Access Journals (Sweden)

    Sebastian Marian ZAHARIA

    2014-05-01

    Full Text Available This paper presents a case study that demonstrates how design of experiments (DOE) information can be used to design better accelerated reliability tests. In the case study described in this paper, a comparison and optimization of the main accelerated reliability test plans (3 Level Best Standard Plan, 3 Level Best Compromise Plan, 3 Level Best Equal Expected Number Failing Plan, 3 Level 4:2:1 Allocation Plan) is carried out. Before starting an accelerated reliability test, it is advisable to have a plan that helps in accurately estimating reliability at operating conditions while minimizing test time and costs. A test plan should be used to decide on the appropriate stress levels that should be used (for each stress type) and the number of test units that need to be allocated to the different stress levels (for each combination of the different stress types' levels). The case study used the ALTA 7 software, which provides a complete analysis of data from accelerated reliability tests.

  6. Statistical mixture design and multivariate analysis of inkjet printed a-WO3/TiO2/WOX electrochromic films.

    Science.gov (United States)

    Wojcik, Pawel Jerzy; Pereira, Luís; Martins, Rodrigo; Fortunato, Elvira

    2014-01-13

    An efficient mathematical strategy in the field of solution-processed electrochromic (EC) films is outlined as a combination of experimental work, modeling, and information extraction from massive computational data via statistical software. Design of Experiments (DOE) was used for statistical multivariate analysis and prediction of mixtures through a multiple regression model, as well as for the optimization of a five-component sol-gel precursor subjected to complex constraints. This approach significantly reduces the number of experiments to be realized, from 162 in the full factorial (L=3) and 72 in the extreme vertices (D=2) approach down to only 30 runs, while still maintaining a high accuracy of the analysis. By carrying out a finite number of experiments, the empirical modeling in this study shows reasonably good prediction ability in terms of the overall EC performance. An optimized ink formulation was employed in a prototype of a passive EC matrix fabricated in order to test and trial this optically active material system together with a solid-state electrolyte for prospective application in EC displays. Coupling DOE with chromogenic material formulation shows the potential to maximize the capabilities of these systems and ensures increased productivity in many potential solution-processed electrochemical applications.
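
    A minimal sketch of the mixture-regression idea behind such a DOE analysis: a Scheffé quadratic model fitted to a three-component mixture whose proportions sum to one. The data are simulated, and the five-component EC precursor system of the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.dirichlet(np.ones(3), size=30)   # mixture proportions, each row sums to 1
y = (2 * x[:, 0] + 3 * x[:, 1] + 1.5 * x[:, 2]
     + 4 * x[:, 0] * x[:, 1] + rng.normal(0, 0.1, 30))  # simulated response

# Scheffé model has no intercept: linear terms plus pairwise blending terms
X = np.column_stack([x, x[:, 0] * x[:, 1], x[:, 0] * x[:, 2], x[:, 1] * x[:, 2]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("b1, b2, b3, b12, b13, b23 =", np.round(coef, 2))
```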

  7. Design verification testing for fuel element type CAREM

    International Nuclear Information System (INIS)

    Martin Ghiselli, A.; Bonifacio Pulido, K.; Villabrille, G.; Rozembaum, I.

    2013-01-01

    The hydraulic and hydrodynamic characterization tests are part of the design verification process for a nuclear fuel element prototype and its components. These tests are performed in a low pressure and temperature facility. The tests require the definition of simulation parameters for setting the test conditions, the evaluation of results to feed back into mathematical models, the extrapolation of results to reactor conditions, and finally a decision on the acceptability of the tested prototype. (author)

  8. Statistical Physics

    CERN Document Server

    Wannier, Gregory Hugh

    1966-01-01

    Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to the independent study and research of statistical physics.Designed for

  9. Statistical evaluation of diagnostic performance topics in ROC analysis

    CERN Document Server

    Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E

    2016-01-01

    Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medi...

  10. Statistical inference based on divergence measures

    CERN Document Server

    Pardo, Leandro

    2005-01-01

    The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. However, in spite of the fact that divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach. Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data, with emphasis on problems in contingency tables and loglinear models, using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, prese...

  11. Testing the Developmental Origins of Health and Disease Hypothesis for Psychopathology Using Family-Based Quasi-Experimental Designs

    Science.gov (United States)

    D’Onofrio, Brian M.; Class, Quetzal A.; Lahey, Benjamin B.; Larsson, Henrik

    2014-01-01

    The Developmental Origins of Health and Disease (DOHaD) hypothesis is a broad theoretical framework that emphasizes how early risk factors have a causal influence on psychopathology. Researchers have raised concerns about the causal interpretation of statistical associations between early risk factors and later psychopathology because most existing studies have been unable to rule out the possibility of environmental and genetic confounding. In this paper we illustrate how family-based quasi-experimental designs can test the DOHaD hypothesis by ruling out alternative hypotheses. We review the logic underlying sibling-comparison, co-twin control, offspring of siblings/twins, adoption, and in vitro fertilization designs. We then present results from studies using these designs focused on broad indices of fetal development (low birth weight and gestational age) and a particular teratogen, smoking during pregnancy. The results provide mixed support for the DOHaD hypothesis for psychopathology, illustrating the critical need to use design features that rule out unmeasured confounding. PMID:25364377

  12. Statistical optimization of the growth factors for Chaetoceros neogracile using fractional factorial design and central composite design.

    Science.gov (United States)

    Jeong, Sung-Eun; Park, Jae-Kweon; Kim, Jeong-Dong; Chang, In-Jeong; Hong, Seong-Joo; Kang, Sung-Ho; Lee, Choul-Gyun

    2008-12-01

    Statistical experimental designs involving (i) a fractional factorial design (FFD) and (ii) a central composite design (CCD) were applied to optimize the culture medium constituents for production of a unique antifreeze protein by the Antarctic microalga Chaetoceros neogracile. The results of the FFD suggested that NaCl, KCl, MgCl2, and Na2SiO3 were significant variables that highly influenced the growth rate and biomass production. The optimum culture medium for the production of an antifreeze protein from C. neogracile was found to be Kalle's artificial seawater, pH 7.0 ± 0.5, consisting of 28.566 g/l of NaCl, 3.887 g/l of MgCl2, 1.787 g/l of MgSO4, 1.308 g/l of CaSO4, 0.832 g/l of K2SO4, 0.124 g/l of CaCO3, 0.103 g/l of KBr, 0.0288 g/l of SrSO4, and 0.0282 g/l of H3BO3. The antifreeze activity significantly increased after cells were treated with cold shock (at −5 °C) for 14 h. To the best of our knowledge, this is the first report demonstrating an antifreeze-like protein of C. neogracile.
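
    For illustration, the sketch below constructs a face-centred central composite design for two factors in coded units (factorial, axial and centre runs); the paper's actual FFD and CCD settings for the culture medium are not reproduced here.

```python
import itertools
import numpy as np

factorial = np.array(list(itertools.product([-1, 1], repeat=2)))  # 4 corner runs
axial = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]])              # alpha = 1 (face-centred)
centre = np.zeros((3, 2))                                         # 3 centre replicates

ccd = np.vstack([factorial, axial, centre])
print(ccd)  # 11 runs x 2 coded factors, ready to map onto real concentrations
```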

  13. Design research in statistics education : on symbolizing and computer tools

    NARCIS (Netherlands)

    Bakker, A.

    2004-01-01

    The present knowledge society requires statistical literacy - the ability to interpret, critically evaluate, and communicate about statistical information and messages (Gal, 2002). However, research shows that students generally do not gain satisfactory statistical understanding. The research

  14. Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology.

    Science.gov (United States)

    McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad

    2011-07-01

    The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
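
    The limits-of-agreement calculation advocated above is short enough to sketch directly: the mean difference between two instruments, plus and minus 1.96 standard deviations of the differences. The paired readings below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
truth = rng.normal(44.0, 2.0, 40)              # e.g. corneal power in dioptres (hypothetical)
device_a = truth + rng.normal(0.00, 0.25, 40)  # instrument A readings
device_b = truth + rng.normal(0.10, 0.25, 40)  # instrument B readings with a small bias

diff = device_a - device_b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                  # half-width of the 95% limits of agreement
print(f"bias = {bias:+.2f} D, 95% limits of agreement: "
      f"{bias - loa:+.2f} to {bias + loa:+.2f} D")
```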

  15. Design and study of Engineering Test Facility - Helium Circulator

    International Nuclear Information System (INIS)

    Jiang Huijing; Ye Ping; Zhao Gang; Geng Yinan; Wang Jie

    2015-01-01

    The helium circulator is one of the key components of the High-Temperature Gas-cooled Reactor Pebble-bed Module (HTR-PM). In order to simulate most normal and accident operating conditions of the helium circulator in HTR-PM, a full-scale engineering test loop at rated flow rate and power, called the Engineering Test Facility - Helium Circulator (ETF-HC), was designed and established. Two helium circulator prototypes, one supported by Active Magnetic Bearings (AMB) and the other sealed by dry gas seals, were to be tested on ETF-HC; a special interchangeable design was therefore considered. ETF-HC is compactly constructed and consists of eleven sub-systems. In order to reduce the flow resistance of the circuit, special ducts, elbows, valves and flowmeters were selected. Two stages of heat exchange loops were designed, and a helium/high-pressure pure-water heat exchanger was applied to ensure that the water would not vaporize while simulating accident conditions. Commissioning tests were carried out, and the operation results showed that ETF-HC meets the requirements of helium circulator operation. Different kinds of experiments are planned on this test facility, including mechanical and aerodynamic performance tests, durability tests and so on. These tests will characterize the features and performance of the helium circulator and verify its feasibility, availability and reliability. (author)

  16. Designing a testing service at Baristand Industri Medan’s liquid waste laboratory

    Science.gov (United States)

    Kusumawaty, Dewi; Napitupulu, Humala L.; Sembiring, Meilita T.

    2018-03-01

    Baristand Industri Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of the services most often used at Baristand Industri Medan is the liquid waste testing service. The unit's service standard for testing is nine working days. In 2015, 89.66% of liquid waste testing services did not meet this standard because many samples accumulated. The purpose of this research is to design an online service for scheduling the arrival of liquid waste samples. The method used is information system design, consisting of model design, output design, input design, database design and technology design. The resulting online liquid waste testing information system consists of three pages: one for the customer, one for the sample recipient and one for the laboratory. Simulation results with scheduled samples show that the service standard of nine working days can be met.

  17. Final design of ITER port plug test facility

    Energy Technology Data Exchange (ETDEWEB)

    Cerisier, Thierry, E-mail: thierry.cerisier@yahoo.fr [ITER Organization, Route de Vinon-sur-Verdon, CS 90046, St Paul-lez-Durance Cedex, 13067 (France); Levesy, Bruno [ITER Organization, Route de Vinon-sur-Verdon, CS 90046, St Paul-lez-Durance Cedex, 13067 (France); Romannikov, Alexander [Institution “Project Center ITER”, Kurchatov sq. 1, Building 3, Moscow 123182 (Russian Federation); Rumyantsev, Yuri [JSC “Cryogenmash”, Moscow reg., Balashikha 143907 (Russian Federation); Cordier, Jean-Jacques; Dammann, Alexis [ITER Organization, Route de Vinon-sur-Verdon, CS 90046, St Paul-lez-Durance Cedex, 13067 (France); Minakov, Victor; Rosales, Natalya; Mitrofanova, Elena [JSC “Cryogenmash”, Moscow reg., Balashikha 143907 (Russian Federation); Portone, Sergey; Mironova, Ekaterina [Institution “Project Center ITER”, Kurchatov sq. 1, Building 3, Moscow 123182 (Russian Federation)

    2016-11-01

    Highlights: • We introduce the port plug test facility (purpose and status of the design). • We present the PPTF sub-systems. • We present the environmental and functional tests. • We present the occupational and nuclear safety functions. • We conclude on the achievements and next steps. - Abstract: To achieve the overall ITER machine availability target, the availability of diagnostics and heating port plugs shall be as high as 99.5%. To fulfill this requirement, it is mandatory to test the port plugs at operating temperature before installation on the machine and after refurbishment. The ITER port plug test facility (PPTF) is composed of several test stands that can be used to test the port plugs either at the end of manufacturing (in a non-nuclear environment) or after refurbishment in the ITER hot cell facility. The PPTF provides the possibility to perform environmental (leak tightness, vacuum and thermo-hydraulic performance) and functional tests (radio frequency acceptance tests, behavior of the plugs’ steering mechanism and calibration of diagnostics) on upper and equatorial port plugs. The final design of the port plug test facility is described. The configuration of the standalone test stands and the integration in the hot cell facility are presented.

  18. Basic principles of test-negative design in evaluating influenza vaccine effectiveness.

    Science.gov (United States)

    Fukushima, Wakaba; Hirota, Yoshio

    2017-08-24

    Based on the unique characteristics of influenza, the concept of "monitoring" influenza vaccine effectiveness (VE) across seasons using the same observational study design has been developed. In recent years, there has been a growing number of influenza VE reports using the test-negative design, which can minimize both misclassification of disease and confounding by health-care-seeking behavior. Although the test-negative design offers considerable advantages, there is some concern that its widespread use without knowledge of the basic principles of epidemiology could produce invalid findings. In this article, we briefly review the basic concepts of the test-negative design with respect to classic study designs such as cohort and case-control studies. We also discuss selection bias, which may be of concern in countries where rapid diagnostic testing is frequently used in routine clinical practice, as in Japan. Copyright © 2017. Published by Elsevier Ltd.
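
    The abstract gives no formulas, but the usual test-negative estimate is VE = (1 - OR) x 100, where the odds ratio compares vaccination odds between test-positive cases and test-negative controls. A minimal sketch with invented 2x2 counts:

```python
# Test-negative design VE estimate from a hypothetical 2x2 table.
import math

vacc_pos, unvacc_pos = 40, 160    # influenza test-positive patients
vacc_neg, unvacc_neg = 120, 180   # test-negative controls

odds_ratio = (vacc_pos / unvacc_pos) / (vacc_neg / unvacc_neg)
ve = (1 - odds_ratio) * 100
print(f"OR = {odds_ratio:.2f}, VE = {ve:.1f}%")

# Rough 95% CI via the standard error of log(OR) for a 2x2 table.
se = math.sqrt(1/vacc_pos + 1/unvacc_pos + 1/vacc_neg + 1/unvacc_neg)
or_lo, or_hi = (math.exp(math.log(odds_ratio) + s * se) for s in (-1.96, 1.96))
print(f"VE 95% CI = [{(1 - or_hi) * 100:.1f}%, {(1 - or_lo) * 100:.1f}%]")
```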

  19. Crashworthy airframe design concepts: Fabrication and testing

    Science.gov (United States)

    Cronkhite, J. D.; Berry, V. L.

    1982-01-01

    Crashworthy floor concepts applicable to general aviation aircraft metal airframe structures were investigated. Initially, several energy-absorbing lower fuselage structure concepts were evaluated. Full-scale floor sections representative of a twin engine, general aviation airplane lower fuselage structure were designed and fabricated. The floors featured an upper high-strength platform with an energy-absorbing, crushable structure underneath. Eighteen floors were fabricated that incorporated five different crushable subfloor concepts. The floors were then evaluated through static and dynamic testing. The computer programs NASTRAN and KRASH were used for the static and dynamic analysis of the floor section designs. Two twin engine airplane fuselages were modified to incorporate the most promising crashworthy floor sections for test evaluation.

  20. Characterizing the Joint Effect of Diverse Test-Statistic Correlation Structures and Effect Size on False Discovery Rates in a Multiple-Comparison Study of Many Outcome Measures

    Science.gov (United States)

    Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James

    2011-01-01

    In their 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report the results of a simulation assessing the robustness of their adaptive step-down procedure (GBS) for controlling the false discovery rate (FDR) when normally distributed test statistics are serially correlated. In this study we extend the investigation to the case of multiple comparisons involving correlated non-central t-statistics, in particular when several treatments or time periods are being compared to a control in a repeated-measures design with many dependent outcome measures. In addition, we consider several dependence structures other than serial correlation and illustrate how the FDR depends on the interaction between effect size and the type of correlation structure as indexed by Foerstner's distance metric from the identity matrix. The relationship between the correlation matrix R of the original dependent variables and R̃, the correlation matrix of the associated t-statistics, is also studied. In general R̃ depends not only on R, but also on sample size and the signed effect sizes for the multiple comparisons.
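
    A rough illustration of the setting studied above, assuming equicorrelated normal test statistics and using the standard Benjamini-Hochberg step-up rule rather than the adaptive GBS procedure; all counts, the correlation, and the effect size are invented:

```python
# Empirical FDR of Benjamini-Hochberg under equicorrelated test statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m, m_true, rho, n_sim = 100, 20, 0.5, 2000   # tests, non-nulls, correlation, reps
effect, q = 3.0, 0.05                        # shift of non-null statistics, FDR level

fdp_sum = 0.0
for _ in range(n_sim):
    # Equicorrelated normals: z_i = sqrt(rho)*shared + sqrt(1-rho)*own noise.
    shared = rng.standard_normal()
    z = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal(m)
    z[:m_true] += effect                     # first m_true tests are non-null
    p = 2 * stats.norm.sf(np.abs(z))

    # BH step-up: reject the k smallest p-values, k the largest index with
    # p_(k) <= (k/m) * q.
    order = np.argsort(p)
    passed = np.nonzero(p[order] <= q * np.arange(1, m + 1) / m)[0]
    k = passed.max() + 1 if passed.size else 0
    rejected = order[:k]

    v = np.sum(rejected >= m_true)           # false discoveries (nulls rejected)
    fdp_sum += v / k if k else 0.0

print(f"empirical FDR ~ {fdp_sum / n_sim:.3f}")
```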

  1. Probabilistic Design of Wind Turbines

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Toft, H.S.

    2010-01-01

    Probabilistic design of wind turbines requires definition of the structural elements to be included in the probabilistic basis: e.g., blades, tower, foundation; identification of important failure modes; careful stochastic modeling of the uncertain parameters; recommendations for target reliability....... It is described how uncertainties in wind turbine design related to computational models, statistical data from test specimens, results from a few full-scale tests and from prototype wind turbines can be accounted for using the Maximum Likelihood Method and a Bayesian approach. Assessment of the optimal...... reliability level by cost-benefit optimization is illustrated by an offshore wind turbine example. Uncertainty modeling is illustrated by an example where physical, statistical and model uncertainties are estimated....

  2. Efficient preliminary floating offshore wind turbine design and testing methodologies and application to a concrete spar design.

    Science.gov (United States)

    Matha, Denis; Sandner, Frank; Molins, Climent; Campos, Alexis; Cheng, Po Wen

    2015-02-28

    The current key challenge in the floating offshore wind turbine industry and research is designing economic floating systems that can compete with fixed-bottom offshore turbines in terms of levelized cost of energy. The preliminary platform design and early experimental design assessments are critical elements in the overall design process. In this contribution, a brief review of current floating offshore wind turbine platform pre-design and scaled testing methodologies is provided, with a focus on their ability to accommodate the coupled dynamic behaviour of floating offshore wind systems. The exemplary design and testing methodology for a monolithic concrete spar platform as performed within the European KIC AFOSP project is presented. Results from the experimental tests compared to numerical simulations are presented and analysed and show very good agreement for relevant basic dynamic platform properties. Extreme and fatigue loads and cost analysis of the AFOSP system confirm the viability of the presented design process. In summary, the exemplary application of the reduced design and testing methodology for AFOSP confirms that it represents a viable procedure during pre-design of floating offshore wind turbine platforms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  3. Conceptual Design for a High-Temperature Gas Loop Test Facility

    Energy Technology Data Exchange (ETDEWEB)

    James B. Kesseli

    2006-08-01

    This report documents an early-stage conceptual design for a high-temperature gas test loop. The objectives accomplished by the study include: (1) investigation of existing gas test loops to determine their capabilities and how the proposed system might best complement them, (2) development of a preliminary test plan to help identify the performance characteristics required of the test unit, (3) development of test loop requirements, (4) development of a conceptual design including process flow sheet, mechanical layout, and equipment specifications and costs, and (5) development of a preliminary test loop safety plan.

  4. Statistics For Dummies

    CERN Document Server

    Rumsey, Deborah

    2011-01-01

    The fun and easy way to get down to business with statistics. Stymied by statistics? No fear: this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tracks to a typical first semester statistics course

  5. Nuclear multifragmentation, its relation to general physics. A rich test ground of the fundamentals of statistical mechanics

    International Nuclear Information System (INIS)

    Gross, D.H.E.

    2006-01-01

    Heat can flow from cold to hot at any phase separation, even in macroscopic systems. Therefore also Lynden-Bell's famous gravo-thermal catastrophe must be reconsidered. In contrast to traditional canonical Boltzmann-Gibbs statistics, this is correctly described only by microcanonical statistics. Systems studied in chemical thermodynamics (ChTh) by using canonical statistics consist of several homogeneous macroscopic phases. Evidently, macroscopic statistics as in chemistry cannot and should not be applied to non-extensive or inhomogeneous systems like nuclei or galaxies. Nuclei are small and inhomogeneous. Multifragmented nuclei are even more inhomogeneous and the fragments even smaller. Phase transitions of first order, and especially phase separations, therefore cannot be described by a (homogeneous) canonical ensemble. Taking this seriously, fascinating perspectives open up for statistical nuclear fragmentation as a test ground for the basic principles of statistical mechanics, especially of phase transitions, without the use of the thermodynamic limit. Moreover, there is also a lot of similarity between the accessible phase space of fragmenting nuclei and inhomogeneous multistellar systems. This underlines the fundamental significance for statistical physics in general. (orig.)

  6. SP-100 GES/NAT radiation shielding systems design and development testing

    International Nuclear Information System (INIS)

    Disney, R.K.; Kulikowski, H.D.; McGinnis, C.A.; Reese, J.C.; Thomas, K.; Wiltshire, F.

    1991-01-01

    Advanced Energy Systems (AES) of Westinghouse Electric Corporation is under subcontract to the General Electric Company to supply nuclear radiation shielding components for the SP-100 Ground Engineering System (GES) Nuclear Assembly Test to be conducted at Westinghouse Hanford Company at Richland, Washington. The radiation shielding components are integral to the Nuclear Assembly Test (NAT) assembly and include prototypic and non-prototypic radiation shielding components which provide prototypic test conditions for the SP-100 reactor subsystem and reactor control subsystem components during the GES/NAT operations. W-AES is designing three radiation shield components for the NAT assembly: a prototypic Generic Flight System (GFS) shield, the Lower Internal Facility Shield (LIFS), and the Upper Internal Facility Shield (UIFS). This paper describes the design approach and development testing to support the design, fabrication, and assembly of these three shield components for use within the vacuum vessel of the GES/NAT. The GES/NAT shields must be designed to operate in a high vacuum which simulates space operations. The GFS shield and LIFS must provide prototypic radiation/thermal environments and mechanical interfaces for reactor system components. The NAT shields, in combination with the test facility shielding, must provide adequate radiation attenuation for overall test operations. Special design considerations account for the ground test facility effects on the prototypic GFS shield. Validation of the GFS shield design and performance will be based on detailed Monte Carlo analyses and developmental testing of design features. Full-scale prototype testing of the shield subsystems is not planned

  7. Analysis of Multiple Choice Tests Designed by Faculty Members of Kermanshah University of Medical Sciences

    Directory of Open Access Journals (Sweden)

    Reza Pourmirza Kalhori

    2013-12-01

    .14 and 23.99% (±15.40), respectively. Comparison of the variances of qualitative and quantitative indices indicated significant differences among faculties in difficulty index, differentiation index, reliability of the total test and percentage of taxonomy II (P < 0.001), but these differences were not observed in taxonomies I and III. The results of the Tukey multiple comparison test revealed a statistically significant increase in the reliability of the medical faculty tests (P = 0.001) and a statistically significant decrease in the difficulty index of paramedical faculty tests compared to other faculties (P = 0.041). Given that the differentiation index, the percentage of taxonomies II and III and the percentage of questions with no structural problems fell below the standard criterion in some faculties, it is necessary to provide qualitative and quantitative feedback for the faculty members, as mentioned in previous studies (5), to promote their knowledge of designing multiple-choice questions as a tool for assessing students.

  8. Control room design and human factors using a virtual reality based tool for design, test and training

    International Nuclear Information System (INIS)

    Lirvall, Peter

    1998-02-01

    This report describes a user-centred approach to control room design adopted by IFE for the nuclear industry. The novelty of this approach is the development of a Control Room Philosophy and the use of Virtual Reality (VR) technology as a tool in the design process, integrated with a specially developed Design Documentation System (DDS) and a process display prototyping tool, PICASSO-3. The control room philosophy identifies all functional aspects of a control centre to define the baseline principles and guidelines for the design. The use of VR technology enables end-users of the control room design (e.g. control room operators) to specify their preferred design of the new control room, and replaces the need for a physical mock-up to test and evaluate the proposed design. The DDS, integrated with the VR design tool, guides the control room operators, through a structured approach, to document the proposed design in a complete design specification. The VR tool, specially developed by IFE, is called the VR based Design, Test and Training tool (VR DT&T). It is intended not only to visualise the design, but also to test and evaluate it. When the design is implemented, the same model is re-used as a VR based training simulator for operators. A special feature of the VR DT&T tool is that the verification and validation (V&V) tests concerning human factors are performed according to the regulatory standards for nuclear control rooms.

  9. 10 CFR 63.133 - Design testing.

    Science.gov (United States)

    2010-01-01

    ... design, such as, for example, borehole and shaft seals, backfill, and drip shields, as well as the... placement is begun. (d) Tests must be conducted to evaluate the effectiveness of borehole, shaft, and ramp seals before full-scale operation proceeds to seal boreholes, shafts, and ramps. ...

  10. An evaluation of the quality of statistical design and analysis of published medical research: results from a systematic survey of general orthopaedic journals.

    Science.gov (United States)

    Parsons, Nick R; Price, Charlotte L; Hiskens, Richard; Achten, Juul; Costa, Matthew L

    2012-04-25

    The application of statistics in reported research in trauma and orthopaedic surgery has become ever more important and complex. Despite the extensive use of statistical analysis, it is still a subject which is often not conceptually well understood, resulting in clear methodological flaws and inadequate reporting in many papers. A detailed statistical survey sampled 100 representative orthopaedic papers using a validated questionnaire that assessed the quality of the trial design and statistical analysis methods. The survey found evidence of failings in study design, statistical methodology and presentation of the results. Overall, in 17% (95% confidence interval 10-26%) of the studies investigated, the conclusions were not clearly justified by the results; in 39% (30-49%) of studies a different analysis should have been undertaken; and in 17% (10-26%) a different analysis could have made a difference to the overall conclusions. It is only by an improved dialogue between statistician, clinician, reviewer and journal editor that the failings in design methodology and analysis highlighted by this survey can be addressed.

  11. An evaluation of the quality of statistical design and analysis of published medical research: results from a systematic survey of general orthopaedic journals

    Directory of Open Access Journals (Sweden)

    Parsons Nick R

    2012-04-01

    Background: The application of statistics in reported research in trauma and orthopaedic surgery has become ever more important and complex. Despite the extensive use of statistical analysis, it is still a subject which is often not conceptually well understood, resulting in clear methodological flaws and inadequate reporting in many papers. Methods: A detailed statistical survey sampled 100 representative orthopaedic papers using a validated questionnaire that assessed the quality of the trial design and statistical analysis methods. Results: The survey found evidence of failings in study design, statistical methodology and presentation of the results. Overall, in 17% (95% confidence interval 10–26%) of the studies investigated the conclusions were not clearly justified by the results, in 39% (30–49%) of studies a different analysis should have been undertaken and in 17% (10–26%) a different analysis could have made a difference to the overall conclusions. Conclusion: It is only by an improved dialogue between statistician, clinician, reviewer and journal editor that the failings in design methodology and analysis highlighted by this survey can be addressed.

  12. Analyzing Data from a Pretest-Posttest Control Group Design: The Importance of Statistical Assumptions

    Science.gov (United States)

    Zientek, Linda; Nimon, Kim; Hammack-Brown, Bryn

    2016-01-01

    Purpose: Among the gold standards in human resource development (HRD) research are studies that test theoretically developed hypotheses and use experimental designs. A somewhat typical experimental design would involve collecting pretest and posttest data on individuals assigned to a control or experimental group. Data from such a design that…
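
    The abstract is truncated here, but a common analysis for this design is ANCOVA, whose assumptions (e.g., homogeneity of regression slopes) are the kind at stake; a minimal sketch with invented data and the hypothetical variable names pre, post and group:

```python
# ANCOVA for a pretest-posttest control group design (invented data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "pre":   [10, 12, 11, 14, 9, 13, 12, 10, 11, 13],
    "post":  [12, 15, 13, 18, 10, 17, 13, 11, 12, 14],
    "group": ["T", "T", "T", "T", "T", "C", "C", "C", "C", "C"],
})

# Posttest regressed on group with the pretest as covariate.
ancova = smf.ols("post ~ pre + C(group)", data=df).fit()
print(ancova.summary())

# Assumption check: the treatment effect is only cleanly interpretable if the
# pre-post slope is the same in both groups (non-significant interaction).
slopes = smf.ols("post ~ pre * C(group)", data=df).fit()
print(slopes.pvalues["pre:C(group)[T.T]"])
```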

  13. Deep Borehole Field Test Conceptual Design Report

    Energy Technology Data Exchange (ETDEWEB)

    Hardin, Ernest L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-30

    This report documents conceptual design development for the Deep Borehole Field Test (DBFT), including test packages (simulated waste packages, not containing waste) and a system for demonstrating emplacement and retrieval of those packages in the planned Field Test Borehole (FTB). For the DBFT to have demonstration value, it must be based on conceptualization of a deep borehole disposal (DBD) system. This document therefore identifies key options for a DBD system, describes an updated reference DBD concept, and derives a recommended concept for the DBFT demonstration. The objective of the DBFT is to confirm the safety and feasibility of the DBD concept for long-term isolation of radioactive waste. The conceptual design described in this report will demonstrate equipment and operations for safe waste handling and downhole emplacement of test packages, while contributing to an evaluation of the overall safety and practicality of the DBD concept. The DBFT also includes drilling and downhole characterization investigations that are described elsewhere (see Section 1). Importantly, no radioactive waste will be used in the DBFT, nor will the DBFT site be used for disposal of any type of waste. The foremost performance objective for conduct of the DBFT is to demonstrate safe operations in all aspects of the test.

  14. Dish/Stirling Hybrid-Receiver Sub-Scale Tests and Full-Scale Design

    International Nuclear Information System (INIS)

    Andraka, Charles; Bohn, Mark S.; Corey, John; Mehos, Mark; Moreno, James; Rawlinson, Scott

    1999-01-01

    We have designed and tested a prototype dish/Stirling hybrid-receiver combustion system. The system consists of a pre-mixed natural-gas burner heating a pin-finned sodium heat pipe. The design emphasizes simplicity, low cost, and ruggedness. Our test was on a 1/6th-scale device, with a nominal firing rate of 18 kWt, a power throughput of 13 kWt, and a sodium vapor temperature of 750 °C. The air/fuel mixture was electrically preheated to 640 °C to simulate recuperation. The test rig was instrumented for temperatures, pressures, flow rates, overall leak rate, and exhaust emissions. The data verify our burner and heat-transfer models. Performance and post-test examinations validate our choice of materials and fabrication methods. Based on the 1/6th-scale results, we are designing a full-scale hybrid receiver. This is a fully-integrated system, including burner, pin-fin primary heat exchanger, recuperator (in place of the electrical pre-heater used in the prototype system), solar absorber, and sodium heat pipe. The major challenges of the design are to avoid pre-ignition, achieve robust heat-pipe performance, and attain long life of the burner matrix, recuperator, and flue-gas seals. We have used computational fluid dynamics extensively in designing to avoid pre-ignition and for designing the heat-pipe wick, and we have used individual component tests and results of the 1/6th-scale test to optimize for long life. In this paper, we present our design philosophy and basic details of our design. We describe the sub-scale test rig and compare test results with predictions. Finally, we outline the evolution of our full-scale design, and present its current status

  15. Selection of hidden layer nodes in neural networks by statistical tests

    International Nuclear Information System (INIS)

    Ciftcioglu, Ozer

    1992-05-01

    A statistical methodology for selecting the number of hidden layer nodes in feedforward neural networks is described. The method considers the network as an empirical model for the experimental data set subject to pattern classification, so that the selection process becomes model estimation through parameter identification. The solution is obtained by treating identification as an overdetermined estimation problem and applying a nonlinear least-squares minimization technique. The number of hidden layer nodes is determined as the result of hypothesis testing. Accordingly, a network structure that is redundant in its number of parameters is avoided, while the classification error is kept to a minimum. (author). 11 refs.; 4 figs.; 1 tab
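
    A loose stand-in for the idea described above (not the author's exact nonlinear least-squares formulation): treat networks with h and h+1 hidden nodes as nested empirical models and test, via an extra-sum-of-squares F-test, whether the extra node significantly reduces the residual sum of squares. All data and settings are invented:

```python
# Hidden-node selection sketch via nested-model F-tests.
import numpy as np
from scipy import stats
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)

def sse(h):
    # Fit a single-hidden-layer network with h nodes by least squares (LBFGS).
    net = MLPRegressor(hidden_layer_sizes=(h,), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(X, y)
    return np.sum((y - net.predict(X)) ** 2)

def n_params(h, d=2):
    return h * (d + 1) + (h + 1)   # weights + biases, single output

n = len(y)
for h in range(1, 6):
    s_small, s_big = sse(h), sse(h + 1)
    df1 = n_params(h + 1) - n_params(h)
    df2 = n - n_params(h + 1)
    F = max(s_small - s_big, 0.0) / df1 / (s_big / df2)
    p = stats.f.sf(F, df1, df2)
    print(f"{h} -> {h + 1} nodes: F = {F:.2f}, p = {p:.3f}")
```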

  16. A statistical test for the habitable zone concept

    Science.gov (United States)

    Checlair, J.; Abbot, D. S.

    2017-12-01

    Traditional habitable zone theory assumes that the silicate-weathering feedback regulates the atmospheric CO2 of planets within the habitable zone to maintain surface temperatures that allow for liquid water. There is some non-definitive evidence that this feedback has worked in Earth history, but it is untested in an exoplanet context. A critical prediction of the silicate-weathering feedback is that, on average, within the habitable zone planets that receive a higher stellar flux should have a lower CO2 in order to maintain liquid water at their surface. We can test this prediction directly by using a statistical approach involving low-precision CO2 measurements on many planets with future instruments such as JWST, LUVOIR, or HabEx. The purpose of this work is to carefully outline the requirements for such a test. First, we use a radiative-transfer model to compute the amount of CO2 necessary to maintain surface liquid water on planets for different values of insolation and planetary parameters. We run a large ensemble of Earth-like planets with different masses, atmospheric masses, inert atmospheric composition, cloud composition and level, and other greenhouse gases. Second, we post-process this data to determine the precision with which future instruments such as JWST, LUVOIR, and HabEx could measure the CO2. We then combine the variation due to planetary parameters and observational error to determine the number of planet measurements that would be needed to effectively marginalize over uncertainties and resolve the predicted trend in CO2 vs. stellar flux. The results of this work may influence the usage of JWST and will enhance mission planning for LUVOIR and HabEx.
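
    A simplified version of the sample-size question posed above, assuming an invented slope of log CO2 against normalized stellar flux and a single scatter term lumping planetary variation together with measurement error:

```python
# Power to detect the predicted negative CO2-flux trend vs. number of planets.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_slope = -2.0     # d(log10 CO2) per unit normalized flux (assumed)
scatter = 1.0         # combined planetary + retrieval scatter, dex (assumed)
n_trials = 2000

for n_planets in (5, 10, 20, 40, 80):
    detections = 0
    for _ in range(n_trials):
        flux = rng.uniform(0.8, 1.2, n_planets)    # habitable-zone flux range
        log_co2 = true_slope * flux + scatter * rng.standard_normal(n_planets)
        fit = stats.linregress(flux, log_co2)
        # One-sided detection of a significantly negative slope at 95% confidence.
        if fit.slope < 0 and fit.pvalue / 2 < 0.05:
            detections += 1
    print(f"N = {n_planets:3d}: detection power ~ {detections / n_trials:.2f}")
```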

  17. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI)

    Science.gov (United States)

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non–expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI’s robustness and sensitivity in capturing useful data relating to the students’ conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. PMID:26903497
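
    A minimal sketch of the one-parameter Rasch model underlying the analyses above, P(correct) = 1 / (1 + exp(-(ability - difficulty))), fitting simulated responses for a hypothetical 12-item inventory by joint maximum likelihood; real inventory analyses use dedicated IRT software, and all sizes here other than the 12-item count are invented:

```python
# Rasch (1PL) simulation and joint maximum-likelihood recovery of difficulties.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n_students, n_items = 300, 12
ability = rng.normal(0.0, 1.0, n_students)
difficulty = np.linspace(-2.0, 2.0, n_items)    # items varying widely in difficulty

logits_true = ability[:, None] - difficulty[None, :]
responses = (rng.uniform(size=logits_true.shape)
             < 1 / (1 + np.exp(-logits_true))).astype(float)

def neg_log_lik(params):
    theta = params[:n_students]                 # person abilities
    b = params[n_students:]                     # item difficulties
    logits = theta[:, None] - b[None, :]
    # Bernoulli negative log-likelihood in a numerically stable form.
    return np.sum(np.logaddexp(0.0, logits) - responses * logits)

fit = minimize(neg_log_lik, np.zeros(n_students + n_items), method="L-BFGS-B")
b_hat = fit.x[n_students:]
b_hat -= b_hat.mean()                           # fix the location indeterminacy
print(np.round(b_hat, 2))                       # compare with linspace(-2, 2, 12)
```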

  18. Accounting for Proof Test Data in a Reliability Based Design Optimization Framework

    Science.gov (United States)

    Ventor, Gerharad; Scotti, Stephen J.

    2012-01-01

    This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.
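
    A Monte Carlo sketch of why exploiting proof-test results helps: components that fail the proof load never enter service, so the in-service strength distribution is truncated from below and the failure probability drops. All distributions and load levels are invented for illustration:

```python
# Effect of a proof test on in-service failure probability (invented numbers).
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
strength = rng.normal(100.0, 10.0, n)        # component strength
proof_load = 90.0                            # deterministic proof-test load
service_load = rng.normal(70.0, 12.0, n)     # random in-service load

# Without a proof test, every component enters service.
pf_no_proof = np.mean(strength < service_load)

# With a proof test, only survivors (strength >= proof load) enter service,
# truncating the lower tail of the strength distribution.
keep = strength >= proof_load
pf_proof = np.mean(strength[keep] < service_load[keep])

print(f"proof-test pass rate          : {keep.mean():.3f}")
print(f"P(failure) without proof test : {pf_no_proof:.2e}")
print(f"P(failure) with proof test    : {pf_proof:.2e}")
```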

  19. Design, construction and testing of a DC bioeffects test enclosure for small animals. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Frazier, M J; Preache, M M

    1980-11-01

    This final report describes both the engineering development of a DC bioeffects test enclosure for small laboratory animals, and the biological protocol for the use of such enclosures in the testing of animals to determine possible biological effects of the environment associated with HVDC transmission lines. The test enclosure which has been designed is a modular unit, which will house up to eight rat-sized animals in individual compartments. Multiple test enclosures can be used to test larger numbers of animals. A prototype test enclosure has been fabricated and tested to characterize its electrical performance characteristics. The test enclosure provides a simulation of the dominant environment associated with HVDC transmission lines; namely, a static electric field and an ion current density. A biological experimental design has been developed for assessing the effects of the dominant components of the HVDC transmission line environment.

  20. Multi-laboratory precision of Marshall design related tests

    CSIR Research Space (South Africa)

    Denneman, E

    2008-07-01

    Full Text Available The Marshall method is still the method of choice for the design of Hot-Mix Asphalt (HMA) in South Africa. During the validation of a HMA mix design, considerable variability was encountered in Marshall test results for the same mix supplied...