Testing a statistical method of global mean paleotemperature estimations in a long climate simulation
Energy Technology Data Exchange (ETDEWEB)
Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik]
2001-07-01
Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods, the global mean temperature of the last 600 years has recently been estimated. In this work this method of reconstruction is tested using data from a very long simulation with a climate model. This testing makes it possible to estimate the errors of the reconstructions as a function of the number of proxy records and to identify the time scales at which the estimations are probably reliable. (orig.)
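A minimal sketch of the calibrate-then-reconstruct idea being tested above, using a synthetic "simulation truth" and pseudo-proxies (all values invented; this is not the authors' code or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "simulation truth": 1000 years of global mean temperature
# anomalies plus 10 noisy pseudo-proxies (all values invented).
years = 1000
truth = np.cumsum(rng.normal(0.0, 0.05, years))        # red-noise-like climate
proxies = truth[:, None] + rng.normal(0.0, 0.3, (years, 10))

# Calibrate a linear model on the last 150 "instrumental" years only.
calib = slice(years - 150, years)
design = np.column_stack([np.ones(150), proxies[calib]])
beta, *_ = np.linalg.lstsq(design, truth[calib], rcond=None)

# Reconstruct the full period from the proxies and score against the truth.
recon = np.column_stack([np.ones(years), proxies]) @ beta
rmse = np.sqrt(np.mean((recon - truth) ** 2))
print(f"reconstruction RMSE against the simulated truth: {rmse:.3f}")
```

Because the simulated "truth" is known over the whole period, the reconstruction error can be measured directly, which is exactly what real proxy reconstructions cannot do.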
100 statistical tests
Kanji, Gopal K
2006-01-01
This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.
Testing statistical hypotheses
Lehmann, E L
2005-01-01
The third edition of Testing Statistical Hypotheses updates and expands upon the classic graduate text, emphasizing optimality theory for hypothesis testing and confidence sets. The principal additions include a rigorous treatment of large sample optimality, together with the requisite tools. In addition, an introduction to the theory of resampling methods such as the bootstrap is developed. The sections on multiple testing and goodness of fit testing are expanded. The text is suitable for Ph.D. students in statistics and includes over 300 new problems out of a total of more than 760. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands and the University of Chicago. He is the author of Elements of Large-Sample Theory and (with George Casella) he is also the author of Theory of Point Estimat...
Statistical test of anarchy
International Nuclear Information System (INIS)
Gouvea, Andre de; Murayama, Hitoshi
2003-01-01
'Anarchy' is the hypothesis that there is no fundamental distinction among the three flavors of neutrinos. It describes the mixing angles as random variables, drawn from well-defined probability distributions dictated by the group Haar measure. We perform a Kolmogorov-Smirnov (KS) statistical test to verify whether anarchy is consistent with all neutrino data, including the new result presented by KamLAND. We find a KS probability for Nature's choice of mixing angles equal to 64%, quite consistent with the anarchical hypothesis. In turn, assuming that anarchy is indeed correct, we compute lower bounds on |U_e3|^2, the remaining unknown 'angle' of the leptonic mixing matrix
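The core of such a test is a one-sample Kolmogorov-Smirnov comparison of observed quantities against the distribution implied by the Haar measure. A sketch under stated assumptions (the placeholder values and the uniform reference distribution are illustrative, not the paper's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Under anarchy, the Haar measure makes suitable functions of the mixing
# angles uniformly distributed; the three values below are placeholders,
# not the measured angles used in the paper.
observed = rng.uniform(size=3)

# One-sample Kolmogorov-Smirnov test against the uniform reference CDF.
ks = stats.kstest(observed, "uniform")
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.2f}")
```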
Spherical Process Models for Global Spatial Statistics
Jeong, Jaehong; Jun, Mikyoung; Genton, Marc G.
2017-01-01
Statistical models used in geophysical, environmental, and climate science applications must reflect the curvature of the spatial domain in global data. Over the past few decades, statisticians have developed covariance models that capture
Statistical review of global LP gas 2001
International Nuclear Information System (INIS)
2001-01-01
This review provides essential production and consumption data from 1990 through 2000. A more detailed breakdown of supply and sector demand is given for the year 2000 and historic data on international trade, shipping and pricing is also shown. Statistics pertaining to auto-gas are also included in this edition of Statistical Review of Global LP Gas 2001. (author)
Statistical Review of Global LP Gas 2002
International Nuclear Information System (INIS)
2002-01-01
This review provides essential production and consumption data from 1991 through 2001. A detailed breakdown of supply and sector demand is given for the year 2001 and historic data on international trade, shipping and pricing is also shown. Statistics pertaining to auto-gas are also included in this edition of Statistical Review of Global LP Gas 2002. (author)
Testing statistical hypotheses of equivalence
Wellek, Stefan
2010-01-01
Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment.With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the
Spherical Process Models for Global Spatial Statistics
Jeong, Jaehong
2017-11-28
Statistical models used in geophysical, environmental, and climate science applications must reflect the curvature of the spatial domain in global data. Over the past few decades, statisticians have developed covariance models that capture the spatial and temporal behavior of these global data sets. Though the geodesic distance is the most natural metric for measuring distance on the surface of a sphere, mathematical limitations have compelled statisticians to use the chordal distance to compute the covariance matrix in many applications instead, which may cause physically unrealistic distortions. Therefore, covariance functions directly defined on a sphere using the geodesic distance are needed. We discuss the issues that arise when dealing with spherical data sets on a global scale and provide references to recent literature. We review the current approaches to building process models on spheres, including the differential operator, the stochastic partial differential equation, the kernel convolution, and the deformation approaches. We illustrate realizations obtained from Gaussian processes with different covariance structures and the use of isotropic and nonstationary covariance models through deformations and geographical indicators for global surface temperature data. To assess the suitability of each method, we compare their log-likelihood values and prediction scores, and we end with a discussion of related research problems.
Global aesthetic surgery statistics: a closer look.
Heidekrueger, Paul I; Juran, S; Ehrl, D; Aung, T; Tanna, N; Broer, P Niclas
2017-08-01
Obtaining quality global statistics about surgical procedures remains an important yet challenging task. The International Society of Aesthetic Plastic Surgery (ISAPS) reports the total number of surgical and non-surgical procedures performed worldwide on a yearly basis. While providing valuable insight, ISAPS' statistics leave two important factors unaccounted for: (1) the underlying base population, and (2) the number of surgeons performing the procedures. Statistics of the published ISAPS' 'International Survey on Aesthetic/Cosmetic Surgery' were analysed by country, taking into account the underlying national base population according to the official United Nations population estimates. Further, the number of surgeons per country was used to calculate the number of surgeries performed per surgeon. In 2014, based on ISAPS statistics, national surgical procedures ranked in the following order: 1st USA, 2nd Brazil, 3rd South Korea, 4th Mexico, 5th Japan, 6th Germany, 7th Colombia, and 8th France. When considering the size of the underlying national populations, the demand for surgical procedures per 100,000 people changes the overall ranking substantially. It was also found that the rate of surgical procedures per surgeon shows great variation between the responding countries. While the US and Brazil are often quoted as the countries with the highest demand for plastic surgery, according to the presented analysis, other countries surpass these countries in surgical procedures per capita. While data acquisition and quality should be improved in the future, valuable insight regarding the demand for surgical procedures can be gained by taking specific demographic and geographic factors into consideration.
Statistical models of global Langmuir mixing
Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean
2017-05-01
The effects of Langmuir mixing on surface ocean mixing may be parameterized by applying an enhancement factor, which depends on wave, wind, and ocean state, to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but at significant computational and code-development expense. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on the empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but at significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and of misaligned wind and waves are important.
Statistical hypothesis testing with SAS and R
Taeger, Dirk
2014-01-01
A comprehensive guide to statistical hypothesis testing with examples in SAS and R. When analyzing datasets the following questions often arise: Is there a shorthand procedure for a statistical test available in SAS or R? If so, how do I use it? If not, how do I program the test myself? This book answers these questions and provides an overview of the most common statistical test problems in a comprehensive way, making it easy to find and perform an appropriate statistical test. A general summary of statistical test theory is presented, along with a basic description for each test, including the
Nonparametric Statistics Test Software Package.
1983-09-01
... the user's entries. Its purpose is to write two types of files needed by the program Crunch: the data file and the option file. ... the data file and communicate the choice of test and test parameters to Crunch. After a data file is written, Lochinvar prompts the writing of the
Global Tourism. New Volatility, Old Statistics
Corti, Alberto
2016-01-01
In 2015 the scenario of global tourism changed radically. It shifted away from the foregoing "closed-circuit" international tourism flows and towards the creation of different development centres of the tourism economy in the world, taking over the global business that was previously in the hands of Europe and North America. The globalisation of tourism is unavoidable and, in many respects, positive. The creation of new tourist destinations and new countries generating...
Global Envelope Tests for Spatial Processes
DEFF Research Database (Denmark)
Myllymäki, Mari; Mrkvička, Tomáš; Grabarnik, Pavel
2017-01-01
Envelope tests are a popular tool in spatial statistics, where they are used in goodness-of-fit testing. These tests graphically compare an empirical function T(r) with its simulated counterparts from the null model. However, the type I error probability α is conventionally controlled for a fixed d......) the construction of envelopes for a deviation test. These new tests allow the a priori selection of the global α and they yield p-values. We illustrate these tests using simulated and real point pattern data....
Global envelope tests for spatial processes
DEFF Research Database (Denmark)
Myllymäki, Mari; Mrkvička, Tomáš; Grabarnik, Pavel
Envelope tests are a popular tool in spatial statistics, where they are used in goodness-of-fit testing. These tests graphically compare an empirical function T(r) with its simulated counterparts from the null model. However, the type I error probability α is conventionally controlled for a fixed......) the construction of envelopes for a deviation test. These new tests allow the a priori selection of the global α and they yield p-values. We illustrate these tests using simulated and real point pattern data....
The insignificance of statistical significance testing
Johnson, Douglas H.
1999-01-01
Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.
Polarimetric Segmentation Using Wishart Test Statistic
DEFF Research Database (Denmark)
Skriver, Henning; Schou, Jesper; Nielsen, Allan Aasbjerg
2002-01-01
A newly developed test statistic for the equality of two complex covariance matrices following the complex Wishart distribution, and an associated asymptotic probability for the test statistic, have been used in a segmentation algorithm. The segmentation algorithm is based on the MUM (merge using moments) approach, which is a merging algorithm for single-channel SAR images. The polarimetric version described in this paper uses the above-mentioned test statistic for merging. The segmentation algorithm has been applied to polarimetric SAR data from the Danish dual-frequency, airborne polarimetric SAR, EMISAR...
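A sketch of a likelihood-ratio statistic of this kind for two complex sample covariance matrices, following the standard complex-Wishart form used in the polarimetric SAR literature (the toy matrices are synthetic, and the details should be checked against the paper):

```python
import numpy as np
from scipy.stats import chi2

def wishart_equality_lnq(cx, cy, n, m):
    """ln Q for H0: Sigma_x == Sigma_y, given p x p sample covariance
    matrices cx and cy averaged over n and m looks (complex Wishart
    model); -2 ln Q is asymptotically chi-square with p**2 degrees of
    freedom."""
    p = cx.shape[0]
    X, Y = n * cx, m * cy                      # un-normalized Wishart matrices
    logdet = lambda a: np.log(np.abs(np.linalg.det(a)))
    return (p * ((n + m) * np.log(n + m) - n * np.log(n) - m * np.log(m))
            + n * logdet(X) + m * logdet(Y) - (n + m) * logdet(X + Y))

# Toy example: two 3x3 Hermitian covariance estimates from synthetic data.
rng = np.random.default_rng(2)
a = rng.normal(size=(3, 20)) + 1j * rng.normal(size=(3, 20))
b = rng.normal(size=(3, 20)) + 1j * rng.normal(size=(3, 20))
cx, cy = a @ a.conj().T / 20, b @ b.conj().T / 20
lnq = wishart_equality_lnq(cx, cy, 20, 20)
print("p-value:", chi2.sf(-2 * lnq, df=3 ** 2))
```

In a merge-using-moments segmentation, a small p-value argues against merging the two segments whose covariance estimates are compared.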
Teaching Statistics in Language Testing Courses
Brown, James Dean
2013-01-01
The purpose of this article is to examine the literature on teaching statistics for useful ideas that teachers of language testing courses can draw on and incorporate into their teaching toolkits as they see fit. To those ends, the article addresses eight questions: What is known generally about teaching statistics? Why are students so anxious…
A simplification of the likelihood ratio test statistic for testing ...
African Journals Online (AJOL)
The traditional likelihood ratio test statistic for testing hypotheses about goodness of fit of multinomial probabilities in one-, two- and multi-dimensional contingency tables was simplified. Advantageously, using the simplified version of the statistic to test the null hypothesis is easier and faster because calculating the expected ...
Significance levels for studies with correlated test statistics.
Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S
2008-07-01
When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
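For context, the baseline permutation approach that the authors refine can be sketched as follows (synthetic data; the conditioning on the histogram spread proposed in the paper is not implemented here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 500 features measured on two groups of 20 samples each.
x = rng.normal(size=(40, 500))
labels = np.array([0] * 20 + [1] * 20)

def max_abs_t(data, lab):
    g0, g1 = data[lab == 0], data[lab == 1]
    se = np.sqrt(g0.var(0, ddof=1) / len(g0) + g1.var(0, ddof=1) / len(g1))
    return np.max(np.abs((g0.mean(0) - g1.mean(0)) / se))

observed = max_abs_t(x, labels)

# Null distribution of the maximum statistic by permuting sample labels.
perm = np.array([max_abs_t(x, rng.permutation(labels)) for _ in range(999)])
p_global = (1 + np.sum(perm >= observed)) / (1 + len(perm))
print(f"permutation p-value for the global null: {p_global:.3f}")
```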
SPSS for applied sciences basic statistical testing
Davis, Cole
2013-01-01
This book offers a quick and basic guide to using SPSS and provides a general approach to solving problems using statistical tests. It is both comprehensive in terms of the tests covered and the applied settings it refers to, and yet is short and easy to understand. Whether you are a beginner or an intermediate level test user, this book will help you to analyse different types of data in applied settings. It will also give you the confidence to use other statistical software and to extend your expertise to more specific scientific settings as required.The author does not use mathematical form
A statistical-dynamical downscaling procedure for global climate simulations
International Nuclear Information System (INIS)
Frey-Buness, A.; Heimann, D.; Sausen, R.; Schumann, U.
1994-01-01
A statistical-dynamical downscaling procedure for global climate simulations is described. The procedure is based on the assumption that any regional climate is associated with a specific frequency distribution of classified large-scale weather situations. The frequency distributions are derived from multi-year episodes of low resolution global climate simulations. Highly resolved regional distributions of wind and temperature are calculated with a regional model for each class of large-scale weather situation. They are statistically evaluated by weighting them with the according climate-specific frequency. As an example, the procedure is applied to the Alpine region for a global climate simulation of the present climate. (orig.)
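The central computation is a frequency-weighted average of class-wise regional fields, which can be sketched as follows (synthetic fields and frequencies, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Five large-scale weather classes, each with a highly resolved regional
# temperature field (here 100 x 100) from a regional model, plus class
# frequencies diagnosed from the low-resolution global simulation.
regional_fields = rng.normal(size=(5, 100, 100))
frequencies = np.array([0.35, 0.25, 0.20, 0.15, 0.05])

# Regional climate = frequency-weighted mean over the weather classes.
regional_climate = np.tensordot(frequencies, regional_fields, axes=1)
print(regional_climate.shape)  # (100, 100)
```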
Statistical treatment of fatigue test data
International Nuclear Information System (INIS)
Raske, D.T.
1980-01-01
This report discusses several aspects of fatigue data analysis in order to provide a basis for the development of statistically sound design curves. Included is a discussion on the choice of the dependent variable, the assumptions associated with least squares regression models, the variability of fatigue data, the treatment of data from suspended tests and outlying observations, and various strain-life relations.
Statistical test theory for the behavioral sciences
de Gruijter, Dato N M
2007-01-01
Since the development of the first intelligence test in the early 20th century, educational and psychological tests have become important measurement techniques to quantify human behavior. Focusing on this ubiquitous yet fruitful area of research, Statistical Test Theory for the Behavioral Sciences provides both a broad overview and a critical survey of assorted testing theories and models used in psychology, education, and other behavioral science fields. Following a logical progression from basic concepts to more advanced topics, the book first explains classical test theory, covering true score, measurement error, and reliability. It then presents generalizability theory, which provides a framework to deal with various aspects of test scores. In addition, the authors discuss the concept of validity in testing, offering a strategy for evidence-based validity. In the two chapters devoted to item response theory (IRT), the book explores item response models, such as the Rasch model, and applications, incl...
Simplified Freeman-Tukey test statistics for testing probabilities in ...
African Journals Online (AJOL)
This paper presents the simplified version of the Freeman-Tukey test statistic for testing hypotheses about multinomial probabilities in one-, two- and multi-dimensional contingency tables that does not require calculating the expected cell frequencies before the test of significance. The simplified method established new criteria of ...
Statistical tests to compare motif count exceptionalities
Directory of Open Access Journals (Sweden)
Vandewalle Vincent
2007-03-01
Background: Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results: We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion: The exact binomial test is particularly adapted for small counts. For large counts, we advise using the likelihood ratio test, which is asymptotic but strongly correlated with the exact binomial test and very simple to use.
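A sketch of an exact binomial test in this setting: conditional on the total count, one motif count is binomial with a success probability fixed by the two Poisson expectations (toy numbers; the paper's treatment of overlapping motifs and sequence composition is omitted):

```python
from scipy.stats import binomtest

# Toy counts of one motif in two sequences, with expected counts mu1 and
# mu2 from each sequence's Poisson background model.
n1, n2 = 43, 21
mu1, mu2 = 30.0, 25.0

# Conditional on n1 + n2, n1 is Binomial(n1 + n2, mu1 / (mu1 + mu2))
# under the null hypothesis of equal exceptionality in both sequences.
result = binomtest(n1, n1 + n2, p=mu1 / (mu1 + mu2))
print(f"two-sided p-value: {result.pvalue:.4f}")
```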
New Graphical Methods and Test Statistics for Testing Composite Normality
Directory of Open Access Journals (Sweden)
Marc S. Paolella
2015-07-01
Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.
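As an illustration of the kind of power comparison mentioned above, a small simulation estimating the power of the Jarque-Bera test against a skewed alternative (the MSP test itself is not implemented here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Power of the Jarque-Bera test against a skewed (lognormal) alternative,
# estimated by simulation at the 5% level.
rejections = sum(
    stats.jarque_bera(rng.lognormal(sigma=0.4, size=50)).pvalue < 0.05
    for _ in range(2000)
)
print(f"estimated power at n = 50: {rejections / 2000:.2f}")
```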
Analysis of Preference Data Using Intermediate Test Statistic
African Journals Online (AJOL)
PROF. O. E. OSUAGWU
2013-06-01
West African Journal of Industrial and Academic Research, Vol. 7, No. 1, June 2013. Keywords: preference data, Friedman statistic, multinomial test statistic, intermediate test statistic. ... new method and consequently a new statistic ...
Exploring the temporal stability of global road safety statistics.
Dimitriou, Loukas; Nikolaou, Paraskevas; Antoniou, Constantinos
2018-02-08
Given the importance of rigorous quantitative reasoning in supporting national, regional or global road safety policies, data quality, reliability, and stability are of the utmost importance. This study focuses on macroscopic properties of road safety statistics and the temporal stability of these statistics at a global level. A thorough investigation of two years of measurements was conducted to identify any unexpected gaps that could highlight the existence of inconsistent measurements. The database used in this research includes 121 member countries of the United Nations (UN-121) with a population of at least one million (smaller-country data show higher instability) and includes road safety and socioeconomic variables collected from a number of international databases (e.g. WHO and World Bank) for the years 2010 and 2013. For the fulfillment of the earlier stated goal, a number of data visualization and exploratory analyses (Hierarchical Clustering and Principal Component Analysis) were conducted. Furthermore, in order to provide a richer analysis of the data, we developed and compared the specification of a number of Structural Equation Models for the years 2010 and 2013. Different scenarios have been developed, with different endogenous variables (indicators of mortality rate and fatality risk) and structural forms. The findings of the current research indicate inconsistency phenomena in global statistics of different instances/years. Finally, the results of this research provide evidence on the importance of careful and systematic data collection for developing advanced statistical and econometric techniques and, furthermore, for developing road safety policies.
Kepler Planet Detection Metrics: Statistical Bootstrap Test
Jenkins, Jon M.; Burke, Christopher J.
2016-01-01
This document describes the data produced by the Statistical Bootstrap Test over the final three Threshold Crossing Event (TCE) deliveries to NExScI: SOC 9.1 (Q1-Q16) (Tenenbaum et al. 2014), SOC 9.2 (Q1-Q17), aka DR24 (Seader et al. 2015), and SOC 9.3 (Q1-Q17), aka DR25 (Twicken et al. 2016). The last few years have seen significant improvements in the SOC science data processing pipeline, leading to higher quality light curves and more sensitive transit searches. The statistical bootstrap analysis results presented here and the numerical results archived at NASA's Exoplanet Science Institute (NExScI) bear witness to these software improvements. This document attempts to introduce and describe the main features and differences between these three data sets as a consequence of the software changes.
Statistical tests for person misfit in computerized adaptive testing
Glas, Cornelis A.W.; Meijer, R.R.; van Krimpen-Stoop, Edith
1998-01-01
Recently, several person-fit statistics have been proposed to detect nonfitting response patterns. This study is designed to generalize an approach followed by Klauer (1995) to an adaptive testing system using the two-parameter logistic model (2PL) as a null model. The approach developed by Klauer
An omnibus test for the global null hypothesis.
Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja
2018-01-01
Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are not interested in testing individual hypotheses but in testing whether none of the hypotheses is false. There are several possibilities for testing the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g. Bonferroni or Simes test). However, usually there is no a priori knowledge on the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields an impressive overall performance. The proposed method is implemented in an R-package called omnibus.
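One plausible form of a cumulative-sum omnibus statistic, calibrated by Monte Carlo under the uniform null (a sketch in Python; the exact statistic in the paper and in its R-package omnibus may differ):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

def cumsum_stat(pvals):
    # Sort the p-values, transform them to z-scores, and take the largest
    # cumulative sum, which reacts both to one strong signal and to many
    # weak ones.
    z = norm.isf(np.sort(pvals))          # largest z-scores first
    return np.max(np.cumsum(z))

def omnibus_pvalue(pvals, n_sim=10_000):
    obs = cumsum_stat(pvals)
    null = np.array([cumsum_stat(rng.uniform(size=len(pvals)))
                     for _ in range(n_sim)])
    return (1 + np.sum(null >= obs)) / (1 + n_sim)

pvals = np.concatenate([rng.uniform(size=48), [0.001, 0.004]])
print(f"omnibus p-value: {omnibus_pvalue(pvals):.4f}")
```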
A Statistical Perspective on Highly Accelerated Testing
Energy Technology Data Exchange (ETDEWEB)
Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-02-01
Highly accelerated life testing has been heavily promoted at Sandia (and elsewhere) as a means to rapidly identify product weaknesses caused by flaws in the product's design or manufacturing process. During product development, a small number of units are forced to fail at high stress. The failed units are then examined to determine the root causes of failure. The identification of the root causes of product failures exposed by highly accelerated life testing can instigate changes to the product's design and/or manufacturing process that result in a product with increased reliability. It is widely viewed that this qualitative use of highly accelerated life testing (often associated with the acronym HALT) can be useful. However, highly accelerated life testing has also been proposed as a quantitative means for "demonstrating" the reliability of a product where unreliability is associated with loss of margin via an identified and dominating failure mechanism. It is assumed that the dominant failure mechanism can be accelerated by changing the level of a stress factor that is assumed to be related to the dominant failure mode. In extreme cases, a minimal number of units (often from a pre-production lot) are subjected to a single highly accelerated stress relative to normal use. If no (or, sufficiently few) units fail at this high stress level, some might claim that a certain level of reliability has been demonstrated (relative to normal use conditions). Underlying this claim are assumptions regarding the level of knowledge associated with the relationship between the stress level and the probability of failure. The primary purpose of this document is to discuss (from a statistical perspective) the efficacy of using accelerated life testing protocols (and, in particular, "highly accelerated" protocols) to make quantitative inferences concerning the performance of a product (e.g., reliability) when in fact there is lack-of-knowledge and uncertainty concerning
Explorations in Statistics: Hypothesis Tests and P Values
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…
Statistics-Based Compression of Global Wind Fields
Jeong, Jaehong
2017-02-07
Wind has the potential to make a significant contribution to future energy resources. Locating the sources of this renewable energy on a global scale is however extremely challenging, given the difficulty to store very large data sets generated by modern computer models. We propose a statistical model that aims at reproducing the data-generating mechanism of an ensemble of runs via a Stochastic Generator (SG) of global annual wind data. We introduce an evolutionary spectrum approach with spatially varying parameters based on large-scale geographical descriptors such as altitude to better account for different regimes across the Earth's orography. We consider a multi-step conditional likelihood approach to estimate the parameters that explicitly accounts for nonstationary features while also balancing memory storage and distributed computation. We apply the proposed model to more than 18 million points of yearly global wind speed. The proposed SG requires orders of magnitude less storage for generating surrogate ensemble members from wind than does creating additional wind fields from the climate model, even if an effective lossy data compression algorithm is applied to the simulation output.
Statistics-Based Compression of Global Wind Fields
Jeong, Jaehong; Castruccio, Stefano; Crippa, Paola; Genton, Marc G.
2017-01-01
Wind has the potential to make a significant contribution to future energy resources. Locating the sources of this renewable energy on a global scale is however extremely challenging, given the difficulty to store very large data sets generated by modern computer models. We propose a statistical model that aims at reproducing the data-generating mechanism of an ensemble of runs via a Stochastic Generator (SG) of global annual wind data. We introduce an evolutionary spectrum approach with spatially varying parameters based on large-scale geographical descriptors such as altitude to better account for different regimes across the Earth's orography. We consider a multi-step conditional likelihood approach to estimate the parameters that explicitly accounts for nonstationary features while also balancing memory storage and distributed computation. We apply the proposed model to more than 18 million points of yearly global wind speed. The proposed SG requires orders of magnitude less storage for generating surrogate ensemble members from wind than does creating additional wind fields from the climate model, even if an effective lossy data compression algorithm is applied to the simulation output.
Accelerated testing statistical models, test plans, and data analysis
Nelson, Wayne B
2009-01-01
The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. ". . . a goldmine of knowledge on accelerated life testing principles and practices . . . one of the very few capable of advancing the science of reliability. It definitely belongs in every bookshelf on engineering." -Dev G.
Testing for Statistical Discrimination based on Gender
DEFF Research Database (Denmark)
Lesner, Rune Vammen
This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure but increases in job transitions and that the fraction of women in high-ranking positions within a firm does not affect the level of statistical discrimination by gender.
Statistical Decision Theory Estimation, Testing, and Selection
Liese, Friedrich
2008-01-01
Suitable for advanced graduate students and researchers in mathematical statistics and decision theory, this title presents an account of the concepts and a treatment of the major results of classical finite sample size decision theory and modern asymptotic decision theory
Testing for Statistical Discrimination based on Gender
Lesner, Rune Vammen
2016-01-01
This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure...
Distinguish Dynamic Basic Blocks by Structural Statistical Testing
DEFF Research Database (Denmark)
Petit, Matthieu; Gotlieb, Arnaud
Statistical testing aims at generating random test data that respect selected probabilistic properties. A probability distribution is associated with the program input space in order to achieve the statistical test purpose: to test the most frequent usage of software or to maximize the probability of...... control flow path) during the test data selection. We implemented this algorithm in a statistical test data generator for Java programs. A first experimental validation is presented...
Global statistics on addictive behaviours: 2014 status report.
Gowing, Linda R; Ali, Robert L; Allsop, Steve; Marsden, John; Turf, Elizabeth E; West, Robert; Witton, John
2015-06-01
Addictive behaviours are among the greatest scourges on humankind. It is important to estimate the extent of the problem globally and in different geographical regions. Such estimates are available, but there is a need to collate and evaluate these to arrive at the best available synthetic figures. Addiction has commissioned this paper as the first of a series attempting to do this. Online sources of global, regional and national information on prevalence and major harms relating to alcohol use, tobacco use, unsanctioned psychoactive drug use and gambling were identified through expert review and assessed. The primary data sources located were the websites of the World Health Organization (WHO), the United Nations Office on Drugs and Crime (UNODC) and the Alberta Gambling Research Institute. Summary statistics were compared with recent publications on the global epidemiology of addictive behaviours. An estimated 4.9% of the world's adult population (240 million people) suffer from alcohol use disorder (7.8% of men and 1.5% of women), with alcohol causing an estimated 257 disability-adjusted life years lost per 100 000 population. An estimated 22.5% of adults in the world (1 billion people) smoke tobacco products (32.0% of men and 7.0% of women). It is estimated that 11% of deaths in males and 6% of deaths in females each year are due to tobacco. Of 'unsanctioned psychoactive drugs', cannabis is the most prevalent at 3.5% globally, with each of the others at <1%. Global estimates of problem gambling are not possible, but in countries where it has been assessed the prevalence is estimated at 1.5%. Tobacco and alcohol use are by far the most prevalent addictive behaviours and cause the large majority of the harm. However, the quality of data on prevalence and addiction-related harms is mostly low, and comparisons between countries and regions must be viewed with caution. There is an urgent need to review the quality of data on which global estimates are made and coordinate efforts to arrive at
Extending the Reach of Statistical Software Testing
National Research Council Canada - National Science Library
Weber, Robert
2004-01-01
.... In particular, as system complexity increases, the matrices required to generate test cases and perform model analysis can grow dramatically, even exponentially, overwhelming the test generation...
Glass viscosity calculation based on a global statistical modelling approach
Energy Technology Data Exchange (ETDEWEB)
Fluegel, Alex
2007-02-01
A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights in the mixed-alkali effect are provided.
Monte Carlo testing in spatial statistics, with applications to spatial residuals
DEFF Research Database (Denmark)
Mrkvička, Tomáš; Soubeyrand, Samuel; Myllymäki, Mari
2016-01-01
This paper reviews recent advances made in testing in spatial statistics and discussed at the Spatial Statistics conference in Avignon 2015. The rank and directional quantile envelope tests are discussed and practical rules for their use are provided. These tests are global envelope tests...... with an appropriate type I error probability. Two novel examples are given on their usage. First, in addition to the test based on a classical one-dimensional summary function, the goodness-of-fit of a point process model is evaluated by means of the test based on a higher dimensional functional statistic, namely...
[The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].
Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel
2017-01-01
The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference consists of elaborating conclusions from the tests performed with the data obtained from a sample of a population. Statistical tests are used in order to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was obtained. However, choosing the appropriate statistical test generally poses a challenge for novice researchers. To choose the statistical test it is necessary to take into account three aspects: the research design, the number of measurements and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.
The research protocol VI: How to choose the appropriate statistical test. Inferential statistics
Directory of Open Access Journals (Sweden)
Eric Flores-Ruiz
2017-10-01
The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference consists of elaborating conclusions from the tests performed with the data obtained from a sample of a population. Statistical tests are used in order to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was obtained. However, choosing the appropriate statistical test generally poses a challenge for novice researchers. To choose the statistical test it is necessary to take into account three aspects: the research design, the number of measurements and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.
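A simplified illustration of these decision rules for the two-group case, with a normality check steering the choice between parametric and nonparametric tests (a sketch, not a complete decision tree covering design and measurement scale):

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, paired=False, alpha=0.05):
    """Choose a parametric or nonparametric two-sample test from a
    normality check -- a simplified illustration of the decision rules
    described above, not a complete decision tree."""
    normal = (stats.shapiro(a).pvalue > alpha
              and stats.shapiro(b).pvalue > alpha)
    if paired:
        test = stats.ttest_rel if normal else stats.wilcoxon
    else:
        test = stats.ttest_ind if normal else stats.mannwhitneyu
    result = test(a, b)
    return test.__name__, result.pvalue

rng = np.random.default_rng(12)
print(compare_two_groups(rng.normal(size=30), rng.normal(0.5, 1.0, 30)))
```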
Statistical tests for frequency distribution of mean gravity anomalies
African Journals Online (AJOL)
ES Obe
1980-03-01
... approach. Kaula [1, 2] discussed the method of applying statistical techniques in the ... mathematical foundation of physical ...
Statistical analysis of global horizontal solar irradiation GHI in Fez city, Morocco
Bounoua, Z.; Mechaqrane, A.
2018-05-01
An accurate knowledge of the solar energy reaching the ground is necessary for sizing solar installations and optimizing their performance. This paper describes a statistical analysis of the global horizontal solar irradiation (GHI) at Fez city, Morocco. For better reliability, we first applied a set of check procedures to test the quality of the hourly GHI measurements. We then eliminated the erroneous values, which are generally due to measurement errors or to the cosine effect. The statistical analysis shows that the annual mean of the daily GHI values is approximately 5 kWh/m²/day. Monthly mean daily values and other parameters are also calculated.
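A sketch of this quality-check-then-aggregate workflow on an hourly GHI series (toy data; the physical-limit rule below is an illustrative stand-in for the paper's full set of check procedures):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Toy hourly GHI series in W/m^2; real data would come from the station.
idx = pd.date_range("2016-01-01", periods=24 * 366, freq="h")
ghi = pd.Series(rng.uniform(0.0, 900.0, len(idx)), index=idx)

ghi = ghi.mask((ghi < 0) | (ghi > 1367))       # drop nonphysical values
daily_kwh = ghi.resample("D").sum() / 1000.0   # hourly W/m^2 -> kWh/m^2/day
monthly_mean_daily = daily_kwh.resample("MS").mean()
print(monthly_mean_daily.head(3))
```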
Statistical Tests for Mixed Linear Models
Khuri, André I; Sinha, Bimal K
2011-01-01
An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a
Testing for statistical discrimination in health care.
Balsa, Ana I; McGuire, Thomas G; Meredith, Lisa S
2005-02-01
To examine the extent to which doctors' rational reactions to clinical uncertainty ("statistical discrimination") can explain racial differences in the diagnosis of depression, hypertension, and diabetes. Main data are from the Medical Outcomes Study (MOS), a 1986 study conducted by RAND Corporation in three U.S. cities. The study compares the processes and outcomes of care for patients in different health care systems. Complementary data from National Health And Examination Survey III (NHANES III) and National Comorbidity Survey (NCS) are also used. Across three systems of care (staff health maintenance organizations, multispecialty groups, and solo practices), the MOS selected 523 health care clinicians. A representative cross-section (21,480) of patients was then chosen from a pool of adults who visited any of these providers during a 9-day period. We analyzed a subsample of the MOS data consisting of patients of white family physicians or internists (11,664 patients). We obtain variables reflecting patients' health conditions and severity, demographics, socioeconomic status, and insurance from the patients' screener interview (administered by MOS staff prior to the patient's encounter with the clinician). We used the reports made by the clinician after the visit to construct indicators of doctors' diagnoses. We obtained prevalence rates from NHANES III and NCS. We find evidence consistent with statistical discrimination for diagnoses of hypertension, diabetes, and depression. In particular, we find that if clinicians act like Bayesians, plausible priors held by the physician about the prevalence of the disease across racial groups could account for racial differences in the diagnosis of hypertension and diabetes. In the case of depression, we find evidence that race affects decisions through differences in communication patterns between doctors and white and minority patients. To contend effectively with inequities in health care, it is necessary to understand
A global approach to estimate irrigated areas - a comparison between different data and statistics
Meier, Jonas; Zabel, Florian; Mauser, Wolfram
2018-02-01
Agriculture is the largest global consumer of water. Irrigated areas constitute 40 % of the total area used for agricultural production (FAO, 2014a). Information on their spatial distribution is highly relevant for regional water management and food security. Spatial information on irrigation is highly important for policy and decision makers, who are facing the transition towards more efficient sustainable agriculture. However, the mapping of irrigated areas still represents a challenge for land use classifications, and existing global data sets differ strongly in their results. The following study tests an existing irrigation map based on statistics and extends the irrigated area using ancillary data. The approach processes and analyzes multi-temporal normalized difference vegetation index (NDVI) SPOT-VGT data and agricultural suitability data - both at a spatial resolution of 30 arcsec - incrementally in a multiple decision tree. It covers the period from 1999 to 2012. The results globally show an 18 % larger irrigated area than existing approaches based on statistical data. The largest differences compared to the official national statistics are found in Asia, particularly in China and India. The additional areas are mainly identified within already known irrigated regions where irrigation is more dense than previously estimated. The validation with global and regional products shows the large divergence of existing data sets with respect to size and distribution of irrigated areas caused by spatial resolution, the considered time period and the input data and assumptions made.
The Role of Discrete Global Grid Systems in the Global Statistical Geospatial Framework
Purss, M. B. J.; Peterson, P.; Minchin, S. A.; Bermudez, L. E.
2016-12-01
The United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM) has proposed the development of a Global Statistical Geospatial Framework (GSGF) as a mechanism for the establishment of common analytical systems that enable the integration of statistical and geospatial information. Conventional coordinate reference systems address the globe with a continuous field of points suitable for repeatable navigation and analytical geometry. While this continuous field is represented on a computer in a digitized and discrete fashion by tuples of fixed-precision floating point values, it is a non-trivial exercise to relate point observations spatially referenced in this way to areal coverages on the surface of the Earth. The GSGF states the need to move to gridded data delivery and the importance of using common geographies and geocoding. The challenges associated with meeting these goals are not new, and there has been a significant effort within the geospatial community to develop nested gridding standards to tackle these issues over many years. These efforts have recently culminated in the development of a Discrete Global Grid Systems (DGGS) standard which has been developed under the auspices of the Open Geospatial Consortium (OGC). DGGS provide a fixed areal based geospatial reference frame for the persistent location of measured Earth observations, feature interpretations, and modelled predictions. DGGS address the entire planet by partitioning it into a discrete hierarchical tessellation of progressively finer resolution cells, which are referenced by a unique index that facilitates rapid computation, query and analysis. The geometry and location of the cell is the principal aspect of a DGGS. Data integration, decomposition, and aggregation are optimised in the DGGS hierarchical structure and can be exploited for efficient multi-source data processing, storage, discovery, transmission, visualization, computation, analysis, and modelling. During
Similar tests and the standardized log likelihood ratio statistic
DEFF Research Database (Denmark)
Jensen, Jens Ledet
1986-01-01
When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast to this there is a 'primitive' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n^{-3/2}).
CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY
Directory of Open Access Journals (Sweden)
ILEANA BRUDIU
2009-05-01
Parameters estimated with confidence intervals and statistical hypothesis testing are used in statistical analysis to obtain conclusions about a population from a sample extracted from it. The case study presented aims to highlight the importance of the sample size taken in the study and how it is reflected in the results obtained when using confidence intervals and significance testing. While statistical hypothesis testing only gives a "yes" or "no" answer to some questions, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and from findings that are "marginally significant" or "almost significant" (p very close to 0.05).
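A small numeric illustration of the point: the 95% confidence interval for a mean carries the sample-size information that a bare significant/non-significant verdict hides (synthetic data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Same population, two sample sizes: the confidence interval for the
# mean shrinks with n.
for n in (10, 1000):
    sample = rng.normal(loc=5.0, scale=2.0, size=n)
    half = stats.t.ppf(0.975, n - 1) * sample.std(ddof=1) / np.sqrt(n)
    print(f"n = {n:4d}: mean = {sample.mean():.2f}, "
          f"95% CI half-width = {half:.2f}")
```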
A statistical procedure for testing financial contagion
Directory of Open Access Journals (Sweden)
Attilio Gardini
2013-05-01
The aim of the paper is to provide an analysis of contagion through the measurement of the risk premia disequilibria dynamics. In order to discriminate among several disequilibrium situations we propose to test contagion on the basis of a two-step procedure: in the first step we estimate the preference parameters of the consumption-based asset pricing model (CCAPM) to control for fundamentals and to measure the equilibrium risk premia in different countries; in the second step we measure the differences among empirical risk premia and equilibrium risk premia in order to test cross-country disequilibrium situations due to contagion. Disequilibrium risk premium measures are modelled by the multivariate DCC-GARCH model including a deterministic crisis variable. The model describes simultaneously the risk premia dynamics due to endogenous amplifications of volatility and to exogenous idiosyncratic shocks (contagion), having controlled for fundamentals effects in the first step. Our approach allows us to achieve two goals: (i) to identify the disequilibria generated by irrational behaviours of the agents, which cause increases in volatility that are not explained by the economic fundamentals but are endogenous to financial markets, and (ii) to assess the existence of a contagion effect defined by an exogenous shift in cross-country return correlations during crisis periods. Our results show evidence of contagion from the United States to the United Kingdom, Japan, France, and Italy during the financial crisis which started in 2007-08.
Statistical hypothesis tests of some micrometeorological observations
International Nuclear Information System (INIS)
SethuRaman, S.; Tichler, J.
1977-01-01
Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g1 has a good correlation with the chi-square values. Events with |g1| < 0.43 were approximately normal. Intermittency associated with the formation and breaking of internal gravity waves in surface-based inversions over water is thought to be the reason for the non-normality.
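A sketch of the chi-square goodness-of-fit test for normality together with the skewness and excess coefficients (synthetic data standing in for the micrometeorological record; the Gram-Charlier correction step is omitted):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
data = rng.gamma(shape=8.0, size=500)   # stand-in for a turbulence record

# Skewness and excess coefficients, then a chi-square goodness-of-fit
# test of the data against a fitted normal over ten equal-count bins.
g1, g2 = stats.skew(data), stats.kurtosis(data)
edges = np.quantile(data, np.linspace(0.0, 1.0, 11))
observed, _ = np.histogram(data, bins=edges)
cdf = stats.norm(data.mean(), data.std(ddof=1)).cdf(edges)
expected = len(data) * np.diff(cdf) / (cdf[-1] - cdf[0])
chi2_stat = np.sum((observed - expected) ** 2 / expected)
pval = stats.chi2.sf(chi2_stat, df=10 - 1 - 2)  # 2 fitted parameters
print(f"g1 = {g1:.2f}, g2 = {g2:.2f}, chi-square p = {pval:.3g}")
```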
HOW TO SELECT APPROPRIATE STATISTICAL TEST IN SCIENTIFIC ARTICLES
Directory of Open Access Journals (Sweden)
Vladimir TRAJKOVSKI
2016-09-01
Statistics is a mathematical science dealing with the collection, analysis, interpretation, and presentation of masses of numerical data in order to draw relevant conclusions. Statistics is a form of mathematical analysis that uses quantified models, representations and synopses for a given set of experimental data or real-life studies. Students and young researchers in biomedical sciences and in special education and rehabilitation often declare that they have chosen to enroll in that study program because they lack knowledge of, or interest in, mathematics. This is a sad statement, but there is much truth in it. The aim of this editorial is to help young researchers to select the statistics or statistical techniques and statistical software appropriate for the purposes and conditions of a particular analysis. The most important statistical tests are reviewed in the article. Knowing how to choose the right statistical test is an important asset and decision in the research data processing and in the writing of scientific papers. Young researchers and authors should know how to choose and how to use statistical methods. The competent researcher will need knowledge of statistical procedures. That might include an introductory statistics course, and it most certainly includes using a good statistics textbook. For this purpose, there is a need to restore Statistics as a mandatory subject in the curriculum of the Institute of Special Education and Rehabilitation at the Faculty of Philosophy in Skopje. Young researchers need additional courses in statistics. They need to train themselves to use statistical software in an appropriate way.
Testing the Difference of Correlated Agreement Coefficients for Statistical Significance
Gwet, Kilem L.
2016-01-01
This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…
International Nuclear Information System (INIS)
Ayodele, T.R.; Ogunjuyigbe, A.S.O.
2015-01-01
In this paper, a probability distribution of the clearness index is proposed for the prediction of global solar radiation. First, the clearness index is obtained from past data of global solar radiation; then, the parameters of the distribution that best fits the clearness index are determined. The global solar radiation is thereafter predicted from the clearness index using the inverse transformation of the cumulative distribution function. To validate the proposed method, eight years of global solar radiation data (2000–2007) for Ibadan, Nigeria are used to determine the parameters of the appropriate probability distribution for the clearness index. The calculated parameters are then used to predict the future monthly average global solar radiation for the following year (2008). The predicted values are compared with the measured values using four statistical tests: the Root Mean Square Error (RMSE), the Mean Absolute Error (MAE), the Mean Absolute Percentage Error (MAPE) and the coefficient of determination (R²). The proposed method is also compared to existing regression models. The results show that the logistic distribution provides the best fit for the clearness index of Ibadan and that the proposed method is effective in predicting the monthly average global solar radiation, with an overall RMSE of 0.383 MJ/m²/day, MAE of 0.295 MJ/m²/day, MAPE of 2% and R² of 0.967. - Highlights: • Distribution of clearness index is proposed for prediction of global solar radiation. • The clearness index is obtained from the past data of global solar radiation. • The parameters of distribution that best fit the clearness index are determined. • Solar radiation is predicted from the clearness index using inverse transformation. • The method is effective in predicting the monthly average global solar radiation.
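The prediction scheme reads directly as code: fit the distribution to the historical clearness index, sample future indices through the inverse CDF, and scale by the extraterrestrial radiation. A minimal sketch with SciPy; the logistic choice mirrors the paper's finding for Ibadan, but all numbers below (including the placeholder G0) are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Stand-in for eight years of daily clearness-index values K = G/G0 (invented)
kt_hist = np.clip(rng.logistic(loc=0.55, scale=0.05, size=8 * 365), 0.05, 0.95)

# Step 1: fit the logistic distribution to the clearness index
loc, scale = stats.logistic.fit(kt_hist)

# Step 2: generate future clearness indices by inverse-CDF sampling
u = rng.uniform(size=365)
kt_pred = np.clip(stats.logistic.ppf(u, loc=loc, scale=scale), 0.0, 1.0)

# Step 3: convert to global solar radiation via the extraterrestrial value G0
G0 = 35.0                    # MJ/m^2/day, placeholder daily value for the site
G_pred = kt_pred * G0

def error_metrics(pred, meas):
    """RMSE, MAE, MAPE (%) and R^2, the validation metrics used in the paper."""
    resid = pred - meas
    return {"RMSE": np.sqrt(np.mean(resid ** 2)),
            "MAE": np.mean(np.abs(resid)),
            "MAPE": 100 * np.mean(np.abs(resid / meas)),
            "R2": 1 - np.sum(resid ** 2) / np.sum((meas - meas.mean()) ** 2)}
```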
Hayslett, H T
1991-01-01
Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the
Corrections of the NIST Statistical Test Suite for Randomness
Kim, Song-Ju; Umeno, Ken; Hasegawa, Akio
2004-01-01
It is well known that the NIST statistical test suite was used for the evaluation of AES candidate algorithms. We have found that the test settings of the Discrete Fourier Transform test and the Lempel-Ziv test of this suite are wrong. We give four corrections of mistakes in the test settings. This suggests that a re-evaluation of the test results is needed.
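For orientation, the Discrete Fourier Transform (spectral) test at issue counts how many Fourier peaks of a ±1 sequence fall below a 95% threshold and normalizes the deviation from its expectation. The sketch below uses the corrected threshold sqrt(n·ln(1/0.05)) in place of the original sqrt(3n); this constant, and the variance in the denominator, are taken from the correction literature rather than from this abstract, so treat them as assumptions.

```python
import numpy as np
from math import erfc, log, sqrt

def dft_test(bits):
    """NIST-style spectral test on a 0/1 array (corrected threshold)."""
    n = len(bits)
    x = 2 * np.asarray(bits) - 1                 # map bits to +/-1
    mags = np.abs(np.fft.fft(x))[: n // 2]       # first half of the spectrum
    T = sqrt(n * log(1 / 0.05))                  # corrected 95% peak threshold
    N0 = 0.95 * n / 2                            # expected count below T
    N1 = np.sum(mags < T)                        # observed count
    # Some corrections also adjust this variance (e.g., 3.8 instead of 4)
    d = (N1 - N0) / sqrt(n * 0.95 * 0.05 / 4)
    return erfc(abs(d) / sqrt(2))                # p-value

bits = np.random.default_rng(2).integers(0, 2, 2**10)
print(f"p = {dft_test(bits):.3f}")
```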
Global Wine Markets, 1961 to 2009: A statistical compendium
Anderson, Kym; Nelgen, Signe
2011-01-01
Until very recently, most grape-based wine was consumed close to where it was produced, and mostly that was in Europe. Barely one-tenth of the world’s wine production was exported prior to the 1970s, even counting intra-European trade. The latest wave of globalization has changed that forever. Now more than one-third of all wine consumed globally is produced in another country, and Europe’s dominance of global wine trade has been greatly diminished by the surge of exports from ‘New World’ pro...
Caveats for using statistical significance tests in research assessments
DEFF Research Database (Denmark)
Schneider, Jesper Wiborg
2013-01-01
This article raises concerns about the advantages of using statistical significance tests in research assessments, as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice ... We argue that applying statistical significance tests and mechanically adhering to their results is highly problematic and detrimental to critical thinking. We claim that the use of such tests does not provide any advantages in relation to deciding whether differences between citation indicators ...
Statistical analysis and planning of multihundred-watt impact tests
International Nuclear Information System (INIS)
Martz, H.F. Jr.; Waterman, M.S.
1977-10-01
Modular multihundred-watt (MHW) radioisotope thermoelectric generators (RTGs) are used as a power source for spacecraft. Due to possible environmental contamination by radioactive materials, numerous tests are required to determine and verify the safety of the RTG. Results are available from 27 fueled MHW impact tests regarding hoop failure, fingerprint failure, and fuel failure. Data from the 27 tests are statistically analyzed for relationships that exist between the test design variables and the failure types. Next, these relationships are used to develop a statistical procedure for planning and conducting either future MHW impact tests or similar tests on other RTG fuel sources. Finally, some conclusions are given
Global warming and local dimming. The statistical evidence
Energy Technology Data Exchange (ETDEWEB)
Magnus, J.R.; Melenberg, B. [Department of Econometrics and Operations Research, Tilburg University, Tilburg (Netherlands); Muris, C. [CentER, Tilburg University, Tilburg (Netherlands)
2011-01-15
Two effects largely determine global warming: the well-known greenhouse effect and the less well-known solar radiation effect. An increase in concentrations of carbon dioxide and other greenhouse gases contributes to global warming: the greenhouse effect. In addition, small particles, called aerosols, reflect and absorb sunlight in the atmosphere. More pollution causes an increase in aerosols, so that less sunlight reaches the Earth (global dimming). Despite its name, global dimming is primarily a local (or regional) effect. Because of the dimming the Earth becomes cooler: the solar radiation effect. Global warming thus consists of two components: the (global) greenhouse effect and the (local) solar radiation effect, which work in opposite directions. Only the sum of the greenhouse effect and the solar radiation effect is observed, not the two effects separately. Our purpose is to identify the two effects. This is important, because the existence of the solar radiation effect obscures the magnitude of the greenhouse effect. We propose a simple climate model with a small number of parameters. We gather data from a large number of weather stations around the world for the period 1959-2002. We then estimate the parameters using dynamic panel data methods, and quantify the parameter uncertainty. Next, we decompose the estimated temperature change of 0.73°C (averaged over the weather stations) into a greenhouse effect of 1.87°C, a solar radiation effect of -1.09°C, and a small remainder term. Finally, we subject our findings to extensive sensitivity analyses.
Global warming and local dimming. The statistical evidence
International Nuclear Information System (INIS)
Magnus, J.R.; Melenberg, B.; Muris, C.
2011-01-01
Two effects largely determine global warming: the well-known greenhouse effect and the less well-known solar radiation effect. An increase in concentrations of carbon dioxide and other greenhouse gases contributes to global warming: the greenhouse effect. In addition, small particles, called aerosols, reflect and absorb sunlight in the atmosphere. More pollution causes an increase in aerosols, so that less sunlight reaches the Earth (global dimming). Despite its name, global dimming is primarily a local (or regional) effect. Because of the dimming the Earth becomes cooler: the solar radiation effect. Global warming thus consists of two components: the (global) greenhouse effect and the (local) solar radiation effect, which work in opposite directions. Only the sum of the greenhouse effect and the solar radiation effect is observed, not the two effects separately. Our purpose is to identify the two effects. This is important, because the existence of the solar radiation effect obscures the magnitude of the greenhouse effect. We propose a simple climate model with a small number of parameters. We gather data from a large number of weather stations around the world for the period 1959-2002. We then estimate the parameters using dynamic panel data methods, and quantify the parameter uncertainty. Next, we decompose the estimated temperature change of 0.73°C (averaged over the weather stations) into a greenhouse effect of 1.87°C, a solar radiation effect of -1.09°C, and a small remainder term. Finally, we subject our findings to extensive sensitivity analyses.
Estimation of global network statistics from incomplete data.
Directory of Open Access Journals (Sweden)
Catherine A Bliss
Full Text Available Complex networks underlie an enormous variety of social, biological, physical, and virtual systems. A profound complication for the science of complex networks is that in most cases, observing all nodes and all network interactions is impossible. Previous work addressing the impacts of partial network data is surprisingly limited, focuses primarily on missing nodes, and suggests that network statistics derived from subsampled data are not suitable estimators for the same network statistics describing the overall network topology. We generate scaling methods to predict true network statistics, including the degree distribution, from only partial knowledge of nodes, links, or weights. Our methods are transparent and do not assume a known generating process for the network, thus enabling prediction of network statistics for a wide variety of applications. We validate analytical results on four simulated network classes and empirical data sets of various sizes. We perform subsampling experiments by varying proportions of sampled data and demonstrate that our scaling methods can provide very good estimates of true network statistics while acknowledging limits. Lastly, we apply our techniques to a set of rich and evolving large-scale social networks, Twitter reply networks. Based on 100 million tweets, we use our scaling techniques to propose a statistical characterization of the Twitter Interactome from September 2008 to November 2008. Our treatment allows us to find support for Dunbar's hypothesis in detecting an upper threshold for the number of active social contacts that individuals maintain over the course of one week.
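A toy version of the subsampling experiments described here: sample a fraction of nodes from a known scale-free network, measure the induced mean degree, and apply a first-order rescaling. The generator, sampling fraction, and correction are illustrative stand-ins for the paper's scaling methods, not a reproduction of them.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
G = nx.barabasi_albert_graph(10_000, 3, seed=3)   # stand-in "true" network

# Subsample a fraction p of nodes and take the induced subgraph
p = 0.2
sampled = rng.choice(G.number_of_nodes(), size=int(p * G.number_of_nodes()),
                     replace=False)
H = G.subgraph(sampled)

true_mean_k = np.mean([d for _, d in G.degree()])
sub_mean_k = np.mean([d for _, d in H.degree()])

# Under uniform node sampling each edge survives with probability ~p, so
# observed degrees are thinned by p; a first-order correction rescales by 1/p.
print(f"true <k>={true_mean_k:.2f}, subsampled <k>={sub_mean_k:.2f}, "
      f"rescaled <k>={sub_mean_k / p:.2f}")
```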
Kleibergen, F.R.
2002-01-01
We extend the novel pivotal statistics for testing the parameters in the instrumental variables regression model. We show that these statistics result from a decomposition of the Anderson-Rubin statistic into two independent pivotal statistics. The first statistic is a score statistic that tests
Kolmogorov complexity, pseudorandom generators and statistical models testing
Czech Academy of Sciences Publication Activity Database
Šindelář, Jan; Boček, Pavel
2002-01-01
Roč. 38, č. 6 (2002), s. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords : Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002
Common pitfalls in statistical analysis: The perils of multiple testing
Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc
2016-01-01
Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
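The standard remedies are adjustments of the raw p-values for the number of tests performed. A minimal sketch with statsmodels, comparing Bonferroni, Holm, and Benjamini-Hochberg adjustments on invented p-values:

```python
from statsmodels.stats.multitest import multipletests

pvals = [0.003, 0.021, 0.049, 0.12, 0.44]     # raw p-values from 5 end-points

for method in ("bonferroni", "holm", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(method, [f"{p:.3f}" for p in p_adj], reject.tolist())
```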
Statistical strategies for global monitoring of tropical forests
Raymond L. Czaplewski
1991-01-01
The Food and Agricultural Organization (FAO) of the United Nations is conducting a global assessment of tropical forest resources, which will be accomplished by mid-1992. This assessment requires, in part, estimates of the total area of tropical forest cover in 1990, and the rate of change in forest cover between 1980 and 1990. This paper describes: (1) the strategic...
Comparing statistical tests for detecting soil contamination greater than background
International Nuclear Information System (INIS)
Hardin, J.W.; Gilbert, R.O.
1993-12-01
The Washington State Department of Ecology (WSDE) recently issued a report that provides guidance on statistical issues regarding investigation and cleanup of soil and groundwater contamination under the Model Toxics Control Act Cleanup Regulation. Included in the report are procedures for determining a background-based cleanup standard and for conducting a 3-step statistical test procedure to decide if a site is contaminated above the background standard. The guidance specifies that the State test should only be used if the background and site data are lognormally distributed. The guidance allows for using alternative tests on a site-specific basis if prior approval is obtained from WSDE. This report presents the results of a Monte Carlo computer simulation study conducted to evaluate the performance of the State test and several alternative tests for various contamination scenarios (background and site data distributions). The primary test performance criteria are (1) the probability the test will indicate that a contaminated site is indeed contaminated, and (2) the probability that the test will indicate an uncontaminated site is contaminated. The simulation study was conducted assuming the background concentrations were from lognormal or Weibull distributions. The site data were drawn from distributions selected to represent various contamination scenarios. The statistical tests studied are the State test, t test, Satterthwaite's t test, five distribution-free tests, and several tandem tests (wherein two or more tests are conducted using the same data set)
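The core of such a simulation study is straightforward to reproduce in outline: draw background and site data from chosen distributions, apply each candidate test many times, and record rejection rates. A minimal sketch comparing the t test with a distribution-free alternative (the Wilcoxon rank-sum test) under a lognormal background; all distributional choices below are illustrative and not WSDE's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_sim, n = 2000, 30
shift = 0.5                    # lognormal-scale contamination of the site

hits_t = hits_w = 0
for _ in range(n_sim):
    bg = rng.lognormal(mean=0.0, sigma=1.0, size=n)      # background samples
    site = rng.lognormal(mean=shift, sigma=1.0, size=n)  # contaminated site
    hits_t += stats.ttest_ind(site, bg, alternative="greater").pvalue < 0.05
    hits_w += stats.mannwhitneyu(site, bg, alternative="greater").pvalue < 0.05

print(f"power: t test {hits_t/n_sim:.2f}, Wilcoxon {hits_w/n_sim:.2f}")
```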
Statistical inferences for bearings life using sudden death test
Directory of Open Access Journals (Sweden)
Morariu Cristin-Olimpiu
2017-01-01
Full Text Available In this paper we propose a calculation method for the estimation of reliability indicators and complete statistical inference for the three-parameter Weibull distribution of bearing life. Using experimental values of the durability of bearings tested on stands by sudden death tests involves a series of particularities in estimation by the maximum likelihood method and in the accomplishment of statistical inference. The paper details these features and also provides an example calculation.
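For the uncensored case, a maximum-likelihood fit of the three-parameter Weibull distribution is available directly in SciPy, as sketched below with invented bearing lives. The sudden-death censoring that motivates the paper would require a censored likelihood rather than this plain fit.

```python
import numpy as np
from scipy import stats

# Invented bearing lives in hours (complete, uncensored sample)
life = np.array([112., 205., 260., 318., 370., 441., 502., 590., 688., 810.])

shape, loc, scale = stats.weibull_min.fit(life)        # 3-parameter ML fit
b10 = stats.weibull_min.ppf(0.10, shape, loc=loc, scale=scale)  # B10 life
print(f"shape={shape:.2f}, loc={loc:.0f} h, scale={scale:.0f} h, B10={b10:.0f} h")
```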
Characterizing and Addressing the Need for Statistical Adjustment of Global Climate Model Data
White, K. D.; Baker, B.; Mueller, C.; Villarini, G.; Foley, P.; Friedman, D.
2017-12-01
As part of its mission to research and measure the effects of the changing climate, the U.S. Army Corps of Engineers (USACE) regularly uses the World Climate Research Programme's Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model dataset. However, these data are generated at a global level and are not fine-tuned for specific watersheds. This often causes CMIP5 output to deviate from locally observed patterns in the climate. Several downscaling methods have been developed to increase the resolution of the CMIP5 data and decrease systemic differences, in order to support decision-makers as they evaluate results at the watershed scale. Preliminary comparisons of observed and projected flow frequency curves over the US revealed a simple framework within which water resources decision makers can plan and design water resources management measures under changing conditions using standard tools. Using this framework as a basis, USACE has begun to explore the use of statistical adjustment to alter global climate model data to better match locally observed patterns while preserving the general structure and behavior of the model data. When paired with careful measurement and hypothesis testing, statistical adjustment can be particularly effective at navigating the compromise between locally observed patterns and global climate model structures for decision makers.
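One common form of such statistical adjustment is empirical quantile mapping: each model value is replaced by the observed value at the same quantile of the calibration period. A minimal sketch under that assumption; the abstract does not specify USACE's exact procedure, and all data below are invented.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping of model output onto the observed distribution."""
    # Quantile of each future value within the historical model distribution
    q = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    # Map those quantiles onto the observed historical distribution
    return np.quantile(obs_hist, q)

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 3.0, 5000)          # stand-in observed daily values
mod = rng.gamma(2.0, 3.5, 5000) + 1.0    # biased stand-in model values
fut = rng.gamma(2.2, 3.5, 1000) + 1.0    # model projection to be corrected
corrected = quantile_map(mod, obs, fut)
```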
Statistical Inference and Patterns of Inequality in the Global North
Moran, Timothy Patrick
2006-01-01
Cross-national inequality trends have historically been a crucial field of inquiry across the social sciences, and new methodological techniques of statistical inference have recently improved the ability to analyze these trends over time. This paper applies Monte Carlo, bootstrap inference methods to the income surveys of the Luxembourg Income…
Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie
2013-01-01
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
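The construction can be sketched directly: compute several test statistics, take the smallest p-value as the combined statistic, and calibrate it against the permutation distribution of that minimum so the type I error rate stays at its designated value. A minimal two-statistic illustration (t and Wilcoxon statistics on invented trial arms; the survival statistics discussed in the paper would slot in the same way):

```python
import numpy as np
from scipy import stats

def min_p(x, y):
    return min(stats.ttest_ind(x, y).pvalue,
               stats.mannwhitneyu(x, y, alternative="two-sided").pvalue)

rng = np.random.default_rng(6)
x, y = rng.normal(0.0, 1, 40), rng.normal(0.6, 1, 40)   # invented trial arms

observed = min_p(x, y)
pooled = np.concatenate([x, y])
perm = []
for _ in range(2000):                   # permutation distribution of min-p
    rng.shuffle(pooled)
    perm.append(min_p(pooled[:40], pooled[40:]))

p_combined = np.mean(np.array(perm) <= observed)
print(f"min-p = {observed:.4f}, permutation p = {p_combined:.4f}")
```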
Log-concave Probability Distributions: Theory and Statistical Testing
DEFF Research Database (Denmark)
An, Mark Yuing
1996-01-01
This paper studies the broad class of log-concave probability distributions that arise in the economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing differentiability of the density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics ...
Statistical Estimation of Heterogeneities: A New Frontier in Well Testing
Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.
2001-12-01
Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.
Your Chi-Square Test Is Statistically Significant: Now What?
Sharpe, Donald
2015-01-01
Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
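Of the four approaches, calculating residuals is the most common follow-up and fits in a few lines: adjusted standardized residuals flag the cells driving a significant chi-square. A minimal sketch with SciPy on an invented contingency table:

```python
import numpy as np
from scipy import stats

table = np.array([[30, 10, 20],        # invented 2 x 3 contingency table
                  [15, 25, 20]])

chi2, p, dof, expected = stats.chi2_contingency(table)

# Adjusted standardized residuals: (O - E) / sqrt(E (1 - row frac)(1 - col frac))
n = table.sum()
row = table.sum(axis=1, keepdims=True) / n
col = table.sum(axis=0, keepdims=True) / n
adj_res = (table - expected) / np.sqrt(expected * (1 - row) * (1 - col))
print(f"chi2={chi2:.2f}, p={p:.4f}")
print(np.round(adj_res, 2))            # |value| > 2 marks a noteworthy cell
```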
Statistical test for the distribution of galaxies on plates
International Nuclear Information System (INIS)
Garcia Lambas, D.
1985-01-01
A statistical test for the distribution of galaxies on plates is presented. We apply the test to synthetic astronomical plates obtained by means of numerical simulation (Garcia Lambas and Sersic 1983) with three different models for the 3-dimensional distribution; comparison with an observational plate suggests the presence of filamentary structure. (author)
CUSUM-based person-fit statistics for adaptive testing
van Krimpen-Stoop, Edith; Meijer, R.R.
1999-01-01
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT),
CUSUM-based person-fit statistics for adaptive testing
van Krimpen-Stoop, Edith; Meijer, R.R.
2001-01-01
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be inaccurately estimated. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT),
Modified Distribution-Free Goodness-of-Fit Test Statistic.
Chun, So Yeon; Browne, Michael W; Shapiro, Alexander
2018-03-01
Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.
THE ATKINSON INDEX, THE MORAN STATISTIC, AND TESTING EXPONENTIALITY
Mimoto, Nao; Zitikis, Ricardas; Department of Statistics and Probability, Michigan State University; Department of Statistical and Actuarial Sciences, University of Western Ontario
2008-01-01
Constructing tests for exponentiality has been an active and fruitful research area, with numerous applications in engineering, biology and other sciences concerned with life-time data. In the present paper, we construct and investigate powerful tests for exponentiality based on two well known quantities: the Atkinson index and the Moran statistic. We provide an extensive study of the performance of the tests and compare them with those already available in the literature.
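Both quantities in the title reduce, for a sample, to functions of log(x_i/x̄), and any scale-invariant statistic of this kind can be calibrated by Monte Carlo simulation under the exponential null. The sketch below uses T = log(x̄) - mean(log x_i), which is minus the log of one minus the Atkinson index at inequality-aversion 1; it illustrates the calibration idea only and does not reproduce the paper's exact statistics.

```python
import numpy as np

def T_stat(x):
    x = np.asarray(x, dtype=float)
    return np.log(x.mean()) - np.mean(np.log(x))   # scale-invariant

rng = np.random.default_rng(7)
n = 50
# Null distribution of T under exponentiality (scale does not matter)
null = np.array([T_stat(rng.exponential(1.0, n)) for _ in range(20_000)])

x = rng.gamma(2.0, 1.0, n)             # test sample: gamma, not exponential
t_obs = T_stat(x)
# Two-sided Monte Carlo p-value under the exponential null
p = 2 * min(np.mean(null <= t_obs), np.mean(null >= t_obs))
print(f"T = {t_obs:.3f}, Monte Carlo p = {p:.4f}")
```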
[Clinical research IV. Relevancy of the statistical test chosen].
Talavera, Juan O; Rivas-Ruiz, Rodolfo
2011-01-01
When we look at the difference between two therapies or the association of a risk factor or prognostic indicator with its outcome, we need to evaluate the accuracy of the result. This assessment is based on a judgment that uses information about the study design and the statistical management of the information. This paper specifically addresses the relevance of the statistical test selected. Statistical tests are chosen mainly on the basis of two characteristics: the objective of the study and the type of variables. The objective can be divided into three test groups: a) those in which you want to show differences between groups or within a group before and after a maneuver, b) those that seek to show the relationship (correlation) between variables, and c) those that aim to predict an outcome. The types of variables are divided in two: quantitative (continuous and discontinuous) and qualitative (ordinal and dichotomous). For example, if we seek to demonstrate differences in age (quantitative variable) among patients with systemic lupus erythematosus (SLE) with and without neurological disease (two groups), the appropriate test is the "Student t test for independent samples." But if the comparison is about the frequency of females (binomial variable), then the appropriate statistical test is the χ².
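The two closing examples translate directly into code: a Student t test for the quantitative variable and a χ² test for the binomial one. A minimal sketch with invented SLE data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Invented ages of SLE patients with and without neurological disease
age_neuro = rng.normal(38, 9, 45)
age_no = rng.normal(42, 9, 60)
t, p_age = stats.ttest_ind(age_neuro, age_no)   # difference between two groups

# Invented counts of females/males in each group -> chi-square test
counts = np.array([[36, 9],     # with neurological disease
                   [50, 10]])   # without
chi2, p_sex, dof, _ = stats.chi2_contingency(counts)
print(f"age: t={t:.2f}, p={p_age:.3f}; sex: chi2={chi2:.2f}, p={p_sex:.3f}")
```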
688,112 statistical results: Content mining psychology articles for statistical test results
Hartgerink, C.H.J.
2016-01-01
In this data deposit, I describe a dataset that is the result of content mining 167,318 published articles for statistical test results reported according to the standards prescribed by the American Psychological Association (APA). Articles published by the APA, Springer, Sage, and Taylor & Francis
Martens, Pim; Akin, Su-Mia; Huynen, Maud; Raza, Mohsin
2010-09-17
It is clear that globalization is something more than a purely economic phenomenon manifesting itself on a global scale. Among the visible manifestations of globalization are the greater international movement of goods and services, financial capital, information and people. In addition, there are technological developments, more transboundary cultural exchanges, facilitated by the freer trade of more differentiated products as well as by tourism and immigration, changes in the political landscape and ecological consequences. In this paper, we link the Maastricht Globalization Index with health indicators to analyse if more globalized countries are doing better in terms of infant mortality rate, under-five mortality rate, and adult mortality rate. The results indicate a positive association between a high level of globalization and low mortality rates. In view of the arguments that globalization provides winners and losers, and might be seen as a disequalizing process, we should perhaps be careful in interpreting the observed positive association as simple evidence that globalization is mostly good for our health. It is our hope that a further analysis of health impacts of globalization may help in adjusting and optimising the process of globalization on every level in the direction of a sustainable and healthy development for all.
Is globalization healthy: a statistical indicator analysis of the impacts of globalization on health
Directory of Open Access Journals (Sweden)
Martens Pim
2010-09-01
Full Text Available Abstract It is clear that globalization is something more than a purely economic phenomenon manifesting itself on a global scale. Among the visible manifestations of globalization are the greater international movement of goods and services, financial capital, information and people. In addition, there are technological developments, more transboundary cultural exchanges, facilitated by the freer trade of more differentiated products as well as by tourism and immigration, changes in the political landscape and ecological consequences. In this paper, we link the Maastricht Globalization Index with health indicators to analyse if more globalized countries are doing better in terms of infant mortality rate, under-five mortality rate, and adult mortality rate. The results indicate a positive association between a high level of globalization and low mortality rates. In view of the arguments that globalization provides winners and losers, and might be seen as a disequalizing process, we should perhaps be careful in interpreting the observed positive association as simple evidence that globalization is mostly good for our health. It is our hope that a further analysis of health impacts of globalization may help in adjusting and optimising the process of globalization on every level in the direction of a sustainable and healthy development for all.
Statistical Analysis of Development Trends in Global Renewable Energy
Directory of Open Access Journals (Sweden)
Marina D. Simonova
2016-01-01
Full Text Available The article focuses on the economic and statistical analysis of industries associated with the use of renewable energy sources in several countries. The dynamic development and implementation of technologies based on renewable energy sources (hereinafter RES) is the defining trend of world energy development. The uneven distribution of hydrocarbon reserves, the increasing demand of developing countries and the environmental risks associated with the production and consumption of fossil resources have led to an increasing interest of many states in this field. Creating low-carbon economies involves the implementation of plans to increase the proportion of clean energy through renewable energy sources and energy efficiency, and to reduce greenhouse gas emissions. The priority of this sector is a characteristic feature of the modern development of developed (USA, EU, Japan) and emerging (China, India, Brazil, etc.) economies, as evidenced by the inclusion of the development of this segment in state energy strategies and the revision of existing approaches to energy security. The analysis of the use of renewable energy and its contribution to the value added of producer countries is of particular interest. Over the last decade, the share of energy produced from renewable sources in the energy balances of the world's largest economies has increased significantly. Every year the amount of power-generating capacity based on renewable energy grows; this trend is especially apparent in China, the USA and the European Union countries. There has been a significant increase in direct investment in renewable energy. Total investment over the past ten years has increased by a factor of 5.6. The most rapidly developing kinds are solar energy and wind power.
Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test
International Nuclear Information System (INIS)
Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik
2015-01-01
A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles
Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test
Energy Technology Data Exchange (ETDEWEB)
Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik [Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of)
2015-12-15
A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles.
Testing the statistical compatibility of independent data sets
International Nuclear Information System (INIS)
Maltoni, M.; Schwetz, T.
2003-01-01
We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ² minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistics is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed
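In outline, the construction compares the χ² minimum of the joint fit with the sum of the individual minima, so data points insensitive to the shared parameters drop out. A minimal sketch under stated assumptions: two data sets constrain one common parameter θ, and the quadratic "χ²" functions below are invented surrogates for real experimental likelihoods.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

# Invented chi-square functions of a shared parameter theta for two data sets
chi2_1 = lambda th: 10 * (th - 1.0) ** 2      # data set 1 prefers theta = 1.0
chi2_2 = lambda th: 4 * (th - 1.9) ** 2       # data set 2 prefers theta = 1.9

min1 = minimize_scalar(chi2_1).fun
min2 = minimize_scalar(chi2_2).fun
min_glob = minimize_scalar(lambda th: chi2_1(th) + chi2_2(th)).fun

# Compatibility statistic: global minimum minus the sum of individual minima,
# with dof = (parameters fitted per set, summed) - (shared parameters)
chi2_pg = min_glob - (min1 + min2)
dof = 1 + 1 - 1
print(f"chi2_PG = {chi2_pg:.2f}, p = {chi2.sf(chi2_pg, dof):.4f}")
```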
A comparison of test statistics for the recovery of rapid growth-based enumeration tests
van den Heuvel, Edwin R.; IJzerman-Boon, Pieta C.
This paper considers five test statistics for comparing the recovery of a rapid growth-based enumeration test with respect to the compendial microbiological method using a specific nonserial dilution experiment. The finite sample distributions of these test statistics are unknown, because they are
Statistical approach for collaborative tests, reference material certification procedures
International Nuclear Information System (INIS)
Fangmeyer, H.; Haemers, L.; Larisse, J.
1977-01-01
The first part introduces the different aspects of organizing and executing intercomparison tests of chemical or physical quantities. There follows a description of a statistical procedure for handling the data collected in a circular analysis. Finally, an example demonstrates how the tool can be applied and which conclusions can be drawn from the results obtained
Use of run statistics to validate tensile tests
International Nuclear Information System (INIS)
Eatherly, W.P.
1981-01-01
In tensile testing of irradiated graphites, it is difficult to assure alignment of sample and train for tensile measurements. By recording location of fractures, run (sequential) statistics can readily detect lack of randomness. The technique is based on partitioning binomial distributions
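The abstract does not spell out the exact partitioning, but the classical runs test conveys the idea: code each fracture location as 0 or 1 and test whether the number of runs is consistent with randomness. A minimal sketch using the normal approximation (the sequence is invented):

```python
import numpy as np
from math import erfc, sqrt

def runs_test(binary):
    """Wald-Wolfowitz runs test on a 0/1 sequence (normal approximation)."""
    x = np.asarray(binary)
    n1, n2 = int(x.sum()), int(len(x) - x.sum())
    runs = 1 + int(np.sum(x[1:] != x[:-1]))
    mu = 1 + 2 * n1 * n2 / (n1 + n2)
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    z = (runs - mu) / sqrt(var)
    return z, erfc(abs(z) / sqrt(2))          # z-score and two-sided p-value

# 1 = fracture inside the gauge section, 0 = fracture near a grip (invented)
seq = [1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1]
z, p = runs_test(seq)
print(f"z = {z:.2f}, p = {p:.3f}")
```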
Conducting tests for statistically significant differences using forest inventory data
James A. Westfall; Scott A. Pugh; John W. Coulston
2013-01-01
Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...
Test for the statistical significance of differences between ROC curves
International Nuclear Information System (INIS)
Metz, C.E.; Kronman, H.B.
1979-01-01
A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions
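The statistic described is a quadratic form in the difference of the two fitted line parameters. Assuming each maximum likelihood fit returns intercept/slope estimates with a 2x2 covariance matrix, the construction takes a few lines; the estimates below are invented placeholders for real fit output, and independence of the two data sets is assumed, as in the abstract:

```python
import numpy as np
from scipy.stats import chi2

# Invented binormal-fit results (intercept a, slope b) for two ROC curves
theta1 = np.array([1.20, 0.90]); cov1 = np.array([[0.020, 0.004], [0.004, 0.010]])
theta2 = np.array([0.95, 1.05]); cov2 = np.array([[0.025, 0.005], [0.005, 0.012]])

d = theta1 - theta2                  # parameter differences
S = cov1 + cov2                      # covariance of the difference (independence)
X2 = d @ np.linalg.solve(S, d)       # approximate chi-square with 2 dof
print(f"X2 = {X2:.2f}, p = {chi2.sf(X2, 2):.4f}")
```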
Statistical considerations for harmonization of the global multicenter study on reference values.
Ichihara, Kiyoshi
2014-05-15
The global multicenter study on reference values coordinated by the Committee on Reference Intervals and Decision Limits (C-RIDL) of the IFCC was launched in December 2011, targeting 45 commonly tested analytes with the following objectives: 1) to derive reference intervals (RIs) country by country using a common protocol, and 2) to explore regionality/ethnicity of reference values by aligning test results among the countries. To achieve these objectives, it is crucial to harmonize 1) the protocol for recruitment and sampling, 2) statistical procedures for deriving the RI, and 3) test results through measurement of a panel of sera in common. For harmonized recruitment, very lenient inclusion/exclusion criteria were adopted in view of differences in interpretation of what constitutes healthiness by different cultures and investigators. This policy may require secondary exclusion of individuals according to the standard of each country at the time of deriving RIs. An iterative optimization procedure, called the latent abnormal values exclusion (LAVE) method, can be applied to automate the process of refining the choice of reference individuals. For global comparison of reference values, test results must be harmonized, based on the among-country, pair-wise linear relationships of test values for the panel. Traceability of reference values can be ensured based on values assigned indirectly to the panel through collaborative measurement of certified reference materials. The validity of the adopted strategies is discussed in this article, based on interim results obtained to date from five countries. Special considerations are made for dissociation of RIs by parametric and nonparametric methods and between-country difference in the effect of body mass index on reference values. Copyright © 2014 Elsevier B.V. All rights reserved.
688,112 statistical results: Content mining psychology articles for statistical test results
Hartgerink, C.H.J.
2016-01-01
In this data deposit, I describe a dataset that is the result of content mining 167,318 published articles for statistical test results reported according to the standards prescribed by the American Psychological Association (APA). Articles published by the APA, Springer, Sage, and Taylor & Francis were included (mining from Wiley and Elsevier was actively blocked). As a result of this content mining, 688,112 results from 50,845 articles were extracted. In order to provide a comprehensive set...
Testing statistical isotropy in cosmic microwave background polarization maps
Rath, Pranati K.; Samal, Pramoda Kumar; Panda, Srikanta; Mishra, Debesh D.; Aluri, Pavan K.
2018-04-01
We apply our symmetry-based Power tensor technique to test the conformity of PLANCK polarization maps with statistical isotropy. On a wide range of angular scales (l = 40 - 150), our preliminary analysis detects many statistically anisotropic multipoles in the foreground-cleaned full-sky PLANCK polarization maps, viz., COMMANDER and NILC. We also study the effect of residual foregrounds that may still be present in the Galactic plane, using both the common UPB77 polarization mask and the polarization masks specific to the individual component separation methods. However, some of the statistically anisotropic modes still persist, albeit significantly in the NILC map. We further probed the data for any coherent alignments across multipoles in several bins from the chosen multipole range.
A critique of statistical hypothesis testing in clinical research
Directory of Open Access Journals (Sweden)
Somik Raha
2011-01-01
Full Text Available Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are that of the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview, requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability to an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not of any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. As a big reason for the prevalence of RCTs in academia is legislation requiring their use, the ethics of legislating the use of statistical methods for clinical research is also examined.
Sommer, Philipp S.; Kaplan, Jed O.
2017-10-01
While a wide range of Earth system processes occur at daily and even subdaily timescales, many global vegetation and other terrestrial dynamics models historically used monthly meteorological forcing both to reduce computational demand and because global datasets were lacking. Recently, dynamic land surface modeling has moved towards resolving daily and subdaily processes, and global datasets containing daily and subdaily meteorology have become available. These meteorological datasets, however, cover only the instrumental era of the last approximately 120 years at best, are subject to considerable uncertainty, and represent extremely large data files with associated computational costs of data input/output and file transfer. For periods before the recent past or in the future, global meteorological forcing can be provided by climate model output, but the quality of these data at high temporal resolution is low, particularly for daily precipitation frequency and amount. Here, we present GWGEN, a globally applicable statistical weather generator for the temporal downscaling of monthly climatology to daily meteorology. Our weather generator is parameterized using a global meteorological database and simulates daily values of five common variables: minimum and maximum temperature, precipitation, cloud cover, and wind speed. GWGEN is lightweight, modular, and requires a minimal set of monthly mean variables as input. The weather generator may be used in a range of applications, for example, in global vegetation, crop, soil erosion, or hydrological models. While GWGEN does not currently perform spatially autocorrelated multi-point downscaling of daily weather, this additional functionality could be implemented in future versions.
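The essence of such a generator, conditioning daily draws on monthly statistics, can be shown with a toy Richardson-type scheme: a two-state Markov chain for precipitation occurrence plus simple conditional draws. GWGEN itself is considerably richer (five variables, cross-correlation, a global parameterization), so everything below, including the transition-probability rule, is an invented stand-in:

```python
import numpy as np

rng = np.random.default_rng(9)

def gen_month(n_days, wet_frac, precip_mm, t_mean, t_sd):
    """Toy daily weather generator driven by monthly summary statistics."""
    p01 = 0.75 * wet_frac          # P(wet | previous day dry)
    p11 = 0.25 + p01               # P(wet | previous day wet); this common
                                   # choice keeps the stationary wet fraction
                                   # equal to wet_frac
    wet = np.zeros(n_days, dtype=bool)
    for d in range(1, n_days):
        wet[d] = rng.random() < (p11 if wet[d - 1] else p01)
    # Distribute the monthly precipitation total over wet days
    amounts = rng.exponential(1.0, n_days) * wet
    precip = amounts / amounts.sum() * precip_mm if amounts.sum() > 0 else amounts
    temp = rng.normal(t_mean, t_sd, n_days)     # daily mean temperature
    return precip, temp

precip, temp = gen_month(31, wet_frac=0.4, precip_mm=80.0, t_mean=14.0, t_sd=3.0)
```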
International Nuclear Information System (INIS)
Iizumi, T.; Nishimori, M.; Yokozawa, M.
2008-01-01
For this study, we developed a new statistical model to estimate the daily accumulated global solar radiation on the earth's surface and used the model to generate a high-resolution climate change scenario of the radiation field in Japan. The statistical model mainly relies on precipitable water vapor calculated from air temperature and relative humidity on the surface to estimate seasonal changes in global solar radiation. On the other hand, to estimate daily radiation fluctuations, the model uses either a diurnal temperature range or relative humidity. The diurnal temperature range, calculated from the daily maximum and minimum temperatures, and relative humidity is a general output of most climate models, and pertinent observation data are comparatively easy to access. The statistical model performed well when estimating the monthly mean value, daily fluctuation statistics, and regional differences in the radiation field in Japan. To project the change in the radiation field for the years 2081 to 2100, we applied the statistical model to the climate change scenario of a high-resolution Regional Climate Model with a 20-km mesh size (RCM20) developed at the Meteorological Research Institute based on the Special Report for Emission Scenario (SRES)-A2. The projected change shows the following tendency: global solar radiation will increase in the warm season and decrease in the cool season in many areas of Japan, indicating that global warming may cause changes in the radiation field in Japan. The generated climate change scenario for the radiation field is linked to long-term and short-term changes in air temperature and relative humidity obtained from the RCM20 and, consequently, is expected to complement the RCM20 datasets for an impact assessment study in the agricultural sector
Testing and qualification of confidence in statistical procedures
Energy Technology Data Exchange (ETDEWEB)
Serghiuta, D.; Tholammakkil, J.; Hammouda, N. [Canadian Nuclear Safety Commission (Canada); O' Hagan, A. [Sheffield Univ. (United Kingdom)
2014-07-01
This paper discusses a framework for designing artificial test problems, evaluation criteria, and two of the benchmark tests developed under a research project initiated by the Canadian Nuclear Safety Commission to investigate approaches for the qualification of tolerance limit methods and algorithms proposed for application in the optimization of CANDU regional/neutron overpower protection trip setpoints for aged conditions. A significant component of this investigation has been the development of a series of benchmark problems of gradually increasing complexity, from simple 'theoretical' problems up to complex problems closer to the real application. The first benchmark problem discussed in this paper is a simplified scalar problem which does not involve the extremal (maximum or minimum) operations typically encountered in real applications. The second benchmark is a high-dimensional, but still simple, problem for statistical inference of maximum channel power during normal operation. Bayesian algorithms have been developed for each benchmark problem to provide an independent way of constructing tolerance limits from the same data, allowing an assessment of how well different methods make use of those data and, depending on the type of application, an evaluation of the level of 'conservatism'. The Bayesian method is not, however, used as a reference method, or 'gold' standard, but simply as an independent review method. The approach and the tests developed can be used as a starting point for developing a generic suite (generic in the sense of potentially applying whatever statistical method is proposed) of empirical studies, with clear criteria for passing those tests. Some lessons learned, in particular concerning the need to assure the completeness of the description of the application and the role of completeness of input information, are also discussed. It is concluded that a formal process which includes extended and detailed benchmark
A statistical test for outlier identification in data envelopment analysis
Directory of Open Access Journals (Sweden)
Morteza Khodabin
2010-09-01
Full Text Available In the use of peer group data to assess individual, typical or best practice performance, the effective detection of outliers is critical for achieving useful results. In these 'deterministic' frontier models, statistical theory is now mostly available. This paper deals with the statistical pared-sample method and its capability of detecting outliers in data envelopment analysis. In the presented method, each observation is deleted from the sample once and the resulting linear program is solved, leading to a distribution of efficiency estimates. Based on the achieved distribution, a pared test is designed to identify the potential outlier(s). We illustrate the method through a real data set. The method could be used in a first step, as an exploratory data analysis, before using any frontier estimation.
Statistical testing of association between menstruation and migraine.
Barra, Mathias; Dahl, Fredrik A; Vetvik, Kjersti G
2015-02-01
To repair and refine a previously proposed method for the statistical analysis of association between migraine and menstruation. Menstrually related migraine (MRM) affects about 20% of female migraineurs in the general population. The exact pathophysiological link from menstruation to migraine is hypothesized to be through fluctuations in female reproductive hormones, but the exact mechanisms remain unknown. Therefore, the main diagnostic criterion today is concurrency of migraine attacks with menstruation. Methods aiming to exclude spurious associations are wanted, so that further research into these mechanisms can be performed on a population with a true association. The statistical method is based on a simple two-parameter null model of MRM (which allows for simulation modeling), and Fisher's exact test (with mid-p correction) applied to standard 2 × 2 contingency tables derived from the patients' headache diaries. Our method is a corrected version of a previously published flawed framework. To the best of our knowledge, no other published methods for establishing a menstruation-migraine association by statistical means exist today. The probabilistic methodology shows good performance when subjected to receiver operating characteristic curve analysis. Quick-reference cutoff values for the clinical setting were tabulated for assessing association given a patient's headache history. In this paper, we correct a proposed method for establishing association between menstruation and migraine by statistical methods. We conclude that the proposed standard of 3-cycle observations prior to setting an MRM diagnosis should be extended with at least one perimenstrual window to obtain sufficient information for statistical processing. © 2014 American Headache Society.
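The mid-p correction mentioned here halves the probability of the observed table when accumulating the tail of Fisher's exact test. A minimal one-sided sketch via the hypergeometric distribution; the 2 × 2 diary counts are invented (rows: perimenstrual versus other days; columns: migraine versus no migraine):

```python
from scipy.stats import hypergeom

#                 migraine   no migraine
# menstrual day      a=9         b=21
# other day          c=15        d=135
a, b, c, d = 9, 21, 15, 135
N, K, n = a + b + c + d, a + c, a + b   # total days, migraine days, menstrual days

rv = hypergeom(N, K, n)
# One-sided mid-p: P(X > a) + 0.5 * P(X = a) under the no-association null
mid_p = rv.sf(a) + 0.5 * rv.pmf(a)
print(f"mid-p = {mid_p:.4f}")
```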
Software testing and global industry future paradigms
Casey, Valentine; Richardson, Ita
2009-01-01
Today software development has truly become a globally sourced commodity. This trend has been facilitated by the availability of highly skilled software professionals in low cost locations in Eastern Europe, Latin America and the Far East. Organisations
Quantum Statistical Testing of a Quantum Random Number Generator
Energy Technology Data Exchange (ETDEWEB)
Humble, Travis S [ORNL
2014-01-01
The unobservable elements in a quantum technology, e.g., the quantum state, complicate system verification against promised behavior. Using model-based systems engineering, we present methods for verifying the operation of a prototypical quantum random number generator. We begin with the algorithmic design of the QRNG followed by the synthesis of its physical design requirements. We next discuss how quantum statistical testing can be used to verify device behavior as well as detect device bias. We conclude by highlighting how system design and verification methods must influence efforts to certify future quantum technologies.
Evaluation of the Wishart test statistics for polarimetric SAR data
DEFF Research Database (Denmark)
Skriver, Henning; Nielsen, Allan Aasbjerg; Conradsen, Knut
2003-01-01
A test statistic for equality of two covariance matrices following the complex Wishart distribution has previously been used in new algorithms for change detection, edge detection and segmentation in polarimetric SAR images. Previously, the results for change detection and edge detection have been quantitatively evaluated. This paper deals with the evaluation of segmentation. A segmentation performance measure originally developed for single-channel SAR images has been extended to polarimetric SAR images, and used to evaluate segmentation for a merge-using-moment algorithm for polarimetric SAR data.
Development of modelling algorithm of technological systems by statistical tests
Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.
2018-03-01
The paper tackles the problem of the economic assessment of design efficiency for various technological systems at the operation stage. The modelling algorithm for a technological system, built using statistical tests and taking account of the reliability index, allows the level of technical excellence of the machinery to be estimated and the efficiency of design reliability to be assessed against its performance. The economic feasibility of its application is then determined on the basis of the service quality of the technological system, with further forecasting of the volumes and range of spare-parts supply.
Reliability assessment for safety critical systems by statistical random testing
International Nuclear Information System (INIS)
Mills, S.E.
1995-11-01
In this report we present an overview of reliability assessment for software and focus on some basic aspects of assessing reliability for safety critical systems by statistical random testing. We also discuss possible deviations from some essential assumptions on which the general methodology is based. These deviations appear quite likely in practical applications. We present and discuss possible remedies and adjustments and then undertake applying this methodology to a portion of the SDS1 software. We also indicate shortcomings of the methodology and possible avenues to follow to address these problems. (author). 128 refs., 11 tabs., 31 figs
Reliability assessment for safety critical systems by statistical random testing
Energy Technology Data Exchange (ETDEWEB)
Mills, S E [Carleton Univ., Ottawa, ON (Canada). Statistical Consulting Centre
1995-11-01
In this report we present an overview of reliability assessment for software and focus on some basic aspects of assessing reliability for safety critical systems by statistical random testing. We also discuss possible deviations from some essential assumptions on which the general methodology is based. These deviations appear quite likely in practical applications. We present and discuss possible remedies and adjustments and then undertake applying this methodology to a portion of the SDS1 software. We also indicate shortcomings of the methodology and possible avenues to follow to address these problems. (author). 128 refs., 11 tabs., 31 figs.
Statistical characteristics of mechanical heart valve cavitation in accelerated testing.
Wu, Changfu; Hwang, Ned H C; Lin, Yu-Kweng M
2004-07-01
Cavitation damage has been observed on mechanical heart valves (MHVs) undergoing accelerated testing. Cavitation itself can be modeled as a stochastic process, as it varies from beat to beat of the testing machine. This in-vitro study was undertaken to investigate the statistical characteristics of MHV cavitation. A 25-mm St. Jude Medical bileaflet MHV (SJM 25) was tested in an accelerated tester at various pulse rates, ranging from 300 to 1,000 bpm, with stepwise increments of 100 bpm. A miniature pressure transducer was placed near a leaflet tip on the inflow side of the valve to monitor regional transient pressure fluctuations at instants of valve closure. The pressure trace associated with each beat was passed through a 70 kHz high-pass digital filter to extract the high-frequency oscillation (HFO) components resulting from the collapse of cavitation bubbles. Three intensity-related measures were calculated for each HFO burst: its time span; its local root-mean-square (LRMS) value; and the area enveloped by the absolute value of the HFO pressure trace and the time axis, referred to as cavitation impulse. These were treated as stochastic processes, of which the first-order probability density functions (PDFs) were estimated for each test rate. Both the LRMS value and cavitation impulse were log-normally distributed, and the time span was normally distributed. These distribution laws were consistent at different test rates. The present investigation was directed at understanding MHV cavitation as a stochastic process. The results provide a basis for further establishing the statistical relationship between cavitation intensity and time-evolving cavitation damage on MHV surfaces. These data are required to assess and compare the performance of MHVs of different designs.
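As a rough illustration of the distribution fitting described in this abstract, the sketch below fits a log-normal law to a synthetic stand-in for the per-beat LRMS values and a normal law to the time spans, then checks each fit with a Kolmogorov-Smirnov test. All numbers are invented for illustration and scipy is assumed available.

```python
# Sketch: fitting per-beat cavitation-burst measures to the distributions
# reported in the abstract (log-normal for LRMS, normal for time span).
# The data here are synthetic stand-ins, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-beat measures for one test rate (e.g., 600 bpm).
lrms = rng.lognormal(mean=1.0, sigma=0.4, size=500)       # local RMS, kPa
time_span = rng.normal(loc=350.0, scale=60.0, size=500)   # microseconds

# Fit candidate first-order PDFs by maximum likelihood.
shape, loc, scale = stats.lognorm.fit(lrms, floc=0.0)
mu, sd = stats.norm.fit(time_span)

# Kolmogorov-Smirnov goodness of fit for each fitted law.
ks_lrms = stats.kstest(lrms, "lognorm", args=(shape, loc, scale))
ks_span = stats.kstest(time_span, "norm", args=(mu, sd))
print(f"LRMS ~ log-normal: KS p = {ks_lrms.pvalue:.3f}")
print(f"time span ~ normal: KS p = {ks_span.pvalue:.3f}")
```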
Statistical tests for power-law cross-correlated processes
Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene
2011-12-01
For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlation analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically verified the Cauchy inequality -1 ≤ ρDCCA(T,n) ≤ 1. Here we derive -1 ≤ ρDCCA(T,n) ≤ 1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine, and for nonoverlapping windows we derive, that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
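A minimal sketch of the detrended cross-correlation coefficient ρDCCA(T,n) follows, using overlapping windows and linear detrending of the integrated profiles. The exact variant used by the authors may differ, and the two test series are simulated.

```python
# Sketch of the detrended cross-correlation coefficient rho_DCCA(T, n):
# integrate the demeaned series, detrend each overlapping window with a
# linear fit, and form the ratio of detrended covariance to the product
# of detrended standard deviations. A simplified stand-in implementation.
import numpy as np

def rho_dcca(x, y, n):
    """Detrended cross-correlation coefficient for window size n."""
    X = np.cumsum(x - np.mean(x))          # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(n + 1)
    f2xy, f2xx, f2yy = [], [], []
    for start in range(len(X) - n):        # overlapping windows
        xs, ys = X[start:start + n + 1], Y[start:start + n + 1]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # detrended residuals
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f2xy.append(np.mean(rx * ry))
        f2xx.append(np.mean(rx * rx))
        f2yy.append(np.mean(ry * ry))
    return np.mean(f2xy) / np.sqrt(np.mean(f2xx) * np.mean(f2yy))

rng = np.random.default_rng(1)
z = rng.standard_normal(2000)
x = z + rng.standard_normal(2000)          # two series sharing a component
y = z + rng.standard_normal(2000)
print(f"rho_DCCA = {rho_dcca(x, y, n=20):.3f}")
```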
Komatsu, Hikaru; Rappleye, Jeremy
2017-01-01
Several recent, highly influential comparative studies have made strong statistical claims that improvements on global learning assessments such as PISA will lead to higher GDP growth rates. These claims have provided the primary source of legitimation for policy reforms championed by leading international organisations, most notably the World…
Global health business: the production and performativity of statistics in Sierra Leone and Germany.
Erikson, Susan L
2012-01-01
The global push for health statistics and electronic digital health information systems is about more than tracking health incidence and prevalence. It is also experienced on the ground as a means to develop and maintain particular norms of health business, knowledge, and decision- and profit-making that are not innocent. Statistics make possible the audit and accountability logics that undergird the management of health at a distance and that are increasingly necessary to the business of health. Health statistics are inextricable from their social milieus, yet as business artifacts they operate as if they were freely formed, objectively originated, and accurate. This article explicates health statistics as cultural forms and shows how they have been produced and performed in two very different countries: Sierra Leone and Germany. In both familiar and surprising ways, this article shows how statistics and their pursuit organize and discipline human behavior, constitute subject positions, and reify existing relations of power.
Directory of Open Access Journals (Sweden)
Ki Su Kim
2005-02-01
This article examines the relationship between globalization and national education reforms, especially those of educational systems. Instead of exploring the much-debated issues of how globalization affects national educational systems and what kinds of systemic education reform nations undertake in response, it focuses on what such an approach often leaves out, viz., the internal conditions of a nation that facilitate or hamper reform efforts. Taking South Korea as an example, it explores that country's unique national context, which restricts and even inhibits education reforms. Especially noted here is the established "statist" political economy in education. In the paper's analysis, although South Korea's statist political economy has made a substantial contribution to economic and educational development, it is now considered increasingly unviable as globalization progresses. Nevertheless, the internal conditions resulting from the previous statist policies set limits on policy makers' efforts to alter the existing educational system. The analysis suggests that a fuller assessment of globalization's impact upon national educational systems or their reforms requires a perspective broad enough to encompass not only the concepts and/or theories of globalization and nation states but also the power relations and ideological setup of individual nations.
Why the null matters: statistical tests, random walks and evolution.
Sheets, H D; Mitchell, C E
2001-01-01
A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test', in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer-generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise-immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests shows that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
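The excursion-based logic above lends itself to a simple Monte Carlo version: compare the maximum excursion of an observed series to the distribution of maxima produced by random walks with matched step variance. The sketch below is such a stand-in, not the authors' code, and its trend-plus-noise "observation" is synthetic.

```python
# Sketch of an excursion-based test against a random-walk null, in the
# spirit of the 'scaled maximum' tests discussed above. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)

def max_excursion(series):
    return np.max(np.abs(series - series[0]))

# "Observed" series: directional trend plus noise (linear trend model).
steps = 0.05 + rng.normal(0.0, 0.1, size=200)
observed = np.cumsum(steps)

# Null distribution: random walks with the same step standard deviation.
sd = np.std(np.diff(observed))
null_max = np.array([
    max_excursion(np.cumsum(rng.normal(0.0, sd, size=observed.size)))
    for _ in range(10_000)
])

p_value = np.mean(null_max >= max_excursion(observed))
print(f"P(random walk produces this much excursion) = {p_value:.4f}")
```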
Transfer of drug dissolution testing by statistical approaches: Case study
AL-Kamarany, Mohammed Amood; EL Karbane, Miloud; Ridouan, Khadija; Alanazi, Fars K.; Hubert, Philippe; Cherrah, Yahia; Bouklouze, Abdelaziz
2011-01-01
The analytical transfer is a complete process that consists in transferring an analytical procedure from a sending laboratory to a receiving laboratory, after having experimentally demonstrated that the receiving laboratory also masters the procedure, in order to avoid problems in the future. Method transfer is now commonplace during the life cycle of an analytical method in the pharmaceutical industry. No official guideline exists for a transfer methodology in pharmaceutical analysis, and the regulatory wording on transfer is more ambiguous than for validation. Therefore, in this study, gauge repeatability and reproducibility (R&R) studies associated with other appropriate multivariate statistics were successfully applied to the transfer of the dissolution test of diclofenac sodium, as a case study, from a sending laboratory A (accredited laboratory) to a receiving laboratory B. The HPLC method for the determination of the percent release of diclofenac sodium in solid pharmaceutical forms (one the innovator product and the other a generic) was validated using the accuracy profile (total error) approach in the sending laboratory A. The results showed that the receiving laboratory B masters the dissolution test process, using the same HPLC analytical procedure developed in laboratory A. In conclusion, if the sender uses the total error approach to validate its analytical method, the dissolution test can be successfully transferred without the receiving laboratory B having to repeat the analytical method validation, and the state of the pharmaceutical analysis method should be maintained to ensure the same reliable results in the receiving laboratory. PMID:24109204
Comparison of Statistical Methods for Detector Testing Programs
Energy Technology Data Exchange (ETDEWEB)
Rennie, John Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Abhold, Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-10-14
A typical goal for any detector testing program is to ascertain not only the performance of the detector systems under test, but also the confidence that systems accepted using that testing program’s acceptance criteria will exceed a minimum acceptable performance (which is usually expressed as the minimum acceptable success probability, p). A similar problem often arises in statistics, where we would like to ascertain the fraction, p, of a population of items that possess a property that may take one of two possible values. Typically, the problem is approached by drawing a fixed sample of size n, with the number of items out of n that possess the desired property, x, being termed successes. The sample mean gives an estimate of the population mean p ≈ x/n, although usually it is desirable to accompany such an estimate with a statement concerning the range within which p may fall and the confidence associated with that range. Procedures for establishing such ranges and confidence limits are described in detail by Clopper, Brown, and Agresti for two-sided symmetric confidence intervals.
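As a concrete instance of the interval construction credited above to Clopper (and discussed by Brown and Agresti), the sketch below computes a two-sided Clopper-Pearson interval from x successes in n trials using the standard beta-quantile form; the example counts are hypothetical.

```python
# Sketch: a two-sided Clopper-Pearson confidence interval for the
# acceptance probability p from x successes in n trials, the setting
# described in the abstract.
from scipy import stats

def clopper_pearson(x, n, alpha=0.05):
    lower = stats.beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = stats.beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lower, upper

x, n = 48, 50          # e.g., 48 of 50 detectors meet the criterion
lo, hi = clopper_pearson(x, n)
print(f"p ≈ {x / n:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```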
Statistical distributions of optimal global alignment scores of random protein sequences
Directory of Open Access Journals (Sweden)
Tang Jiaowei
2005-10-01
Background: The inference of homology from statistically significant sequence similarity is a central issue in sequence alignments. So far the statistical distribution function underlying the optimal global alignments has not been completely determined. Results: In this study, random and real but unrelated sequences prepared in six different ways were selected as reference datasets to obtain their respective statistical distributions of global alignment scores. All alignments were carried out with the Needleman-Wunsch algorithm and optimal scores were fitted to the Gumbel, normal and gamma distributions respectively. The three-parameter gamma distribution performs the best as the theoretical distribution function of global alignment scores, as it agrees perfectly well with the distribution of alignment scores. The normal distribution also agrees well with the score distribution frequencies when the shape parameter of the gamma distribution is sufficiently large, for this is the scenario when the normal distribution can be viewed as an approximation of the gamma distribution. Conclusion: We have shown that the optimal global alignment scores of random protein sequences fit the three-parameter gamma distribution function. This would be useful for the inference of homology between sequences whose relationship is unknown, through the evaluation of gamma distribution significance between sequences.
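The comparison described in this abstract can be reproduced in outline as follows: fit Gumbel, normal, and three-parameter gamma laws to a score sample by maximum likelihood and compare goodness of fit. Here the "alignment scores" are simulated rather than computed from Needleman-Wunsch.

```python
# Sketch: fitting Gumbel, normal, and three-parameter gamma laws to a
# sample of optimal global alignment scores and comparing the fits by
# a KS statistic. Scores are synthetic stand-ins, not real alignments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
scores = stats.gamma.rvs(a=20.0, loc=50.0, scale=2.0, size=2000,
                         random_state=rng)   # pretend alignment scores

for name, dist in [("gumbel", stats.gumbel_r),
                   ("normal", stats.norm),
                   ("gamma", stats.gamma)]:
    params = dist.fit(scores)
    ks = stats.kstest(scores, dist.name, args=params)
    print(f"{name:>7}: KS D = {ks.statistic:.4f}, p = {ks.pvalue:.3f}")
```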
MODIS/Aqua Clear Radiance Statistics Indexed to Global Grid 5-Min L2 Swath 10km V006
National Aeronautics and Space Administration — The MODIS/Aqua Clear Radiance Statistics Indexed to Global Grid 5-Min L2 Swath 10km (MYDCSR_G) provides a variety of statistical measures that characterize observed...
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check whether our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. Finally, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
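In the same spirit, the sketch below fits a simplified two-component mixture to z-type test statistics by EM (a fixed N(0,1) null plus a Gaussian alternative, which is an assumption and not the authors' model) and rejects hypotheses while controlling a Bayesian FDR defined as the average posterior null probability among the rejected.

```python
# Sketch: two-component mixture on test statistics fitted by EM, with a
# Bayesian FDR rule for rejections. A simplified stand-in for the
# modeling approach described in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
z = np.concatenate([rng.normal(0, 1, 9000),      # true nulls
                    rng.normal(3, 1, 1000)])     # differentially expressed

pi0, mu1, sd1 = 0.9, 2.0, 1.0                    # initial guesses
for _ in range(200):                             # EM iterations
    f0 = pi0 * stats.norm.pdf(z, 0, 1)
    f1 = (1 - pi0) * stats.norm.pdf(z, mu1, sd1)
    w = f1 / (f0 + f1)                           # P(alternative | z)
    pi0 = 1 - w.mean()
    mu1 = np.sum(w * z) / w.sum()
    sd1 = np.sqrt(np.sum(w * (z - mu1) ** 2) / w.sum())

post_null = 1 - w
order = np.argsort(post_null)
bfdr = np.cumsum(post_null[order]) / np.arange(1, z.size + 1)
n_reject = np.sum(bfdr <= 0.05)                  # largest set with BFDR <= 5%
print(f"pi0 ≈ {pi0:.3f}, rejected {n_reject} hypotheses")
```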
A statistical design for testing apomictic diversification through linkage analysis.
Zeng, Yanru; Hou, Wei; Song, Shuang; Feng, Sisi; Shen, Lin; Xia, Guohua; Wu, Rongling
2014-03-01
The capacity of apomixis to generate maternal clones through seed reproduction has made it a useful characteristic for the fixation of heterosis in plant breeding. It has been observed that apomixis displays pronounced intra- and interspecific diversification, but the genetic mechanisms underlying this diversification remain elusive, obstructing the exploitation of this phenomenon in practical breeding programs. By capitalizing on molecular information in mapping populations, we describe and assess a statistical design that deploys linkage analysis to estimate and test the pattern and extent of apomictic differences at various levels from genotypes to species. The design is based on two reciprocal crosses between two individuals each chosen from a hermaphrodite or monoecious species. A multinomial distribution likelihood is constructed by combining marker information from the two crosses. The EM algorithm is implemented to estimate the rate of apomixis and test its difference between the two plant populations or species used as parents. The design is validated by computer simulation. A real data analysis of two reciprocal crosses between hickory (Carya cathayensis) and pecan (C. illinoensis) demonstrates the utilization and usefulness of the design in practice. The design provides a tool to address fundamental and applied questions related to the evolution and breeding of apomixis.
To test photon statistics by atomic beam deflection
International Nuclear Information System (INIS)
Wang Yuzhu; Chen Yudan; Huang Weigang; Liu Liang
1985-02-01
There exists a simple relation between the photon statistics in resonance fluorescence and the statistics of the momentum transferred to an atom by a plane travelling wave [Cook, R.J., Opt. Commun., 35, 347(1980)]. Using an atomic beam deflection by light pressure, we have observed sub-Poissonian statistics in resonance fluorescence of two-level atoms. (author)
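Sub-Poissonian statistics of the kind reported above are conventionally quantified by the Mandel Q parameter, Q = (Var(n) − ⟨n⟩)/⟨n⟩, which is zero for Poissonian counts and negative for sub-Poissonian ones. A minimal sketch with simulated photocounts:

```python
# Sketch: the Mandel Q parameter as the usual measure of sub-Poissonian
# counting statistics. Counts are simulated, not measured.
import numpy as np

rng = np.random.default_rng(5)

def mandel_q(counts):
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean() - 1.0

poissonian = rng.poisson(lam=10.0, size=5000)
sub_poissonian = rng.binomial(n=20, p=0.5, size=5000)  # Var < mean

print(f"Q (Poissonian)     = {mandel_q(poissonian):+.3f}")      # ~ 0
print(f"Q (sub-Poissonian) = {mandel_q(sub_poissonian):+.3f}")  # < 0
```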
Development and testing of improved statistical wind power forecasting methods.
Energy Technology Data Exchange (ETDEWEB)
Mendes, J.; Bessa, R.J.; Keko, H.; Sumaili, J.; Miranda, V.; Ferreira, C.; Gama, J.; Botterud, A.; Zhou, Z.; Wang, J. (Decision and Information Sciences); (INESC Porto)
2011-12-06
Wind power forecasting (WPF) provides important inputs to power system operators and electricity market participants. It is therefore not surprising that WPF has attracted increasing interest within the electric power industry. In this report, we document our research on improving statistical WPF algorithms for point, uncertainty, and ramp forecasting. Below, we provide a brief introduction to the research presented in the following chapters. For a detailed overview of the state-of-the-art in wind power forecasting, we refer to [1]. Our related work on the application of WPF in operational decisions is documented in [2]. Point forecasts of wind power are highly dependent on the training criteria used in the statistical algorithms that are used to convert weather forecasts and observational data to a power forecast. In Chapter 2, we explore the application of information theoretic learning (ITL) as opposed to the classical minimum square error (MSE) criterion for point forecasting. In contrast to the MSE criterion, ITL criteria do not assume a Gaussian distribution of the forecasting errors. We investigate to what extent ITL criteria yield better results. In addition, we analyze time-adaptive training algorithms and how they enable WPF algorithms to cope with non-stationary data and, thus, to adapt to new situations without requiring additional offline training of the model. We test the new point forecasting algorithms on two wind farms located in the U.S. Midwest. Although there have been advancements in deterministic WPF, a single-valued forecast cannot provide information on the dispersion of observations around the predicted value. We argue that it is essential to generate, together with (or as an alternative to) point forecasts, a representation of the wind power uncertainty. Wind power uncertainty representation can take the form of probabilistic forecasts (e.g., probability density function, quantiles), risk indices (e.g., prediction risk index) or scenarios
A statistical test for the habitable zone concept
Checlair, J.; Abbot, D. S.
2017-12-01
Traditional habitable zone theory assumes that the silicate-weathering feedback regulates the atmospheric CO2 of planets within the habitable zone to maintain surface temperatures that allow for liquid water. There is some non-definitive evidence that this feedback has worked in Earth history, but it is untested in an exoplanet context. A critical prediction of the silicate-weathering feedback is that, on average, within the habitable zone planets that receive a higher stellar flux should have a lower CO2 in order to maintain liquid water at their surface. We can test this prediction directly by using a statistical approach involving low-precision CO2 measurements on many planets with future instruments such as JWST, LUVOIR, or HabEx. The purpose of this work is to carefully outline the requirements for such a test. First, we use a radiative-transfer model to compute the amount of CO2 necessary to maintain surface liquid water on planets for different values of insolation and planetary parameters. We run a large ensemble of Earth-like planets with different masses, atmospheric masses, inert atmospheric composition, cloud composition and level, and other greenhouse gases. Second, we post-process this data to determine the precision with which future instruments such as JWST, LUVOIR, and HabEx could measure the CO2. We then combine the variation due to planetary parameters and observational error to determine the number of planet measurements that would be needed to effectively marginalize over uncertainties and resolve the predicted trend in CO2 vs. stellar flux. The results of this work may influence the usage of JWST and will enhance mission planning for LUVOIR and HabEx.
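The sample-size question posed above can be prototyped with a small power simulation: draw planets with an assumed CO2-vs-flux trend, add planetary scatter and measurement error, and count how often a significant negative slope is recovered. Every number below is an illustrative assumption, not a value from the study.

```python
# Sketch: how many planets are needed to resolve a CO2-vs-flux trend?
# Simulate planets, add scatter and instrument noise, and count the
# fraction of trials in which a negative slope is detected at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def detection_rate(n_planets, slope=-1.0, scatter=0.8, meas_err=0.5,
                   n_trials=2000):
    detected = 0
    for _ in range(n_trials):
        flux = rng.uniform(0.8, 1.2, n_planets)          # stellar flux (S/S0)
        log_co2 = slope * flux + rng.normal(0, scatter, n_planets)
        log_co2 += rng.normal(0, meas_err, n_planets)    # instrument noise
        res = stats.linregress(flux, log_co2)
        if res.slope < 0 and res.pvalue < 0.05:
            detected += 1
    return detected / n_trials

for n in (10, 30, 100):
    print(f"n = {n:3d} planets: trend detected in {detection_rate(n):.0%} of trials")
```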
TESTING CONVERGENCE FOR GLOBAL ACCRETION DISKS
Energy Technology Data Exchange (ETDEWEB)
Hawley, John F.; Richers, Sherwood A.; Guan Xiaoyue [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904-4325 (United States); Krolik, Julian H., E-mail: jh8h@virginia.edu, E-mail: xg3z@virginia.edu, E-mail: jhk@pha.jhu.edu [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States)
2013-08-01
Global disk simulations provide a powerful tool for investigating accretion and the underlying magnetohydrodynamic turbulence driven by the magneto-rotational instability (MRI). Using them to accurately predict quantities such as stress, accretion rate, and surface brightness profile requires that purely numerical effects, arising from both resolution and algorithm, be understood and controlled. We use the flux-conservative Athena code to conduct a series of experiments on disks having a variety of magnetic topologies to determine what constitutes adequate resolution. We develop and apply several resolution metrics: ⟨Q_z⟩ and ⟨Q_φ⟩, the ratio of the characteristic MRI wavelength to the grid zone size; α_mag, the ratio of the Maxwell stress to the magnetic pressure; and ⟨B_r²⟩/⟨B_φ²⟩, the ratio of radial to toroidal magnetic field energy. For the initial conditions considered here, adequate resolution is characterized by ⟨Q_z⟩ ≥ 15, ⟨Q_φ⟩ ≥ 20, α_mag ≈ 0.45, and ⟨B_r²⟩/⟨B_φ²⟩ ≈ 0.2. These values are associated with ≥35 zones per scale height H, a result consistent with shearing box simulations. The numerical algorithm is also important. Use of the Harten-Lax-van Leer-Einfeldt flux solver or second-order interpolation can significantly degrade the effective resolution compared to the Harten-Lax-van Leer discontinuities flux solver and third-order interpolation. Resolution at this standard can be achieved only with large numbers of grid zones, arranged in a fashion that matches the symmetries of the problem and the scientific goals of the simulation. Without it, however, quantitative measures important to predictions of observables are subject to large systematic errors.
Hayen, Andrew; Macaskill, Petra; Irwig, Les; Bossuyt, Patrick
2010-01-01
To explain which measures of accuracy and which statistical methods should be used in studies to assess the value of a new binary test as a replacement test, an add-on test, or a triage test. Selection and explanation of statistical methods, illustrated with examples. Statistical methods for
Statistical Modelling of Global Tectonic Activity and some Physical Consequences of its Results
Directory of Open Access Journals (Sweden)
Konstantin Statnikov
2015-02-01
Based on the analysis of a global earthquake data bank for the last thirty years, a global tectonic activity indicator was proposed, comprising a weekly, globally averaged mean earthquake magnitude value. It was shown that 84% of the indicator variability is a harmonic oscillation with a fundamental period of 37.2 years, twice the maximum period in the tidal oscillation spectrum (18.6 years). From this observation, a conclusion was drawn that parametric resonance (PR) exists between global tectonic activity and low-frequency tides. The conclusion was also confirmed by the existence of a statistically significant PR response at the second-lowest tidal frequency, i.e. 182.6 days. It was shown that the global earthquake flow, with a determination factor of 93%, is a sum of two Gaussian streams, nearly equally intense, with mean values of 23 and 83 events per week and standard deviations of 9 and 30 events per week, respectively. The ratios of the Earth periphery to the mean time interval between earthquakes in the first and second flow modes described above match, by order of magnitude, the sound velocity in a fluid (~1500 m/s) and in an elastic medium (~5500 m/s), respectively.
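The two-mode decomposition of the weekly earthquake flow can be illustrated with an off-the-shelf Gaussian mixture fit; the counts below are simulated from the parameters quoted in the abstract rather than taken from the earthquake data bank.

```python
# Sketch: decomposing weekly earthquake counts into two Gaussian modes
# with scikit-learn, using the abstract's reported mixture parameters
# to simulate illustrative data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
counts = np.concatenate([rng.normal(23, 9, 800),
                         rng.normal(83, 30, 800)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(counts)
for w, m, v in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight {w:.2f}: mean {m:5.1f}, sd {np.sqrt(v):4.1f} events/week")
```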
Tree-Based Global Model Tests for Polytomous Rasch Models
Komboz, Basil; Strobl, Carolin; Zeileis, Achim
2018-01-01
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
Decision Support Systems: Applications in Statistics and Hypothesis Testing.
Olsen, Christopher R.; Bozeman, William C.
1988-01-01
Discussion of the selection of appropriate statistical procedures by educators highlights a study conducted to investigate the effectiveness of decision aids in facilitating the use of appropriate statistics. Experimental groups and a control group using a printed flow chart, a computer-based decision aid, and a standard text are described. (11…
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
Policies of Global English Tests: Test-Takers' Perspectives on the IELTS Retake Policy
Hamid, M. Obaidul
2016-01-01
Globalized English proficiency tests such as the International English Language Testing System (IELTS) are increasingly playing the role of gatekeepers in a globalizing world. Although the use of the IELTS as a "policy tool" for making decisions in the areas of study, work and migration impacts on test-takers' lives and life chances, not…
International Nuclear Information System (INIS)
2005-01-01
For the years 2004 and 2005 the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics published in the Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown on the back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes, precautionary stock fees and oil pollution fees.
Statistical analysis of global surface temperature and sea level using cointegration methods
DEFF Research Database (Denmark)
Schmidt, Torben; Johansen, Søren; Thejll, Peter
2012-01-01
Global sea levels are rising, which is widely understood as a consequence of thermal expansion and melting of glaciers and land-based ice caps. Due to the lack of representation of ice-sheet dynamics in present-day physically-based climate models, which are unable to simulate observed sea level trends, semi-empirical models have been applied as an alternative for projecting future sea levels. There are, however, potential pitfalls in this due to the trending nature of the time series. We apply a statistical method called cointegration analysis, capable of handling such peculiarities, to observed global sea level and land-ocean surface air temperature. We find a relationship between sea level and temperature and find that temperature causally depends on the sea level, which can be understood as a consequence of the large heat capacity of the ocean. We further find that the warming episode in the 1940s…
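A reduced version of such an analysis is the two-step Engle-Granger cointegration test, available in statsmodels. The sketch below applies it to simulated series sharing a stochastic trend and is only a stand-in for the full cointegration analysis used by the authors.

```python
# Sketch: Engle-Granger cointegration test between simulated sea level
# and temperature series that share a common I(1) stochastic trend.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(8)
trend = np.cumsum(rng.normal(0, 1, 150))             # common I(1) component
temperature = trend + rng.normal(0, 0.5, 150)
sea_level = 0.6 * trend + rng.normal(0, 0.5, 150)

t_stat, p_value, _ = coint(sea_level, temperature)
print(f"Engle-Granger t = {t_stat:.2f}, p = {p_value:.3f}")
```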
Statistical analysis of global surface air temperature and sea level using cointegration methods
DEFF Research Database (Denmark)
Schmith, Torben; Johansen, Søren; Thejll, Peter
Global sea levels are rising, which is widely understood as a consequence of thermal expansion and melting of glaciers and land-based ice caps. Due to physically-based models being unable to simulate observed sea level trends, semi-empirical models have been applied as an alternative for projecting future sea levels. There are, however, potential pitfalls in this due to the trending nature of the time series. We apply a statistical method called cointegration analysis, capable of handling such peculiarities, to observed global sea level and surface air temperature. We find a relationship between sea level and temperature and find that temperature causally depends on the sea level, which can be understood as a consequence of the large heat capacity of the ocean. We further find that the warming episode in the 1940s is exceptional in the sense that sea level and warming deviate from the expected…
International Nuclear Information System (INIS)
2001-01-01
For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products.
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products.
International Nuclear Information System (INIS)
1999-01-01
For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products.
Ten Years of Cloud Properties from MODIS: Global Statistics and Use in Climate Model Evaluation
Platnick, Steven E.
2011-01-01
The NASA Moderate Resolution Imaging Spectroradiometer (MODIS), launched onboard the Terra and Aqua spacecraft, began Earth observations on February 24, 2000 and June 24, 2002, respectively. Among the algorithms developed and applied to this sensor, a suite of cloud products includes cloud masking/detection, cloud-top properties (temperature, pressure), and optical properties (optical thickness, effective particle radius, water path, and thermodynamic phase). All cloud algorithms underwent numerous changes and enhancements for the latest Collection 5 production version; this process continues with the current Collection 6 development. We will show example MODIS Collection 5 cloud climatologies derived from global spatial and temporal aggregations provided in the archived gridded Level-3 MODIS atmosphere team product (product names MOD08 and MYD08 for MODIS Terra and Aqua, respectively). Data sets in this Level-3 product include scalar statistics as well as 1- and 2-D histograms of many cloud properties, allowing for higher-order information and correlation studies. In addition to these statistics, we will show trends and statistical significance in annual and seasonal means for a variety of the MODIS cloud properties, as well as the time required for detection given assumed trends. To assist in climate model evaluation, we have developed a MODIS cloud simulator with an accompanying netCDF file containing subsetted monthly Level-3 statistical data sets that correspond to the simulator output. Correlations of cloud properties with ENSO offer the potential to evaluate model cloud sensitivity; initial results will be discussed.
International Nuclear Information System (INIS)
2003-01-01
For the year 2002, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees on energy products.
International Nuclear Information System (INIS)
2004-01-01
For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees.
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products.
Collignon, Marine; Hammer, Øyvind; Fallahi, Mohammad J.; Lupi, Matteo; Schmid, Daniel W.; Alwi, Husein; Hadi, Soffian; Mazzini, Adriano
2017-04-01
On 29 May 2006, gas, water and mud breccia started to erupt at several localities along the Watukosek fault system in the Sidoarjo Regency in East Java, Indonesia. The most prominent eruption site, named Lusi, is still active, and the emitted material now covers a surface of nearly 7 km2, resulting in the displacement of 60,000 people to date. Due to its social and economic impacts, as well as its spectacular dimensions, the Lusi eruption still attracts the attention of international media and scientists. In the framework of the Lusi Lab project (ERC grant n° 308126), many efforts were made to develop quasi-constant monitoring of the site and the regional areas. Several studies attempted to predict the flow rate evolution or ground deformation, either overestimating or underestimating the longevity of the eruption. Models have failed because Lusi is not a mud volcano but a sedimentary-hosted hydrothermal system that became apparent after the M6.3 Yogyakarta earthquake, and because such models usually assume that the flow will decrease, pacing the overpressure reduction during the deflation of the chamber. These models typically consider a closed system with a unique chamber that is not being recharged. Overall the flow rate has decreased over the past ten years, although it has fluctuated substantially, with monthly periods of higher mud breccia discharge. Monitoring of the eruption has revealed that numerous anomalous events are temporally linked to punctual events such as earthquakes or volcanic eruptions. Nevertheless, the quantification of these events has never been investigated in detail. In this study, we present a compilation of anomalous events observed at the Lusi site during the last 10 years. Using Monte Carlo simulations, we then statistically compare the displacement, recorded at different seismic stations around Lusi, with the regional and global earthquake catalogues to test the probability that an earthquake…
Statistical Analysis of Geo-electric Imaging and Geotechnical Test ...
Indian Academy of Sciences (India)
On the other hand, cost-effective geoelectric imaging methods provide 2-D/3-D … SPSS (Statistical Package for the Social Sciences) has been used to carry out linear …
Testing for changes using permutations of U-statistics
Czech Academy of Sciences Publication Activity Database
Horvath, L.; Hušková, Marie
2005-01-01
Vol. 2005, No. 128 (2005), pp. 351-371. ISSN 0378-3758. R&D Projects: GA ČR GA201/00/0769. Institutional research plan: CEZ:AV0Z10750506. Keywords: U-statistics; permutations; change-point; weighted approximation; Brownian bridge. Subject RIV: BD - Theory of Information. Impact factor: 0.481, year: 2005
Statistical Maps of Ground Magnetic Disturbance Derived from Global Geospace Models
Rigler, E. J.; Wiltberger, M. J.; Love, J. J.
2017-12-01
Electric currents in space are the principal driver of magnetic variations measured at Earth's surface. These in turn induce geoelectric fields that present a natural hazard for technological systems like high-voltage power distribution networks. Modern global geospace models can reasonably simulate large-scale geomagnetic response to solar wind variations, but they are less successful at deterministic predictions of intense localized geomagnetic activity that most impacts technological systems on the ground. Still, recent studies have shown that these models can accurately reproduce the spatial statistical distributions of geomagnetic activity, suggesting that their physics are largely correct. Since the magnetosphere is a largely externally driven system, most model-measurement discrepancies probably arise from uncertain boundary conditions. So, with realistic distributions of solar wind parameters to establish its boundary conditions, we use the Lyon-Fedder-Mobarry (LFM) geospace model to build a synthetic multivariate statistical model of gridded ground magnetic disturbance. From this, we analyze the spatial modes of geomagnetic response, regress on available measurements to fill in unsampled locations on the grid, and estimate the global probability distribution of extreme magnetic disturbance. The latter offers a prototype geomagnetic "hazard map", similar to those used to characterize better-known geophysical hazards like earthquakes and floods.
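The regression-onto-modes step described above can be sketched as follows: compute spatial modes (EOFs) of a synthetic simulation ensemble via SVD, then least-squares fit a few "station" samples of a new field to the leading modes to estimate the unsampled grid points. The ensemble and station layout are invented for illustration, not taken from LFM output.

```python
# Sketch: EOFs from a synthetic ensemble of disturbance maps, then a
# least-squares fill-in of unsampled grid points from sparse stations.
import numpy as np

rng = np.random.default_rng(9)
n_runs, n_grid, n_modes, n_stations = 500, 200, 5, 12

# Synthetic ensemble standing in for simulated ground-disturbance maps.
modes_true = rng.standard_normal((n_modes, n_grid))
ensemble = rng.standard_normal((n_runs, n_modes)) @ modes_true
ensemble += 0.1 * rng.standard_normal((n_runs, n_grid))

mean = ensemble.mean(axis=0)
_, _, vt = np.linalg.svd(ensemble - mean, full_matrices=False)
eofs = vt[:n_modes]                                  # leading spatial modes

# Observe one new "event" only at a few station locations.
truth = rng.standard_normal(n_modes) @ modes_true
stations = rng.choice(n_grid, size=n_stations, replace=False)
obs = truth[stations]

# Least-squares amplitudes of the modes, then reconstruct everywhere.
amps, *_ = np.linalg.lstsq(eofs[:, stations].T, obs - mean[stations], rcond=None)
estimate = mean + amps @ eofs
print(f"correlation with truth: {np.corrcoef(estimate, truth)[0, 1]:.3f}")
```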
Optimal allocation of testing resources for statistical simulations
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
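A sketch of the sampling step is below: given n observed data points, plausible population covariances are drawn from a Wishart distribution and population means from a multivariate t, both available in scipy.stats. The exact parameterization (degrees of freedom and scalings) is an assumption for illustration, not necessarily the authors' choice.

```python
# Sketch: generating realizations of the population mean and covariance
# of input variables from limited data, via Wishart and multivariate-t
# draws centered on the sample statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
data = rng.multivariate_normal([10.0, 5.0], [[4.0, 1.2], [1.2, 2.0]], size=25)
n, d = data.shape
xbar, s = data.mean(axis=0), np.cov(data, rowvar=False)

# Realizations of the population covariance (Wishart scaled about S)...
cov_draws = stats.wishart.rvs(df=n - 1, scale=s / (n - 1), size=1000,
                              random_state=rng)
# ...and of the population mean (multivariate t about the sample mean).
mean_draws = stats.multivariate_t.rvs(loc=xbar, shape=s / n, df=n - d,
                                      size=1000, random_state=rng)

print("sd of sampled means:", mean_draws.std(axis=0).round(3))
print("mean sampled covariance:\n", cov_draws.mean(axis=0).round(2))
```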
Decoding β-decay systematics: A global statistical model for β⁻ half-lives
International Nuclear Information System (INIS)
Costiris, N. J.; Mavrommatis, E.; Gernoth, K. A.; Clark, J. W.
2009-01-01
Statistical modeling of nuclear data provides a novel approach to nuclear systematics complementary to established theoretical and phenomenological approaches based on quantum theory. Continuing previous studies in which global statistical modeling is pursued within the general framework of machine learning theory, we implement advances in training algorithms designed to improve generalization, in application to the problem of reproducing and predicting the half-lives of nuclear ground states that decay 100% by the β⁻ mode. More specifically, fully connected, multilayer feed-forward artificial neural network models are developed using the Levenberg-Marquardt optimization algorithm together with Bayesian regularization and cross-validation. The predictive performance of models emerging from extensive computer experiments is compared with that of traditional microscopic and phenomenological models as well as with the performance of other learning systems, including earlier neural network models as well as the support vector machines recently applied to the same problem. In discussing the results, emphasis is placed on predictions for nuclei that are far from the stability line, and especially those involved in r-process nucleosynthesis. It is found that the new statistical models can match or even surpass the predictive performance of conventional models for β-decay systematics and accordingly should provide a valuable additional tool for exploring the expanding nuclear landscape.
Coding and classification in drug statistics – From national to global application
Directory of Open Access Journals (Sweden)
Marit Rønning
2009-11-01
SUMMARY: The Anatomical Therapeutic Chemical (ATC) classification system and the defined daily dose (DDD) were developed in Norway in the early seventies. The creation of the ATC/DDD methodology was an important basis for presenting drug utilisation statistics in a sensible way. Norway was in 1977 also the first country to publish national drug utilisation statistics from wholesalers on an annual basis. The combination of these activities in Norway in the seventies made us a pioneer country in the area of drug utilisation research. Over the years, the use of the ATC/DDD methodology has gradually increased in countries outside Norway. Since 1996, the methodology has been recommended by WHO for use in international drug utilisation studies. The WHO Collaborating Centre for Drug Statistics Methodology in Oslo handles the maintenance and development of the ATC/DDD system. The Centre is now responsible for the global co-ordination. After nearly 30 years of experience with ATC/DDD, the methodology has demonstrated its suitability in drug use research. The main challenge in the coming years is to educate the users worldwide in how to use the methodology properly.
Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.
Breunig, Nancy A.
Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…
Statistical Analysis for Test Papers with Software SPSS
Institute of Scientific and Technical Information of China (English)
张燕君
2012-01-01
Test paper evaluation is an important part of test management, and its results are a significant basis for the scientific summation of teaching and learning. Taking an English test paper from high school students' monthly examination as the object, this paper focuses on the interpretation of SPSS output concerning item-level and whole-paper quantitative analysis. By analyzing and evaluating the papers, teachers can obtain feedback to check students' progress and adjust their teaching process.
Rolinski, S.; Müller, C.; Lotze-Campen, H.; Bondeau, A.
2010-12-01
…information on boundary conditions such as water and light availability or temperature sensitivity. Based on the given limitation factors, a number of sensitive parameters are chosen, e.g. for the phenological development, biomass allocation, and different management regimes. These are introduced into a sensitivity analysis and Bayesian parameter evaluation using the R package FME (Soetaert & Petzoldt, Journal of Statistical Software, 2010). Given the extremely different climatic conditions at the FluxNet grass sites, the premises for the global sensitivity analysis are very promising.
Statistics of sampling for microbiological testing of foodborne pathogens
Despite the many recent advances in protocols for testing for pathogens in foods, a number of challenges still exist. For example, the microbiological safety of food cannot be completely ensured by testing because microorganisms are not evenly distributed throughout the food. Therefore, since it i...
Statistical tests for equal predictive ability across multiple forecasting methods
DEFF Research Database (Denmark)
Borup, Daniel; Thyrsgaard, Martin
We develop a multivariate generalization of the Giacomini-White tests for equal conditional predictive ability. The tests are applicable to a mixture of nested and non-nested models, incorporate estimation uncertainty explicitly, and allow for misspecification of the forecasting model as well as ...
Statistical analysis of nematode counts from interlaboratory proficiency tests
Berg, van den W.; Hartsema, O.; Nijs, Den J.M.F.
2014-01-01
A series of proficiency tests on potato cyst nematode (PCN; n=29) and free-living stages of Meloidogyne and Pratylenchus (n=23) were investigated to determine the accuracy and precision of the nematode counts and to gain insights into possible trends and potential improvements. In each test, each
J_Ic-testing of A-533 B - statistical evaluation of some different testing techniques
International Nuclear Information System (INIS)
Nilsson, F.
1978-01-01
The purpose of the present study was to compare statistically some different methods for the evaluation of the fracture toughness of the nuclear reactor material A-533 B. Since linear elastic fracture mechanics is not applicable to this material at the temperature of interest (275 °C), the so-called J_Ic testing method was employed. Two main difficulties are inherent in this type of testing. The first one is to determine the quantity J as a function of the deflection of the three-point bend specimens used. Three different techniques were used, the first two based on the experimentally observed input of energy to the specimen and the third employing finite element calculations. The second main problem is to determine the point when crack growth begins. For this, two methods were used, a direct electrical method and the indirect R-curve method. A total of forty specimens were tested at two laboratories. No statistically significantly different results were obtained from the respective laboratories. The three methods of calculating J yielded somewhat different results, although the discrepancy was small. Also the two methods of determining the growth initiation point yielded consistent results. The R-curve method, however, exhibited a larger uncertainty as measured by the standard deviation. The resulting J_Ic value also agreed well with earlier presented results. The relative standard deviation was of the order of 25%, which is quite small for this type of experiment. (author)
Statistics applied to the testing of cladding tubes
International Nuclear Information System (INIS)
Perdijon, J.
1987-01-01
Cladding tubes, whether steel or zircaloy, are generally given a 100% inspection through ultrasonic non-destructive testing. This inspection may be beneficially completed with an eddy current test, as this is not sensitive to the same defects as those typically traced by ultrasonic testing. Unfortunately, the two methods (as with other non-destructive tests) exhibit poor precision; this means that a flaw whose size is close to the rejection limit may be either accepted or rejected. Currently, the rejection level, i.e. the measurement above which a tube is rejected, is generally determined by measuring a calibration tube at regular time intervals, and the signal of a given tube is compared to that of the most recent calibration. This measurement is thus subject to variations attributable to an actual shift in adjustments as well as to poor precision. For this reason, monitoring instrument adjustments using the so-called control chart method is proposed.
Statistics of software vulnerability detection in certification testing
Barabanov, A. V.; Markov, A. S.; Tsirlov, V. L.
2018-05-01
The paper discusses practical aspects of introduction of the methods to detect software vulnerability in the day-to-day activities of the accredited testing laboratory. It presents the approval results of the vulnerability detection methods as part of the study of the open source software and the software that is a test object of the certification tests under information security requirements, including software for communication networks. Results of the study showing the allocation of identified vulnerabilities by types of attacks, country of origin, programming languages used in the development, methods for detecting vulnerability, etc. are given. The experience of foreign information security certification systems related to the detection of certified software vulnerabilities is analyzed. The main conclusion based on the study is the need to implement practices for developing secure software in the development life cycle processes. The conclusions and recommendations for the testing laboratories on the implementation of the vulnerability analysis methods are laid down.
Statistical modeling of dental unit water bacterial test kit performance.
Cohen, Mark E; Harte, Jennifer A; Stone, Mark E; O'Connor, Karen H; Coen, Michael L; Cullum, Malford E
2007-01-01
While it is important to monitor dental water quality, it is unclear whether in-office test kits provide bacterial counts comparable to the gold standard method (R2A). Studies were conducted on specimens with known bacterial concentrations, and from dental units, to evaluate test kit accuracy across a range of bacterial types and loads. Colony forming units (CFU) were counted for samples from each source, using R2A and two types of test kits, and conformity to Poisson distribution expectations was evaluated. Poisson regression was used to test for effects of source and device, and to estimate rate ratios for kits relative to R2A. For all devices, distributions were Poisson for low CFU/mL when only beige-pigmented bacteria were considered. For higher counts, R2A remained Poisson, but kits exhibited over-dispersion. Both kits undercounted relative to R2A, but the degree of undercounting was reasonably stable. Kits did not grow pink-pigmented bacteria from dental-unit water identified as Methylobacterium rhodesianum. Only one of the test kits provided results with adequate reliability at higher bacterial concentrations. Undercount bias could be estimated for this device and used to adjust test kit results. Insensitivity to methylobacteria spp. is problematic.
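A minimal sketch of the kind of Poisson regression and over-dispersion check described above, using statsmodels; the counts and the device coding are hypothetical, and the paper's actual model also accounted for sample source and bacterial type.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical paired CFU counts; device: 0 = R2A plate, 1 = test kit.
    counts = np.array([52, 48, 61, 40, 35, 44, 58, 39])
    device = np.array([0, 1, 0, 1, 0, 1, 0, 1])

    X = sm.add_constant(device)
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(f"rate ratio (kit / R2A): {np.exp(fit.params[1]):.2f}")
    # Crude over-dispersion check: Pearson chi-square / df should be near 1
    print(f"dispersion: {fit.pearson_chi2 / fit.df_resid:.2f}")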
STATISTICAL EVALUATION OF EXAMINATION TESTS IN MATHEMATICS FOR ECONOMISTS
Directory of Open Access Journals (Sweden)
KASPŘÍKOVÁ, Nikola
2012-12-01
Examination results are rather important for many students with regard to their future professional development. Exam results should therefore be carefully inspected by teachers to help improve the design and evaluation of tests and the education process in general. An analysis of examination papers in mathematics taken by students of the basic mathematics course at the University of Economics, Prague is reported. The first issue addressed is the identification of significant dependencies between performance in the particular problem areas covered by the test, and between particular items and the total test score or ability level as a latent trait. The assessment is first performed with the Spearman correlation coefficient; the items are then evaluated within the Item Response Theory framework. The second analytical task is a search for groups of students with similar test performance. Cluster analysis is performed using the partitioning around medoids method, with final model selection made according to average silhouette width. The results of the clustering, which may also be considered in connection with setting the minimum passing score, show that two groups of students can be identified, the more clearly defined of which may be called the "well-performers".
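As a rough illustration of the first analytical step, a corrected item-total Spearman correlation can be computed as below; the item scores are simulated, not the Prague examination data.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    items = rng.integers(0, 6, size=(120, 5))   # simulated item scores
    total = items.sum(axis=1)

    for j in range(items.shape[1]):
        rest = total - items[:, j]              # total of the other items
        rho, p = spearmanr(items[:, j], rest)
        print(f"item {j + 1}: rho = {rho:.2f}, p = {p:.3f}")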
The Global Statistical Response of the Outer Radiation Belt During Geomagnetic Storms
Murphy, K. R.; Watt, C. E. J.; Mann, I. R.; Jonathan Rae, I.; Sibeck, D. G.; Boyd, A. J.; Forsyth, C. F.; Turner, D. L.; Claudepierre, S. G.; Baker, D. N.; Spence, H. E.; Reeves, G. D.; Blake, J. B.; Fennell, J.
2018-05-01
Using the total radiation belt electron content calculated from Van Allen Probe phase space density, the time-dependent and global response of the outer radiation belt during storms is statistically studied. Using phase space density reduces the impacts of adiabatic changes in the main phase, allowing a separation of adiabatic and nonadiabatic effects and revealing a clear modality and repeatable sequence of events in storm time radiation belt electron dynamics. This sequence exhibits an important first adiabatic invariant (μ)-dependent behavior in the seed (150 MeV/G), relativistic (1,000 MeV/G), and ultrarelativistic (4,000 MeV/G) populations. The outer radiation belt statistically shows an initial phase dominated by loss followed by a second phase of rapid acceleration, while the seed population shows little loss and immediate enhancement. The time sequence of the transition to the acceleration is also strongly μ dependent and occurs at low μ first, appearing to be repeatable from storm to storm.
Castruccio, Stefano; Genton, Marc G.
2015-01-01
One of the main challenges when working with modern climate model ensembles is the increasingly larger size of the data produced, and the consequent difficulty in storing large amounts of spatio-temporally resolved information. Many compression algorithms can be used to mitigate this problem, but since they are designed to compress generic scientific data sets, they do not account for the nature of climate model output and they compress only individual simulations. In this work, we propose a different, statistics-based approach that explicitly accounts for the space-time dependence of the data for annual global three-dimensional temperature fields in an initial condition ensemble. The set of estimated parameters is small (compared to the data size) and can be regarded as a summary of the essential structure of the ensemble output; therefore, it can be used to instantaneously reproduce the temperature fields in an ensemble with a substantial saving in storage and time. The statistical model exploits the gridded geometry of the data and parallelization across processors. It is therefore computationally convenient and makes it possible to fit a non-trivial model to a data set of one billion data points with a covariance matrix comprising 10^18 entries.
After statistics reform : Should we still teach significance testing?
A. Hak (Tony)
2014-01-01
In the longer term null hypothesis significance testing (NHST) will disappear because p-values are not informative and not replicable. Should we continue to teach in the future the procedures of then-abolished routines (i.e., NHST)? Three arguments are discussed for not teaching NHST in ...
Statistical Tests for Frequency Distribution of Mean Gravity Anomalies
African Journals Online (AJOL)
The hypothesis that a very large number of 1° x 1° mean gravity anomalies are normally distributed has been rejected at the 5% significance level based on the χ² and the unit normal deviate tests. However, the 50 equal-area mean anomalies derived from the 1° x 1° data have been found to be normally distributed at the same ...
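A χ² goodness-of-fit test of normality along these lines can be sketched as follows; the equal-probability binning and the synthetic anomalies are illustrative choices, not the authors' exact procedure.

    import numpy as np
    from scipy import stats

    def chi2_normality(x, n_bins=10):
        # Chi-square goodness-of-fit test against a normal distribution
        # fitted by sample mean and standard deviation.
        x = np.asarray(x, dtype=float)
        mu, sigma = x.mean(), x.std(ddof=1)
        # Interior edges of equal-probability bins under the fitted normal
        edges = stats.norm.ppf(np.linspace(0, 1, n_bins + 1)[1:-1], mu, sigma)
        observed = np.bincount(np.searchsorted(edges, x), minlength=n_bins)
        expected = len(x) / n_bins
        chi2 = ((observed - expected) ** 2 / expected).sum()
        dof = n_bins - 1 - 2            # two estimated parameters
        return chi2, stats.chi2.sf(chi2, dof)

    anomalies = np.random.default_rng(0).normal(0.0, 20.0, 500)  # synthetic, mGal
    print(chi2_normality(anomalies))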
Testing the performance of a blind burst statistic
Energy Technology Data Exchange (ETDEWEB)
Vicere, A [Istituto di Fisica, Universita di Urbino (Italy); Calamai, G [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Campagna, E [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Conforto, G [Istituto di Fisica, Universita di Urbino (Italy); Cuoco, E [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Dominici, P [Istituto di Fisica, Universita di Urbino (Italy); Fiori, I [Istituto di Fisica, Universita di Urbino (Italy); Guidi, G M [Istituto di Fisica, Universita di Urbino (Italy); Losurdo, G [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Martelli, F [Istituto di Fisica, Universita di Urbino (Italy); Mazzoni, M [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Perniola, B [Istituto di Fisica, Universita di Urbino (Italy); Stanga, R [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Vetrano, F [Istituto di Fisica, Universita di Urbino (Italy)
2003-09-07
In this work, we estimate the performance of a method for the detection of burst events in the data produced by interferometric gravitational wave detectors. We compute the receiver operating characteristics in the specific case of a simulated noise having the spectral density expected for Virgo, using test signals taken from a library of possible waveforms emitted during the collapse of the core of type II supernovae.
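Receiver operating characteristics of a detection statistic can be estimated empirically as sketched below; the Gaussian noise and signal distributions are placeholders for the Virgo-like simulated data used in the paper.

    import numpy as np

    def roc(noise_stats, signal_stats, thresholds):
        # Empirical false-alarm and detection probabilities for a
        # detection statistic on noise-only and signal-plus-noise data.
        pfa = np.array([(noise_stats > t).mean() for t in thresholds])
        pdet = np.array([(signal_stats > t).mean() for t in thresholds])
        return pfa, pdet

    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, 1.0, 10000)   # statistic on pure noise
    burst = rng.normal(2.0, 1.0, 10000)   # statistic when a burst is present
    pfa, pdet = roc(noise, burst, np.linspace(-3, 6, 50))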
A weighted generalized score statistic for comparison of predictive values of diagnostic tests.
Kosinski, Andrzej S
2013-03-15
Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.
Mathur, Sunil; Sadana, Ajit
2015-12-01
We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.
A statistical light use efficiency model explains 85% variations in global GPP
Jiang, C.; Ryu, Y.
2016-12-01
Photosynthesis is a complicated process whose modeling requires different levels of assumptions, simplification, and parameterization. Among models, the light use efficiency (LUE) model is highly compact but powerful in monitoring gross primary production (GPP) from satellite data. Most LUE models adopt a multiplicative form of maximum LUE, absorbed photosynthetically active radiation (APAR), and temperature and water stress functions. However, maximum LUE is a fitted parameter with large spatial variation, yet most studies use only a few biome-dependent constants. In addition, the stress functions are empirical and arbitrary in the literature. Moreover, the meteorological data used are usually coarse-resolution, e.g., 1°, which can cause large errors. Finally, sunlit and shade canopy have completely different light responses, but this is seldom considered. Targeting these issues, we derived a new statistical LUE model from a process-based and satellite-driven model, the Breathing Earth System Simulator (BESS). We have already derived a set of global radiation (5-km resolution) and carbon and water flux (1-km resolution) products for 2000 to 2015 from BESS. By exploring these datasets, we found strong correlation between APAR and GPP for sunlit (R2=0.84) and shade (R2=0.96) canopy, respectively. A simple model, driven only by sunlit and shade APAR, was thus built on these linear relationships. The slopes of the linear functions act as the effective LUE of the global ecosystem, with values of 0.0232 and 0.0128 umol C/umol quanta for sunlit and shade canopy, respectively. When compared with MPI-BGC GPP products, a global proxy of FLUXNET data, BESS-LUE achieved an overall accuracy of R2 = 0.85, whereas the original BESS achieved R2 = 0.83 and the MODIS GPP product R2 = 0.76. We investigated spatiotemporal variations of the effective LUE. Spatially, the ratio of sunlit to shade values ranged from 0.1 (wet tropics) to 4.5 (dry inland). By using maps of sunlit and shade effective LUE the accuracy of ...
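The two-slope model described here amounts to a linear least-squares fit of GPP on sunlit and shade APAR. A minimal sketch with synthetic fluxes (the slopes mimic the reported values, but the data are simulated for illustration):

    import numpy as np

    rng = np.random.default_rng(42)
    apar_sun = rng.uniform(0, 500, 1000)     # sunlit APAR, umol m-2 s-1
    apar_shade = rng.uniform(0, 200, 1000)   # shade APAR
    gpp = 0.023 * apar_sun + 0.013 * apar_shade + rng.normal(0, 0.5, 1000)

    A = np.column_stack([apar_sun, apar_shade])
    lue, *_ = np.linalg.lstsq(A, gpp, rcond=None)   # no intercept term
    print(f"effective LUE sunlit: {lue[0]:.4f}, shade: {lue[1]:.4f}")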
Normality Tests for Statistical Analysis: A Guide for Non-Statisticians
Ghasemi, Asghar; Zahediasl, Saleh
2012-01-01
Statistical errors are common in scientific literature and about 50% of the published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to overview checking for normality in statistical analysis using SPSS. PMID:23843808
Amalia, Junita; Purhadi, Otok, Bambang Widjanarko
2017-11-01
The Poisson distribution is a discrete distribution with count data as the random variable and a single parameter that defines both mean and variance. Poisson regression therefore assumes that mean and variance are the same (equidispersion). Nonetheless, some count data do not satisfy this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, in turn, to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution. In the presence of over-dispersion, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling over-dispersed paired count data. The BPIGR model produces a global model for all locations. On the other hand, each location has different geographic, social, cultural and economic conditions, so that Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimates of the GWBPIGR model are obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing is carried out with the Maximum Likelihood Ratio Test (MLRT) method.
Purves, L.; Strang, R. F.; Dube, M. P.; Alea, P.; Ferragut, N.; Hershfeld, D.
1983-01-01
The software and procedures of a system of programs used to generate a report of the statistical correlation between NASTRAN modal analysis results and physical tests results from modal surveys are described. Topics discussed include: a mathematical description of statistical correlation, a user's guide for generating a statistical correlation report, a programmer's guide describing the organization and functions of individual programs leading to a statistical correlation report, and a set of examples including complete listings of programs, and input and output data.
Statistical characterization of global Sea Surface Salinity for SMOS level 3 and 4 products
Gourrion, J.; Aretxabaleta, A. L.; Ballabrera, J.; Mourre, B.
2009-04-01
The Soil Moisture and Ocean Salinity (SMOS) mission of the European Space Agency will soon provide sea surface salinity (SSS) estimates to the scientific community. Because of the numerous geophysical contamination sources and the instrument complexity, the salinity products will have a low signal-to-noise ratio at level 2 (individual estimates) that is expected to increase up to the mission requirement (0.1 psu) at level 3 (global maps with a regular distribution) after spatio-temporal accumulation of the observations. Geostatistical methods such as Optimal Interpolation are being implemented at the level 3/4 production centers to operate this noise reduction step. The methodologies require auxiliary information about SSS statistics that, under a Gaussian assumption, consist of the mean field and the covariance of the departures from it. The present study is a contribution to the definition of the best estimates of the mean field and covariances to be used in the near-future SMOS level 3 and 4 products. We use complementary information from sparse in-situ observations and imperfect outputs from state-of-the-art model simulations. Various estimates of the mean field are compared. One alternative is the use of an SSS climatology such as the one provided by the World Ocean Atlas 2005. A historical SSS dataset from the World Ocean Database 2005 is reanalyzed and combined with the recent global observations obtained by the Array for Real-Time Geostrophic Oceanography (ARGO). Regional tendencies in the long-term temporal evolution of the near-surface ocean salinity are evident, suggesting that the use of an SSS climatology to describe the current mean field may introduce biases of magnitude similar to the precision goal. Consequently, a recent SSS dataset may be preferred to define the mean field needed for SMOS level 3 and 4 production. The in-situ observation network allows a global mapping of the low frequency component of the variability, i.e. decadal, interannual and seasonal ...
"What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"
Ozturk, Elif
2012-01-01
The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
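One such "what if" exercise, here sketched in Python rather than Excel or "R": hold the effect size fixed and watch the p-value of a t-test fall as the sample size grows.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    for n in (10, 50, 200, 1000):
        a = rng.normal(0.0, 1.0, n)       # control group
        b = rng.normal(0.3, 1.0, n)       # fixed shift of 0.3 SD
        t, p = stats.ttest_ind(a, b)
        print(f"n = {n:4d}: p = {p:.4f}")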
EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.
Tong, Xiaoxiao; Bentler, Peter M
2013-01-01
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ(2) test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
Brain SPECT analysis using statistical parametric mapping in patients with transient global amnesia
Energy Technology Data Exchange (ETDEWEB)
Kim, E. N.; Sohn, H. S.; Kim, S. H; Chung, S. K.; Yang, D. W. [College of Medicine, The Catholic Univ. of Korea, Seoul (Korea, Republic of)
2001-07-01
This study investigated alterations in regional cerebral blood flow (rCBF) in patients with transient global amnesia (TGA) using statistical parametric mapping 99 (SPM99). Noninvasive rCBF measurements using 99mTc-ethyl cysteinate dimer (ECD) SPECT were performed on 8 patients with TGA and 17 age-matched controls. The relative rCBF maps in patients with TGA and controls were compared. In patients with TGA, significantly decreased rCBF was found along the left superior temporal region extending to the left parietal region of the brain and the left thalamus. There were areas of increased rCBF in the right temporal region, right frontal region and right thalamus. We could demonstrate decreased perfusion in the left cerebral hemisphere and increased perfusion in the right cerebral hemisphere in patients with TGA using SPM99. The reciprocal change of rCBF between the right and left cerebral hemispheres in patients with TGA might suggest that imbalanced neuronal activity between the bilateral hemispheres plays an important role in the pathogenesis of TGA. For quantitative SPECT analysis in TGA patients, we recommend SPM99 rather than the ROI method because of its definitive advantages.
Statistical model of global uranium resources and long-term availability
International Nuclear Information System (INIS)
Monnet, A.; Gabriel, S.; Percebois, J.
2016-01-01
Most recent studies on the long-term supply of uranium make simplistic assumptions on the available resources and their production costs. Some consider the whole uranium quantities in the Earth's crust and then estimate the production costs based on the ore grade only, disregarding the size of ore bodies and the mining techniques. Other studies consider the resources reported by countries for a given cost category, disregarding undiscovered or unreported quantities. In both cases, the resource estimations are sorted following a cost merit order. In this paper, we describe a methodology based on 'geological environments'. It provides a more detailed resource estimation and it is more flexible regarding cost modelling. The global uranium resource estimation introduced in this paper results from the sum of independent resource estimations from different geological environments. A geological environment is defined by its own geographical boundaries, resource dispersion (average grade and size of ore bodies and their variance), and cost function. With this definition, uranium resources are considered within ore bodies. The deposit breakdown of resources is modelled using a bivariate statistical approach where size and grade are the two random variables. This makes resource estimates possible for individual projects. Adding up all geological environments provides a distribution of all Earth's crust resources in which ore bodies are sorted by size and grade. This subset-based estimation is convenient to model specific cost structures. (authors)
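A toy version of the bivariate size-grade idea can be sketched as follows; the lognormal parameters and the cost function are invented for illustration and are not the paper's calibrated values.

    import numpy as np

    rng = np.random.default_rng(3)
    # Correlated lognormal size (tonnes U) and grade (%) for one
    # hypothetical geological environment.
    mean = [np.log(20000.0), np.log(0.15)]
    cov = [[1.0, 0.3], [0.3, 0.5]]
    size, grade = np.exp(rng.multivariate_normal(mean, cov, 500)).T

    cost = 100.0 / (grade * size ** 0.2)    # toy $/kgU cost model
    order = np.argsort(cost)
    supply = np.cumsum(size[order])         # cumulative tonnage vs. cost
    print(f"tonnage available below the median cost: {supply[249]:.0f} t")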
Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka
2015-01-01
The problem of establishing noninferiority between a new treatment and a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed, where the variance estimate is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones; the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.
Directory of Open Access Journals (Sweden)
J. Sunil Rao
2007-01-01
In gene selection for cancer classification using microarray data, we define an eigenvalue-ratio statistic to measure a gene's contribution to the joint discriminability when this gene is included in a set of genes. Based on this eigenvalue-ratio statistic, we define a novel hypothesis test for gene statistical redundancy and propose two gene selection methods. Simulation studies illustrate the agreement between the statistical redundancy test and the gene selection methods. Real data examples show that the proposed gene selection methods can select a compact gene subset which can not only be used to build high-quality cancer classifiers but also shows biological relevance.
Xu, Kuan-Man
2006-01-01
A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
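A bootstrap test of this kind, using the Euclidean distance between summary histograms, might look like the following sketch; resampling whole cloud objects (not individual footprints) from the pooled set is one plausible reading of the procedure.

    import numpy as np

    def euclid(h1, h2):
        return np.sqrt(((h1 - h2) ** 2).sum())

    def bootstrap_pvalue(objs_a, objs_b, bins, n_boot=2000, seed=0):
        # Distance between summary histograms of two cloud-object size
        # categories, with a null built by resampling whole objects from
        # the pooled set.
        rng = np.random.default_rng(seed)
        def hist(objs):
            return np.histogram(np.concatenate(objs), bins=bins, density=True)[0]
        observed = euclid(hist(objs_a), hist(objs_b))
        pooled = list(objs_a) + list(objs_b)
        n_a, n_b = len(objs_a), len(objs_b)
        hits = 0
        for _ in range(n_boot):
            ia = rng.integers(0, len(pooled), n_a)
            ib = rng.integers(0, len(pooled), n_b)
            d = euclid(hist([pooled[i] for i in ia]),
                       hist([pooled[i] for i in ib]))
            hits += d >= observed
        return hits / n_boot

    rng = np.random.default_rng(1)
    a = [rng.normal(0.0, 1.0, 150) for _ in range(30)]   # one size category
    b = [rng.normal(0.2, 1.0, 150) for _ in range(30)]   # another category
    print(bootstrap_pvalue(a, b, bins=np.linspace(-4, 4, 21)))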
Identification of significant features by the Global Mean Rank test.
Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph
2014-01-01
With the introduction of omics-technologies such as transcriptomics and proteomics, numerous methods for the reliable identification of significantly regulated features (genes, proteins, etc.) have been developed. Experimental practice requires these tests to successfully deal with conditions such as small numbers of replicates, missing values, non-normally distributed expression levels, and non-identical distributions of features. With the MeanRank test we aimed at developing a test that performs robustly under these conditions, while favorably scaling with the number of replicates. The test proposed here is a global one-sample location test, which is based on the mean ranks across replicates, and internally estimates and controls the false discovery rate. Furthermore, missing data is accounted for without the need of imputation. In extensive simulations comparing MeanRank to other frequently used methods, we found that it performs well with small and large numbers of replicates, feature dependent variance between replicates, and variable regulation across features on simulation data and a recent two-color microarray spike-in dataset. The tests were then used to identify significant changes in the phosphoproteomes of cancer cells induced by the kinase inhibitors erlotinib and 3-MB-PP1 in two independently published mass spectrometry-based studies. MeanRank outperformed the other global rank-based methods applied in this study. Compared to the popular Significance Analysis of Microarrays and Linear Models for Microarray methods, MeanRank performed similar or better. Furthermore, MeanRank exhibits more consistent behavior regarding the degree of regulation and is robust against the choice of preprocessing methods. MeanRank does not require any imputation of missing values, is easy to understand, and yields results that are easy to interpret. The software implementing the algorithm is freely available for academic and commercial use.
Humane Society International's global campaign to end animal testing.
Seidle, Troy
2013-12-01
The Research & Toxicology Department of Humane Society International (HSI) operates a multifaceted and science-driven global programme aimed at ending the use of animals in toxicity testing and research. The key strategic objectives include: a) ending cosmetics animal testing worldwide, via the multinational Be Cruelty-Free campaign; b) achieving near-term reductions in animal testing requirements through revision of product sector regulations; and c) advancing humane science by exposing failing animal models of human disease and shifting science funding toward human biology-based research and testing tools fit for the 21st century. HSI was instrumental in ensuring the implementation of the March 2013 European sales ban for newly animal-tested cosmetics, in achieving the June 2013 cosmetics animal testing ban in India as well as major cosmetics regulatory policy shifts in China and South Korea, and in securing precedent-setting reductions in in vivo data requirements for pesticides in the EU through the revision of biocides and plant protection product regulations, among others. HSI is currently working to export these life-saving measures to more than a dozen industrial and emerging economies. 2013 FRAME.
Ensuring Positiveness of the Scaled Difference Chi-square Test Statistic.
Satorra, Albert; Bentler, Peter M
2010-06-01
A scaled difference test statistic T̃(d) that can be computed from standard software of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (2001). The statistic T̃(d) is asymptotically equivalent to the scaled difference test statistic T̄(d) introduced in Satorra (2000), which requires more involved computations beyond standard output of SEM software. The test statistic T̃(d) has been widely used in practice, but in some applications it is negative due to negativity of its associated scaling correction. Using the implicit function theorem, this note develops an improved scaling correction leading to a new scaled difference statistic T̄(d) that avoids negative chi-square values.
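The 2001 hand-computable statistic referred to here combines the chi-square values, degrees of freedom and scaling corrections of two nested fits; the sketch below shows that computation and why the correction can go negative when df0*c0 < df1*c1.

    def scaled_difference_chi2(t0, df0, c0, t1, df1, c1):
        # Satorra-Bentler (2001) hand calculation: t0, t1 are the ordinary
        # (unscaled) ML chi-squares of the more and less restricted models,
        # c0, c1 their scaling correction factors. cd may turn out negative,
        # which is the defect the improved correction is designed to remove.
        cd = (df0 * c0 - df1 * c1) / (df0 - df1)
        return (t0 - t1) / cd, df0 - df1

    stat, df = scaled_difference_chi2(120.4, 24, 1.20, 98.7, 20, 1.15)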
Cluster Detection Tests in Spatial Epidemiology: A Global Indicator for Performance Assessment
Guttmann, Aline; Li, Xinran; Feschet, Fabien; Gaudart, Jean; Demongeot, Jacques; Boire, Jean-Yves; Ouchchane, Lemlih
2015-01-01
In cluster detection of disease, the use of local cluster detection tests (CDTs) is common. These methods aim both at locating likely clusters and at testing for their statistical significance. New or improved CDTs are regularly proposed to epidemiologists and must be subjected to performance assessment. Because location accuracy has to be considered, performance assessment goes beyond the raw estimation of type I or II errors. As no consensus exists for performance evaluations, heterogeneous methods are used, and therefore studies are rarely comparable. A global indicator of performance, which assesses both spatial accuracy and usual power, would facilitate the exploration of CDTs' behaviour and help between-studies comparisons. The Tanimoto coefficient (TC) is a well-known measure of similarity that can assess location accuracy but only for one detected cluster. In a simulation study, performance is measured for many tests. From the TC, we here propose two statistics, the averaged TC and the cumulated TC, as indicators able to provide a global overview of CDT performance for both usual power and location accuracy. We demonstrate the properties of these two indicators and the superiority of the cumulated TC for assessing performance. We tested these indicators to conduct a systematic spatial assessment displayed through performance maps. PMID:26086911
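The Tanimoto coefficient itself is one line of code once clusters are represented as sets of spatial units; the averaged and cumulated variants proposed here are then simple aggregations over simulated datasets.

    def tanimoto(detected, true):
        # Similarity of a detected cluster and the true cluster, each a
        # set of spatial unit identifiers; 1 = perfect match, 0 = disjoint.
        detected, true = set(detected), set(true)
        union = detected | true
        return len(detected & true) / len(union) if union else 1.0

    # The averaged TC over simulated datasets is then a plain mean:
    scores = [tanimoto({1, 2, 3}, {2, 3, 4}), tanimoto({2, 3}, {2, 3, 4})]
    print(sum(scores) / len(scores))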
Selecting the most appropriate inferential statistical test for your quantitative research study.
Bettany-Saltikov, Josette; Whittaker, Victoria Jane
2014-06-01
To discuss the issues and processes relating to the selection of the most appropriate statistical test. A review of the basic research concepts together with a number of clinical scenarios is used to illustrate this. Quantitative nursing research generally features the use of empirical data which necessitates the selection of both descriptive and statistical tests. Different types of research questions can be answered by different types of research designs, which in turn need to be matched to a specific statistical test(s). Discursive paper. This paper discusses the issues relating to the selection of the most appropriate statistical test and makes some recommendations as to how these might be dealt with. When conducting empirical quantitative studies, a number of key issues need to be considered. Considerations for selecting the most appropriate statistical tests are discussed and flow charts provided to facilitate this process. When nursing clinicians and researchers conduct quantitative research studies, it is crucial that the most appropriate statistical test is selected to enable valid conclusions to be made. © 2013 John Wiley & Sons Ltd.
A testing procedure for wind turbine generators based on the power grid statistical model
DEFF Research Database (Denmark)
Farajzadehbibalan, Saber; Ramezani, Mohammad Hossein; Nielsen, Peter
2017-01-01
In this study, a comprehensive test procedure is developed to test wind turbine generators with a hardware-in-loop setup. The procedure employs the statistical model of the power grid considering the restrictions of the test facility and system dynamics. Given the model in the latent space...
Common pitfalls in statistical analysis: Understanding the properties of diagnostic tests - Part 1.
Ranganathan, Priya; Aggarwal, Rakesh
2018-01-01
In this article in our series on common pitfalls in statistical analysis, we look at some of the attributes of diagnostic tests (i.e., tests which are used to determine whether an individual does or does not have disease). The next article in this series will focus on further issues related to diagnostic tests.
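The standard attributes discussed in such primers follow directly from the 2x2 table of test result against true disease status, as in this small sketch (the counts are hypothetical):

    def diagnostic_properties(tp, fp, fn, tn):
        # Attributes of a diagnostic test from its 2x2 table against
        # true disease status.
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    print(diagnostic_properties(tp=90, fp=40, fn=10, tn=860))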
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.
Lin, Johnny; Bentler, Peter M
2012-01-01
Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
Directory of Open Access Journals (Sweden)
Shirin Iranfar
2013-12-01
Introduction: Test anxiety is a common phenomenon among students and one of the problems of the educational system. The present study was conducted to investigate test anxiety in the vital statistics course and its association with the academic performance of students at Kermanshah University of Medical Sciences. This descriptive-analytical study included the students of the nursing and midwifery, paramedical and health faculties who had taken the vital statistics course; they were selected through the census method. The Sarason questionnaire was used to assess test anxiety, and data were analyzed by descriptive and inferential statistics. The findings indicated no significant correlation between test anxiety and the vital statistics course score.
A Modified Jonckheere Test Statistic for Ordered Alternatives in Repeated Measures Design
Directory of Open Access Journals (Sweden)
Hatice Tül Kübra AKDUR
2016-09-01
In this article, a new test based on the Jonckheere test [1] is presented for randomized blocks with dependent observations within blocks. A weighted sum of the block statistics is used rather than the unweighted sum proposed by Jonckheere. For Jonckheere-type statistics, the main assumption is independence of observations within blocks; in repeated measures designs this assumption is violated. The weighted Jonckheere-type statistic is therefore applied to dependent observations under different variance-covariance structures, with the ordered alternative hypothesis structured within each block of the design. The proposed statistic is compared to the existing Jonckheere-based test in terms of type I error rates in a Monte Carlo simulation. For strong correlations, the circular bootstrap version of the proposed Jonckheere test provides lower type I error rates.
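For reference, the classical Jonckheere-Terpstra statistic for independent observations, which the article modifies for dependent within-block data, can be written compactly as:

    import numpy as np

    def jonckheere(groups):
        # Classical Jonckheere-Terpstra statistic: across every ordered
        # pair of groups, count pairs consistent with an increasing trend
        # (ties count one half). Assumes independent observations.
        jt = 0.0
        for i in range(len(groups) - 1):
            for j in range(i + 1, len(groups)):
                a = np.asarray(groups[i], dtype=float)[:, None]
                b = np.asarray(groups[j], dtype=float)[None, :]
                jt += (a < b).sum() + 0.5 * (a == b).sum()
        return jt

    print(jonckheere([[2, 3, 4], [3, 5, 6], [5, 7, 9]]))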
The Statistic Test on Influence of Surface Treatment to Fatigue Lifetime with Limited Data
Suhartono, Agus
2009-01-01
Judging the influence of two or more parameters on fatigue strength is sometimes problematic owing to the scattered nature of fatigue data. Statistical tests can help evaluate whether a change in material characteristics resulting from a specific parameter of interest is significant. The statistical tests were applied to fatigue data of AISI 1045 steel specimens. The specimens consisted of as-received specimens and shot-peened specimens with 15 and 16 Almen intensity as ...
Global regionalized seismicity in view of Non-Extensive Statistical Physics
Chochlaki, Kalliopi; Vallianatos, Filippos; Michas, Georgios
2018-03-01
In the present work we study the distribution of the Earth's shallow seismicity in different seismic zones, as it occurred from 1981 to 2011, extracted from the Centroid Moment Tensor (CMT) catalog. Our analysis is based on the subdivision of the Earth's surface into seismic zones that are homogeneous with regard to seismic activity and the orientation of the predominant stress field. For this, we use the Flinn-Engdahl regionalization (FE) (Flinn and Engdahl, 1965), which consists of fifty seismic zones, as modified by Lombardi and Marzocchi (2007). The latter authors grouped the 50 FE zones into larger tectonically homogeneous ones, utilizing the cumulative moment tensor method, resulting in thirty-nine seismic zones. In each of these seismic zones we study the distribution of seismicity in terms of the frequency-magnitude distribution and the inter-event time distribution between successive earthquakes, a task that is essential for hazard assessment and for a better understanding of global and regional geodynamics. In our analysis we use non-extensive statistical physics (NESP), which seems to be one of the most adequate and promising methodological tools for analyzing complex systems such as the Earth's seismicity, introducing the q-exponential formulation as the expression of the probability distribution function that maximizes the Sq entropy as defined by Tsallis (1988). The qE parameter is significantly greater than one for all the seismic regions analyzed, with values ranging from 1.294 to 1.504, indicating that magnitude correlations are particularly strong. Furthermore, the qT parameter shows some temporal correlations, but variations with cut-off magnitude show greater temporal correlations when the smaller-magnitude earthquakes are included. The qT for earthquakes with magnitude greater than 5 takes values from 1.043 to 1.353, and as we increase the cut-off magnitude to 5.5 and 6 the qT value ranges from 1.001 to 1.242 and from 1.001 to 1.181, respectively, presenting ...
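The q-exponential that underlies these qE and qT estimates is easy to evaluate numerically; the sketch below uses an illustrative scale parameter, not values fitted to the CMT catalog.

    import numpy as np

    def q_exp(x, q):
        # Tsallis q-exponential; reduces to exp(x) as q -> 1.
        x = np.asarray(x, dtype=float)
        if np.isclose(q, 1.0):
            return np.exp(x)
        base = 1.0 + (1.0 - q) * x
        out = np.zeros_like(base)
        positive = base > 0
        out[positive] = base[positive] ** (1.0 / (1.0 - q))
        return out

    tau = np.linspace(0.0, 10.0, 101)
    survival = q_exp(-tau / 2.0, q=1.2)   # qT ~ 1.2; the 2.0 scale is illustrative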
Comment on the asymptotics of a distribution-free goodness of fit test statistic.
Browne, Michael W; Shapiro, Alexander
2015-03-01
In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them.
DEFF Research Database (Denmark)
Mischke, Peggy
2013-01-01
for research and policy analysis. An improved understanding of the quality and reliability of Chinese economic and energy data is becoming more important to understanding global energy markets and future greenhouse gas emissions. China's national statistical system to track such changes is however still developing and, in some instances, energy data remain unavailable in the public domain. This working paper discusses China's energy and economic statistics with a view to identifying suitable indicators for developing a simplified regional energy system for China from a variety of publicly available data. As China's national statistical system continues to be debated and criticised in terms of data quality, comparability and reliability, an overview of the milestones, status and main issues of China's energy statistics is given. In a next step, the energy balance format of the International Energy Agency is used ...
International Nuclear Information System (INIS)
Ihara, Hitoshi; Nishimura, Hideo; Ikawa, Koji; Miura, Nobuyuki; Iwanaga, Masayuki; Kusano, Toshitsugu.
1988-03-01
A Near-Real-Time Materials Accountancy (NRTA) system has been developed as an advanced safeguards measure for the PNC Tokai Reprocessing Plant; a minicomputer system for NRTA data processing was designed and constructed. A full-scale field test was carried out as a JASPAS (Japan Support Program for Agency Safeguards) project with the Agency's participation, using the NRTA data processing system. With this field test data, the detection power of statistical tests under real circumstances was investigated for five statistical tests, i.e., a significance test of MUF, the CUMUF test, the average loss test, the MUF residual test and Page's test on MUF residuals. The results show that the CUMUF test, the average loss test, the MUF residual test and Page's test on MUF residuals are useful for detecting a significant loss or diversion. An unmeasured inventory estimation model for the PNC reprocessing plant was also developed in this study. Using this model, the field test data from the C-1 to 85-2 campaigns were re-analyzed. (author)
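Of the five tests, Page's test is the easiest to sketch: a one-sided CUSUM on standardized MUF residuals with reference value k and decision threshold h (the values below are conventional defaults, not those used in the field test).

    import numpy as np

    def page_cusum(residuals, k=0.5, h=4.0):
        # One-sided Page (CUSUM) test on standardized MUF residuals:
        # raise an alarm whenever the drift statistic exceeds h.
        s, alarms = 0.0, []
        for t, x in enumerate(np.asarray(residuals, dtype=float)):
            s = max(0.0, s + x - k)
            if s > h:
                alarms.append(t)
                s = 0.0            # restart after each alarm
        return alarms

    rng = np.random.default_rng(4)
    muf = np.concatenate([rng.normal(0, 1, 30), rng.normal(1.5, 1, 20)])
    print(page_cusum(muf))         # alarms should cluster after period 30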
Comparison of small n statistical tests of differential expression applied to microarrays
Directory of Open Access Journals (Sweden)
Lee Anna Y
2009-02-01
Background: DNA microarrays provide data for genome-wide patterns of expression between observation classes. Microarray studies often have small sample sizes, however, due to cost constraints or specimen availability. This can lead to poor random error estimates and inaccurate statistical tests of differential expression. We compare the performance of the standard t-test, fold change, and four small-n statistical test methods designed to circumvent these problems. We report results of various normalization methods for empirical microarray data and of various random error models for simulated data. Results: Three Empirical Bayes methods (CyberT, BRB, and limma t-statistics) were the most effective statistical tests across simulated and both 2-colour cDNA and Affymetrix experimental data. The CyberT regularized t-statistic in particular was able to maintain expected false positive rates with simulated data showing high variances at low gene intensities, although at the cost of low true positive rates. The Local Pooled Error (LPE) test introduced a bias that lowered false positive rates below theoretically expected values and had lower power relative to the top performers. The standard two-sample t-test and fold change were also found to be sub-optimal for detecting differentially expressed genes. The generalized log transformation was shown to be beneficial in improving results with certain data sets, in particular high-variance cDNA data. Conclusion: Pre-processing of data influences performance, and the proper combination of pre-processing and statistical testing is necessary for obtaining the best results. All three Empirical Bayes methods assessed in our study are good choices for statistical tests for small-n microarray studies for both Affymetrix and cDNA data. Choice of method for a particular study will depend on software and normalization preferences.
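The regularized t-statistic family evaluated here can be caricatured in a few lines; note that Cyber-T estimates the prior variance from intensity neighbours rather than taking a fixed s0 as in this sketch.

    import numpy as np

    def regularized_t(a, b, s0=0.5):
        # Cyber-T-flavoured regularized t: a pseudo-variance s0**2 damps
        # the denominator for low-variance genes. Cyber-T itself derives
        # the prior variance from intensity neighbours; s0 here is fixed.
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        na, nb = len(a), len(b)
        pooled = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
        se = np.sqrt((pooled + s0 ** 2) * (1.0 / na + 1.0 / nb))
        return (a.mean() - b.mean()) / se

    print(regularized_t([1.20, 1.35, 1.28], [0.95, 1.02, 0.99]))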
Shaikh, Masood Ali
2017-09-01
Assessment of research articles in terms of the study designs used, the statistical tests applied and the use of statistical analysis programmes helps determine a country's research activity profile and trends. In this descriptive study, all original articles published by the Journal of Pakistan Medical Association (JPMA) and the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) in the year 2015 were reviewed in terms of study designs used, application of statistical tests, and use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in 2015. The results indicate that the cross-sectional study design, bivariate inferential statistical analysis entailing comparison between two variables/groups, and the statistical software programme SPSS were the most common study design, type of inferential analysis, and statistical analysis software, respectively. These results echo a previously published assessment of these two journals for the year 2014.
Walker, Jennie L.
2018-01-01
As world communication, technology, and trade become increasingly integrated through globalization, multinational corporations seek employees with global leadership skills. However, the demand for these skills currently outweighs the supply. Given the rarity of globally ready leaders, global competency development should be emphasized in business…
A Note on Three Statistical Tests in the Logistic Regression DIF Procedure
Paek, Insu
2012-01-01
Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…
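The LR test among the three is easily reproduced with statsmodels: fit nested logistic models with and without the group term and compare log-likelihoods. All data below are simulated and the variable names are arbitrary.

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(5)
    n = 500
    ability = rng.normal(size=n)                  # matching criterion
    group = rng.integers(0, 2, n)                 # focal vs. reference
    logit = 0.8 * ability + 0.6 * group           # built-in uniform DIF
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    base = sm.Logit(y, sm.add_constant(ability)).fit(disp=0)
    full = sm.Logit(y, sm.add_constant(np.column_stack([ability, group]))).fit(disp=0)
    lr = 2.0 * (full.llf - base.llf)              # LR statistic, 1 df here
    print(f"LR = {lr:.2f}, p = {chi2.sf(lr, 1):.4f}")
    # The Wald test of the same hypothesis is available as full.pvalues[-1].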
Evaluating Two Models of Collaborative Tests in an Online Introductory Statistics Course
Björnsdóttir, Auðbjörg; Garfield, Joan; Everson, Michelle
2015-01-01
This study explored the use of two different types of collaborative tests in an online introductory statistics course. A study was designed and carried out to investigate three research questions: (1) What is the difference in students' learning between using consensus and non-consensus collaborative tests in the online environment?, (2) What is…
P-Value, a true test of statistical significance? a cautionary note ...
African Journals Online (AJOL)
While it was not the intention of the founders of significance testing and hypothesis testing to have the two ideas intertwined as if they were complementary, the inconvenient marriage of the two practices into one coherent, convenient, incontrovertible and misinterpreted practice has dotted our standard statistics textbooks and ...
International Nuclear Information System (INIS)
Steffen, Jason H.; Ford, Eric B.; Rowe, Jason F.; Borucki, William J.; Bryson, Steve; Caldwell, Douglas A.; Jenkins, Jon M.; Koch, David G.; Sanderfer, Dwight T.; Seader, Shawn; Twicken, Joseph D.; Fabrycky, Daniel C.; Holman, Matthew J.; Welsh, William F.; Batalha, Natalie M.; Ciardi, David R.; Kjeldsen, Hans; Prša, Andrej
2012-01-01
We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.
Statistical alignment: computational properties, homology testing and goodness-of-fit
DEFF Research Database (Denmark)
Hein, J; Wiuf, Carsten; Møller, Martin
2000-01-01
The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical alignment algorithms by several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity-based alignment, to get good initial guesses of the evolutionary parameters and to apply an efficient numerical optimisation algorithm for finding the maximum ... analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test ...
Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods
Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan
2016-01-01
The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
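A distributed plan paired with response-surface fitting reduces, in its simplest form, to scattered design points and a least-squares quadratic model, as sketched below; a real application would use a D-optimal design generator rather than random points.

    import numpy as np

    rng = np.random.default_rng(11)
    x = rng.uniform(-1, 1, size=(30, 2))          # two coded test factors
    y = (2.0 + x[:, 0] - 0.5 * x[:, 1]
         + 0.8 * x[:, 0] * x[:, 1] + rng.normal(0, 0.1, 30))

    # Quadratic response surface: intercept, linear, interaction, squares
    X = np.column_stack([np.ones(30), x, x[:, 0] * x[:, 1],
                         x[:, 0] ** 2, x[:, 1] ** 2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta.round(2))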
DEFF Research Database (Denmark)
Schneider, Jesper Wiborg
2012-01-01
In this paper we discuss and question the use of statistical significance tests in relation to university rankings, as recently suggested. We outline the assumptions behind, and interpretations of, statistical significance tests and relate this to examples from the recent SCImago Institutions Rankings ...
Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph
2013-11-07
The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective, and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the ...
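As an illustration of the general construction behind such a summary-statistic test, the sketch below combines per-marker Z-statistics with an estimated marker correlation matrix R into the quadratic form z'R⁻¹z, which is chi-square with m degrees of freedom under the global null. This is a generic version, not necessarily the paper's exact statistic; the Z-scores and R below are hypothetical:

```python
# Generic region test from marker Z-scores and an LD correlation matrix.
import numpy as np
from scipy import stats

z = np.array([1.8, 2.1, 0.4, -0.9])        # univariate marker Z-scores (hypothetical)
R = np.array([[1.0, 0.6, 0.2, 0.0],        # marker correlation matrix (hypothetical)
              [0.6, 1.0, 0.3, 0.1],
              [0.2, 0.3, 1.0, 0.4],
              [0.0, 0.1, 0.4, 1.0]])

q = z @ np.linalg.solve(R, z)              # quadratic form z' R^{-1} z
p = stats.chi2.sf(q, df=z.size)            # chi-square with m = 4 df under H0
print(f"Q = {q:.2f}, p = {p:.4f}")
```

Note how only summary statistics and a correlation estimate are needed, which is exactly the practical advantage the abstract emphasizes.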
Price limits and stock market efficiency: Evidence from rolling bicorrelation test statistic
International Nuclear Information System (INIS)
Lim, Kian-Ping; Brooks, Robert D.
2009-01-01
Using the rolling bicorrelation test statistic, the present paper compares the efficiency of stock markets from China, Korea and Taiwan in selected sub-periods with different price limits regimes. The statistical results do not support the claims that restrictive price limits and price limits per se are jeopardizing market efficiency. However, the evidence does not imply that price limits have no effect on the price discovery process; rather, it suggests that market efficiency is not merely determined by price limits.
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet
2005-01-01
The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented ... showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient ...
A NEW TEST OF THE STATISTICAL NATURE OF THE BRIGHTEST CLUSTER GALAXIES
International Nuclear Information System (INIS)
Lin, Yen-Ting; Ostriker, Jeremiah P.; Miller, Christopher J.
2010-01-01
A novel statistic is proposed to examine the hypothesis that all cluster galaxies are drawn from the same luminosity distribution (LD). In such a 'statistical model' of galaxy LD, the brightest cluster galaxies (BCGs) are simply the statistical extreme of the galaxy population. Using a large sample of nearby clusters, we show that BCGs in high luminosity clusters (e.g., L_tot ≳ 4 × 10^11 h_70^-2 L_sun) are unlikely (probability ≤ 3 × 10^-4) to be drawn from the LD defined by all red cluster galaxies more luminous than M_r = -20. On the other hand, BCGs in less luminous clusters are consistent with being the statistical extreme. Applying our method to the second brightest galaxies, we show that they are consistent with being the statistical extreme, which implies that the BCGs are also distinct from non-BCG luminous, red, cluster galaxies. We point out some issues with the interpretation of the classical tests proposed by Tremaine and Richstone (TR) that are designed to examine the statistical nature of BCGs, investigate the robustness of both our statistical test and those of TR against difficulties in photometry of galaxies of large angular size, and discuss the implication of our findings on surveys that use the luminous red galaxies to measure the baryon acoustic oscillation features in the galaxy power spectrum.
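The "statistical extreme" hypothesis lends itself to a simple Monte Carlo check: draw N galaxy luminosities per cluster from a single LD, take the brightest, and ask how often it is as bright as the observed BCG. The sketch below uses a made-up exponential magnitude distribution as a stand-in for the cluster LD; it illustrates the logic of the test, not the paper's statistic:

```python
# Monte Carlo check of the "BCG as statistical extreme" idea; all numbers
# and the luminosity distribution are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_gal, n_trials = 300, 20_000             # galaxies per cluster, MC trials
m_bcg_obs = -24.0                         # observed BCG absolute magnitude

# Stand-in LD: bright tail approximated by an exponential in magnitude
mags = -20.0 - rng.exponential(scale=0.8, size=(n_trials, n_gal))
m_brightest = mags.min(axis=1)            # brightest draw (most negative) per trial

p = np.mean(m_brightest <= m_bcg_obs)     # chance the extreme is at least this bright
print(f"P(max of LD draws as bright as BCG) = {p:.3f}")
```

A very small p for luminous clusters is what drives the paper's conclusion that their BCGs are not mere statistical extremes.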
Baseline Testing of The EV Global E-Bike
Eichenberg, Dennis J.; Kolacz, John S.; Tavernelli, Paul F.
2001-01-01
The NASA John H. Glenn Research Center initiated baseline testing of the EV Global E-Bike as a way to reduce pollution in urban areas, reduce fossil fuel consumption and reduce operating costs for transportation systems. The work was done under the Hybrid Power Management (HPM) Program, which includes the Hybrid Electric Transit Bus (HETB). The E-Bike is a state-of-the-art, ground-up, hybrid electric bicycle. Unique features of the vehicle's power system include the use of an efficient, 400 W electric hub motor and a 7-speed derailleur system that permits operation as fully electric, fully pedal, or a combination of the two. Other innovative features, such as regenerative braking through ultracapacitor energy storage, are planned. Regenerative braking recovers much of the kinetic energy of the vehicle during deceleration. The E-Bike is an inexpensive approach to advancing the state of the art in hybrid technology in a practical application. The project transfers space technology to terrestrial use via nontraditional partners, and provides power system data valuable for future space applications. A description of the E-Bike, the results of performance testing, and future vehicle development plans are the subject of this report. The report concludes that the E-Bike provides excellent performance, and that the implementation of ultracapacitors in the power system can provide significant performance improvements.
Statistical test data selection for reliability evaluation of process computer software
International Nuclear Information System (INIS)
Volkmann, K.P.; Hoermann, H.; Ehrenberger, W.
1976-01-01
The paper presents a concept for converting knowledge about the characteristics of process states into practicable procedures for the statistical selection of test cases in testing process computer software. Process states are defined as vectors whose components consist of values of input variables lying in discrete positions or within given limits. Two approaches for test data selection, based on knowledge about cases of demand, are outlined referring to a purely probabilistic method and to the mathematics of stratified sampling. (orig.) [de
Evaluating statistical tests on OLAP cubes to compare degree of disease.
Ordonez, Carlos; Chen, Zhibo
2009-09-01
Statistical tests represent an important technique used to formulate and validate hypotheses on a dataset. They are particularly useful in the medical domain, where hypotheses link disease with medical measurements, risk factors, and treatment. In this paper, we propose to compute parametric statistical tests treating patient records as elements in a multidimensional cube. We introduce a technique that combines dimension lattice traversal and statistical tests to discover significant differences in the degree of disease within pairs of patient groups. In order to understand a cause-effect relationship, we focus on patient group pairs differing in one dimension. We introduce several optimizations to prune the search space, to discover significant group pairs, and to summarize results. We present experiments showing important medical findings and evaluating scalability with medical datasets.
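The core operation, comparing the degree of disease between two patient groups that differ in one dimension, reduces to a two-sample test per group pair. A minimal sketch with hypothetical columns and data (not the paper's OLAP traversal machinery):

```python
# One group-pair comparison from the lattice: groups differing only in
# the 'smoker' dimension. Column names and values are hypothetical.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "gender":  ["F", "F", "M", "M", "F", "M", "F", "M"],
    "smoker":  [0, 1, 0, 1, 1, 0, 0, 1],
    "disease": [0.2, 0.7, 0.3, 0.9, 0.8, 0.25, 0.15, 0.85],  # degree of disease
})

a = df.loc[df.smoker == 0, "disease"]
b = df.loc[df.smoker == 1, "disease"]
t, p = stats.ttest_ind(a, b, equal_var=False)   # Welch's two-sample t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```

The paper's contribution is in traversing the dimension lattice and pruning the many such pairs efficiently, not in the individual test itself.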
International Nuclear Information System (INIS)
Sengupta, S.K.; Boyle, J.S.
1993-05-01
Variables describing atmospheric circulation and other climate parameters derived from various GCMs and obtained from observations can be represented on a spatio-temporal grid (lattice) structure. The primary objective of this paper is to explore existing as well as some new statistical methods to analyze such data structures for the purpose of model diagnostics and intercomparison from a statistical perspective. Among the several statistical methods considered here, a new method based on common principal components appears most promising for the purpose of intercomparison of spatio-temporal data structures arising in the task of model/model and model/data intercomparison. A complete strategy for such an intercomparison is outlined. The strategy includes two steps. First, the commonality of spatial structures in two (or more) fields is captured in the common principal vectors. Second, the corresponding principal components obtained as time series are then compared on the basis of similarities in their temporal evolution.
DEFF Research Database (Denmark)
Zhai, Weiwei; Nielsen, Rasmus; Slatkin, Montgomery
2009-01-01
In this report, we investigate the statistical power of several tests of selective neutrality based on patterns of genetic diversity within and between species. The goal is to compare tests based solely on population genetic data with tests using comparative data or a combination of comparative ... and population genetic data. We show that in the presence of repeated selective sweeps on a relatively neutral background, tests based on the d(N)/d(S) ratios in comparative data almost always have more power to detect selection than tests based on population genetic data, even if the overall level of divergence ... selection. The Hudson-Kreitman-Aguadé test is the most powerful test for detecting positive selection among the population genetic tests investigated, whereas the McDonald-Kreitman test typically has more power to detect negative selection. We discuss our findings in the light of the discordant results obtained ...
Operational statistical analysis of the results of computer-based testing of students
Directory of Open Access Journals (Sweden)
Виктор Иванович Нардюжев
2018-12-01
The article is devoted to the statistical analysis of results of computer-based testing for the evaluation of educational achievements of students. The issues are relevant because computer-based testing in Russian universities has become an important method for evaluating educational achievements of students and the quality of the modern educational process. Usage of modern methods and programs for statistical analysis of results of computer-based testing and assessment of the quality of developed tests is a pressing problem for every university teacher. The article shows how the authors solve this problem using their own program "StatInfo". For several years the program has been successfully applied in a credit system of education at such technological stages as loading computer-based testing protocols into a database, formation of queries, and generation of reports, lists, and matrices of answers for statistical analysis of the quality of test items. The methodology, experience and some results of its usage by university teachers are described in the article. Related topics, such as test development, models, algorithms, technologies, and software for large-scale computer-based testing, have been discussed by the authors in their previous publications, which are presented in the reference list.
Rivoirard, Romain; Duplay, Vianney; Oriol, Mathieu; Tinquaut, Fabien; Chauvin, Franck; Magne, Nicolas; Bourmaud, Aurelie
2016-01-01
Quality of reporting for Randomized Clinical Trials (RCTs) in oncology has been analyzed in several systematic reviews, but, in this setting, there is a paucity of data on the definitions of outcomes and the consistency of reporting for statistical tests in RCTs and Observational Studies (OBS). The objective of this review was to describe those two reporting aspects for OBS and RCTs in oncology. From a list of 19 medical journals, three were retained for analysis after a random selection: British Medical Journal (BMJ), Annals of Oncology (AoO) and British Journal of Cancer (BJC). All original articles published between March 2009 and March 2014 were screened. Only studies whose main outcome was accompanied by a corresponding statistical test were included in the analysis. Studies based on censored data were excluded. The primary outcome was to assess the quality of reporting for the description of the primary outcome measure in RCTs and of the variables of interest in OBS. A logistic regression was performed to identify covariates of studies potentially associated with concordance of tests between the Methods and Results sections. 826 studies were included in the review, and 698 were OBS. Variables were described in the Methods section for all OBS studies, and the primary endpoint was clearly detailed in the Methods section for 109 RCTs (85.2%). 295 OBS (42.2%) and 43 RCTs (33.6%) had perfect agreement for the reported statistical test between the Methods and Results sections. In multivariable analysis, the variable "number of included patients in study" was associated with test consistency: the aOR (adjusted Odds Ratio) for the third group compared to the first group was aOR Grp3 = 0.52 [0.31-0.89] (P value = 0.009). Variables in OBS and primary endpoints in RCTs are reported and described with a high frequency. However, consistency of statistical tests between the Methods and Results sections of OBS is not always achieved. Therefore, we encourage authors and peer reviewers to verify the consistency of statistical tests in oncology studies.
Global income-related inequalities in HIV testing.
Larose, Auburn; Moore, Spencer; Harper, Sam; Lynch, John
2011-09-01
Voluntary counseling and testing (VCT) is an important prevention initiative in reducing HIV/AIDS transmission. Despite current global prevention efforts, many low- and middle-income countries continue reporting low VCT levels. Little is known about the association of within- and between-country socioeconomic inequalities and VCT. Based on the 'inverse equity hypothesis,' this study examines the degree to which low socioeconomic groups in developing countries are disadvantaged in VCT. Using recently released data from the 2002 to 2003 World Health Survey (WHS) for 106 705 individuals in 49 countries, this study used multilevel logistic regression to examine the association of individual- and national-level factors with VCT, and whether national economic development moderated the association between individual income and VCT. Individual income was based on country-specific income quintiles. National economic development was based on national gross domestic product per capita (GDP/c). Effect modification was evaluated with the likelihood ratio test (G(2)). Individuals eligible for the VCT question of the WHS were adults between the ages of 18-49 years; women who had given birth in the last 2 years were excluded from this question. VCT was more likely among higher income quintiles and in countries with higher GDP/c. GDP/c moderated the association between individual income and VCT whereby relative income differences in VCT were greater in countries with lower GDP/c (G(2)= 9.21; P= 0.002). Individual socio-demographic characteristics were also associated with the likelihood of a person having VCT. Relative socioeconomic inequalities in VCT coverage appear to decline when higher SES groups reach a certain level of coverage. These findings suggest that changes to international VCT programs may be necessary to moderate the relative VCT differences between high- and low-income individuals in lower GDP/c nations.
Global optimization based on noisy evaluations: An empirical study of two statistical approaches
International Nuclear Information System (INIS)
Vazquez, Emmanuel; Villemonteix, Julien; Sidorkiewicz, Maryan; Walter, Eric
2008-01-01
The optimization of the output of complex computer codes has often to be achieved with a small budget of evaluations. Algorithms dedicated to such problems have been developed and compared, such as the Expected Improvement algorithm (EI) or the Informational Approach to Global Optimization (IAGO). However, the influence of noisy evaluation results on the outcome of these comparisons has often been neglected, despite its frequent appearance in industrial problems. In this paper, empirical convergence rates for EI and IAGO are compared when an additive noise corrupts the result of an evaluation. IAGO appears more efficient than EI and various modifications of EI designed to deal with noisy evaluations. Keywords: global optimization; computer simulations; kriging; Gaussian process; noisy evaluations.
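For reference, the Expected Improvement criterion for minimization has the closed form EI(x) = (f_min - mu) * Phi(u) + s * phi(u), with u = (f_min - mu)/s, where mu and s are the surrogate's predictive mean and standard deviation at x. A short sketch (the inputs would come from a fitted Gaussian process; the values here are hypothetical):

```python
# Closed-form Expected Improvement for minimization.
import numpy as np
from scipy import stats

def expected_improvement(mu, s, f_min):
    """EI given GP predictive mean mu, std s, and best observed value f_min."""
    s = np.maximum(s, 1e-12)              # guard against zero predictive std
    u = (f_min - mu) / s
    return (f_min - mu) * stats.norm.cdf(u) + s * stats.norm.pdf(u)

mu = np.array([0.3, 0.1, 0.5])            # hypothetical GP means at candidates
s = np.array([0.2, 0.05, 0.4])            # hypothetical GP standard deviations
print(expected_improvement(mu, s, f_min=0.25))
```

It is exactly this deterministic-objective assumption behind EI that noisy evaluations undermine, motivating the paper's comparison with IAGO.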
Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.
Kim, Yuneung; Lim, Johan; Park, DoHwan
2015-11-01
In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of the Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and an application to the AIDS study. We compare our method to alternative approaches, such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
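For orientation, the classical Kendall's tau is built from counts of concordant and discordant pairs; the paper's modification replaces these indicator counts with their expected values under interval censoring. The sketch below shows only the uncensored counting that the modification builds on:

```python
# Plain Kendall's tau from concordant minus discordant pair counts
# (uncensored case only; the paper generalizes these counts to
# expectations under interval censoring).
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau for exactly observed bivariate data."""
    x, y = np.asarray(x), np.asarray(y)
    n = x.size
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
            conc += s > 0        # concordant pair
            disc += s < 0        # discordant pair
    return (conc - disc) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [1, 3, 2, 4]))   # 0.666...
```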
Directory of Open Access Journals (Sweden)
Chaeyoung Lee
2012-11-01
Epistasis, which may explain a large portion of the phenotypic variation for complex economic traits of animals, has been ignored in many genetic association studies. A Bayesian method was introduced to draw inferences about multilocus genotypic effects based on their marginal posterior distributions obtained by a Gibbs sampler. A simulation study was conducted to provide statistical powers under various unbalanced designs using this method. Data were simulated by combined designs of number of loci, within-genotype variance, and sample size in unbalanced designs with or without null combined genotype cells. Mean empirical statistical power was estimated for testing the posterior mean estimate of the combined genotype effect. A practical example of obtaining empirical statistical power estimates with a given sample size was provided under unbalanced designs. The empirical statistical powers would be useful for determining an optimal design when interactive associations of multiple loci with complex phenotypes are examined.
Effect of non-normality on test statistics for one-way independent groups designs.
Cribbie, Robert A; Fiksenbaum, Lisa; Keselman, H J; Wilcox, Rand R
2012-02-01
The data obtained from one-way independent groups designs are typically non-normal in form and rarely equally variable across treatment populations (i.e., population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e., the analysis of variance F test) typically provides invalid results (e.g., too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non-normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied either to the usual least squares estimators of central tendency and variability, or the Welch test with robust estimators (i.e., trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non-normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non-normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non-normal. © 2011 The British Psychological Society.
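One of the recommended robust procedures, the Welch test applied to trimmed means (Yuen's test), is available in SciPy (1.7+) through the trim argument of ttest_ind. A small sketch on hypothetical skewed, heteroscedastic data:

```python
# Classical t-test vs. trimmed-means Welch (Yuen's) test on skewed data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.lognormal(0.0, 1.0, size=40)      # skewed group 1 (hypothetical)
b = rng.lognormal(0.3, 1.5, size=25)      # skewed group 2, larger variance

t_classic, p_classic = stats.ttest_ind(a, b)                       # assumes normality, equal var
t_yuen, p_yuen = stats.ttest_ind(a, b, equal_var=False, trim=0.2)  # Welch on 20% trimmed means
print(f"classical p = {p_classic:.3f}, trimmed Welch p = {p_yuen:.3f}")
```

With 20% trimming the extreme tails no longer dominate the variance estimates, which is why such tests keep their Type I error control under the conditions the article studies.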
Efficient statistical tests to compare Youden index: accounting for contingency correlation.
Chen, Fangyao; Xue, Yuqiang; Tan, Ming T; Chen, Pingyan
2015-04-30
The Youden index is widely utilized in studies evaluating the accuracy of diagnostic tests and the performance of predictive, prognostic, or risk models. However, both one- and two-independent-sample tests on the Youden index have been derived ignoring the dependence (association) between sensitivity and specificity, resulting in potentially misleading findings. Moreover, a paired-sample test on the Youden index has been unavailable. This article develops efficient statistical inference procedures for one-sample, independent-sample, and paired-sample tests on the Youden index by accounting for contingency correlation, namely the associations between sensitivity and specificity and between paired samples typically represented in contingency tables. For the one- and two-independent-sample tests, the variances are estimated by the Delta method, and the statistical inference is based on the central limit theorem; these are then verified by bootstrap estimates. For the paired-sample test, we show that the estimated covariance of the two sensitivities and specificities can be represented as a function of the kappa statistic, so the test can be readily carried out. We then show the remarkable accuracy of the estimated variance using a constrained optimization approach. Simulation is performed to evaluate the statistical properties of the derived tests. The proposed approaches yield more stable type I errors at the nominal level and substantially higher power (efficiency) than does the original Youden's approach. Therefore, the simple explicit large-sample solution performs very well. Because we can readily implement the asymptotic and exact bootstrap computation with common software like R, the method is broadly applicable to the evaluation of diagnostic tests and model performance. Copyright © 2015 John Wiley & Sons, Ltd.
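For context, a naive one-sample test on J = Se + Sp - 1 treats sensitivity and specificity as independent binomial proportions and applies the Delta method; this is the simplification the paper improves on by accounting for contingency correlation. A sketch with hypothetical counts:

```python
# Naive Delta-method test on the Youden index (no correlation adjustment;
# shown only as the baseline the paper refines). Counts are hypothetical.
import numpy as np
from scipy import stats

tp, fn = 45, 5         # diseased subjects: test positive / test negative
tn, fp = 80, 20        # healthy subjects: test negative / test positive

se = tp / (tp + fn)                     # sensitivity
sp = tn / (tn + fp)                     # specificity
J = se + sp - 1.0                       # Youden index

# Independent-binomial variance via the Delta method
var_J = se * (1 - se) / (tp + fn) + sp * (1 - sp) / (tn + fp)
z = J / np.sqrt(var_J)                  # H0: J = 0 (uninformative test)
p = 2 * stats.norm.sf(abs(z))
print(f"J = {J:.2f}, z = {z:.2f}, p = {p:.2e}")
```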
Reliability Verification of DBE Environment Simulation Test Facility by using Statistics Method
International Nuclear Information System (INIS)
Jang, Kyung Nam; Kim, Jong Soeg; Jeong, Sun Chul; Kyung Heum
2011-01-01
In nuclear power plants, all safety-related equipment, including cables, operating in a harsh environment should undergo equipment qualification (EQ) according to IEEE Std 323. There are three types of qualification methods: type testing, operating experience and analysis. In order to environmentally qualify safety-related equipment using the type testing method, rather than the analysis or operating experience methods, a representative sample of the equipment, including interfaces, should be subjected to a series of tests. Among these tests, the Design Basis Events (DBE) environment simulation test is the most important. The DBE simulation test is performed in a DBE simulation test chamber according to the postulated DBE conditions, including a specified high-energy line break (HELB), loss of coolant accident (LOCA), main steam line break (MSLB) and so on, after thermal and radiation aging. Because most DBE conditions involve 100% humidity, high-temperature steam should be used to trace the temperature and pressure of the DBE condition. During the DBE simulation test, if high-temperature steam under high pressure is injected into the DBE test chamber, the temperature and pressure in the test chamber rapidly rise above the target values. Therefore, the temperature and pressure in the test chamber keep fluctuating around the target temperature and pressure during the DBE simulation test. We should ensure the fairness and accuracy of test results by confirming the performance of the DBE environment simulation test facility. In this paper, in order to verify the reliability of the DBE environment simulation test facility, a statistical method is used.
Energy Technology Data Exchange (ETDEWEB)
Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)
2015-04-15
Sequential surrogate-model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier algorithms have drawbacks: the optimization loop involves three phases and relies on empirical parameters. We propose a unified sampling criterion to simplify the algorithm and to achieve the global optimum of constrained problems without any empirical parameters. It is able to select points located in a feasible region with high model uncertainty as well as points along the boundary of the constraint at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model because the sample points are not clustered within extremely small regions, as they are in super-EGO. The performance of the proposed method, in terms of the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
Page, Robert; Satake, Eiki
2017-01-01
While interest in Bayesian statistics has been growing in statistics education, the treatment of the topic is still inadequate in both textbooks and the classroom. Because so many fields of study lead to careers that involve a decision-making process requiring an understanding of Bayesian methods, it is becoming increasingly clear that Bayesian…
IEEE Std 101-1987: IEEE guide for the statistical analysis of thermal life test data
International Nuclear Information System (INIS)
Anon.
1992-01-01
This revision of IEEE Std 101-1972 describes statistical analyses for data from thermally accelerated aging tests. It explains the basis and use of statistical calculations for an engineer or scientist. Accelerated test procedures usually call for a number of specimens to be aged at each of several temperatures appreciably above normal operating temperatures. High temperatures are chosen to produce specimen failures (according to specified failure criteria) in typically one week to one year. The test objective is to determine the dependence of median life on temperature from the data, and to estimate, by extrapolation, the median life to be expected at service temperature. This guide presents methods for analyzing such data and for comparing test data on different materials
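The heart of such an analysis is an Arrhenius-type regression: log median life is fitted as a linear function of inverse absolute temperature and extrapolated to the service temperature. The sketch below is a bare-bones illustration with hypothetical aging data, not the full IEEE Std 101 procedure (which also covers confidence limits and comparisons between materials):

```python
# Arrhenius-style thermal life extrapolation; aging data are hypothetical.
import numpy as np

temp_c = np.array([180.0, 200.0, 220.0])      # aging temperatures (deg C)
life_h = np.array([8000.0, 2500.0, 900.0])    # median life at each temperature (h)

x = 1.0 / (temp_c + 273.15)                   # inverse absolute temperature (1/K)
y = np.log10(life_h)
slope, intercept = np.polyfit(x, y, 1)        # least-squares Arrhenius line

t_service = 130.0                             # service temperature (deg C)
life_service = 10 ** (intercept + slope / (t_service + 273.15))
print(f"extrapolated median life at {t_service} C: {life_service:,.0f} h")
```

The long extrapolation from one-week-to-one-year failures down to service conditions is exactly why the guide's statistical treatment of uncertainty matters.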
A general statistical test for correlations in a finite-length time series.
Hanson, Jeffery A; Yang, Haw
2008-06-07
The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
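The two estimators compared in the paper can be reproduced in a few lines: the direct lag-product ("moving-average") estimator and the FFT route via the Wiener-Khinchin theorem, with zero-padding to avoid circular wrap-around. A sketch on a synthetic series (this reproduces the estimators themselves, not the paper's variance expressions):

```python
# Direct vs. FFT autocorrelation estimators on a synthetic i.i.d. series.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1024)
x = x - x.mean()
n = x.size

# Direct lag-product estimator
acf_direct = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(n)])

# FFT estimator: inverse transform of the periodogram, zero-padded to 2n
# so the circular correlation equals the linear one
X = np.fft.rfft(x, n=2 * n)
acf_fft = np.fft.irfft(X * np.conj(X))[:n] / n

print(np.allclose(acf_direct, acf_fft))   # True up to floating point
```

The two agree numerically on a single realization; the paper's point is that their sampling uncertainties differ across lags.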
Energy Technology Data Exchange (ETDEWEB)
NONE
2009-08-15
This comprehensive report made by the Swiss Federal Office of Energy (SFOE) presents the statistics on total energy production and usage in Switzerland for the year 2008. First of all, an overview of Switzerland's energy consumption in 2008 is presented, with details of the shares of consumption taken by oil fuels for heating, oil products for mobility, electricity, gas and various other fuels. The development of consumption over the years 1910 to 2008 is illustrated graphically. A second chapter takes a look at energy flow from production (and import) to the consumer and export. An extensive collection of illustrative flow diagrams, tables and graphical representations of energy flows, statistics for various energy carriers and of the various uses of energy in Switzerland is presented.
Statistical tests for the Gaussian nature of primordial fluctuations through CBR experiments
International Nuclear Information System (INIS)
Luo, X.
1994-01-01
Information about the physical processes that generate the primordial fluctuations in the early Universe can be gained by testing the Gaussian nature of the fluctuations through cosmic microwave background radiation (CBR) temperature anisotropy experiments. One of the crucial aspects of density perturbations that are produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects left over from an early cosmic phase transition tend to be non-Gaussian. To carry out this test, sophisticated statistical tools are required. In this paper, we will discuss several such statistical tools, including multivariate skewness and kurtosis, Euler-Poincare characteristics, the three-point temperature correlation function, and Hotelling's T² statistic defined through bispectral estimates of a one-dimensional data set. The effect of noise present in the current data is discussed in detail and the COBE 53 GHz data set is analyzed. Our analysis shows that, on the large angular scale to which COBE is sensitive, the statistics are probably Gaussian. On the small angular scales, the importance of Hotelling's T² statistic is stressed, and the minimum sample size required to test Gaussianity is estimated. Although the current data set available from various experiments at half-degree scales is still too small, improvement of the data set by roughly a factor of 2 will be enough to test the Gaussianity statistically. On the arcminute scale, we analyze the recent RING data through bispectral analysis, and the result indicates possible deviation from Gaussianity. Effects of point sources are also discussed. It is pointed out that the Gaussianity problem can be resolved in the near future by ground-based or balloon-borne experiments.
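The simplest of these statistics, sample skewness and kurtosis with a Monte Carlo null distribution, can be sketched as follows; the "map" is a synthetic Gaussian stand-in rather than CBR data:

```python
# Skewness/kurtosis Gaussianity check with a Monte Carlo null; the "sky"
# is a synthetic stand-in, not an actual anisotropy map.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sky = rng.normal(size=2000)                    # stand-in for map pixels

s_obs = stats.skew(sky)
k_obs = stats.kurtosis(sky)                    # excess kurtosis

# Null distribution of skewness from Gaussian maps of the same size
s_sim = stats.skew(rng.normal(size=(1000, sky.size)), axis=1)
p_skew = np.mean(np.abs(s_sim) >= abs(s_obs))  # two-sided Monte Carlo p-value
print(f"skewness = {s_obs:.3f}, excess kurtosis = {k_obs:.3f}, p = {p_skew:.3f}")
```

Real CBR analyses must fold instrument noise and sky coverage into the simulated null maps, which is the complication the paper treats in detail.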
DEFF Research Database (Denmark)
Conradsen, Knut; Nielsen, Allan Aasbjerg; Schou, Jesper
2003-01-01
... Based on this distribution, a test statistic for equality of two such matrices and an associated asymptotic probability for obtaining a smaller value of the test statistic are derived and applied successfully to change detection in polarimetric SAR data. In a case study, EMISAR L-band data from April 17 ... to HH, VV, or HV data alone, the derived test statistic reduces to the well-known gamma likelihood-ratio test statistic. The derived test statistic and the associated significance value can be applied as a line or edge detector in fully polarimetric SAR data also ...
Application of statistical methods to the testing of nuclear counting assemblies
International Nuclear Information System (INIS)
Gilbert, J.P.; Friedling, G.
1965-01-01
This report describes the application of hypothesis-testing theory to the control of the 'statistical purity' and the stability of the counting batteries used for measurements on activation detectors in research reactors. The principles involved and the experimental results obtained at Cadarache on batteries operating with the reactors PEGGY and AZUR are given. (authors) [fr
Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.
Kieffer, Kevin M.; Thompson, Bruce
As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample-size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…
Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.
Deegear, James
This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and on literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…
Statistical Methods for the detection of answer copying on achievement tests
Sotaridona, Leonardo
2003-01-01
This thesis contains a collection of studies where statistical methods for the detection of answer copying on achievement tests in multiple-choice format are proposed and investigated. Although all methods are suited to detect answer copying, each method is designed to address specific
Pivotal statistics for testing subsets of structural parameters in the IV Regression Model
Kleibergen, F.R.
2000-01-01
We construct a novel statistic to test hypotheses on subsets of the structural parameters in an Instrumental Variables (IV) regression model. We derive the chi-squared limiting distribution of the statistic and show that it has a degrees-of-freedom parameter that is equal to the number of structural ...
A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.
Liu, Tung; Stone, Courtenay C.
1999-01-01
Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…
A Comparison of Several Statistical Tests of Reciprocity of Self-Disclosure.
Dindia, Kathryn
1988-01-01
Reports the results of a study that used several statistical tests of reciprocity of self-disclosure. Finds little evidence for reciprocity of self-disclosure, and concludes that either reciprocity is an illusion, or that different or more sophisticated methods are needed to detect it. (MS)
Statistical Requirements For Pass-Fail Testing Of Contraband Detection Systems
International Nuclear Information System (INIS)
Gilliam, David M.
2011-01-01
Contraband detection systems for homeland security applications are typically tested for probability of detection (PD) and probability of false alarm (PFA) using pass-fail testing protocols. Test protocols usually require specified values for PD and PFA to be demonstrated at a specified level of statistical confidence CL. Based on a recent more theoretical treatment of this subject [1], this summary reviews the definition of CL and provides formulas and spreadsheet functions for constructing tables of general test requirements and for determining the minimum number of tests required. The formulas and tables in this article may be generally applied to many other applications of pass-fail testing, in addition to testing of contraband detection systems.
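In the common zero-failure case, the minimum number of tests follows from requiring that n consecutive detections would occur with probability at most 1 - CL if the true PD were only p0, i.e. p0^n <= 1 - CL. A sketch of that calculation (the special case only; the cited treatment also covers designs that allow failures):

```python
# Minimum number of all-pass trials to demonstrate PD >= p0 at confidence cl.
import math

def min_tests_zero_failures(p0: float, cl: float) -> int:
    """Smallest n such that n consecutive detections demonstrate PD >= p0 at confidence cl."""
    return math.ceil(math.log(1.0 - cl) / math.log(p0))

print(min_tests_zero_failures(0.90, 0.95))   # 29 trials, no misses allowed
```

The same binomial logic, generalized to a permitted number of failures, underlies the tables and spreadsheet functions the summary describes.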
An Evaluation of the Sniffer Global Optimization Algorithm Using Standard Test Functions
Butler, Roger A. R.; Slaminka, Edward E.
1992-03-01
The performance of Sniffer—a new global optimization algorithm—is compared with that of Simulated Annealing. Using the number of function evaluations as a measure of efficiency, the new algorithm is shown to be significantly better at finding the global minimum of seven standard test functions. Several of the test functions used have many local minima and very steep walls surrounding the global minimum. Such functions are intended to thwart global minimization algorithms.
Energy Technology Data Exchange (ETDEWEB)
Franchito, Sergio H.; Brahmananda Rao, V. [Instituto Nacional de Pesquisas Espaciais, Centro de Ciencia do Sistema Terrestre, CCST, São Paulo, SP (Brazil); Moraes, E.C. [Instituto Nacional de Pesquisas Espaciais, Divisao de Sensoriamento Remoto, DSR, São Paulo, SP (Brazil)
2011-11-15
In this study, a zonally averaged statistical climate model (SDM) is used to investigate the impact of global warming on the distribution of the geobotanic zones over the globe. The model includes a parameterization of the biogeophysical feedback mechanism that links the state of the surface to the atmosphere (a bidirectional interaction between vegetation and climate). In the control experiment (a simulation of the present-day climate) the geobotanic state is well simulated by the model, so that the distribution of the geobotanic zones over the globe shows very good agreement with the observed ones. The impact of global warming on the distribution of the geobotanic zones is investigated considering the increase of CO2 concentration for the B1, A2 and A1FI scenarios. The results showed that the geobotanic zones over the entire earth can be modified in the future due to global warming. Expansion of subtropical desert and semi-desert zones in the Northern and Southern Hemispheres, retreat of glaciers and sea ice, with the Arctic region being particularly affected, and a reduction of the tropical rainforest and boreal forest can occur due to the increase of greenhouse gas concentrations. The effects were more pronounced in the A1FI and A2 scenarios compared with the B1 scenario. The SDM results confirm IPCC AR4 projections of future climate and are consistent with simulations of more complex GCMs, reinforcing the necessity of the mitigation of climate change associated with global warming. (orig.)
Testing statistical self-similarity in the topology of river networks
Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.
2010-01-01
Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN) that combines self-similarity and randomness has been introduced to replicate important topological features observed in real river networks. We investigate whether the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distributions of interior and exterior generators are shown to be statistically different, and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.
Assessment of the beryllium lymphocyte proliferation test using statistical process control.
Cher, Daniel J; Deubner, David C; Kelsh, Michael A; Chapman, Pamela S; Ray, Rose M
2006-10-01
Despite more than 20 years of surveillance and epidemiologic studies using the beryllium blood lymphocyte proliferation test (BeBLPT) as a measure of beryllium sensitization (BeS) and as an aid for diagnosing subclinical chronic beryllium disease (CBD), improvements in specific understanding of the inhalation toxicology of CBD have been limited. Although epidemiologic data suggest that BeS and CBD risks vary by process/work activity, it has proven difficult to reach specific conclusions regarding the dose-response relationship between workplace beryllium exposure and BeS or subclinical CBD. One possible reason for this uncertainty could be misclassification of BeS resulting from variation in BeBLPT testing performance. The reliability of the BeBLPT, a biological assay that measures beryllium sensitization, is unknown. To assess the performance of four laboratories that conducted this test, we used data from a medical surveillance program that offered testing for beryllium sensitization with the BeBLPT. The study population was workers exposed to beryllium at various facilities over a 10-year period (1992-2001). Workers with abnormal results were offered diagnostic workups for CBD. Our analyses used a standard statistical technique, statistical process control (SPC), to evaluate test reliability. The study design involved a repeated measures analysis of BeBLPT results generated from the company-wide, longitudinal testing. Analytical methods included use of (1) statistical process control charts that examined temporal patterns of variation for the stimulation index, a measure of cell reactivity to beryllium; (2) correlation analysis that compared prior perceptions of BeBLPT instability to the statistical measures of test variation; and (3) assessment of the variation in the proportion of missing test results and how time periods with more missing data influenced SPC findings. During the period of this study, all laboratories displayed variation in test results that
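A typical SPC building block for such longitudinal assay data is the individuals control chart: a center line at the mean with limits at plus or minus three sigma, where sigma is estimated from the average moving range. A sketch with hypothetical stimulation-index values (the study's actual charting choices are not reproduced here):

```python
# Shewhart individuals chart with moving-range sigma estimate; the
# stimulation-index values are hypothetical.
import numpy as np

si = np.array([2.1, 1.9, 2.4, 2.2, 1.8, 2.0, 4.5, 2.1, 2.3, 2.0])

mr = np.abs(np.diff(si))                 # moving ranges of successive results
sigma_hat = mr.mean() / 1.128            # d2 constant for subgroups of size 2
center = si.mean()
ucl = center + 3 * sigma_hat             # upper control limit
lcl = center - 3 * sigma_hat             # lower control limit
print(f"UCL = {ucl:.2f}, LCL = {lcl:.2f}")
print("out-of-control points:", np.where((si > ucl) | (si < lcl))[0])
```

Points outside the limits, or systematic runs within them, are the temporal patterns of variation the study used to judge laboratory test reliability.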
Testing the developed world: Global CAPM vs. Local CAPM
Knudsen, John
2009-01-01
The purpose of this paper is to assess whether the developed world is sufficiently integrated that the pricing difference between using the local CAPM and the global CAPM is not relevant. The paper analyses the twenty developed countries classified as such in the MSCI global index. It breaks the results down by country and by stock to identify where there is a significant difference in the pricing of assets between the local and global CAPM, and assesses the significance of the result.
Coelho, Carlos A.; Marques, Filipe J.
2013-09-01
In this paper the authors combine the equicorrelation and equivariance test introduced by Wilks [13] with the likelihood ratio test (l.r.t.) for independence of groups of variables to obtain the l.r.t. of block equicorrelation and equivariance. This test, or its single-block version, may find applications in many areas, such as psychology, education, medicine and genetics, and such tests are important "in many tests of multivariate analysis, e.g. in MANOVA, Profile Analysis, Growth Curve analysis, etc" [12, 9]. By decomposing the overall hypothesis into the hypothesis of independence of groups of variables and the hypothesis of equicorrelation and equivariance, we are able to obtain the expressions for the overall l.r.t. statistic and its moments. From these we obtain a suitable factorization of the characteristic function (c.f.) of the logarithm of the l.r.t. statistic, which enables us to develop highly manageable and precise near-exact distributions for the test statistic.
Determination of Geometrical REVs Based on Volumetric Fracture Intensity and Statistical Tests
Directory of Open Access Journals (Sweden)
Ying Liu
2018-05-01
This paper presents a method to estimate a representative element volume (REV) of a fractured rock mass based on the volumetric fracture intensity P32 and statistical tests. A 150 m × 80 m × 50 m 3D fracture network model was generated based on field data collected at the Maji dam site by using the rectangular window sampling method. The volumetric fracture intensity P32 of each cube was calculated by varying the cube location in the generated 3D fracture network model and varying the cube side length from 1 to 20 m, and the distribution of the P32 values was described. The size effect and spatial effect of the fractured rock mass were studied; the P32 values for the same cube size at different locations were significantly different, and the fluctuation in P32 values clearly decreases as the cube side length increases. In this paper, a new method that comprehensively considers the anisotropy of rock masses, the simplicity of calculation and the differences between methods was proposed to estimate the geometrical REV size. The geometrical REV size of the fractured rock mass was determined based on the volumetric fracture intensity P32 and two statistical test methods, namely, the likelihood ratio test and the Wald–Wolfowitz runs test. The results of the two statistical tests were substantially different; critical cube sizes of 13 m and 12 m were estimated by the Wald–Wolfowitz runs test and the likelihood ratio test, respectively. Because the different test methods emphasize different considerations and impact factors, and in order to adopt a size that both tests accept, the larger cube size, 13 m, was selected as the geometrical REV size of the fractured rock mass at the Maji dam site in China.
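Of the two tests, the Wald–Wolfowitz runs test is easy to sketch: dichotomize the P32 sequence about its median, count runs, and compare against the normal approximation to the null distribution. The values below are hypothetical, not the Maji dam data:

```python
# Wald-Wolfowitz runs test for randomness of a P32 sequence; data are
# hypothetical stand-ins.
import numpy as np
from scipy import stats

p32 = np.array([2.1, 2.3, 1.9, 2.0, 2.6, 2.4, 1.8, 2.2, 2.5, 1.7])
above = p32 > np.median(p32)             # dichotomize about the median

runs = 1 + np.count_nonzero(above[1:] != above[:-1])
n1, n2 = above.sum(), (~above).sum()

# Normal approximation to the runs distribution under randomness
mu = 2 * n1 * n2 / (n1 + n2) + 1
var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
       / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
z = (runs - mu) / np.sqrt(var)
print(f"runs = {runs}, z = {z:.2f}, p = {2 * stats.norm.sf(abs(z)):.3f}")
```

In the REV context, a non-significant result at a given cube size indicates that P32 fluctuates randomly rather than systematically, supporting that size as representative.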
DEFF Research Database (Denmark)
Jones, Allan; Sommerlund, Bo
2007-01-01
The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating ... the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement for NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis ...
Statistical power analysis a simple and general model for traditional and modern hypothesis tests
Murphy, Kevin R; Wolach, Allen
2014-01-01
Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and g
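As a concrete instance of the kind of calculation such a text supports, the power of a two-sided two-sample t-test for a standardized effect d follows from the noncentral t distribution. A sketch (generic textbook formula, not taken from the book itself):

```python
# Power of a two-sided two-sample t-test via the noncentral t distribution.
import numpy as np
from scipy import stats

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Power for standardized effect size d with n subjects per group."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2.0)        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)     # two-sided critical value
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

print(f"power = {power_two_sample_t(d=0.5, n_per_group=64):.3f}")   # ~0.80
```

This reproduces the classic rule of thumb that about 64 subjects per group give 80% power for a medium effect (d = 0.5) at alpha = 0.05.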
Energy Technology Data Exchange (ETDEWEB)
NONE
2003-07-01
This comprehensive report by the Swiss Federal Office of Energy (SFOE) presents statistics on energy production and consumption in Switzerland in 2002. Facts and figures are presented in tables and diagrams. First of all, a general overview of Swiss energy consumption is presented that includes details on the shares taken by the various energy carriers involved and their development during the period reviewed. The report also includes graphical representations of energy usage in various sectors such as households, trade and industry, transport and the services sector. Also, economic data on energy consumption is presented. A second chapter takes a look at energy flows from producers to consumers and presents an energy balance for Switzerland in the form of tables and an energy-flow diagram. The individual energy sources and the import, export and storage of energy carriers are discussed as is the conversion between various forms and categories of energy. Details on the consumption of energy, its growth over the years up to 2002 and energy use in various sectors are presented. Also, the Swiss energy balance with reference to the use of renewable forms of energy such as solar energy, biomass, wastes and ambient heat is discussed and figures are presented on the contribution of renewables to heating and the generation of electrical power. The third chapter provides data on the individual energy carriers and the final chapter looks at economical and ecological aspects. An appendix provides information on the methodology used in collecting the statistics and on data available in the Swiss cantons.
Energy Technology Data Exchange (ETDEWEB)
NONE
2005-07-01
This comprehensive report by the Swiss Federal Office of Energy (SFOE) presents statistics on energy production and consumption in Switzerland in 2004. Facts and figures are presented in tables and diagrams. First of all, a general overview of Swiss energy consumption is presented that includes details on the shares taken by the various energy carriers involved and their development during the period reviewed. The report also includes graphical representations of energy usage in various sectors such as households, trade and industry, transport and the services sector. Also, economic data on energy consumption is presented. A second chapter takes a look at energy flows from producers to consumers and presents an energy balance for Switzerland in the form of tables and an energy-flow diagram. The individual energy sources and the import, export and storage of energy carriers are discussed as is the conversion between various forms and categories of energy. Details on the consumption of energy, its growth over the years up to 2004 and energy use in various sectors are presented. Also, the Swiss energy balance with reference to the use of renewable forms of energy such as solar energy, biomass, wastes and ambient heat is discussed and figures are presented on the contribution of renewables to heating and the generation of electrical power. The third chapter provides data on the individual energy carriers and the final chapter looks at economical and ecological aspects. An appendix provides information on the methodology used in collecting the statistics and on data available in the Swiss cantons.
Energy Technology Data Exchange (ETDEWEB)
NONE
2006-07-01
This comprehensive report by the Swiss Federal Office of Energy (SFOE) presents statistics on energy production and consumption in Switzerland in 2005. Facts and figures are presented in tables and diagrams. First of all, a general overview of Swiss energy consumption is presented that includes details on the shares taken by the various energy carriers involved and their development during the period reviewed. The report also includes graphical representations of energy usage in various sectors such as households, trade and industry, transport and the services sector. Also, economic data on energy consumption is presented. A second chapter takes a look at energy flows from producers to consumers and presents an energy balance for Switzerland in the form of tables and an energy-flow diagram. The individual energy sources and the import, export and storage of energy carriers are discussed as is the conversion between various forms and categories of energy. Details on the consumption of energy, its growth over the years up to 2005 and energy use in various sectors are presented. Also, the Swiss energy balance with reference to the use of renewable forms of energy such as solar energy, biomass, wastes and ambient heat is discussed and figures are presented on the contribution of renewables to heating and the generation of electrical power. The third chapter provides data on the individual energy carriers and the final chapter looks at economical and ecological aspects. An appendix provides information on the methodology used in collecting the statistics and on data available in the Swiss cantons.
Energy Technology Data Exchange (ETDEWEB)
NONE
2004-07-01
This comprehensive report by the Swiss Federal Office of Energy (SFOE) presents statistics on energy production and consumption in Switzerland in 2003. Facts and figures are presented in tables and diagrams. First of all, a general overview of Swiss energy consumption is presented that includes details on the shares taken by the various energy carriers involved and their development during the period reviewed. The report also includes graphical representations of energy usage in various sectors such as households, trade and industry, transport and the services sector. Also, economic data on energy consumption is presented. A second chapter takes a look at energy flows from producers to consumers and presents an energy balance for Switzerland in the form of tables and an energy-flow diagram. The individual energy sources and the import, export and storage of energy carriers are discussed as is the conversion between various forms and categories of energy. Details on the consumption of energy, its growth over the years up to 2003 and energy use in various sectors are presented. Also, the Swiss energy balance with reference to the use of renewable forms of energy such as solar energy, biomass, wastes and ambient heat is discussed and figures are presented on the contribution of renewables to heating and the generation of electrical power. The third chapter provides data on the individual energy carriers and the final chapter looks at economical and ecological aspects. An appendix provides information on the methodology used in collecting the statistics and on data available in the Swiss cantons.
Energy Technology Data Exchange (ETDEWEB)
NONE
2007-07-01
This comprehensive report by the Swiss Federal Office of Energy (SFOE) presents statistics on energy production and consumption in Switzerland in 2006. Facts and figures are presented in tables and diagrams. First of all, a general overview of Swiss energy consumption is presented that includes details on the shares taken by the various energy carriers involved and their development during the period reviewed. The report also includes graphical representations of energy usage in various sectors such as households, trade and industry, transport and the services sector. Also, economic data on energy consumption is presented. A second chapter takes a look at energy flows from producers to consumers and presents an energy balance for Switzerland in the form of tables and an energy-flow diagram. The individual energy sources and the import, export and storage of energy carriers are discussed as is the conversion between various forms and categories of energy. Details on the consumption of energy, its growth over the years up to 2006 and energy use in various sectors are presented. Also, the Swiss energy balance with reference to the use of renewable forms of energy such as solar energy, biomass, wastes and ambient heat is discussed and figures are presented on the contribution of renewables to heating and the generation of electrical power. The third chapter provides data on the individual energy carriers and the final chapter looks at economical and ecological aspects. An appendix provides information on the methodology used in collecting the statistics and on data available in the Swiss cantons.
Kucukgoz, Mehmet; Harmanci, Oztan; Mihcak, Mehmet K.; Venkatesan, Ramarathnam
2005-03-01
In this paper, we propose a novel semi-blind video watermarking scheme, where we use pseudo-random robust semi-global features of video in the three dimensional wavelet transform domain. We design the watermark sequence via solving an optimization problem, such that the features of the mark-embedded video are the quantized versions of the features of the original video. The exact realizations of the algorithmic parameters are chosen pseudo-randomly via a secure pseudo-random number generator, whose seed is the secret key, that is known (resp. unknown) by the embedder and the receiver (resp. by the public). We experimentally show the robustness of our algorithm against several attacks, such as conventional signal processing modifications and adversarial estimation attacks.
Testing the statistical isotropy of large scale structure with multipole vectors
International Nuclear Information System (INIS)
Zunckel, Caroline; Huterer, Dragan; Starkman, Glenn D.
2011-01-01
A fundamental assumption in cosmology is that of statistical isotropy - that the Universe, on average, looks the same in every direction in the sky. Statistical isotropy has recently been tested stringently using cosmic microwave background data, leading to intriguing results on large angular scales. Here we apply some of the same techniques used in the cosmic microwave background to the distribution of galaxies on the sky. Using the multipole vector approach, where each multipole in the harmonic decomposition of the galaxy density field is described by unit vectors and an amplitude, we lay out the basic formalism of how to reconstruct the multipole vectors and their statistics from galaxy survey catalogs. We apply the algorithm to synthetic galaxy maps, and study the sensitivity of the multipole vector reconstruction accuracy to the density, depth, sky coverage, and pixelization of galaxy catalog maps.
Observations in the statistical analysis of NBG-18 nuclear graphite strength tests
International Nuclear Information System (INIS)
Hindley, Michael P.; Mitchell, Mark N.; Blaine, Deborah C.; Groenwold, Albert A.
2012-01-01
Highlights: ► Statistical analysis of NBG-18 nuclear graphite strength tests. ► A Weibull distribution and a normal distribution are tested for all data. ► A bimodal distribution in the CS data is confirmed. ► The CS data set has the lowest variance. ► A combined data set is formed and has a Weibull distribution. - Abstract: The purpose of this paper is to report on the selection of a statistical distribution chosen to represent the experimental material strength of NBG-18 nuclear graphite. Three large sets of samples were tested during the material characterisation of the Pebble Bed Modular Reactor and Core Structure Ceramics materials. These sets of samples are tensile strength, flexural strength and compressive strength (CS) measurements. A relevant statistical fit is determined and the goodness of fit is also evaluated for each data set. The data sets are also normalised for ease of comparison, and combined into one representative data set. The validity of this approach is demonstrated. A second failure mode distribution is found in the CS test data. Identifying this failure mode supports similar observations made in the past. The success of fitting the Weibull distribution through the normalised data sets allows us to improve the basis for the estimates of the variability. This could also imply that the variability in graphite strength for the different strength measures is based on the same flaw distribution and is thus a property of the material.
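As a rough illustration of the distribution-selection step described above, the following Python sketch fits both candidate distributions to synthetic strength data and checks goodness of fit; the sample values and parameters are invented, not the PBMR measurements.

```python
# Sketch: choosing between Weibull and normal models for strength data,
# in the spirit of the NBG-18 analysis. Synthetic placeholder data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
strength = stats.weibull_min.rvs(c=8.0, scale=25.0, size=200, random_state=rng)  # MPa, synthetic

# Fit both candidate distributions (location fixed at zero for Weibull).
c, loc, scale = stats.weibull_min.fit(strength, floc=0)
mu, sigma = stats.norm.fit(strength)

# Goodness of fit via Kolmogorov-Smirnov. Parameters were estimated from the
# same data, so these p-values are optimistic; a bootstrap would refine them.
ks_w = stats.kstest(strength, 'weibull_min', args=(c, loc, scale))
ks_n = stats.kstest(strength, 'norm', args=(mu, sigma))
print(f"Weibull: shape={c:.2f}, scale={scale:.2f}, KS p={ks_w.pvalue:.3f}")
print(f"Normal : mu={mu:.2f}, sigma={sigma:.2f}, KS p={ks_n.pvalue:.3f}")

# Normalise (divide by the sample mean) before pooling data sets, as the
# paper does when forming a combined representative data set.
normalised = strength / strength.mean()
```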
Global Environmental Micro Sensors Test Operations in the Natural Environment
Adams, Mark L.; Buza, Matthew; Manobianco, John; Merceret, Francis J.
2007-01-01
ENSCO, Inc. is developing an innovative atmospheric observing system known as Global Environmental Micro Sensors (GEMS). The GEMS concept features an integrated system of miniaturized in situ, airborne probes measuring temperature, relative humidity, pressure, and vector wind velocity. In order for the probes to remain airborne for long periods of time, their design is based on a helium-filled super-pressure balloon. The GEMS probes are neutrally buoyant and carried passively by the wind at predetermined levels. Each probe contains onboard satellite communication, power generation, processing, and geolocation capabilities. ENSCO has partnered with the National Aeronautics and Space Administration's Kennedy Space Center (KSC) for a project called GEMS Test Operations in the Natural Environment (GEMSTONE) that will culminate with limited prototype flights of the system in spring 2007. By leveraging current advances in micro and nanotechnology, the probe mass, size, cost, and complexity can be reduced substantially so that large numbers of probes could be deployed routinely to support ground, launch, and landing operations at KSC and other locations. A full-scale system will improve the data density for the local initialization of high-resolution numerical weather prediction systems by at least an order of magnitude and provide a significantly expanded in situ database to evaluate launch commit criteria and flight rules. When applied to launch or landing sites, this capability will reduce both weather hazards and weather-related scrubs, thus enhancing both safety and cost-avoidance for vehicles processed by the Shuttle, Launch Services Program, and Constellation Directorates. The GEMSTONE project will conclude with a field experiment in which 10 to 15 probes are released over KSC in east central Florida. The probes will be neutrally buoyant at different altitudes from 500 to 3000 meters and will report their position, speed, heading, temperature, humidity, and
Computer processing of 14C data; statistical tests and corrections of data
International Nuclear Information System (INIS)
Obelic, B.; Planinic, J.
1977-01-01
The described computer program calculates the age of samples and performs statistical tests and corrections of data. Data are obtained from the proportional counter that measures anticoincident pulses per 20 minute intervals. After every 9th interval the counter measures the total number of counts per interval. Input data are punched on cards. The output list contains the input data schedule and the following results: mean CPM value, correction of CPM for normal pressure and temperature (NTP), sample age calculation based on the 14C half-lives of 5570 and 5730 years, age correction for NTP, dendrochronological corrections and the relative radiocarbon concentration. All results are given with one standard deviation. Input data test (Chauvenet's criterion), gas purity test, standard deviation test and test of the data processor are also included in the program. (author)
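A minimal sketch of two of the computations named above, Chauvenet's input-data test and the age calculation under both half-lives; the count rates and the modern-standard activity are illustrative assumptions, not values from the program.

```python
import numpy as np
from scipy.stats import norm

cpm_intervals = np.array([6.1, 5.9, 6.3, 6.0, 9.8, 6.2, 5.8, 6.1])  # net cpm per 20-min interval (invented)
a_standard = 13.56                       # assumed modern-standard activity (cpm)

def chauvenet(x):
    """Chauvenet's criterion: drop readings expected < 0.5 times in n trials."""
    x = np.asarray(x, dtype=float)
    z = np.abs(x - x.mean()) / x.std(ddof=1)
    return x[x.size * 2 * norm.sf(z) >= 0.5]

a_sample = chauvenet(cpm_intervals).mean()   # the outlying 9.8 reading is rejected

for half_life in (5570.0, 5730.0):           # both half-lives used by the program
    age = -(half_life / np.log(2)) * np.log(a_sample / a_standard)
    print(f"T1/2 = {half_life:.0f} a  ->  age = {age:.0f} years BP")
```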
DEFF Research Database (Denmark)
Boonstra, Philip S; Gruber, Stephen B; Raymond, Victoria M
2010-01-01
Anticipation, manifested through decreasing age of onset or increased severity in successive generations, has been noted in several genetic diseases. Statistical methods for genetic anticipation range from a simple use of the paired t-test for age of onset restricted to affected parent-child pairs......, and this right truncation effect is more pronounced in children than in parents. In this study, we first review different statistical methods for testing genetic anticipation in affected parent-child pairs that address the issue of bias due to right truncation. Using affected parent-child pair data, we compare...... the issue of multiplex ascertainment and its effect on the different methods. We then focus on exploring genetic anticipation in Lynch syndrome and analyze new data on the age of onset in affected parent-child pairs from families seen at the University of Michigan Cancer Genetics clinic with a mutation...
Taroni, F; Biedermann, A; Bozza, S
2016-02-01
Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons about their advantages and drawbacks are widely available and continue to fuel extensive debate in the literature. More recently, controversial discussion was initiated by an editorial decision of a scientific journal [1] to refuse any paper submitted for publication containing null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Zhang, Chidong [Univ. of Miami, Coral Gables, FL (United States)
2016-08-14
Motivated by the success of the AMIE/DYNAMO field campaign, which collected unprecedented observations of cloud and precipitation from the tropical Indian Ocean in October 2011 – March 2012, this project explored how such observations can be applied to assist the development of global cloud-permitting models through evaluating and correcting model biases in cloud statistics. The main accomplishments of this project fall into four categories: generating observational products for model evaluation, using AMIE/DYNAMO observations to validate global model simulations, using AMIE/DYNAMO observations in numerical studies of cloud-permitting models, and providing leadership in the field. Results from this project provide valuable information for building a seamless bridge between the DOE ASR program's component on process-level understanding of cloud processes in the tropics and the RGCM focus on global variability and regional extremes. In particular, experience gained from this project would be directly applicable to the evaluation and improvement of ACME, especially as it transitions to a non-hydrostatic variable-resolution model.
Love, Jeffrey J.; Coisson, Pierdavide; Pulkkinen, Antti
2016-01-01
Analysis is made of the long-term statistics of three different measures of ground level, storm time geomagnetic activity: instantaneous 1 min first differences in horizontal intensity ΔBh, the root-mean-square of 10 consecutive 1 min differences S, and the ramp change R over 10 min. Geomagnetic latitude maps of the cumulative exceedances of these three quantities are constructed, giving the threshold (nT/min) for which activity within a 24 h period can be expected to occur once per year, decade, and century. Specifically, at geomagnetic latitude 55°, we estimate once-per-century ΔBh, S, and R exceedances and a site-to-site, proportional, 1 standard deviation range [1 σ, lower and upper] to be, respectively, 1000, [690, 1450]; 500, [350, 720]; and 200, [140, 280] nT/min. At 40°, we estimate once-per-century ΔBh, S, and R exceedances and 1 σ values to be 200, [140, 290]; 100, [70, 140]; and 40, [30, 60] nT/min.
Energy Technology Data Exchange (ETDEWEB)
NONE
2002-07-01
This comprehensive report presents the Swiss Federal Office of Energy's statistics on energy production and consumption in Switzerland in 2001. Facts and figures are presented in tables and diagrams. First of all, a general overview of Swiss energy consumption is presented that includes details on the shares taken by the various energy carriers involved and their development during the period reviewed. The article also includes graphical representations of energy usage in various sectors such as households, trade and industry, transport and the services sector. Also, economic data on energy consumption is presented. A second chapter takes a look at energy flows from producers to consumers and presents an energy balance for Switzerland in the form of tables and an energy-flow diagram. The individual energy sources and the import, export and storage of energy carriers are discussed as is the conversion between various forms and categories of energy. Details on the consumption of energy, its growth over the years up to 2001 and energy use in various sectors are presented. Finally, the Swiss energy balance with reference to the use of renewable sources of energy such as solar energy, biomass, wastes and ambient heat is discussed and figures are presented on the contribution of renewables to heating and the generation of electrical power.
Energy Technology Data Exchange (ETDEWEB)
NONE
2001-07-01
This comprehensive report presents the Swiss Federal Office of Energy's statistics on energy production and consumption in Switzerland in 2000. Facts and figures are presented in tables and diagrams. First of all, a general overview of Swiss energy consumption is presented that includes details on the shares taken by the various energy carriers involved and their development during the period reviewed. The article also includes graphical representations of energy usage in various sectors such as households, trade and industry, transport and the services sector. Also, economic data on energy consumption is presented. A second chapter takes a look at energy flows from producers to consumers and presents an energy balance for Switzerland in the form of tables and an energy-flow diagram. The individual energy sources and the import, export and storage of energy carriers are discussed as is the conversion between various forms and categories of energy. Details on the consumption of energy, its growth over the years up to 2000 and energy use in various sectors are presented. Finally, the Swiss energy balance with reference to the use of renewable sources of energy such as solar energy, biomass, wastes and ambient heat is discussed and figures are presented on the contribution of renewables to heating and the generation of electrical power.
Energy Technology Data Exchange (ETDEWEB)
NONE
2008-07-01
This comprehensive report presents the Swiss Federal Office of Energy's statistics on energy production and consumption in Switzerland in 2007. Facts and figures are presented in tables and diagrams. First of all, a general overview of Swiss energy consumption is presented that includes details on the shares taken by the various energy carriers involved and their development during the period reviewed. The article also includes graphical representations of energy usage in various sectors such as households, trade and industry, transport and the services sector. Also, economic data on energy consumption is presented. A second chapter takes a look at energy flows from producers to consumers and presents an energy balance for Switzerland in the form of tables and an energy-flow diagram. The individual energy sources and the import, export and storage of energy carriers are discussed as is the conversion between various forms and categories of energy. Details on the consumption of energy, its growth over the years up to 2007 and energy use in various sectors are presented. Finally, the Swiss energy balance with reference to the use of renewable sources of energy such as solar energy, biomass, wastes and ambient heat is discussed and figures are presented on the contribution of renewables to heating and the generation of electrical power.
MacDonald, Alphonse L
2016-01-01
At the invitation of the University of Minnesota Population Center (MPC), the author carried out an assessment of the IPUMS International integrated census microdata programme during January - March 2016. The terms of reference included the assessment of the measures taken by the MPC to safeguard the security of the microdata, the quality and adequacy of services provided, characteristics of users and satisfaction with IPUMS, use of available microdata, support to participating developing country National Statistical Offices (NSOs) and adequacy of a proposed Remote Data Center (RDC). The conclusions of the review are that IPUMS International is a unique, flexible, successful and secure programme for managing access to anonymized, harmonised and integrated microdata for academic users and policy makers. While currently the user base is predominantly in developed countries, steps are being taken to expand usage by researchers worldwide. The physical, methodological and technical arrangements for safeguarding the security and confidentiality of the data files are excellent; the possibilities of breaches are minimal. Data users have very positive opinions of the quality of the data, scope of services and expertise of staff but desire more detailed, up-to-date microdata. NSOs rate IPUMS International and its services positively but request advanced methodological training for staff and regular information on the use of their country's data. IPUMS International planned activities are presented and their contributions to census methodology are highlighted.
International Nuclear Information System (INIS)
Brodsky, A.
1979-01-01
Some recent reports of Mancuso, Stewart and Kneale claim findings of radiation-produced cancer in the Hanford worker population. These claims are based on statistical computations that use small differences in accumulated exposures between groups dying of cancer and groups dying of other causes; actual mortality and longevity were not reported. This paper presents a statistical method for evaluation of actual mortality and longevity longitudinally over time, as applied in a primary analysis of the mortality experience of the Hanford worker population. Although available, this method was not utilized in the Mancuso-Stewart-Kneale paper. The author's preliminary longitudinal analysis shows that the gross mortality experience of persons employed at Hanford during the 1943-70 interval did not differ significantly from that of certain controls, when both employees and controls were selected from families with two or more offspring and comparisons were matched by age, sex, race and year of entry into employment. This result is consistent with findings reported by Sanders (Health Phys. vol.35, 521-538, 1978). The method utilizes an approximate chi-square (1 D.F.) statistic for testing population subgroup comparisons, as well as the cumulation of chi-squares (1 D.F.) for testing the overall result of a particular type of comparison. The method is available for computer testing of the Hanford mortality data, and could also be adapted to morbidity or other population studies. (author)
Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.
Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg
2009-11-01
G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
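As a rough cross-check on one of these computations, the sketch below approximates the power of a two-sided test of a single Pearson correlation via the Fisher z transform; G*Power itself uses exact distributions, so treat this only as the classic large-sample shortcut.

```python
import numpy as np
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    """Approximate power to detect a true correlation r with sample size n."""
    zr = np.arctanh(r)                    # Fisher transform of the true r
    se = 1.0 / np.sqrt(n - 3)             # large-sample standard error
    zcrit = norm.ppf(1 - alpha / 2)
    # Probability that the estimate falls beyond either critical bound.
    return norm.sf(zcrit - zr / se) + norm.cdf(-zcrit - zr / se)

print(f"power(r=0.3, n=84) = {correlation_power(0.3, 84):.3f}")   # ~0.80
print(f"power(r=0.3, n=30) = {correlation_power(0.3, 30):.3f}")
```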
Testing statistical significance scores of sequence comparison methods with structure similarity
Directory of Open Access Journals (Sweden)
Leunissen Jack AM
2006-10-01
Background: In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results: All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion: The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
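To make the Z-score idea concrete, here is a toy Monte-Carlo version in which a crude longest-common-substring score stands in for Smith-Waterman; the sequences and shuffle count are arbitrary.

```python
# Toy Monte-Carlo Z-score: score the real pair, re-score against shuffled
# versions of one sequence, and express the real score in standard
# deviations above the shuffled mean. Not a real Smith-Waterman scorer.
import random
from difflib import SequenceMatcher

def score(a, b):
    return SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b)).size

def monte_carlo_z(seq1, seq2, n_shuffles=500, seed=1):
    rng = random.Random(seed)
    real = score(seq1, seq2)
    chars, shuffled_scores = list(seq2), []
    for _ in range(n_shuffles):
        rng.shuffle(chars)
        shuffled_scores.append(score(seq1, ''.join(chars)))
    mu = sum(shuffled_scores) / n_shuffles
    var = sum((s - mu) ** 2 for s in shuffled_scores) / (n_shuffles - 1)
    return (real - mu) / var ** 0.5

z = monte_carlo_z("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
                  "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQA")
print(f"Z = {z:.1f}")   # a large Z is unlikely between unrelated sequences
```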
Directory of Open Access Journals (Sweden)
Melissa Coulson
2010-07-01
A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform also requires that researchers interpret CIs without recourse to NHST.
National Research Council Canada - National Science Library
Posey, Pamela
2002-01-01
The purpose of this Software Test Description (STD) is to establish formal test cases to be used by personnel tasked with the installation and verification of the Globally Relocatable Navy Tide/Atmospheric Modeling System (PCTides...
Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel
2017-04-01
The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different than others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. We compare it to more classical
International Nuclear Information System (INIS)
Gershgorin, B.; Majda, A.J.
2011-01-01
A statistically exactly solvable model for passive tracers is introduced as a test model for the authors' Nonlinear Extended Kalman Filter (NEKF) as well as other filtering algorithms. The model involves a Gaussian velocity field and a passive tracer governed by the advection-diffusion equation with an imposed mean gradient. The model has direct relevance to engineering problems such as the spread of pollutants in the air or contaminants in the water as well as climate change problems concerning the transport of greenhouse gases such as carbon dioxide with strongly intermittent probability distributions consistent with the actual observations of the atmosphere. One of the attractive properties of the model is the existence of the exact statistical solution. In particular, this unique feature of the model provides an opportunity to design and test fast and efficient algorithms for real-time data assimilation based on rigorous mathematical theory for a turbulence model problem with many active spatiotemporal scales. Here, we extensively study the performance of the NEKF which uses the exact first and second order nonlinear statistics without any approximations due to linearization. The role of partial and sparse observations, the frequency of observations and the observation noise strength in recovering the true signal, its spectrum, and fat tail probability distribution are the central issues discussed here. The results of our study provide useful guidelines for filtering realistic turbulent systems with passive tracers through partial observations.
Examining publication bias—a simulation-based evaluation of statistical tests on publication bias
Directory of Open Access Journals (Sweden)
Andreas Schneck
2017-11-01
Background: Publication bias is a form of scientific misconduct. It threatens the validity of research results and the credibility of science. Although several tests on publication bias exist, no in-depth evaluations are available that examine which test performs best for different research settings. Methods: Four tests on publication bias, Egger's test (FAT), p-uniform, the test of excess significance (TES), as well as the caliper test, were evaluated in a Monte Carlo simulation. Two different types of publication bias and its degree (0%, 50%, 100%) were simulated. The type of publication bias was defined either as file-drawer, meaning the repeated analysis of new datasets, or p-hacking, meaning the inclusion of covariates in order to obtain a significant result. In addition, the underlying effect (β = 0, 0.5, 1, 1.5), effect heterogeneity, the number of observations in the simulated primary studies (N = 100, 500), and the number of observations for the publication bias tests (K = 100, 1,000) were varied. Results: All tests evaluated were able to identify publication bias both in the file-drawer and p-hacking condition. The false positive rates were, with the exception of the 15%- and 20%-caliper test, unbiased. The FAT had the largest statistical power in the file-drawer conditions, whereas under p-hacking the TES was, except under effect heterogeneity, slightly better. The caliper tests were, however, inferior to the other tests under effect homogeneity and had a decent statistical power only in conditions with 1,000 primary studies. Discussion: The FAT is recommended as a test for publication bias in standard meta-analyses with no or only small effect heterogeneity. If two-sided publication bias is suspected, as well as under p-hacking, the TES is the first alternative to the FAT. The 5%-caliper test is recommended under conditions of effect heterogeneity and a large number of primary studies, which may be found if publication bias is examined in a
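For orientation, a minimal Egger-type regression (the FAT above) on simulated, bias-free meta-analytic data; with no publication bias the intercept should not depart significantly from zero.

```python
# Egger's regression sketch: regress each study's standard normal deviate
# on its precision and test whether the intercept differs from zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.5, size=60)       # study standard errors (synthetic)
effect = 0.3 + rng.normal(0, se)           # true effect 0.3, no selection bias
snd, precision = effect / se, 1.0 / se

res = stats.linregress(precision, snd)
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(se) - 2)
print(f"Egger intercept = {res.intercept:.3f}, p = {p:.3f}")  # small p suggests asymmetry
```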
Energy Technology Data Exchange (ETDEWEB)
Zain, Zakiyah, E-mail: zac@uum.edu.my; Ahmad, Yuhaniz, E-mail: yuhaniz@uum.edu.my [School of Quantitative Sciences, Universiti Utara Malaysia, UUM Sintok 06010, Kedah (Malaysia); Azwan, Zairul, E-mail: zairulazwan@gmail.com; Raduan, Farhana, E-mail: farhanaraduan@gmail.com; Sagap, Ismail, E-mail: drisagap@yahoo.com [Surgery Department, Universiti Kebangsaan Malaysia Medical Centre, Jalan Yaacob Latif, 56000 Bandar Tun Razak, Kuala Lumpur (Malaysia); Aziz, Nazrina, E-mail: nazrina@uum.edu.my
2014-12-04
Colorectal cancer is the third and the second most common cancer worldwide in men and women respectively, and the second in Malaysia for both genders. Surgery, chemotherapy and radiotherapy are among the options available for treatment of patients with colorectal cancer. In clinical trials, the main purpose is often to compare efficacy between experimental and control treatments. Treatment comparisons often involve several responses or endpoints, and this situation complicates the analysis. In the case of colorectal cancer, sets of responses concerned with survival times include: times from tumor removal until the first, the second and the third tumor recurrences, and time to death. For a patient, the time to recurrence is correlated to the overall survival. In this study, global score test methodology is used to combine the univariate score statistics for comparing treatments with respect to each survival endpoint into a single statistic. Data on tumor recurrence and overall survival of colorectal cancer patients are taken from a Malaysian hospital. The results are found to be similar to those computed using the established Wei, Lin and Weissfeld method. Key factors such as ethnicity, gender, age and stage at diagnosis are also reported.
Association testing for next-generation sequencing data using score statistics
DEFF Research Database (Denmark)
Skotte, Line; Korneliussen, Thorfinn Sand; Albrechtsen, Anders
2012-01-01
computationally feasible due to the use of score statistics. As part of the joint likelihood, we model the distribution of the phenotypes using a generalized linear model framework, which works for both quantitative and discrete phenotypes. Thus, the method presented here is applicable to case-control studies...... of genotype calls into account have been proposed; most require numerical optimization which for large-scale data is not always computationally feasible. We show that using a score statistic for the joint likelihood of observed phenotypes and observed sequencing data provides an attractive approach...... to association testing for next-generation sequencing data. The joint model accounts for the genotype classification uncertainty via the posterior probabilities of the genotypes given the observed sequencing data, which gives the approach higher power than methods based on called genotypes. This strategy remains...
Statistical auditing and randomness test of lotto k/N-type games
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.
2008-11-01
One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
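A sketch of the auditing idea for a hypothetical 6/49 game: under the hypergeometric model each number appears in a draw with probability k/N, so appearance counts over many draws can be screened with a chi-square test (here the draws are simulated rather than historical).

```python
import numpy as np
from scipy.stats import chisquare

N, k, n_draws = 49, 6, 1000
rng = np.random.default_rng(7)

counts = np.zeros(N, dtype=int)
for _ in range(n_draws):                      # stand-in for historical results
    counts[rng.choice(N, size=k, replace=False)] += 1

# E[appearances] = n_draws * k/N; the chi-square is approximate here because
# the k indicators within one draw are (weakly, negatively) dependent.
expected = np.full(N, n_draws * k / N)
stat, p = chisquare(counts, expected)
print(f"chi2 = {stat:.1f}, p = {p:.3f}")      # a tiny p would question fairness

# Exact covariance of two appearance indicators in a single draw:
cov = k * (k - 1) / (N * (N - 1)) - (k / N) ** 2
print(f"Cov(I_i, I_j) = {cov:.5f}")           # slightly negative
```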
Sommer, Philipp; Kaplan, Jed
2016-04-01
Accurate modelling of large-scale vegetation dynamics, hydrology, and other environmental processes requires meteorological forcing on daily timescales. While meteorological data with high temporal resolution is becoming increasingly available, simulations for the future or distant past are limited by lack of data and poor performance of climate models, e.g., in simulating daily precipitation. To overcome these limitations, we may temporally downscale monthly summary data to a daily time step using a weather generator. Parameterization of such statistical models has traditionally been based on a limited number of observations. Recent developments in the archiving, distribution, and analysis of "big data" datasets provide new opportunities for the parameterization of a temporal downscaling model that is applicable over a wide range of climates. Here we parameterize a WGEN-type weather generator using more than 50 million individual daily meteorological observations, from over 10'000 stations covering all continents, based on the Global Historical Climatology Network (GHCN) and Synoptic Cloud Reports (EECRA) databases. Using the resulting "universal" parameterization and driven by monthly summaries, we downscale mean temperature (minimum and maximum), cloud cover, and total precipitation, to daily estimates. We apply a hybrid gamma-generalized Pareto distribution to calculate daily precipitation amounts, which overcomes much of the inability of earlier weather generators to simulate high amounts of daily precipitation. Our globally parameterized weather generator has numerous applications, including vegetation and crop modelling for paleoenvironmental studies.
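One way to realize the hybrid gamma-generalized Pareto tail is sketched below; the shape, scale, and tail-quantile values are illustrative stand-ins, not the parameters fitted to the GHCN/EECRA data.

```python
# Hybrid sampler: wet-day amounts follow a gamma distribution below a high
# quantile, with exceedances replaced by a generalized Pareto tail.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def sample_wet_day_amounts(n, gamma_shape=0.8, gamma_scale=6.0,
                           tail_q=0.95, gp_shape=0.2):
    threshold = stats.gamma.ppf(tail_q, gamma_shape, scale=gamma_scale)
    amounts = stats.gamma.rvs(gamma_shape, scale=gamma_scale, size=n,
                              random_state=rng)
    heavy = amounts > threshold        # swap the gamma tail for a GP tail
    amounts[heavy] = threshold + stats.genpareto.rvs(
        gp_shape, scale=gamma_scale, size=heavy.sum(), random_state=rng)
    return amounts                     # mm per wet day

x = sample_wet_day_amounts(10_000)
print(f"mean = {x.mean():.1f} mm, 99.9th pct = {np.percentile(x, 99.9):.1f} mm")
```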
A Note on Comparing the Power of Test Statistics at Low Significance Levels.
Morris, Nathan; Elston, Robert
2011-01-01
It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
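The note's point is easy to reproduce numerically. The sketch below evaluates chi-square tests with 1 and 4 degrees of freedom at a shared, arbitrary noncentrality, at both alpha levels mentioned above; the power gap between the two tests narrows markedly at the genome-wide level.

```python
from scipy.stats import chi2, ncx2

ncp = 30.0                                    # shared noncentrality (arbitrary)
for alpha in (0.05, 5e-8):
    for df in (1, 4):
        crit = chi2.ppf(1 - alpha, df)        # critical value at this alpha
        power = ncx2.sf(crit, df, ncp)        # power under the alternative
        print(f"alpha={alpha:<7g} df={df}: power={power:.3f}")
```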
Using Relative Statistics and Approximate Disease Prevalence to Compare Screening Tests.
Samuelson, Frank; Abbey, Craig
2016-11-01
Schatzkin et al. and other authors demonstrated that the ratios of some conditional statistics such as the true positive fraction are equal to the ratios of unconditional statistics, such as disease detection rates, and therefore we can calculate these ratios between two screening tests on the same population even if negative test patients are not followed with a reference procedure and the true and false negative rates are unknown. We demonstrate that this same property applies to an expected utility metric. We also demonstrate that while simple estimates of relative specificities and relative areas under ROC curves (AUC) do depend on the unknown negative rates, we can write these ratios in terms of disease prevalence, and the dependence of these ratios on a posited prevalence is often weak particularly if that prevalence is small or the performance of the two screening tests is similar. Therefore we can estimate relative specificity or AUC with little loss of accuracy, if we use an approximate value of disease prevalence.
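A back-of-envelope sketch with hypothetical screening numbers: the ratio of true positive fractions needs no follow-up of negatives, and the relative specificity barely moves as the posited prevalence changes.

```python
per_1000 = 1000.0
detect_a, detect_b = 4.2, 3.5        # cancers detected per 1000 screens (invented)
recall_a, recall_b = 95.0, 80.0      # positive calls per 1000 screens (invented)

# Ratio of true positive fractions equals the ratio of detection rates.
print(f"relative TPF = {detect_a / detect_b:.2f}")

# Relative specificity via a posited prevalence; note how weakly it depends on it.
for prevalence in (0.002, 0.01):
    fpf_a = (recall_a - detect_a) / (per_1000 * (1 - prevalence))
    fpf_b = (recall_b - detect_b) / (per_1000 * (1 - prevalence))
    print(f"prev={prevalence}: relative specificity = {(1 - fpf_a) / (1 - fpf_b):.4f}")
```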
Castruccio, Stefano
2016-01-01
One of the main challenges when working with modern climate model ensembles is the increasingly larger size of the data produced, and the consequent difficulty in storing large amounts of spatio-temporally resolved information. Many compression algorithms can be used to mitigate this problem, but since they are designed to compress generic scientific datasets, they do not account for the nature of climate model output and they compress only individual simulations. In this work, we propose a different, statistics-based approach that explicitly accounts for the space-time dependence of the data for annual global three-dimensional temperature fields in an initial condition ensemble. The set of estimated parameters is small (compared to the data size) and can be regarded as a summary of the essential structure of the ensemble output; therefore, it can be used to instantaneously reproduce the temperature fields in an ensemble with a substantial saving in storage and time. The statistical model exploits the gridded geometry of the data and parallelization across processors. It is therefore computationally convenient and allows fitting a nontrivial model to a dataset of 1 billion data points with a covariance matrix comprising 10^18 entries. Supplementary materials for this article are available online.
Confidence Testing for Knowledge-Based Global Communities
Jack, Brady Michael; Liu, Chia-Ju; Chiu, Houn-Lin; Shymansky, James A.
2009-01-01
This proposal advocates the position that the use of confidence wagering (CW) during testing can predict the accuracy of a student's test answer selection during between-subject assessments. Data revealed female students were more favorable to taking risks when making CW and less inclined toward risk aversion than their male counterparts. Student…
IEEE Std 101-1972: IEEE guide for the statistical analysis of thermal life test data
International Nuclear Information System (INIS)
Anon.
1992-01-01
Procedures for estimating the thermal life of electrical insulation systems and materials call for life tests at several temperatures, usually well above the expected normal operating temperature. By the selection of high temperatures for the tests, life of the insulation samples will be terminated, according to some selected failure criterion or criteria, within relatively short times -- typically one week to one year. The result of these thermally accelerated life tests will be a set of life values for a corresponding set of temperatures. Usually the data consist of a set of life values for each of two to four (occasionally more) test temperatures, 10 °C to 25 °C apart. The objective then is to establish from these data the mean life values at each temperature and the functional dependence of life on temperature, as well as the statistical consistency and the confidence to be attributed to the mean life values and the functional life-temperature dependence. The purpose of this guide is to assist in this objective and to give guidance for comparing the results of tests on different materials and of different tests on the same materials
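A compressed sketch of the guide's central computation, regressing log10(life) on reciprocal absolute temperature, with invented failure times; defining the temperature index at a 20,000 h reference life is one common convention and is assumed here, not taken from the standard.

```python
import numpy as np
from scipy import stats

temp_c = np.array([180.0, 200.0, 220.0])             # accelerated test temperatures
lives_h = [np.array([9000, 11000, 8000, 10500]),     # hours to failure (invented)
           np.array([2500, 3100, 2200, 2800]),
           np.array([800, 650, 900, 700])]

# Arrhenius model: log10(life) is linear in 1/T (T in kelvin).
x = np.concatenate([np.full(len(lv), 1.0 / (t + 273.15))
                    for t, lv in zip(temp_c, lives_h)])
y = np.concatenate([np.log10(lv) for lv in lives_h])
fit = stats.linregress(x, y)

# Temperature index: temperature (deg C) at which extrapolated life is 20,000 h.
ti = fit.slope / (np.log10(20000.0) - fit.intercept) - 273.15
print(f"log10(life) = {fit.intercept:.2f} + {fit.slope:.0f}/T,  TI(20 kh) ~ {ti:.0f} C")
```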
The Indian nuclear test in a global perspective
International Nuclear Information System (INIS)
Subrahmanyam, K.
1974-01-01
A peaceful nuclear explosion test was carried out by India on 18 May, 1974 at Pokharan in the Rajasthan Desert. The test was carried out as part of India's steady programme to develop nuclear energy for peaceful purposes, and there was no diversion of resources from development as is charged by some nations. The test has broken the monopoly of the nuclear superpowers on conducting nuclear tests, to which they are entitled by the Non-proliferation Treaty (NPT), and has at the same time sharply focused attention on the discriminatory character of the NPT, which does not allow non-nuclear states to carry out nuclear tests even for peaceful purposes. It is argued that India's going nuclear may prove, in the long run, beneficial to the cause of disarmament. (M.G.B.)
Using the method of statistic tests for determining the pressure in the UNC-600 vacuum chamber
International Nuclear Information System (INIS)
Kiver, A.M.; Mirzoev, K.G.
1998-01-01
The aim of the paper is to simulate the process of pumping-out the UNC-600 vacuum chamber. The simulation is carried out by the Monte-Carlo statistic test method. It is shown that the pressure value in every liner of the chamber may be determined from the pressure in the pump branch pipe, which is itself inferred from the discharge current of that pump. It is therefore possible to refine the working pressure in the ion guide of the UNC-600 vacuum chamber [ru]
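The abstract gives no algorithmic detail, but a test-particle Monte Carlo of free molecular flow is the standard machinery behind such simulations. Below is a generic sketch estimating the transmission probability of a cylindrical duct (a Clausing-type factor), not the UNC-600 geometry itself.

```python
import numpy as np

rng = np.random.default_rng(11)

def cosine_dir(normal):
    """Sample a direction from the cosine law about a unit normal."""
    u1, u2 = rng.random(), rng.random()
    st, phi = np.sqrt(u1), 2 * np.pi * u2
    local = np.array([st * np.cos(phi), st * np.sin(phi), np.sqrt(1 - u1)])
    a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(normal, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(normal, t1)
    return local[0] * t1 + local[1] * t2 + local[2] * normal

def transmitted(L, R=1.0):
    r = R * np.sqrt(rng.random())                 # uniform over entrance disk
    p = np.array([r, 0.0, 0.0])
    d = cosine_dir(np.array([0.0, 0.0, 1.0]))     # enter along +z
    while True:
        a, b = d[0]**2 + d[1]**2, 2 * (p[0]*d[0] + p[1]*d[1])
        c = p[0]**2 + p[1]**2 - R**2
        t_wall = (-b + np.sqrt(b*b - 4*a*c)) / (2*a) if a > 0 else np.inf
        t_end = (L - p[2]) / d[2] if d[2] > 0 else -p[2] / d[2] if d[2] < 0 else np.inf
        if t_end < t_wall:                        # leaves through an end face
            return p[2] + t_end * d[2] > L / 2
        p = p + t_wall * d                        # diffuse (cosine) wall bounce
        d = cosine_dir(np.array([-p[0], -p[1], 0.0]) / R)

n = 20_000
prob = sum(transmitted(L=1.0) for _ in range(n)) / n
print(f"transmission probability (L/R = 1): {prob:.3f}")   # ~0.67 expected
```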
DEFF Research Database (Denmark)
Holbech, Henrik
-contribution of each individual to the measured response. Furthermore, the combination of a Gamma-Poisson stochastic part with a Weibull concentration-response model allowed accounting for the inter-replicate variability. Second, we checked for the possibility of optimizing the initial experimental design through...... was twofold. First, we refined the statistical analyses of reproduction data accounting for mortality all along the test period. The variable “number of clutches/eggs produced per individual-day” was used for EC x modelling, as classically done in epidemiology in order to account for the time...
Selection of hidden layer nodes in neural networks by statistical tests
International Nuclear Information System (INIS)
Ciftcioglu, Ozer
1992-05-01
A statistical methodology for selection of the number of hidden layer nodes in feedforward neural networks is described. The method considers the network as an empirical model for the experimental data set subject to pattern classification, so that the selection process becomes a model estimation through parameter identification. The solution is performed as an overdetermined estimation problem for identification, using a nonlinear least-squares minimization technique. The number of hidden layer nodes is determined as a result of hypothesis testing. Accordingly, a network structure that is redundant in its number of parameters is avoided, while the classification error is kept to a minimum. (author). 11 refs.; 4 figs.; 1 tab
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.
Person-fit research in the context of paper-and-pencil tests is reviewed, and some specific problems regarding person fit in the context of computerized adaptive testing (CAT) are discussed. Some new methods are proposed to investigate person fit in a CAT environment. These statistics are based on Statistical Process Control (SPC) theory. A…
Wu, Hao
2018-05-01
In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ2 distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ2 distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.
Testing Genetic Pleiotropy with GWAS Summary Statistics for Marginal and Conditional Analyses.
Deng, Yangqing; Pan, Wei
2017-12-01
There is growing interest in testing genetic pleiotropy, which is when a single genetic variant influences multiple traits. Several methods have been proposed; however, these methods have some limitations. First, all the proposed methods are based on the use of individual-level genotype and phenotype data; in contrast, for logistical, and other, reasons, summary statistics of univariate SNP-trait associations are typically only available based on meta- or mega-analyzed large genome-wide association study (GWAS) data. Second, existing tests are based on marginal pleiotropy, which cannot distinguish between direct and indirect associations of a single genetic variant with multiple traits due to correlations among the traits. Hence, it is useful to consider conditional analysis, in which a subset of traits is adjusted for another subset of traits. For example, in spite of substantial lowering of low-density lipoprotein cholesterol (LDL) with statin therapy, some patients still maintain high residual cardiovascular risk, and, for these patients, it might be helpful to reduce their triglyceride (TG) level. For this purpose, in order to identify new therapeutic targets, it would be useful to identify genetic variants with pleiotropic effects on LDL and TG after adjusting the latter for LDL; otherwise, a pleiotropic effect of a genetic variant detected by a marginal model could simply be due to its association with LDL only, given the well-known correlation between the two types of lipids. Here, we develop a new pleiotropy testing procedure based only on GWAS summary statistics that can be applied for both marginal analysis and conditional analysis. Although the main technical development is based on published union-intersection testing methods, care is needed in specifying conditional models to avoid invalid statistical estimation and inference. In addition to the previously used likelihood ratio test, we also propose using generalized estimating equations under the
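The paper's full procedure is more elaborate, but the union-intersection logic of a marginal test is compact; a hedged sketch using only summary z-scores for two traits:

```python
# Marginal pleiotropy via an intersection-union test: pleiotropy is declared
# only if the SNP is associated with *both* traits, so the combined p-value
# is the maximum of the trait-specific p-values. The conditional-analysis
# step discussed above is omitted here.
from scipy.stats import norm

def marginal_pleiotropy_p(z1, z2):
    p1 = 2 * norm.sf(abs(z1))       # univariate GWAS p-values from z-scores
    p2 = 2 * norm.sf(abs(z2))
    return max(p1, p2)              # intersection-union p-value

print(f"p_pleiotropy = {marginal_pleiotropy_p(5.6, 4.1):.2e}")
```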
Statistical Analysis of the Polarimetric Cloud Analysis and Seeding Test (POLCAST) Field Projects
Ekness, Jamie Lynn
The North Dakota farming industry brings in more than $4.1 billion annually in cash receipts. Unfortunately, agriculture sales vary significantly from year to year, which is due in large part to weather events such as hail storms and droughts. One method to mitigate drought is to use hygroscopic seeding to increase the precipitation efficiency of clouds. The North Dakota Atmospheric Research Board (NDARB) sponsored the Polarimetric Cloud Analysis and Seeding Test (POLCAST) research project to determine the effectiveness of hygroscopic seeding in North Dakota. The POLCAST field projects obtained airborne and radar observations, while conducting randomized cloud seeding. The Thunderstorm Identification Tracking and Nowcasting (TITAN) program is used to analyze radar data (33 usable cases) in determining differences in the duration of the storm, rain rate and total rain amount between seeded and non-seeded clouds. The single ratio of seeded to non-seeded cases is 1.56 (0.28 mm/0.18 mm), or a 56% increase, for the average hourly rainfall during the first 60 minutes after target selection. A seeding effect is indicated, with the lifetime of the storms increasing by 41% between seeded and non-seeded clouds for the first 60 minutes past the seeding decision. A double ratio statistic, a comparison of radar derived rain amount of the last 40 minutes of a case (seed/non-seed), compared to the first 20 minutes (seed/non-seed), is used to account for the natural variability of the cloud system and gives a double ratio of 1.85. The Mann-Whitney test on the double ratio of seeded to non-seeded cases (33 cases) gives a significance (p-value) of 0.063. Bootstrapping analysis of the POLCAST set indicates that 50 cases would provide statistically significant results based on the Mann-Whitney test of the double ratio. All the statistical analyses conducted on the POLCAST data set show that hygroscopic seeding in North Dakota does increase precipitation. While an additional POLCAST field
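To make the double-ratio construction concrete, the sketch below computes it for invented per-case rain amounts and applies the Mann-Whitney test to per-case growth ratios; this is one plausible operationalization, and the study's exact pairing may differ.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
# Per-case rain amounts (mm): first 20 min and last 40 min after decision.
seed_first, seed_last = rng.gamma(2, 0.10, 17), rng.gamma(2, 0.25, 17)
ctrl_first, ctrl_last = rng.gamma(2, 0.10, 16), rng.gamma(2, 0.14, 16)

# Double ratio: late-period seed/no-seed ratio normalized by the early one.
double_ratio = ((seed_last.mean() / ctrl_last.mean())
                / (seed_first.mean() / ctrl_first.mean()))

# Per-case growth ratios feed the two-sample Mann-Whitney test.
stat, p = mannwhitneyu(seed_last / seed_first, ctrl_last / ctrl_first,
                       alternative='greater')
print(f"double ratio = {double_ratio:.2f}, Mann-Whitney p = {p:.3f}")
```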
Statistical testing and power analysis for brain-wide association study.
Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng
2018-04-05
The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, the multiple-comparison correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on the Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis tests using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR), it can reduce the false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need for non-parametric permutation to correct for multiple comparisons; thus, it can efficiently tackle large datasets with high-resolution fMRI images. The utility of our method is shown in a case-control study. Our approach can identify altered functional connectivities in a major depressive disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Dominic Beaulieu-Prévost
2006-03-01
For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles demonstrated that this method was problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CIs) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic undetermination). The demonstration includes a complete example.
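A minimal sketch of that three-value logic, assuming the analyst has declared a minimal substantial effect in advance:

```python
def interpret_ci(lower, upper, minimal_effect):
    """Classify an effect from its CI and a smallest effect of interest."""
    if lower >= minimal_effect:
        return "probable presence of a substantial effect"
    if upper < minimal_effect:
        return "probable absence of a substantial effect"
    return "probabilistic undetermination"

print(interpret_ci(0.35, 0.80, minimal_effect=0.20))   # probable presence
print(interpret_ci(-0.05, 0.12, minimal_effect=0.20))  # probable absence
print(interpret_ci(0.05, 0.45, minimal_effect=0.20))   # undetermined
```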
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-08
A new fault diagnosis method for rotating machinery based on adaptive statistic test filter (ASTF) and Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. An evaluation method using principal component analysis (PCA) is proposed to assess the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.
Directory of Open Access Journals (Sweden)
Ke Li
2016-01-01
A new fault diagnosis method for rotating machinery based on adaptive statistic test filter (ASTF) and Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. An evaluation method using principal component analysis (PCA) is proposed to assess the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-01
A new fault diagnosis method for rotating machinery based on adaptive statistic test filter (ASTF) and Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. An evaluation method using principal component analysis (PCA) is proposed to assess the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
Estimation of In Situ Stresses with Hydro-Fracturing Tests and a Statistical Method
Lee, Hikweon; Ong, See Hong
2018-03-01
At great depths, where borehole-based field stress measurements such as hydraulic fracturing are challenging due to difficult downhole conditions or prohibitive costs, in situ stresses can be indirectly estimated using wellbore failures such as borehole breakouts and/or drilling-induced tensile failures detected by an image log. As part of such efforts, a statistical method has been developed in which borehole breakouts detected on an image log are used for this purpose (Song et al. in Proceedings of the 7th international symposium on in situ rock stress, 2016; Song and Chang in J Geophys Res Solid Earth 122:4033-4052, 2017). The method employs a grid-searching algorithm in which the least and maximum horizontal principal stresses (Sh and SH) are varied, and the corresponding simulated depth-related breakout width distribution as a function of the breakout angle (θB = 90° − half of breakout width) is compared to that observed along the borehole to determine the set of Sh and SH having the lowest misfit between them. An important advantage of the method is that Sh and SH can be estimated simultaneously in vertical wells. To validate the statistical approach, the method is applied to a vertical hole where a set of field hydraulic fracturing tests has been carried out. The stress estimations using the proposed method were found to be in good agreement with the results interpreted from the hydraulic fracturing test measurements.
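A stripped-down sketch of the grid-search idea using the Kirsch hoop-stress solution with fluid pressures ignored and a fixed, assumed strength. With a single strength value one breakout angle constrains only a combination of Sh and SH, so the sketch assumes Sh is known and searches over SH; it is the depth-resolved breakout-width distribution in the paper that permits estimating both simultaneously.

```python
import numpy as np

UCS = 110.0                           # assumed uniaxial compressive strength, MPa

def breakout_angle(sh, sH, ucs=UCS):
    """theta_B in degrees (90 deg minus half the breakout width) from the
    Kirsch hoop stress at a vertical borehole wall, fluid pressures ignored."""
    arg = (sH + sh - ucs) / (2.0 * (sH - sh))
    return np.degrees(0.5 * np.arccos(np.clip(arg, -1.0, 1.0)))

# Synthetic "observed" angles: truth (Sh, SH) = (30, 50) MPa plus picking noise.
rng = np.random.default_rng(2)
observed = breakout_angle(30.0, 50.0) + rng.normal(0.0, 2.0, size=40)

sh = 30.0                             # assume Sh known, e.g. from a minifrac test
grid = np.arange(sh + 2.0, 70.0, 0.25)
misfit = [np.mean((observed - breakout_angle(sh, s)) ** 2) for s in grid]
sH_hat = grid[int(np.argmin(misfit))]
print(f"estimated SH = {sH_hat:.2f} MPa (true value 50)")
```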
Partial discharge testing: a progress report. Statistical evaluation of PD data
International Nuclear Information System (INIS)
Warren, V.; Allan, J.
2005-01-01
It has long been known that comparing the partial discharge results obtained from a single machine is a valuable tool, enabling companies to observe the gradual deterioration of a machine stator winding and thus plan appropriate maintenance for the machine. In 1998, at the annual Iris Rotating Machines Conference (IRMC), a paper was presented that compared thousands of PD test results to establish the criteria for comparing results from different machines and the expected PD levels. At subsequent annual Iris conferences, using similar analytical procedures, papers were presented that supported the previous criteria and: in 1999, established sensor location as an additional criterion; in 2000, evaluated the effect of insulation type and age on PD activity; in 2001, evaluated the effect of manufacturer on PD activity; in 2002, evaluated the effect of operating pressure for hydrogen-cooled machines; in 2003, evaluated the effect of insulation type and setting Trac alarms; in 2004, re-evaluated the effect of manufacturer on PD activity. Before going further with database analysis procedures, it would be prudent to statistically evaluate the anecdotal evidence observed to date. The goal was to determine which machine variables strongly influence the PD results and which do not. Therefore, this year's paper looks at the impact of operating voltage, machine type and winding type on the test results for air-cooled machines. Because of resource constraints, only data collected through 2003 were used; however, as before, they are still standardized for frequency bandwidth and pruned to include only full-load-hot (FLH) results collected for one sensor on operating machines. All questionable data, and data from off-line testing or unusual machine conditions, were excluded, leaving 6824 results. Calibration of on-line PD test results is impractical; therefore, only results obtained using the same method of data collection and noise separation techniques are compared. For
Debate on GMOs health risks after statistical findings in regulatory tests.
de Vendômois, Joël Spiroux; Cellier, Dominique; Vélot, Christian; Clair, Emilie; Mesnage, Robin; Séralini, Gilles-Eric
2010-10-05
We summarize the major points of the international debate on health risk studies for the main commercialized edible GMOs. These GMOs are soy, maize and oilseed rape designed to contain new pesticide residues, since they have been modified to be herbicide-tolerant (mostly to Roundup) or to produce mutated Bt toxins. The debated chronic alimentary risks may come from unpredictable insertional mutagenesis effects, from metabolic effects, or from the new pesticide residues. The most detailed regulatory tests on the GMOs are three-month-long feeding trials in laboratory rats, which are assessed biochemically. The tests are not compulsory and are not independently conducted, and the test data and the corresponding results are kept secret by the companies. Our previous analyses of regulatory raw data at these levels, taking as representative examples the three GM maize varieties NK 603, MON 810, and MON 863, led us to conclude that hepatorenal toxicities were possible and that longer testing was necessary. Our study was criticized by the company developing the GMOs in question and by the regulatory bodies, mainly over the divergent biological interpretations of statistically significant biochemical and physiological effects. We present the scientific reasons for the crucially different biological interpretations and also highlight the shortcomings in the experimental protocols designed by the company. The debate carries an enormous responsibility towards public health and is essential given the absence of traceability and of epidemiological studies in the GMO-producing countries.
Posner, A. J.
2017-12-01
The Middle Rio Grande River (MRG) traverses New Mexico from Cochiti to Elephant Butte reservoirs. Since the 1100s, cultivating and inhabiting the valley of this alluvial river has required various river training works. The mid-20th century saw a concerted effort to tame the river through channelization, jetty jacks, and dam construction. A challenge for river managers is to better understand the interactions between river training works, dam construction, and the geomorphic adjustments of a desert river driven by spring snowmelt and summer thunderstorms that carry water and large sediment inputs from upstream and ephemeral tributaries. Due to its importance to the region, a vast wealth of data exists on conditions along the MRG. The investigation presented herein builds upon previous efforts by combining hydraulic model results, digitized planforms, and stream gage records in various statistical and conceptual models in order to test our understanding of this complex system. Spatially continuous variables were clipped by a set of river cross-section data that has been collected at decadal intervals since the early 1960s, creating a spatially homogeneous database upon which various statistical tests were implemented. Conceptual models relate forcing variables and response variables to estimate river planform changes. The developed database represents a unique opportunity to quantify and test geomorphic conceptual models under the unique characteristics of the MRG. The results of this investigation provide a spatially distributed characterization of planform variable changes, permitting managers to predict planform at a much higher resolution than previously available, and a better understanding of the relationship between flow regime and planform changes such as changes to longitudinal slope, sinuosity, and width. Lastly, data analysis and model interpretation led to the development of a new conceptual model for the impact of ephemeral tributaries in alluvial rivers.
Cosmological Non-Gaussian Signature Detection: Comparing Performance of Different Statistical Tests
Directory of Open Access Journals (Sweden)
O. Forni
2005-09-01
Full Text Available Currently, it appears that the best method for non-Gaussianity detection in the cosmic microwave background (CMB) consists in calculating the kurtosis of the wavelet coefficients. We know that wavelet-kurtosis outperforms other methods such as the bispectrum, the genus, ridgelet-kurtosis, and curvelet-kurtosis on an empirical basis, but relatively few studies have compared other transform-based statistics, such as extreme values, or more recent tools such as higher criticism (HC), or proposed "best possible" choices for such statistics. In this paper, we consider two models for transform-domain coefficients: (a) a power-law model, which seems suited to the wavelet coefficients of simulated cosmic strings, and (b) a sparse mixture model, which seems suitable for the curvelet coefficients of filamentary structure. For model (a), if power-law behavior holds with finite 8th moment, excess kurtosis is an asymptotically optimal detector, but if the 8th moment is not finite, a test based on extreme values is asymptotically optimal. For model (b), if the transform coefficients are very sparse, a recent test, higher criticism, is an optimal detector, but if they are dense, kurtosis is an optimal detector. Empirical wavelet coefficients of simulated cosmic strings have power-law character with infinite 8th moment, while curvelet coefficients of the simulated cosmic strings are not very sparse. In all cases, excess kurtosis seems to be an effective test in moderate-resolution imagery.
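As a toy version of the kurtosis detector discussed above, the sketch below computes the excess kurtosis of a vector of transform coefficients and calibrates a one-sided p-value against a simulated Gaussian null. Using white Gaussian vectors as the null is a simplifying assumption; a realistic test would simulate Gaussian CMB maps and transform them.

```python
import numpy as np
from scipy.stats import kurtosis

def kurtosis_pvalue(coeffs, n_null=2000, seed=0):
    """One-sided Monte Carlo p-value for the excess kurtosis of transform
    coefficients against a Gaussian null.  `coeffs` would be the wavelet
    (or curvelet) coefficients of the map under test."""
    rng = np.random.default_rng(seed)
    k_obs = kurtosis(coeffs, fisher=True)  # excess kurtosis; 0 under Gaussianity
    k_null = np.array([kurtosis(rng.standard_normal(len(coeffs)), fisher=True)
                       for _ in range(n_null)])
    return float(np.mean(k_null >= k_obs))
```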
Mach, Douglas M.; Blakeslee, Richard J.; Bateman, Monte G.
2011-01-01
Using rotating vane electric field mills and Gerdien capacitors, we measured the electric field profile and conductivity during 850 overflights of thunderstorms and electrified shower clouds (ESCs) spanning regions including the Southeastern United States, the Western Atlantic Ocean, the Gulf of Mexico, Central America and adjacent oceans, Central Brazil, and the South Pacific. The overflights include storms over land and ocean, and with positive and negative fields above the storms. Over three-quarters (78%) of the land storms had detectable lightning, while less than half (43%) of the oceanic storms had lightning. Integrating our electric field and conductivity data, we determined total conduction currents and flash rates for each overpass. With knowledge of the storm location (land or ocean) and type (with or without lightning), we determined the mean currents by location and type. The mean current for ocean thunderstorms is 1.7 A, while the mean current for land thunderstorms is 1.0 A. The mean current for ocean ESCs is 0.41 A, and the mean current for land ESCs is 0.13 A. We did not find any significant regional or latitudinally based patterns in our total conduction currents. By combining the aircraft-derived storm currents and flash rates with diurnal flash rate statistics derived from the Lightning Imaging Sensor (LIS) and Optical Transient Detector (OTD) low-Earth-orbiting satellites, we reproduce the diurnal variation in the global electric circuit (i.e., the Carnegie curve) to within 4% for all but two short periods of time. The agreement with the Carnegie curve was obtained without any tuning or adjustment of the satellite or aircraft data. Given our data and assumptions, the mean contributions to the global electric circuit are 1.1 kA (land) and 0.7 kA (ocean) from thunderstorms, and 0.22 kA (ocean) and 0.04 kA (land) from ESCs, resulting in a mean total conduction current estimate for the global electric circuit of 2.0 kA. Mean storm counts are 1100 for land
Directory of Open Access Journals (Sweden)
Tulio Rosembuj
2006-12-01
Full Text Available There is no single globalization, nor is it the result of an individual agent. We could begin by saying that global action has different angles, that the subjects who perform it differ, and that so do its objectives. The global is an invisible invasion of materials and immediate effects.
Toepfer, F.; Cortinas, J. V., Jr.; Kuo, W.; Tallapragada, V.; Stajner, I.; Nance, L. B.; Kelleher, K. E.; Firl, G.; Bernardet, L.
2017-12-01
NOAA develops, operates, and maintains an operational global modeling capability for weather, subseasonal and seasonal prediction for the protection of life and property and the fostering of the US economy. In order to substantially improve the overall performance of, and accelerate advances in, the operational modeling suite, NOAA is partnering with NCAR to design and build the Global Modeling Test Bed (GMTB). The GMTB has been established to provide a platform and a capability for researchers to contribute to this advancement, primarily through the development of the physical parameterizations needed to improve operational NWP. The strategy to achieve this goal relies on effectively leveraging global expertise through a modern collaborative software development framework. This framework consists of a repository of vetted and supported physical parameterizations known as the Common Community Physics Package (CCPP), a common well-documented interface known as the Interoperable Physics Driver (IPD) for combining schemes into suites and for their configuration and connection to dynamic cores, and an open, evidence-based governance process for managing the development and evolution of the CCPP. In addition, a physics test harness designed to work within this framework has been established in order to facilitate easier like-for-like comparison of physics advancements. This paper will present an overview of the design of the CCPP and the test platform. Additionally, an overview of new opportunities for physics developers to engage in the process, from implementing code for CCPP/IPD compliance to testing their developments within an operational-like software environment, will be presented. Insight will also be given into how a development is elevated to CCPP-supported status, the precursor to broad availability and use within operational NWP. An overview of how the GMTB can be expanded to support other global or regional modeling capabilities will also be presented.
A new efficient statistical test for detecting variability in the gene expression data.
Mathur, Sunil; Dolo, Samuel
2008-08-01
DNA microarray technology allows researchers to monitor the expression of thousands of genes under different conditions. The detection of differential gene expression under two different conditions is very important in microarray studies. Microarray experiments are multi-step procedures, and each step is a potential source of variance. This makes the measurement of variability difficult, because an approach based on gene-by-gene estimation of variance has few degrees of freedom. It is quite possible that the assumption of equal variance for all expression levels does not hold, and the assumption of normality of gene expressions may also fail. It is therefore essential to have a statistical procedure that does not rely on the normality assumption and that can detect genes with differential variance efficiently. The detection of differential gene expression variance will allow us to identify experimental variables that affect different biological processes and the accuracy of DNA microarray measurements. In this article, a new nonparametric test for scale is developed based on the arctangent of the ratio of two expression levels. Most of the tests available in the literature require the assumption of a normal distribution, which makes them inapplicable in many situations, and it is hard to verify the suitability of the normality assumption for a given data set. The proposed test does not require a distributional assumption for the underlying population, which makes it more practical and widely applicable. The asymptotic relative efficiency is calculated under different distributions, showing that the proposed test is very powerful when the assumption of normality breaks down. Monte Carlo simulation studies are performed to compare the power of the proposed test with some existing procedures. The proposed test is found to be more powerful than commonly used tests under almost all the distributions considered in the study. A
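The abstract does not give the statistic in closed form; the sketch below is an illustrative permutation test that uses the mean arctangent of the ratio of expression levels as a bounded, outlier-resistant score for scale differences. It is a stand-in inspired by the description, not the authors' test.

```python
import numpy as np

def arctan_ratio_scale_test(x, y, n_perm=10000, seed=1):
    """Permutation test for a scale difference between two positive
    expression vectors, scored by the mean arctangent of the ratio --
    a bounded statistic that is insensitive to heavy tails."""
    rng = np.random.default_rng(seed)
    score = lambda a, b: np.mean(np.arctan2(a, b))
    obs = score(x, y)
    pooled, n = np.concatenate([x, y]), len(x)
    perm = np.empty(n_perm)
    for i in range(n_perm):
        p = rng.permutation(pooled)
        perm[i] = score(p[:n], p[n:])
    centre = perm.mean()
    # Two-sided p-value: how often a permuted score deviates at least as far.
    return obs, float(np.mean(np.abs(perm - centre) >= abs(obs - centre)))
```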
International Nuclear Information System (INIS)
Martorell, S.; Serradell, V.; Munoz, A.; Sanchez, A.
1997-01-01
The background, objective, scope, detailed working plan, follow-up and final product of the project ''Global optimization of maintenance and surveillance testing based on reliability and probabilistic safety assessment'' are described.
Noel, Jean; Prieto, Juan C.; Styner, Martin
2017-03-01
Functional Analysis of Diffusion Tensor Tract Statistics (FADTTS) is a toolbox for analysis of white matter (WM) fiber tracts. It allows associating diffusion properties along major WM bundles with a set of covariates of interest, such as age, diagnostic status and gender, and with the structure of the variability of these WM tract properties. However, to use this toolbox, a user must have intermediate knowledge of a scripting language (MATLAB). FADTTSter was created to overcome this issue and make the statistical analysis accessible to any non-technical researcher. FADTTSter is actively being used by researchers at the University of North Carolina. FADTTSter guides non-technical users through a series of steps, including quality control of subjects and fibers, in order to set up the necessary parameters to run FADTTS. Additionally, FADTTSter implements interactive charts for FADTTS' outputs. These interactive charts enhance the researcher's experience and facilitate the analysis of the results. FADTTSter's motivation is to improve usability and provide a new analysis tool to the community that complements FADTTS. Ultimately, by opening FADTTS to a broader audience, FADTTSter seeks to accelerate hypothesis testing in neuroimaging studies involving heterogeneous clinical data and diffusion tensor imaging. This work is submitted to the Biomedical Applications in Molecular, Structural, and Functional Imaging conference. The source code of this application is available in NITRC.
Using the Δ3 statistic to test for missed levels in mixed sequence neutron resonance data
International Nuclear Information System (INIS)
Mulhall, Declan
2009-01-01
The Δ3(L) statistic is studied as a tool to detect missing levels in the neutron resonance data where two sequences are present. These systems are problematic because there is no level repulsion, and the resonances can be too close to resolve. Δ3(L) is a measure of the fluctuations in the number of levels in an interval of length L on the energy axis. The method used is tested on ensembles of mixed Gaussian orthogonal ensemble spectra, with a known fraction of levels (x%) randomly depleted, and can accurately return x. The accuracy of the method as a function of spectrum size is established. The method is used on neutron resonance data for 11 isotopes with either s-wave neutrons on odd-A isotopes, or p-wave neutrons on even-A isotopes. The method compares favorably with a maximum likelihood method applied to the level spacing distribution. Nuclear data ensembles were made from 20 isotopes in total, and their Δ3(L) statistics are discussed in the context of random matrix theory.
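For reference, a numerical sketch of the Dyson-Mehta Δ3(L) statistic on an unfolded spectrum (unit mean spacing): window positions are drawn at random and the level staircase is fit by least squares in each window. For a GOE spectrum the result grows logarithmically with L.

```python
import numpy as np

def delta3(levels, L, n_windows=200, n_grid=400, seed=2):
    """Dyson-Mehta Delta_3(L): mean-squared deviation of the level staircase
    N(E) from its best-fit straight line over windows of length L, averaged
    over window positions.  `levels` should be unfolded to unit mean spacing."""
    rng = np.random.default_rng(seed)
    levels = np.sort(levels)
    starts = rng.uniform(levels[0], levels[-1] - L, n_windows)
    vals = []
    for e0 in starts:
        x = np.linspace(e0, e0 + L, n_grid)
        N = np.searchsorted(levels, x)                # staircase function
        A = np.vstack([x, np.ones_like(x)]).T
        coef, *_ = np.linalg.lstsq(A, N, rcond=None)  # best-fit line
        vals.append(np.mean((N - A @ coef) ** 2))     # approximates (1/L)*integral
    return float(np.mean(vals))
```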
Energy Technology Data Exchange (ETDEWEB)
Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL
2016-01-01
Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing to generate test cases that expose such subtle errors. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
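A crude stand-in for the perturbation experiment described above, using OpenCV's stock HOG people detector (cv2.HOGDescriptor with the default people SVM, both part of the public OpenCV API): detections on a frame and on an imperceptibly noise-perturbed copy are compared. The paper's symbolic decision procedures are not reproduced here.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detections_stable(frame, seed=3):
    """Return False when an imperceptible +/-2 grey-level perturbation
    changes the number of detected people in the frame."""
    rng = np.random.default_rng(seed)
    noise = rng.integers(-2, 3, frame.shape, dtype=np.int16)
    perturbed = np.clip(frame.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    boxes, _ = hog.detectMultiScale(frame)
    boxes_p, _ = hog.detectMultiScale(perturbed)
    return len(boxes) == len(boxes_p)
```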
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Conradsen, Knut; Skriver, Henning
2016-01-01
Based on an omnibus likelihood ratio test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution with an associated p-value and a factorization of this test statistic, change analysis in a short sequence of multilook, polarimetric SAR data...... in the covariance matrix representation is carried out. The omnibus test statistic and its factorization detect if and when change(s) occur. The technique is demonstrated on airborne EMISAR L-band data but may be applied to Sentinel-1, Cosmo-SkyMed, TerraSAR-X, ALOS and RadarSat-2 or other dual- and quad...
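A condensed sketch of the omnibus test statistic for k complex covariance matrices, each estimated from the same number of looks, following the factorized likelihood-ratio test of Conradsen et al.; the small-sample correction factors applied in the published test are omitted here for brevity.

```python
import numpy as np
from scipy.stats import chi2

def omnibus_lnq(cov_list, n_looks):
    """Omnibus likelihood-ratio statistic for the equality of k complex
    Wishart covariance matrices (p x p, each from n_looks looks).  -2 ln Q
    is referred to a chi-square with (k-1)*p^2 degrees of freedom."""
    k, p = len(cov_list), cov_list[0].shape[0]
    X = [n_looks * C for C in cov_list]       # sum matrices
    logdet = lambda M: np.log(np.abs(np.linalg.det(M)))
    lnq = n_looks * (p * k * np.log(k)
                     + sum(logdet(Xi) for Xi in X)
                     - k * logdet(sum(X)))
    return lnq, chi2.sf(-2.0 * lnq, (k - 1) * p ** 2)
```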
Tabor, Josh
2010-01-01
On the 2009 AP Statistics Exam, students were asked to create a statistic to measure skewness in a distribution. This paper explores several of the most popular student responses and evaluates which statistic performs best when sampling from various skewed populations. (Contains 8 figures, 3 tables, and 4 footnotes.)
DWPF Sample Vial Insert Study-Statistical Analysis of DWPF Mock-Up Test Data
Energy Technology Data Exchange (ETDEWEB)
Harris, S.P. [Westinghouse Savannah River Company, AIKEN, SC (United States)
1997-09-18
This report is prepared as part of Technical/QA Task Plan WSRC-RP-97-351, which was issued in response to Technical Task Request HLW/DWPF/TTR-970132 submitted by DWPF. Presented in this report is a statistical analysis of DWPF mock-up test data for the evaluation of two new analytical methods which use insert samples from the existing Hydragard™ sampler. The first is a new hydrofluoric acid based method called the Cold Chemical Method (Cold Chem) and the second is a modified fusion method. Either new DWPF analytical method could result in a two- to three-fold improvement in sample analysis time. Both new methods use the existing Hydragard™ sampler to collect a smaller insert sample from the process sampling system. The insert testing methodology applies to the DWPF Slurry Mix Evaporator (SME) and Melter Feed Tank (MFT) samples. The insert sample is named after the initial trials, which placed the container inside the sample (peanut) vials. Samples in small 3 ml containers (inserts) are analyzed by either the Cold Chem method or a modified fusion method. The current analytical method uses a Hydragard™ sample station to obtain nearly full 15 ml peanut vials. The samples are prepared for Inductively Coupled Plasma (ICP) analysis by a multi-step process of drying, vitrification, grinding and, finally, dissolution by either mixed acid or fusion. In contrast, the insert sample is placed directly in the dissolution vessel, eliminating the drying, vitrification and grinding operations for the Cold Chem method. Although the modified fusion still requires drying and calcine conversion, the process is rapid owing to the decreased sample size and the fact that no vitrification step is required. A slurry feed simulant material was acquired from the TNX pilot facility from the test run designated PX-7. The mock-up test data were gathered on the basis of a statistical design presented in SRT-SCS-97004 (Rev. 0). Simulant PX-7 samples were taken in the DWPF Analytical Cell Mock
DWPF Sample Vial Insert Study-Statistical Analysis of DWPF Mock-Up Test Data
International Nuclear Information System (INIS)
Harris, S.P.
1997-01-01
This report is prepared as part of Technical/QA Task Plan WSRC-RP-97-351, which was issued in response to Technical Task Request HLW/DWPF/TTR-970132 submitted by DWPF. Presented in this report is a statistical analysis of DWPF mock-up test data for the evaluation of two new analytical methods which use insert samples from the existing Hydragard™ sampler. The first is a new hydrofluoric acid based method called the Cold Chemical Method (Cold Chem) and the second is a modified fusion method. Both new methods use the existing Hydragard™ sampler to collect a smaller insert sample from the process sampling system. The insert testing methodology applies to the DWPF Slurry Mix Evaporator (SME) and Melter Feed Tank (MFT) samples. Samples in small 3 ml containers (inserts) are analyzed by either the Cold Chem method or a modified fusion method. The current analytical method uses a Hydragard™ sample station to obtain nearly full 15 ml peanut vials. The samples are prepared for Inductively Coupled Plasma (ICP) analysis by a multi-step process of drying, vitrification, grinding and, finally, dissolution by either mixed acid or fusion. In contrast, the insert sample is placed directly in the dissolution vessel, eliminating the drying, vitrification and grinding operations for the Cold Chem method. Although the modified fusion still requires drying and calcine conversion, the process is rapid owing to the decreased sample size and the fact that no vitrification step is required. A slurry feed simulant material was acquired from the TNX pilot facility from the test run designated PX-7. The mock-up test data were gathered on the basis of a statistical design presented in SRT-SCS-97004 (Rev. 0). Simulant PX-7 samples were taken in the DWPF Analytical Cell Mock-up Facility using 3 ml inserts and 15 ml peanut vials. A number of the insert samples were analyzed by Cold Chem and compared with full peanut vial samples analyzed by the current methods. The remaining inserts were analyzed by
Gray, Alistair; Veale, Jaimie F.; Binson, Diane; Sell, Randell L.
2013-01-01
Objective. Effectively addressing health disparities experienced by sexual minority populations requires high-quality official data on sexual orientation. We developed a conceptual framework of sexual orientation to improve the quality of sexual orientation data in New Zealand's Official Statistics System. Methods. We reviewed conceptual and methodological literature, culminating in a draft framework. To improve the framework, we held focus groups and key-informant interviews with sexual minority stakeholders and producers and consumers of official statistics. An advisory board of experts provided additional guidance. Results. The framework proposes working definitions of the sexual orientation topic and measurement concepts, describes dimensions of the measurement concepts, discusses variables framing the measurement concepts, and outlines conceptual grey areas. Conclusion. The framework proposes standard definitions and concepts for the collection of official sexual orientation data in New Zealand. It presents a model for producers of official statistics in other countries, who wish to improve the quality of health data on their citizens. PMID:23840231
Zheng, Yinggan; Gierl, Mark J.; Cui, Ying
2010-01-01
This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…
Luh, Wei-Ming; Guo, Jiin-Huarng
2005-01-01
To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…
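A closely related robust procedure that is easy to state in full is Yuen's trimmed-means Welch test (trimmed means with winsorized variances); the sketch below implements it as a reference point. Hall's invertible transformation, which the paper adds to correct for skewness, is omitted.

```python
import numpy as np
from scipy import stats

def yuen_welch(x, y, trim=0.2):
    """Welch-type test on trimmed means with winsorized variances
    (Yuen's test)."""
    def winsorized_var(a, tr):
        a = np.sort(a)
        g = int(np.floor(tr * len(a)))
        w = a.copy()
        w[:g], w[len(a) - g:] = a[g], a[len(a) - g - 1]
        return np.var(w, ddof=1)
    nx, ny = len(x), len(y)
    hx = nx - 2 * int(np.floor(trim * nx))    # effective sample sizes
    hy = ny - 2 * int(np.floor(trim * ny))
    dx = (nx - 1) * winsorized_var(x, trim) / (hx * (hx - 1))
    dy = (ny - 1) * winsorized_var(y, trim) / (hy * (hy - 1))
    t = (stats.trim_mean(x, trim) - stats.trim_mean(y, trim)) / np.sqrt(dx + dy)
    df = (dx + dy) ** 2 / (dx ** 2 / (hx - 1) + dy ** 2 / (hy - 1))
    return t, 2 * stats.t.sf(abs(t), df)
```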
Andruşcă Maria Carmen
2013-01-01
The field of globalization has highlighted an interdependence among nations, implying a more harmonious understanding shaped by their daily interaction, by the promotion of peace, and by the streamlining and effectiveness of the global economy. For globalization to function, the developing countries must be involved, with the help of the developed ones. The international community can contribute to the institution of the development environment of the gl...
Palazón, L; Navas, A
2017-06-01
Information on sediment contribution and transport dynamics from contributing catchments is needed to develop management plans that tackle environmental problems related to the effects of fine sediment, such as reservoir siltation. In this respect, the fingerprinting technique is an indirect technique known to be valuable and effective for sediment source identification in river catchments. Large variability in sediment delivery was found in previous studies in the Barasona catchment (1509 km², Central Spanish Pyrenees). Simulation results with SWAT and fingerprinting approaches identified badlands and agricultural uses as the main contributors to sediment supply in the reservoir. In this study, three statistical procedures for selecting the optimum composite fingerprint were assessed: (1) the Kruskal-Wallis H-test alone, (2) the Kruskal-Wallis H-test followed by discriminant function analysis, and (3) principal components analysis followed by discriminant function analysis. Source contribution results differed between the assessed options, with the greatest differences observed for option #3, the two-step process of principal components analysis and discriminant function analysis. The characteristics of the solutions of the applied mixing model and the conceptual understanding of the catchment showed that the most reliable solution was achieved using #2, the two-step process of Kruskal-Wallis H-test and discriminant function analysis. The assessment shows the importance of the statistical procedure used to define the optimum composite fingerprint in sediment fingerprinting applications. Copyright © 2016 Elsevier Ltd. All rights reserved.
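A minimal sketch of the screening step in the two-step procedure identified as most reliable: tracers that fail to discriminate the source groups under the Kruskal-Wallis H-test are dropped before discriminant function analysis. The data layout (a mapping from tracer name to per-source measurement arrays) is a hypothetical convention for illustration.

```python
from scipy.stats import kruskal

def screen_tracers(tracer_data, alpha=0.05):
    """Step 1 of the two-step fingerprint selection: keep the tracer
    properties that discriminate the source groups under the Kruskal-Wallis
    H-test; step 2 (discriminant function analysis) would then reduce the
    survivors to a minimal composite fingerprint."""
    selected = []
    for tracer, groups in tracer_data.items():
        h, p = kruskal(*groups)
        if p < alpha:           # sources differ in this tracer
            selected.append((tracer, h, p))
    return selected
```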
Statistical testing of the full-range leadership theory in nursing.
Kanste, Outi; Kääriäinen, Maria; Kyngäs, Helvi
2009-12-01
The aim of this study is to test statistically the structure of the full-range leadership theory in nursing. The data were gathered by postal questionnaires from nurses and nurse leaders working in healthcare organizations in Finland. A follow-up study was performed 1 year later. The sample consisted of 601 nurses and nurse leaders, and the follow-up study had 78 respondents. The theory was tested through structural equation modelling, standard regression analysis and two-way ANOVA. Rewarding transformational leadership seems to promote, and passive laissez-faire leadership to reduce, willingness to exert extra effort, perceptions of leader effectiveness and satisfaction with the leader. Active management-by-exception seems to reduce willingness to exert extra effort and perception of leader effectiveness. Rewarding transformational leadership remained a strong explanatory factor for all outcome variables measured 1 year later. The data supported the main structure of the full-range leadership theory, lending support to the universal nature of the theory.
Directory of Open Access Journals (Sweden)
Henry Braun
2017-11-01
Full Text Available Abstract Background Economists are making increasing use of measures of student achievement obtained through large-scale survey assessments such as NAEP, TIMSS, and PISA. The construction of these measures, employing plausible value (PV methodology, is quite different from that of the more familiar test scores associated with assessments such as the SAT or ACT. These differences have important implications both for utilization and interpretation. Although much has been written about PVs, it appears that there are still misconceptions about whether and how to employ them in secondary analyses. Methods We address a range of technical issues, including those raised in a recent article that was written to inform economists using these databases. First, an extensive review of the relevant literature was conducted, with particular attention to key publications that describe the derivation and psychometric characteristics of such achievement measures. Second, a simulation study was carried out to compare the statistical properties of estimates based on the use of PVs with those based on other, commonly used methods. Results It is shown, through both theoretical analysis and simulation, that under fairly general conditions appropriate use of PV yields approximately unbiased estimates of model parameters in regression analyses of large scale survey data. The superiority of the PV methodology is particularly evident when measures of student achievement are employed as explanatory variables. Conclusions The PV methodology used to report student test performance in large scale surveys remains the state-of-the-art for secondary analyses of these databases.
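The combining rules for analyses repeated over M plausible values are short enough to state directly; the sketch below implements the standard Rubin-style pooling of per-PV estimates and their sampling variances, which underlies the approach the paper recommends.

```python
import numpy as np

def combine_plausible_values(estimates, variances):
    """Rubin-style combining rules for an analysis repeated over M plausible
    values: the point estimate is the mean of the per-PV estimates, and the
    total variance adds the mean sampling variance to the between-PV
    variance inflated by (1 + 1/M)."""
    est, var = np.asarray(estimates, float), np.asarray(variances, float)
    m = len(est)
    qbar = est.mean()               # pooled point estimate
    between = est.var(ddof=1)       # between-PV variance
    total_var = var.mean() + (1 + 1 / m) * between
    return qbar, np.sqrt(total_var)
```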
Statistical methods for the analysis of a screening test for chronic beryllium disease
Energy Technology Data Exchange (ETDEWEB)
Frome, E.L.; Neubert, R.L. [Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Smith, M.H.; Littlefield, L.G.; Colyer, S.P. [Oak Ridge Inst. for Science and Education, TN (United States). Medical Sciences Div.
1994-10-01
The lymphocyte proliferation test (LPT) is a noninvasive screening procedure used to identify persons who may have chronic beryllium disease. A practical problem in the analysis of LPT well counts is the occurrence of outlying data values (approximately 7% of the time). A log-linear regression model is used to describe the expected well counts for each set of test conditions. The variance of the well counts is proportional to the square of the expected counts, and two resistant regression methods are used to estimate the parameters of interest. The first approach uses least absolute values (LAV) on the log of the well counts to estimate beryllium stimulation indices (SIs) and the coefficient of variation. The second approach uses a resistant regression version of maximum quasi-likelihood estimation. A major advantage of the resistant regression methods is that it is not necessary to identify and delete outliers. These two new methods for the statistical analysis of the LPT data and the outlier rejection method that is currently being used are applied to 173 LPT assays. The authors strongly recommend the LAV method for routine analysis of the LPT.
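Median (least-absolute-values) regression is readily available in standard libraries; the sketch below shows an LAV fit of a log-linear model for well counts, mirroring the first resistant approach described above. The design matrix coding the test conditions is a hypothetical placeholder, and the resistant quasi-likelihood variant is not shown.

```python
import numpy as np
import statsmodels.api as sm

def lav_log_fit(counts, design):
    """Least-absolute-values fit of a log-linear model for LPT well counts:
    median (q=0.5) regression minimizes the sum of absolute residuals, so
    outlying wells need not be identified and deleted beforehand."""
    X = sm.add_constant(design)
    fit = sm.QuantReg(np.log(counts), X).fit(q=0.5)
    return fit.params, fit.resid
```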
Konijn, Elly A.; van de Schoot, Rens; Winter, Sonja D.; Ferguson, Christopher J.
2015-01-01
The present paper argues that an important cause of publication bias resides in traditional frequentist statistics forcing binary decisions. An alternative approach through Bayesian statistics provides various degrees of support for any hypothesis allowing balanced decisions and proper null
International Nuclear Information System (INIS)
Létourneau, Daniel; McNiven, Andrea; Keller, Harald; Wang, An; Amin, Md Nurul; Pearce, Jim; Norrlinger, Bernhard; Jaffray, David A.
2014-01-01
Purpose: High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. Methods: The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3–4 times/week over a period of 10–11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ±0.5 and ±1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. Results: The precision of the MLC performance monitoring QC test and the MLC itself was within ±0.22 mm for most MLC leaves
Létourneau, Daniel; Wang, An; Amin, Md Nurul; Pearce, Jim; McNiven, Andrea; Keller, Harald; Norrlinger, Bernhard; Jaffray, David A
2014-12-01
High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3-4 times/week over a period of 10-11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ± 0.5 and ± 1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. The precision of the MLC performance monitoring QC test and the MLC itself was within ± 0.22 mm for most MLC leaves and the majority of the
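For reference, the control limits of an individuals chart of the kind used in this monitoring system can be computed from the mean moving range; the ±0.5 mm check shown alongside corresponds to the tighter of the two specification sets mentioned above.

```python
import numpy as np

def individuals_chart(errors_mm, spec=0.5):
    """Individuals-chart limits for one leaf's positional-error series:
    centre line +/- 2.66 times the mean moving range (the standard
    conversion from the average moving range to 3-sigma limits).  Also
    flags readings outside a +/-spec mm specification."""
    x = np.asarray(errors_mm, float)
    mr = np.abs(np.diff(x)).mean()          # mean moving range
    centre = x.mean()
    lcl, ucl = centre - 2.66 * mr, centre + 2.66 * mr
    out_of_control = (x < lcl) | (x > ucl)
    out_of_spec = np.abs(x) > spec
    return (lcl, centre, ucl), out_of_control, out_of_spec
```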
Directory of Open Access Journals (Sweden)
Elżbieta Sandurska
2016-12-01
Full Text Available Introduction: Using statistical software typically does not require extensive statistical knowledge, making it easy to perform even complex analyses. Consequently, test selection criteria and important assumptions may be overlooked or given insufficient consideration, and the results may then lead to wrong conclusions. Aim: To discuss issues related to assumption violations in the case of Student's t-test and one-way ANOVA, two parametric tests frequently used in sports science, and to recommend solutions. Description of the state of knowledge: Student's t-test and ANOVA are parametric tests, so the assumptions that need to be satisfied include normal distribution of the data and homogeneity of variances across groups. If the assumptions are violated, the original design of the test is impaired and the test may give spurious results. A simple way to normalize the data and stabilize the variance is to use transformations. If this approach fails, a good alternative is a nonparametric test, such as the Mann-Whitney, Kruskal-Wallis or Wilcoxon signed-rank test. Summary: Thorough verification of the assumptions of parametric tests allows for the correct selection of statistical tools, which is the basis of well-grounded statistical analysis. With a few simple rules, testing the patterns characteristic of sports science data comes down to a straightforward procedure.
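A minimal decision path of the kind the paper recommends, using standard SciPy tests: verify normality and homogeneity of variance, then fall back to a nonparametric test when the parametric assumptions fail. The α = 0.05 screening level is an illustrative choice.

```python
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Check normality (Shapiro-Wilk) and homogeneity of variance (Levene),
    then choose Student's t, Welch's t or Mann-Whitney accordingly."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if not normal:
        return "Mann-Whitney U", stats.mannwhitneyu(a, b)
    equal_var = stats.levene(a, b).pvalue > alpha
    name = "Student's t" if equal_var else "Welch's t"
    return name, stats.ttest_ind(a, b, equal_var=equal_var)
```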
Testing of a "smart-pebble" for measuring particle transport statistics
Kitsikoudis, Vasileios; Avgeris, Loukas; Valyrakis, Manousos
2017-04-01
This paper presents preliminary results from novel experiments aiming to assess coarse sediment transport statistics for a range of transport conditions, via the use of an innovative "smart-pebble" device. This device is a waterproof sphere, 7 cm in diameter, equipped with a number of sensors that provide information about the velocity, acceleration and position of the "smart-pebble" within the flow field. A series of specifically designed experiments is carried out to monitor the entrainment of a "smart-pebble" under fully developed, uniform, turbulent flow conditions over a hydraulically rough bed. Specifically, the bed surface is configured in three sections, each consisting of well-packed glass beads of slightly increasing size in the downstream direction. The first section has a streamwise length of L1=150 cm and a bead size of D1=15 mm, the second section has a length of L2=85 cm and a bead size of D2=22 mm, and the third bed section has a length of L3=55 cm and a bead size of D3=25.4 mm. Two cameras monitor the area of interest to provide additional information regarding the "smart-pebble" movement. Three-dimensional flow measurements are obtained with the aid of an acoustic Doppler velocimeter along a measurement grid to assess the flow forcing field. A wide range of flow rates near and above the threshold of entrainment is tested, using four distinct densities for the "smart-pebble", which affect its transport speed and total momentum. The acquired data are analyzed to derive Lagrangian transport statistics, and the implications of such an experiment for the transport of particles by rolling are discussed. The flow conditions for the initiation of motion, particle accelerations and equilibrium particle velocities (translating into transport rates), and statistics of particle impact and motion, can be extracted from the acquired data, which can be further compared to develop meaningful insights for sediment transport
Stolz, Douglas C.; Rutledge, Steven A.; Pierce, Jeffrey R.; van den Heever, Susan C.
2017-07-01
The objective of this study is to determine the relative contributions of normalized convective available potential energy (NCAPE), cloud condensation nuclei (CCN) concentrations, warm cloud depth (WCD), vertical wind shear (SHEAR), and environmental relative humidity (RH) to the variability of lightning and radar reflectivity within convective features (CFs) observed by the Tropical Rainfall Measuring Mission (TRMM) satellite. Our approach incorporates multidimensional binned representations of observations of CFs and modeled thermodynamics, kinematics, and CCN as inputs to develop approximations for total lightning density (TLD) and the average height of 30 dBZ radar reflectivity (AVGHT30). The results suggest that TLD and AVGHT30 increase with increasing NCAPE, increasing CCN, decreasing WCD, increasing SHEAR, and decreasing RH. Multiple-linear approximations for lightning and radar quantities using the aforementioned predictors account for significant portions of the variance in the binned data set (R2 ≈ 0.69-0.81). The standardized weights attributed to CCN, NCAPE, and WCD are largest, the standardized weight of RH varies relative to other predictors, while the standardized weight for SHEAR is comparatively small. We investigate these statistical relationships for collections of CFs within various geographic areas and compare the aerosol (CCN) and thermodynamic (NCAPE and WCD) contributions to variations in the CF population in a partial sensitivity analysis based on multiple-linear regression approximations computed herein. A global lightning parameterization is developed; the average difference between predicted and observed TLD decreases from +21.6 to +11.6% when using a hybrid approach to combine separate approximations over continents and oceans, thus highlighting the need for regionally targeted investigations in the future.
Directory of Open Access Journals (Sweden)
Frank Pega
2013-01-01
Full Text Available Objective. Effectively addressing health disparities experienced by sexual minority populations requires high-quality official data on sexual orientation. We developed a conceptual framework of sexual orientation to improve the quality of sexual orientation data in New Zealand’s Official Statistics System. Methods. We reviewed conceptual and methodological literature, culminating in a draft framework. To improve the framework, we held focus groups and key-informant interviews with sexual minority stakeholders and producers and consumers of official statistics. An advisory board of experts provided additional guidance. Results. The framework proposes working definitions of the sexual orientation topic and measurement concepts, describes dimensions of the measurement concepts, discusses variables framing the measurement concepts, and outlines conceptual grey areas. Conclusion. The framework proposes standard definitions and concepts for the collection of official sexual orientation data in New Zealand. It presents a model for producers of official statistics in other countries, who wish to improve the quality of health data on their citizens.
Link System Performance at the First Global Test of the CMS Alignment System
Energy Technology Data Exchange (ETDEWEB)
Arce, P.; Calvo, E.; Figueroa, C.F.; Rodrigo, T.; Vila, I.; Virto, A.L. [Universidad de Cantabria (Spain); Barcala, J.M.; Fernandez, M.G.; Ferrando, A.; Josa, M.I.; Molinero, A.; Oller, J.C. [CIEMAT, Madrid (Spain)
2001-07-01
A test of components and a global test of the CMS alignment system were performed in hall 14 of the ISR tunnel at CERN during the summer of 2000. Positions are reconstructed and compared to survey measurements. The results obtained from the measurements of the Link System are presented here. (Author) 12 refs.
Link System Performance at the First Global Test of the CMS Alignment System
International Nuclear Information System (INIS)
Arce, P.; Calvo, E.; Figueroa, C. F.; Rodrigo, T.; Vila, I.; Virto, A. L.; Barcala, J. M.; Fernandez, M. G.; Ferrando, A.; Josa, M. I.; Molinero, A.; Oller, J. C.
2001-01-01
A test of components and a global test of the CMS alignment system were performed in hall 14 of the ISR tunnel at CERN during the summer of 2000. Positions are reconstructed and compared to survey measurements. The results obtained from the measurements of the Link System are presented here. (Author) 12 refs.
Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.
1999-01-01
Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first; the areas of alarm are then reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc correctly identified the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and for M8-MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40%, and five were predicted by M8-MSc in 13%, of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
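The significance figures quoted above can be reproduced with a simple binomial null in which events fall into the alarm volume independently with probability equal to the normalized alarm fraction; the sketch below performs this calculation (a simplification of the empirical-measure test used in the paper).

```python
from scipy.stats import binom

def alarm_confidence(n_events, n_hits, alarm_fraction):
    """Confidence that the prediction score beats chance, under a null in
    which each event independently falls inside the alarm volume with
    probability `alarm_fraction`.  E.g. 5 hits out of 5 events with a 36%
    alarm volume gives 1 - 0.36**5, i.e. confidence beyond 99%."""
    p_value = binom.sf(n_hits - 1, n_events, alarm_fraction)
    return 1.0 - p_value
```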
Awédikian , Roy; Yannou , Bernard
2012-01-01
With the growing complexity of industrial software applications, industry is looking for efficient and practical methods to validate its software. This paper develops a model-based statistical testing approach that automatically generates online and offline test cases for embedded software. It discusses an integrated framework that combines solutions for three major software testing research questions: (i) how to select test inputs; (ii) how to predict the expected...
A method to identify dependencies between organizational factors using statistical independence test
International Nuclear Information System (INIS)
Kim, Y.; Chung, C.H.; Kim, C.; Jae, M.; Jung, J.H.
2004-01-01
A considerable number of studies on organizational factors in nuclear power plants have been carried out, especially in recent years, most of which have assumed organizational factors to be independent. However, since organizational factors characterize an organization in terms of safety, efficiency, and so on, some factors can be expected to be closely related. Consequently, if we want to identify the characteristics of an organization accurately, from whatever point of view, the dependence relationships should be taken into account. In this study, the organization of a reference nuclear power plant in Korea was analyzed for the trip cases of that plant using the 20 organizational factors that Jacobs and Haber suggested: 1) coordination of work, 2) formalization, 3) organizational knowledge, 4) roles and responsibilities, 5) external communication, 6) inter-departmental communications, 7) intra-departmental communications, 8) organizational culture, 9) ownership, 10) safety culture, 11) time urgency, 12) centralization, 13) goal prioritization, 14) organizational learning, 15) problem identification, 16) resource allocation, 17) performance evaluation, 18) personnel selection, 19) technical knowledge, and 20) training. Based on the results of the analysis, a method to identify the dependence relationships between organizational factors is presented. A statistical independence test applied to the analysis results for the trip cases is adopted to reveal dependencies. The method is geared to the need to utilize the many kinds of data that accumulate as nuclear power plants' operating years increase, and more reliable dependence relations may be obtained by using these abundant data
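A minimal sketch of the core step: pairwise chi-square tests of independence over a hypothetical events-by-factors table of categorical ratings, flagging factor pairs whose independence is rejected.

```python
import numpy as np
from scipy.stats import chi2_contingency

def dependent_factor_pairs(ratings, names, alpha=0.05):
    """Pairwise chi-square tests of independence between organizational
    factors.  `ratings` is a hypothetical events x factors array of integer
    category codes, one row per analyzed trip case."""
    flagged = []
    k = ratings.shape[1]
    for i in range(k):
        for j in range(i + 1, k):
            # Contingency table of rating combinations for factors i and j.
            ct = np.zeros((ratings[:, i].max() + 1, ratings[:, j].max() + 1), int)
            for a, b in zip(ratings[:, i], ratings[:, j]):
                ct[a, b] += 1
            ct = ct[ct.sum(axis=1) > 0, :]      # drop empty rows
            ct = ct[:, ct.sum(axis=0) > 0]      # drop empty columns
            _, p, _, _ = chi2_contingency(ct)
            if p < alpha:                        # independence rejected
                flagged.append((names[i], names[j], p))
    return flagged
```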
Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F
2013-08-01
To fulfill existing guidelines, applicants aiming to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTOs). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions addressing the statistics of field trials on effects on NTOs. This review examines existing practices in GM plant field testing, such as randomization, replication, and pseudoreplication. Emphasis is placed on the importance of the design features used for the field trials in which effects on NTOs are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed for deciding on the selection of the proper statistical model. While classical statistical approaches - for example, analysis of variance (ANOVA) - are appropriate for continuous data (e.g., pH and temperature), for discontinuous data (counts) only generalized linear models (GLMs) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role, GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and in assessing whether such analyses were correctly applied. We offer generic advice to risk assessors and applicants that will
Kuretzki, Carlos Henrique; Campos, Antônio Carlos Ligocki; Malafaia, Osvaldo; Soares, Sandramara Scandelari Kusano de Paula; Tenório, Sérgio Bernardo; Timi, Jorge Rufino Ribas
2016-03-01
Information technology is widely applied in healthcare. With regard to scientific research, SINPE(c) - Integrated Electronic Protocols - was created as a tool to support researchers, offering clinical data standardization. Until now, however, SINPE(c) has lacked statistical tests performed by automatic analysis. The aim was to add to SINPE(c) features for the automatic execution of the main statistical methods used in medicine. The study was divided into four topics: checking users' interest in the implementation of the tests; surveying the frequency of their use in healthcare; carrying out the implementation; and validating the results with researchers and their protocols. It was applied with a group of users of this software working on their master's and doctoral theses in a postgraduate program in surgery. To assess the reliability of the statistics, the data obtained automatically by SINPE(c) were compared with those produced manually by a statistician experienced with this type of study. There was interest in the use of automatic statistical tests, with good acceptance. The chi-square, Mann-Whitney, Fisher and Student's t tests were identified as those frequently used by participants in medical studies. These methods were implemented and thereafter validated as expected. The automatic statistical analysis incorporated into SINPE(c) was shown to be reliable and equal to that done manually, validating its use as a tool for medical research.
Wiß, Felix; Stacke, Tobias; Hagemann, Stefan
2014-05-01
Soil moisture and its memory can have a strong impact on near-surface temperature and precipitation and have the potential to promote severe heat waves, dry spells and floods. To analyze how soil moisture is simulated in recent general circulation models (GCMs), soil moisture data from a 23-model ensemble of Atmospheric Model Intercomparison Project (AMIP) type simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) are examined for the period 1979 to 2008 with regard to parameterization and statistical characteristics. With respect to soil moisture processes, the models vary in their maximum soil and root depths, the number of soil layers, the water-holding capacity, and the ability to simulate freezing, which all together leads to very different soil moisture characteristics. Differences in the water-holding capacity result in deviations in the global median soil moisture of more than one order of magnitude between the models. In contrast, the variance shows similar absolute values across the models. Thus, the input and output rates by precipitation and evapotranspiration, which are computed by the atmospheric component of the models, have to be in the same range. Most models simulate large variances in the monsoon areas of the tropics and the north-western U.S., intermediate variances in Europe and the eastern U.S., and low variances in the Sahara, continental Asia, and central and western Australia. In general, the variance decreases with latitude over the high northern latitudes. As soil moisture trends in the models were found to be negligible, the soil moisture anomalies were calculated by subtracting the 30-year monthly climatology from the data. The length of the memory is determined from the soil moisture anomalies by calculating the first insignificant autocorrelation for ascending monthly lags (insignificant autocorrelation folding time). The models show a great spread of autocorrelation length, from a few months in
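A sketch of the memory measure described above: the first monthly lag at which the anomaly autocorrelation falls below the approximate 5% significance threshold 1.96/√N (the "insignificant autocorrelation folding time").

```python
import numpy as np

def memory_length(anomalies, conf=1.96):
    """First monthly lag at which the anomaly autocorrelation drops below
    the approximate significance threshold conf/sqrt(N)."""
    x = np.asarray(anomalies, float)
    x = x - x.mean()
    threshold = conf / np.sqrt(len(x))
    for lag in range(1, len(x)):
        r = np.corrcoef(x[:-lag], x[lag:])[0, 1]
        if r < threshold:        # first insignificant autocorrelation
            return lag
    return len(x)
```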
Statistical assessment of numerous Monte Carlo tallies
International Nuclear Information System (INIS)
Kiedrowski, Brian C.; Solomon, Clell J.
2011-01-01
Four tests are developed to assess the statistical reliability of collections of tallies that number in thousands or greater. To this end, the relative-variance density function is developed and its moments are studied using simplified, non-transport models. The statistical tests are performed upon the results of MCNP calculations of three different transport test problems and appear to show that the tests are appropriate indicators of global statistical quality. (author)
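Not the paper's four tests, which are not specified in the abstract; just a hedged sketch of the basic screen they generalize: flagging tallies whose estimated relative error is too large to be considered statistically reliable, using the common MCNP guideline of relative error below 0.10 as an assumed threshold.

```python
import numpy as np

def flag_unreliable_tallies(means, std_errs, threshold=0.10):
    means = np.asarray(means, float)
    std_errs = np.asarray(std_errs, float)
    # Relative error = standard error / mean; zero-mean tallies are flagged.
    rel_err = np.divide(std_errs, means, out=np.ones_like(means), where=means != 0)
    return np.flatnonzero(rel_err > threshold)   # indices of suspect tallies

rng = np.random.default_rng(1)
mu = rng.uniform(0.5, 2.0, size=10_000)          # thousands of tally means
se = mu * rng.uniform(0.01, 0.25, size=mu.size)  # their standard errors
print(f"{flag_unreliable_tallies(mu, se).size} of {mu.size} tallies flagged")
```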
Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza
2014-01-01
This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…
The Effects of Pre-Lecture Quizzes on Test Anxiety and Performance in a Statistics Course
Brown, Michael J.; Tallon, Jennifer
2015-01-01
The purpose of our study was to examine the effects of pre-lecture quizzes in a statistics course. Students (N = 70) from 2 sections of an introductory statistics course served as participants in this study. One section completed pre-lecture quizzes whereas the other section did not. Completing pre-lecture quizzes was associated with improved exam…
Martin, Eden R; Tunc, Ilker; Liu, Zhi; Slifer, Susan H; Beecham, Ashley H; Beecham, Gary W
2018-03-01
Population substructure can lead to confounding in tests for genetic association, and failure to adjust properly can result in spurious findings. Here we address this issue of confounding by considering the impact of global ancestry (average ancestry across the genome) and local ancestry (ancestry at a specific chromosomal location) on regression parameters and relative power in ancestry-adjusted and -unadjusted models. We examine theoretical expectations under different scenarios for population substructure; applying different regression models, verifying and generalizing using simulations, and exploring the findings in real-world admixed populations. We show that admixture does not lead to confounding when the trait locus is tested directly in a single admixed population. However, if there is more complex population structure or a marker locus in linkage disequilibrium (LD) with the trait locus is tested, both global and local ancestry can be confounders. Additionally, we show the genotype parameters of adjusted and unadjusted models all provide tests for LD between the marker and trait locus, but in different contexts. The local ancestry adjusted model tests for LD in the ancestral populations, while tests using the unadjusted and the global ancestry adjusted models depend on LD in the admixed population(s), which may be enriched due to different ancestral allele frequencies. Practically, this implies that global-ancestry adjustment should be used for screening, but local-ancestry adjustment may better inform fine mapping and provide better effect estimates at trait loci. © 2017 WILEY PERIODICALS, INC.
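A hedged sketch of the comparison discussed above: testing a marker for association with a binary trait with and without a global-ancestry covariate, using logistic regression. The data below are simulated; in a real analysis the ancestry estimate would come from genome-wide markers.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
ancestry = rng.uniform(0, 1, n)                    # global ancestry proportion
freq = 0.2 + 0.4 * ancestry                        # allele frequency differs by ancestry
genotype = rng.binomial(2, freq)
logit = -1.0 + 0.5 * genotype + 1.0 * ancestry     # trait depends on both
trait = rng.binomial(1, 1 / (1 + np.exp(-logit)))

unadj = sm.Logit(trait, sm.add_constant(genotype)).fit(disp=0)
adj = sm.Logit(trait, sm.add_constant(np.column_stack([genotype, ancestry]))).fit(disp=0)
print("unadjusted beta:", unadj.params[1], " adjusted beta:", adj.params[1])
```

With ancestry omitted, the genotype coefficient absorbs the ancestry effect through the allele-frequency difference, mimicking the confounding the entry describes.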
DEFF Research Database (Denmark)
Denwood, M.J.; McKendrick, I.J.; Matthews, L.
Introduction. There is an urgent need for a method of analysing FECRT data that is computationally simple and statistically robust. A method for evaluating the statistical power of a proposed FECRT study would also greatly enhance the current guidelines. Methods. A novel statistical framework has … been developed that evaluates observed FECRT data against two null hypotheses: (1) the observed efficacy is consistent with the expected efficacy, and (2) the observed efficacy is inferior to the expected efficacy. The method requires only four simple summary statistics of the observed data. Power … that the notional type 1 error rate of the new statistical test is accurate. Power calculations demonstrate a power of only 65% with a sample size of 20 treatment and control animals, which increases to 69% with 40 control animals or 79% with 40 treatment animals. Discussion. The method proposed is simple …
Statistical Modeling for Quality Assurance of Human Papillomavirus DNA Batch Testing.
Beylerian, Emily N; Slavkovsky, Rose C; Holme, Francesca M; Jeronimo, Jose A
2018-03-22
Our objective was to simulate the distribution of human papillomavirus (HPV) DNA test results from a 96-well microplate assay to identify results that may be consistent with well-to-well contamination, enabling programs to apply specific quality assurance parameters. For this modeling study, we designed an algorithm that generated the analysis population of 900,000 to simulate the results of 10,000 microplate assays, assuming discrete HPV prevalences of 12%, 13%, 14%, 15%, and 16%. Using binomial draws, the algorithm created a vector of results for each prevalence and reassembled them into 96-well matrices for results distribution analysis of the number of positive cells and number and size of cell clusters (≥2 positive cells horizontally or vertically adjacent) per matrix. For simulation conditions of 12% and 16% HPV prevalence, 95% of the matrices displayed the following characteristics: 5 to 17 and 8 to 22 total positive cells, 0 to 4 and 0 to 5 positive cell clusters, and largest cluster sizes of up to 5 and up to 6 positive cells, respectively. Our results suggest that screening programs in regions with an oncogenic HPV prevalence of 12% to 16% can expect 5 to 22 positive results per microplate in approximately 95% of assays and 0 to 5 positive results clusters with no cluster larger than 6 positive results. Results consistently outside of these ranges deviate from what is statistically expected and could be the result of well-to-well contamination. Our results provide guidance that laboratories can use to identify microplates suspicious for well-to-well contamination, enabling improved quality assurance.
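A hedged re-implementation of the simulation idea, not the authors' code: binomial draws of HPV-positive wells arranged into 96-well (8x12) matrices, then counting positives and clusters of ≥2 horizontally or vertically adjacent positive wells via SciPy's connected-component labelling with a rook (4-neighbour) structure.

```python
import numpy as np
from scipy import ndimage

def simulate_plates(n_plates=10_000, prevalence=0.14, seed=0):
    rng = np.random.default_rng(seed)
    plates = rng.binomial(1, prevalence, size=(n_plates, 8, 12))
    structure = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])   # rook adjacency
    totals, n_clusters, max_cluster = [], [], []
    for plate in plates:
        labels, k = ndimage.label(plate, structure=structure)
        sizes = np.bincount(labels.ravel())[1:]               # skip background
        clusters = sizes[sizes >= 2]
        totals.append(plate.sum())
        n_clusters.append(clusters.size)
        max_cluster.append(clusters.max() if clusters.size else 0)
    return np.array(totals), np.array(n_clusters), np.array(max_cluster)

tot, nc, mx = simulate_plates()
print(np.percentile(tot, [2.5, 97.5]), np.percentile(nc, 97.5), mx.max())
```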
Directory of Open Access Journals (Sweden)
Claire Ramus
2016-03-01
This data article describes a controlled, spiked proteomic dataset for which the “ground truth” of variant proteins is known. It is based on the LC-MS analysis of samples composed of a fixed background of yeast lysate and different spiked amounts of the UPS1 mixture of 48 recombinant proteins. It can be used to objectively evaluate bioinformatic pipelines for label-free quantitative analysis and their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. More specifically, it can be useful for tuning software tool parameters, but also for testing new algorithms for label-free quantitative analysis, or for evaluating downstream statistical methods. The raw MS files can be downloaded from ProteomeXchange with identifier http://www.ebi.ac.uk/pride/archive/projects/PXD001819. Starting from some raw files of this dataset, we also provide here some processed data obtained through various bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in different workflows, to exemplify the use of such data in the context of software benchmarking, as discussed in detail in the accompanying manuscript [1]. The experimental design used here for data processing takes advantage of the different spike levels introduced in the samples composing the dataset, and processed data are merged in a single file to facilitate the evaluation and illustration of software tools' results for the detection of variant proteins with different absolute expression levels and fold change values.
Wang, Hao; Wang, Qunwei; He, Ming
2018-05-01
In order to investigate and improve the detection of water content in liquid chemical reagents by domestic laboratories, the proficiency testing provider PT0031 (CNAS) organized a proficiency testing program for water content in toluene; 48 laboratories from 18 provinces and municipalities took part. This paper describes the implementation of the proficiency test for determination of water content in toluene, including sample preparation, homogeneity and stability testing, and the statistical analysis of results using an iterative robust statistical technique. It also summarizes and analyzes the results obtained with the different test standards widely used in the laboratories and puts forward technical suggestions for improving the quality of water content testing. Satisfactory results were obtained by 43 laboratories, amounting to 89.6% of the participating laboratories.
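The entry names an iterative robust statistical technique without specifying it; the sketch below assumes it resembles Algorithm A of ISO 13528 (robust mean and standard deviation by iterated winsorisation), the usual choice in proficiency testing. This is an illustration, not the PT0031 code, and the laboratory results are made up.

```python
import numpy as np

def algorithm_a(results, tol=1e-6, max_iter=100):
    x = np.sort(np.asarray(results, dtype=float))
    mu = np.median(x)
    s = 1.483 * np.median(np.abs(x - mu))          # scaled MAD as start value
    for _ in range(max_iter):
        delta = 1.5 * s
        w = np.clip(x, mu - delta, mu + delta)     # winsorise outlying results
        mu_new = w.mean()
        s_new = 1.134 * w.std(ddof=1)
        if abs(mu_new - mu) < tol and abs(s_new - s) < tol:
            break
        mu, s = mu_new, s_new
    return mu_new, s_new

lab_results = [19.8, 20.1, 20.0, 19.9, 20.3, 20.2, 25.0, 19.7]  # ppm, made up
print(algorithm_a(lab_results))   # robust mean/sd barely moved by the outlier
```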
Refurbish research and test reactors corresponding to global age of nuclear energy
International Nuclear Information System (INIS)
Mishima, Kaichiro; Oyama, Yukio; Okamoto, Koji; Yamana, Hajime; Yamaguchi, Akira
2011-01-01
This special article featured arguments for the refurbishment of research and test reactors corresponding to the global age of nuclear energy, based on the report 'Investigation of research facilities necessary for future joint usage' issued by the special committee of the Atomic Energy Society of Japan (AESJ) in September 2010. It consisted of six papers, titled 'Introduction - establishment of AESJ special committee for investigation', 'State of research and test reactors in Japan', 'State of overseas research and test reactors', 'Needs analysis for research and test reactors', 'Proposal of AESJ special committee' and 'Summary and future issues'. In order to develop human resources and promote the research and development needed in the global age of nuclear energy, research and test reactors would be refurbished as an Asian regional center of excellence. (T. Tanaka)
DEFF Research Database (Denmark)
Plum, Maja
Globalization is often referred to as external to education - a state of affairs facing the modern curriculum with numerous challenges. In this paper it is examined as internal to curriculum; analysed as a problematization in a Foucaultian sense, that is, as a complex of attentions, worries, ways … of reasoning, producing curricular variables. The analysis is made through an example of early childhood curriculum in Danish pre-school, and the way the curricular variable of the pre-school child comes into being through globalization as a problematization, carried forth by the comparative practices of PISA …
F. Gerard Adams
2008-01-01
The rapid globalization of the world economy is causing fundamental changes in patterns of trade and finance. Some economists have argued that globalization has arrived and that the world is "flat". While the geographic scope of markets has increased, the author argues that new patterns of trade and finance are a result of the discrepancies between "old" countries and "new". As the differences are gradually wiped out, particularly if knowledge and technology spread worldwide, the t...
Pestman, Wiebe R
2009-01-01
This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.
Yu, Xiao-Ying; Barnett, J. Matthew; Amidan, Brett G.; Recknagle, Kurtis P.; Flaherty, Julia E.; Antonio, Ernest J.; Glissmeyer, John A.
2018-03-01
The ANSI/HPS N13.1-2011 standard requires gaseous tracer uniformity testing for sampling associated with stacks used in radioactive air emissions. Sulfur hexafluoride (SF6), a greenhouse gas with a high global warming potential, has long been the gas tracer used in such testing. To reduce the impact of gas tracer tests on the environment, nitrous oxide (N2O) was evaluated as a potential replacement for SF6. The physical evaluation included the development of a test plan to record the percent coefficient of variation and the percent maximum deviation between the two gases while considering variables such as fan configuration, injection position, and flow rate. Statistical power was calculated to determine how many sample sets were needed, and computational fluid dynamics modeling was utilized to estimate overall mixing in stacks. Results show there are no significant differences between the behaviors of the two gases, and SF6 modeling corroborated N2O test results. Although, in principle, all tracer gases should behave in an identical manner for measuring mixing within a stack, the series of physical tests guided by statistics was performed to demonstrate the equivalence of N2O testing to SF6 testing in the context of stack qualification tests. The results demonstrate that N2O is a viable choice leading to a fourfold reduction in global warming impacts for future similar compliance-driven testing.
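A small sketch of the two uniformity metrics named above, as commonly defined for ANSI/HPS N13.1 stack qualification: percent coefficient of variation and percent maximum deviation of tracer concentrations measured across the sampling plane. The exact formulas in the test plan, and the traverse data below, are assumptions.

```python
import numpy as np

def uniformity_metrics(concentrations):
    c = np.asarray(concentrations, dtype=float)
    mean = c.mean()
    pct_cov = 100.0 * c.std(ddof=1) / mean               # % coefficient of variation
    pct_max_dev = 100.0 * np.max(np.abs(c - mean)) / mean  # % maximum deviation
    return pct_cov, pct_max_dev

sf6 = [101.2, 98.7, 100.4, 99.1, 102.0, 98.9]   # hypothetical traverse data
n2o = [100.8, 99.0, 100.1, 99.5, 101.6, 99.2]
print(uniformity_metrics(sf6), uniformity_metrics(n2o))
```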
Rényi statistics for testing composite hypotheses in general exponential models
Czech Academy of Sciences Publication Activity Database
Morales, D.; Pardo, L.; Pardo, M. C.; Vajda, Igor
2004-01-01
Roč. 38, č. 2 (2004), s. 133-147 ISSN 0233-1888 R&D Projects: GA ČR GA201/02/1391 Grant - others:BMF(ES) 2003-00892; BMF(ES) 2003-04820 Institutional research plan: CEZ:AV0Z1075907 Keywords : natural exponential models * Levy processes * generalized Wald statistics Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.323, year: 2004
Comments on statistical issues in numerical modeling for underground nuclear test monitoring
International Nuclear Information System (INIS)
Nicholson, W.L.; Anderson, K.K.
1993-01-01
The Symposium concluded with prepared summaries by four experts in the involved disciplines. These experts made no mention of statistics and/or the statistical content of issues. The first author contributed an extemporaneous statement at the Symposium because there are important issues associated with conducting and evaluating numerical modeling that are familiar to statisticians and often treated successfully by them. This note expands upon these extemporaneous remarks
Weerasinghe, Dash; Orsak, Timothy; Mendro, Robert
In an age of student accountability, public school systems must find procedures for identifying effective schools, classrooms, and teachers that help students continue to learn academically. As a result, researchers have been modeling schools and classrooms to calculate productivity indicators that will withstand not only statistical review but…
2013-12-01
Radio Global System for Mobile Communications transmitter development for heterogeneous network vulnerability testing, by Carson C. McAbee.
Hu, Y.; Vaughan, M.; McClain, C.; Behrenfeld, M.; Maring, H.; Anderson, D.; Sun-Mack, S.; Flittner, D.; Huang, J.; Wielicki, B.; Minnis, P.; Weimer, C.; Trepte, C.; Kuehn, R.
2007-03-01
This study presents an empirical relation that links layer integrated depolarization ratios, the extinction coefficients, and effective radii of water clouds, based on Monte Carlo simulations of CALIPSO lidar observations. Combined with cloud effective radius retrieved from MODIS, cloud liquid water content and effective number density of water clouds are estimated from CALIPSO lidar depolarization measurements in this study. Global statistics of the cloud liquid water content and effective number density are presented.
Sport-specific endurance plank test for evaluation of global core muscle function.
Tong, Tom K; Wu, Shing; Nie, Jinlei
2014-02-01
To examine the validity and reliability of a sport-specific endurance plank test for the evaluation of global core muscle function. Repeated-measures study. Laboratory environment. Twenty-eight male and eight female young athletes. Surface electromyography (sEMG) of selected trunk flexors and extensors and an intervention of a pre-fatigue core workout were applied for test validation. Intraclass correlation coefficient (ICC), coefficient of variation (CV), and the measurement bias ratio ×/÷ ratio limits of agreement (LOA) were calculated to assess reliability and measurement error. Test validity was shown by the sEMG of selected core muscles, which indicated a >50% increase in muscle activation during the test, and by the clear discrimination of the ~30% reduction in global core muscle endurance subsequent to a pre-fatigue core workout. For test-retest reliability, when the first attempt of three repeated trials was considered as familiarisation, the ICC was 0.99 (95% CI: 0.98-0.99), the CV was 2.0 ± 1.56% and the measurement bias ratio ×/÷ ratio LOA was 0.99 ×/÷ 1.07. The findings suggest that the sport-specific endurance plank test is a valid, reliable and practical method for assessing global core muscle endurance in athletes given that at least one familiarisation trial takes place prior to measurement.
Berti, Matteo; Corsini, Alessandro; Franceschini, Silvia; Iannacone, Jean Pascal
2013-04-01
The application of space-borne synthetic aperture radar interferometry has progressed, over the last two decades, from the pioneering use of single interferograms for analyzing changes on the earth's surface to the development of advanced multi-interferogram techniques to analyze any sort of natural phenomenon that involves movements of the ground. The success of multi-interferogram techniques in the analysis of natural hazards such as landslides and subsidence is widely documented in the scientific literature and demonstrated by the consensus among the end-users. Despite the great potential of this technique, radar interpretation of slope movements is generally based on the sole analysis of average displacement velocities, while the information embraced in multi-interferogram time series is often overlooked if not completely neglected. The underuse of PS time series is probably due to the detrimental effect of residual atmospheric errors, which leave the PS time series with erratic, irregular fluctuations that are often difficult to interpret, and also to the difficulty of performing a visual, supervised analysis of the time series for a large dataset. In this work we present a procedure for automatic classification of PS time series based on a series of statistical characterization tests. The procedure allows classification of the time series into six distinctive target trends (0=uncorrelated; 1=linear; 2=quadratic; 3=bilinear; 4=discontinuous without constant velocity; 5=discontinuous with change in velocity) and retrieves for each trend a series of descriptive parameters which can be efficiently used to characterize the temporal changes of ground motion. The classification algorithms were developed and tested using an ENVISAT dataset available in the frame of the EPRS-E project (Extraordinary Plan of Environmental Remote Sensing) of the Italian Ministry of Environment (track "Modena", Northern Apennines). This dataset was generated using standard processing, then the
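A hedged sketch of the classification idea, not the authors' algorithm: decide between an uncorrelated, a linear and a quadratic trend for one PS displacement time series using nested-model F-tests. The paper's six-class scheme also covers bilinear and discontinuous trends; those branches are omitted here for brevity.

```python
import numpy as np
from scipy import stats

def classify_trend(t, y, alpha=0.05):
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(y)
    rss = {}
    for deg in (0, 1, 2):                      # constant, linear, quadratic fits
        coef = np.polyfit(t, y, deg)
        rss[deg] = np.sum((y - np.polyval(coef, t)) ** 2)

    # F-test: does one extra degree significantly reduce the residual sum of squares?
    def f_test(rss_small, rss_big, extra_params, dof_big):
        f = (rss_small - rss_big) / extra_params / (rss_big / dof_big)
        return stats.f.sf(f, extra_params, dof_big)

    if f_test(rss[0], rss[1], 1, n - 2) > alpha:
        return "uncorrelated"
    if f_test(rss[1], rss[2], 1, n - 3) > alpha:
        return "linear"
    return "quadratic"

t = np.arange(50, dtype=float)
y = -0.3 * t + np.random.default_rng(3).normal(0, 1, 50)   # synthetic linear PS
print(classify_trend(t, y))
```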
International Nuclear Information System (INIS)
Gross, D.H.E.
2006-01-01
Heat can flow from cold to hot at any phase separation, even in macroscopic systems. Therefore, Lynden-Bell's famous gravo-thermal catastrophe must also be reconsidered. In contrast to traditional canonical Boltzmann-Gibbs statistics, this is correctly described only by microcanonical statistics. Systems studied in chemical thermodynamics (ChTh) by using canonical statistics consist of several homogeneous macroscopic phases. Evidently, macroscopic statistics as in chemistry cannot and should not be applied to non-extensive or inhomogeneous systems like nuclei or galaxies. Nuclei are small and inhomogeneous. Multifragmented nuclei are even more inhomogeneous and the fragments even smaller. Phase transitions of first order and especially phase separations therefore cannot be described by a (homogeneous) canonical ensemble. Taking this seriously, fascinating perspectives open up for statistical nuclear fragmentation as a test ground for the basic principles of statistical mechanics, especially of phase transitions, without the use of the thermodynamic limit. Moreover, there is also a lot of similarity between the accessible phase space of fragmenting nuclei and inhomogeneous multistellar systems. This underlines the fundamental significance for statistical physics in general. (orig.)
Comparison of tests for spatial heterogeneity on data with global clustering patterns and outliers
Directory of Open Access Journals (Sweden)
Hachey Mark
2009-10-01
Background: The ability to evaluate geographic heterogeneity of cancer incidence and mortality is important in cancer surveillance. Many statistical methods for evaluating global clustering and local cluster patterns have been developed and examined by simulation studies. However, the performance of these methods in two extreme cases (global clustering evaluation and local anomaly (outlier) detection) has not been thoroughly investigated. Methods: We compare methods for global clustering evaluation, including Tango's Index, Moran's I, and Oden's I*pop, and cluster detection methods such as local Moran's I and the SaTScan elliptic version on simulated count data that mimic global clustering patterns and outliers for cancer cases in the continental United States. We examine the power and precision of the selected methods in the purely spatial analysis. We illustrate Tango's MEET and the SaTScan elliptic version on 1987-2004 HIV and 1950-1969 lung cancer mortality data in the United States. Results: For simulated data with outlier patterns, Tango's MEET, Moran's I and I*pop had powers less than 0.2, and SaTScan had powers around 0.97. For simulated data with global clustering patterns, Tango's MEET and I*pop (with 50% of the total population as the maximum search window) had powers close to 1. SaTScan had powers around 0.7-0.8 and Moran's I had powers around 0.2-0.3. In the real data example, Tango's MEET indicated the existence of global clustering patterns in both the HIV and lung cancer mortality data. SaTScan found a large cluster for HIV mortality rates, which is consistent with the finding from Tango's MEET. SaTScan also found clusters and outliers in the lung cancer mortality data. Conclusion: The SaTScan elliptic version is more efficient for outlier detection compared with the other methods evaluated in this article. Tango's MEET and Oden's I*pop perform best in global clustering scenarios among the selected methods. The use of SaTScan for
McArtor, Daniel B; Lubke, Gitta H; Bergeman, C S
2017-12-01
Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains.
McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad
2011-07-01
The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates.
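A minimal sketch of the Altman-Bland 'limits of agreement' computation for two instruments measuring the same eyes; the measurement values below are made up for illustration.

```python
import numpy as np

def limits_of_agreement(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()                      # mean difference between methods
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

instrument_a = [43.1, 44.0, 42.8, 45.2, 43.9, 44.6, 42.5]  # e.g. keratometry (D)
instrument_b = [43.4, 43.8, 43.1, 45.0, 44.3, 44.9, 42.9]
bias, (lo, hi) = limits_of_agreement(instrument_a, instrument_b)
print(f"bias={bias:.2f} D, 95% limits of agreement: {lo:.2f} to {hi:.2f} D")
```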
Assessment of noise in a digital image using the join-count statistic and the Moran test
International Nuclear Information System (INIS)
Kehshih Chuang; Huang, H.K.
1992-01-01
It is assumed that data bits of a pixel in digital images can be divided into signal and noise bits. The signal bits occupy the most significant part of the pixel. The signal parts of each pixel are correlated while the noise parts are uncorrelated. Two statistical methods, the Moran test and the join-count statistic, are used to examine the noise parts. Images from computerized tomography, magnetic resonance and computed radiography are used for the evaluation of the noise bits. A residual image is formed by subtracting the original image from its smoothed version. The noise level in the residual image is then identical to that in the original image. Both statistical tests are then performed on the bit planes of the residual image. Results show that most digital images contain only 8-9 bits of correlated information. Both methods are easy to implement and fast to perform. (author)
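A hedged sketch of the Moran-test half of the idea: compute Moran's I for each bit plane of an image; planes dominated by noise should show I near 0 (no spatial autocorrelation), while signal-carrying planes show I well above 0. Rook (4-neighbour) weights and the synthetic gradient-plus-noise image are assumptions.

```python
import numpy as np

def morans_i(plane):
    x = plane.astype(float) - plane.mean()
    # Sum of products over horizontally and vertically adjacent pixel pairs.
    pair_sum = (x[:, :-1] * x[:, 1:]).sum() + (x[:-1, :] * x[1:, :]).sum()
    n_pairs = x[:, :-1].size + x[:-1, :].size
    return (x.size * pair_sum) / (n_pairs * (x ** 2).sum())

rng = np.random.default_rng(0)
image = (np.linspace(0, 4000, 256 * 256).reshape(256, 256)   # smooth "signal"
         + rng.integers(0, 16, (256, 256))).astype(np.uint16)  # 4 noisy low bits
for bit in range(12):
    plane = (image >> bit) & 1                # extract one bit plane
    print(f"bit {bit:2d}: I = {morans_i(plane):+.3f}")
```

The low bit planes print I near zero (uncorrelated noise), while the higher planes inherit the spatial structure of the signal.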
Global dose to man from proposed NNTRP high altitude nuclear tests
International Nuclear Information System (INIS)
Peterson, K.R.
1975-05-01
Radionuclide measurements from past high-altitude nuclear testing have enabled development of a model to estimate surface deposition and doses from 400 kt of fission products injected in winter within the Pacific Test Area at altitudes in excess of 50 km. The largest 30-year average dose to man is about 10 millirem and occurs at 30° to 50°N latitude. The principal contributor to this dose is external gamma radiation from gross fission products. Individual doses from 90Sr via the forage-cow-milk pathway and 137Cs via the pasture-meat pathway are about 1/5 the gross fission product doses. The global 30-year population dose is 3 x 10^7 person-rem, which compares with a 30-year natural background population dose of 1 x 10^10 person-rem. Due in large part to the global distribution of population, over 98 percent of the global person-rem from the proposed high-altitude tests is received in the Northern Hemisphere, while about 75 percent of the total population dose occurs within the 30°-50°N latitude belt. Detonations in summer would decrease the global dose by about a factor of three. (U.S.)
Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models
Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning
2012-01-01
The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…
Basic Mathematics Test Predicts Statistics Achievement and Overall First Year Academic Success
Fonteyne, Lot; De Fruyt, Filip; Dewulf, Nele; Duyck, Wouter; Erauw, Kris; Goeminne, Katy; Lammertyn, Jan; Marchant, Thierry; Moerkerke, Beatrijs; Oosterlinck, Tom; Rosseel, Yves
2015-01-01
In the psychology and educational science programs at Ghent University, only 36.1% of the new incoming students in 2011 and 2012 passed all exams. Despite availability of information, many students underestimate the scientific character of social science programs. Statistics courses are a major obstacle in this matter. Not all enrolling students…
Finite Element Analysis of the Amontons-Coulomb's Model using Local and Global Friction Tests
International Nuclear Information System (INIS)
Oliveira, M. C.; Menezes, L. F.; Ramalho, A.; Alves, J. L.
2011-01-01
In spite of the abundant number of experimental friction tests that have been reported, the modeling of contact with friction remains one of the factors that determine the effectiveness of sheet metal forming simulation. This difficulty can be understood given the nature of the friction phenomena, which comprise the interaction of different factors connected to both the sheet and tool surfaces. Although in finite element numerical simulations friction models are commonly applied at the local level, they normally rely on parameters identified from global experimental test results. The aim of this study is to analyze the applicability of the Amontons-Coulomb friction coefficient, identified using complementary tests: (i) load-scanning, at the local level, and (ii) draw-bead, at the global level, to the numerical simulation of sheet metal forming processes.
Rigby, A S
2001-11-10
The odds ratio is an appropriate method of analysis for data in 2 x 2 contingency tables. However, other methods of analysis exist. One such method is based on the chi2 test of goodness-of-fit. Key players in the development of statistical theory include Pearson, Fisher and Yates. Data are presented in the form of 2 x 2 contingency tables and a method of analysis based on the chi2 test is introduced. There are many variations of the basic test statistic, one of which is the chi2 test with Yates' continuity correction. The usefulness (or not) of Yates' continuity correction is discussed. Problems of interpretation when the method is applied to k x m tables are highlighted. Some properties of the chi2 test are illustrated by taking examples from the author's teaching experiences. Journal editors should be encouraged to give both observed and expected cell frequencies so that better information comes out of the chi2 test statistic.
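An illustration of the points above: the chi2 statistic on a 2 x 2 table with and without Yates' continuity correction, plus the expected cell frequencies that the author argues should be reported. The counts are hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[18, 7],
                  [11, 14]])

chi2_corr, p_corr, dof, expected = chi2_contingency(table, correction=True)
chi2_raw, p_raw, _, _ = chi2_contingency(table, correction=False)

print("expected frequencies:\n", expected)
print(f"with Yates:    chi2={chi2_corr:.3f}, p={p_corr:.3f}")
print(f"without Yates: chi2={chi2_raw:.3f}, p={p_raw:.3f}")
```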
The Q* Index: A Useful Global Measure of Dementia Screening Test Accuracy
Directory of Open Access Journals (Sweden)
A.J. Larner
2015-06-01
Background/Aims: Single, global or unitary indicators of test diagnostic performance have intuitive appeal for clinicians. The Q* index, the point in receiver operating characteristic (ROC) curve space closest to the ideal top left-hand corner and where test sensitivity and specificity are equal, is one such measure. Methods: Datasets from four pragmatic accuracy studies which examined the Mini-Mental State Examination, Addenbrooke's Cognitive Examination-Revised, Montreal Cognitive Assessment, Test Your Memory test, and Mini-Addenbrooke's Cognitive Examination were examined to calculate and compare the Q* index, the maximal correct classification accuracy, and the maximal Youden index, as well as the sensitivity and specificity at these cutoffs. Results: Tests ranked similarly for the Q* index and the area under the ROC curve (AUC ROC). The Q* index cutoff was more sensitive (and less specific) than the maximal correct classification accuracy cutoff, and less sensitive (and more specific) than the maximal Youden index cutoff. Conclusion: The Q* index may be a useful global parameter summarising the test accuracy of cognitive screening instruments, facilitating comparison between tests, and defining a possible test cutoff value. As the point of equal sensitivity and specificity, its use may be more intuitive and appealing for clinicians than AUC ROC.
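A hedged sketch of locating the Q* index on an empirical ROC curve: the operating point where sensitivity and specificity are (closest to) equal. The scores and labels below are simulated screening-test data, not any of the instruments studied.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(7)
labels = np.r_[np.zeros(200, int), np.ones(120, int)]           # 0=normal, 1=impaired
scores = np.r_[rng.normal(27, 2, 200), rng.normal(22, 3, 120)]  # test scores

# Lower scores indicate impairment, so negate the score for roc_curve.
fpr, tpr, thresholds = roc_curve(labels, -scores)
specificity = 1 - fpr
q_idx = np.argmin(np.abs(tpr - specificity))    # point where sens ~= spec
print(f"Q* ~ {tpr[q_idx]:.3f} at cutoff {-thresholds[q_idx]:.1f} "
      f"(sens={tpr[q_idx]:.3f}, spec={specificity[q_idx]:.3f})")
```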
Black swans or dragon-kings? A simple test for deviations from the power law
Janczura, J.; Weron, R.
2012-05-01
We develop a simple test for deviations from power law tails - in fact, from the tails of any distribution. We use this test, which is based on the asymptotic properties of the empirical distribution function, to answer the question whether great natural disasters, financial crashes or electricity price spikes should be classified as dragon-kings or 'only' as black swans.
Festing, Michael F W
2014-01-01
The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretations of the results as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data poses problems due to the large number of statistical tests which are involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. The author's conclusions appear to be reached somewhat subjectively by the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than p = 0.05. However, by using standardised effect sizes (SESs) a range of graphical methods and an over-all assessment of the mean absolute response can be made. The approach is an extension, not a replacement of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare the findings with those of the author's. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.
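A hedged sketch of the approach described: standardised effect sizes (here taken to be Cohen's d, an assumption) for each biomarker, plus a resampling test comparing the mean absolute SES against a null of no treatment effect. A permutation variant stands in for the paper's "bootstrap" test, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(11)

def cohens_d(treated, control):
    nt, nc = len(treated), len(control)
    pooled = np.sqrt(((nt - 1) * treated.var(ddof=1) +
                      (nc - 1) * control.var(ddof=1)) / (nt + nc - 2))
    return (treated.mean() - control.mean()) / pooled

# 20 biomarkers, 10 animals per group; the first two biomarkers carry a real effect.
control = rng.normal(0, 1, (20, 10))
treated = rng.normal(0, 1, (20, 10))
treated[:2] += 1.2

ses = np.array([cohens_d(t, c) for t, c in zip(treated, control)])
observed = np.abs(ses).mean()

# Null distribution: relabel the 20 animals at random, same shuffle per iteration.
merged = np.concatenate([treated, control], axis=1)
boot = []
for _ in range(2000):
    idx = rng.permutation(20)
    d = [cohens_d(m[idx[:10]], m[idx[10:]]) for m in merged]
    boot.append(np.abs(d).mean())
p = (np.sum(np.array(boot) >= observed) + 1) / (len(boot) + 1)
print(f"mean |SES| = {observed:.2f}, resampling p = {p:.3f}")
```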
Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David
2015-01-01
New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method.
Fang, Yongxiang; Wit, Ernst
2008-01-01
Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is evident that Fisher’s statistic is more sensitive to smaller p-values than to larger ones, and a single small p-value may overrule the other p-values and decide the test result. This is, in some cases, viewed as a flaw. In order to overcome this flaw and improve the power of the test, the joint tail probability of a set of p-values is proposed as a ...
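Fisher's combined probability test as described above: under the null, -2 Σ ln p_i follows a chi-square distribution with 2k degrees of freedom. SciPy ships it as combine_pvalues; the example p-values are made up and show how one tiny p-value dominates.

```python
import numpy as np
from scipy import stats

p_values = np.array([0.001, 0.40, 0.55, 0.62, 0.71])

stat, p_combined = stats.combine_pvalues(p_values, method="fisher")
print(f"statistic={stat:.2f}, combined p={p_combined:.4f}")

# The same result by hand, illustrating the dominance of the smallest p-value:
stat_manual = -2 * np.log(p_values).sum()
print(stats.chi2.sf(stat_manual, df=2 * len(p_values)))
```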
Comparison of statistical tests for association between rare variants and binary traits.
Bacanu, SA; Nelson, MR; Whittaker, JC
2012-01-01
: Genome-wide association studies have found thousands of common genetic variants associated with a wide variety of diseases and other complex traits. However, a large portion of the predicted genetic contribution to many traits remains unknown. One plausible explanation is that some of the missing variation is due to the effects of rare variants. Nonetheless, the statistical analysis of rare variants is challenging. A commonly used method is to contrast, within the same region (gene), the fr...
A powerful score-based test statistic for detecting gene-gene co-association.
Xu, Jing; Yuan, Zhongshang; Ji, Jiadong; Zhang, Xiaoshuai; Li, Hongkai; Wu, Xuesen; Xue, Fuzhong; Liu, Yanxun
2016-01-29
The genetic variants identified by genome-wide association studies (GWAS) can only account for a small proportion of the total heritability of complex disease. The existence of gene-gene joint effects, which contain the main effects and their co-association, is one of the possible explanations for the "missing heritability" problem. Gene-gene co-association refers to the extent to which the joint effects of two genes differ from the main effects, not only due to the traditional interaction under nearly independent conditions but also due to the correlation between genes. Generally, genes tend to work collaboratively within a specific pathway or network contributing to the disease, and the specific disease-associated loci will often be highly correlated (e.g. single nucleotide polymorphisms (SNPs) in linkage disequilibrium). Therefore, we proposed a novel score-based statistic (SBS) as a gene-based method for detecting gene-gene co-association. Various simulations illustrate that, under different sample sizes, marginal effects of causal SNPs and co-association levels, the proposed SBS performs better than other existing methods, including single SNP-based and principal component analysis (PCA)-based logistic regression models, the statistics based on canonical correlations (CCU), kernel canonical correlation analysis (KCCU), partial least squares path modeling (PLSPM) and the delta-square (δ²) statistic. The real data analysis of rheumatoid arthritis (RA) further confirmed its advantages in practice. SBS is a powerful and efficient gene-based method for detecting gene-gene co-association.
Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...
Kruschke, John K; Liddell, Torrin M
2018-02-01
In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
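A small illustration in the spirit of the Bayesian estimation advocated above: a posterior credible interval for a success probability using a conjugate Beta prior, instead of a frequentist point-null test. The data (14 successes in 20 trials) and the uniform prior are assumptions for the example.

```python
from scipy import stats

successes, trials = 14, 20
prior_a, prior_b = 1, 1                      # uniform Beta(1, 1) prior

# Conjugate update: Beta(a + successes, b + failures).
posterior = stats.beta(prior_a + successes, prior_b + trials - successes)
lo, hi = posterior.ppf([0.025, 0.975])       # 95% equal-tailed credible interval
print(f"posterior mean={posterior.mean():.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
print(f"P(theta > 0.5 | data) = {posterior.sf(0.5):.3f}")
```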
[Do we always correctly interpret the results of statistical nonparametric tests].
Moczko, Jerzy A
2014-01-01
The Mann-Whitney, Wilcoxon, Kruskal-Wallis and Friedman tests form a group of tests commonly used to analyze the results of clinical and laboratory data. These tests are considered to be extremely flexible and their asymptotic relative efficiency exceeds 95 percent. Compared with the corresponding parametric tests, they do not require checking the fulfillment of conditions such as normality of the data distribution, homogeneity of variance, the lack of correlation between means and standard deviations, etc. They can be used with both interval and ordinal scales. Using the example of the Mann-Whitney test, the article shows that treating these four nonparametric tests as a kind of gold standard does not in every case lead to correct inference.
Petocz, Peter; Sowey, Eric
2008-01-01
In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…
International Nuclear Information System (INIS)
Sigit Asmara Santa
2006-01-01
An engineering improvement of the sensitivity of Helium mass spectrometer leak detection using a global hard vacuum test configuration has been carried out. The purpose of this work is to enhance the sensitivity of the current pressurized (sniffer) leak detection method, which achieves 10^-3 ∼ 10^-5 std cm³/s, by means of a global hard vacuum test configuration, with which a sensitivity of up to 10^-8 std cm³/s can be achieved. The goal of this research and development is to obtain a Helium leak test configuration which is suitable for routine use in quality control tests of FPM capsule and AgInCd safety control rod products. The result is an additional instrumented vacuum tube connected to a conventional Helium mass spectrometer. The pressure and temperature of the test object during the leak measurement are simulated by means of a 4.1 kW heater and Helium injection into the test object, respectively. The addition of an auxiliary mechanical vacuum pump with a pumping speed of 2.4 l/s, directly connected to the vacuum tube, reduces the evacuation time by 86%. The reduction of measured sensitivity due to the auxiliary mechanical vacuum pump can be overcome by shutting off the pump as soon as the Helium mass spectrometer reaches its operating pressure. (author)
Woodruff, David; Wu, Yi-Fang
2012-01-01
The purpose of this paper is to illustrate alpha's robustness and usefulness, using actual and simulated educational test data. The sampling properties of alpha are compared with the sampling properties of several other reliability coefficients: Guttman's λ2, λ4, and λ6; test-retest reliability;…
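A minimal sketch of the coefficient alpha discussed above for a k-item test: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The 8-examinee, 5-item score matrix is made up for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)     # shape: (n_examinees, k_items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

scores = np.array([[3, 4, 3, 5, 4],
                   [2, 2, 3, 3, 2],
                   [5, 5, 4, 5, 5],
                   [1, 2, 1, 2, 2],
                   [4, 3, 4, 4, 5],
                   [2, 3, 2, 2, 3],
                   [5, 4, 5, 4, 4],
                   [3, 3, 3, 4, 3]])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```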
DEFF Research Database (Denmark)
Nielsen, Morten Ørregaard
This paper presents a family of simple nonparametric unit root tests indexed by one parameter, d, and containing Breitung's (2002) test as the special case d = 1. It is shown that (i) each member of the family with d > 0 is consistent, (ii) the asymptotic distribution depends on d, and thus refle...
Weber, Benjamin; Lee, Sau L; Delvadia, Renishkumar; Lionberger, Robert; Li, Bing V; Tsong, Yi; Hochhaus, Guenther
2015-03-01
Equivalence testing of aerodynamic particle size distribution (APSD) through multi-stage cascade impactors (CIs) is important for establishing bioequivalence of orally inhaled drug products. Recent work demonstrated that the median of the modified chi-square ratio statistic (MmCSRS) is a promising metric for APSD equivalence testing of test (T) and reference (R) products as it can be applied to a reduced number of CI sites that are more relevant for lung deposition. This metric is also less sensitive to the increased variability often observed for low-deposition sites. A method to establish critical values for the MmCSRS is described here. This method considers the variability of the R product by employing a reference variance scaling approach that allows definition of critical values as a function of the observed variability of the R product. A stepwise CI equivalence test is proposed that integrates the MmCSRS as a method for comparing the relative shapes of CI profiles and incorporates statistical tests for assessing equivalence of single actuation content and impactor sized mass. This stepwise CI equivalence test was applied to 55 published CI profile scenarios, which were classified as equivalent or inequivalent by members of the Product Quality Research Institute working group (PQRI WG). The results of the stepwise CI equivalence test using a 25% difference in MmCSRS as an acceptance criterion provided the best matching with those of the PQRI WG as decisions of both methods agreed in 75% of the 55 CI profile scenarios.
Bazelet, Corinna S.; Thompson, Aileen C.; Naskrecki, Piotr
2016-01-01
The use of endemism and vascular plants only for biodiversity hotspot delineation has long been contested. Few studies have focused on the efficacy of global biodiversity hotspots for the conservation of insects, an important, abundant, and often ignored component of biodiversity. We aimed to test five alternative diversity measures for hotspot delineation and examine the efficacy of biodiversity hotspots for conserving a non-typical target organism, South African katydids. Using a 1° fishnet...
Statistically based reevaluation of PISC-II round robin test data
International Nuclear Information System (INIS)
Heasler, P.G.; Taylor, T.T.; Doctor, S.R.
1993-05-01
This report presents a re-analysis of international PISC-II (Programme for Inspection of Steel Components, Phase 2) round-robin inspection results using formal statistical techniques to account for experimental error. The analysis examines US team performance vs. that of the other participants, flaw sizing performance and the errors associated with flaw sizing, factors influencing flaw detection probability, and the performance of all participants with respect to recently adopted ASME Section XI flaw detection performance demonstration requirements, and develops conclusions concerning ultrasonic inspection capability. Inspection data were gathered on four heavy-section steel components, which included two plates and two nozzle configurations.
International Nuclear Information System (INIS)
MacKernan, Donal; Basios, Vasileios
2009-01-01
The statistical properties of chaotic Markov analytic maps and equivalent repellers are investigated through matrix representations of the Frobenius-Perron operator (U). The associated basis sets are constructed using Chebyshev functions and Markov partitions, which can be tailored to examine statistical dynamical properties associated with observables having support over local regions or, for example, about periodic orbits. The decay properties of the corresponding time correlation functions are given by an analytic expression for the spectra of U which is expected to be valid for a much larger class of systems than that studied here. An explicit and general expression is also derived for the correction factor to the dynamical zeta functions that occurs when analytic function spaces are not invariant under U.
Statistical homogeneity tests applied to large data sets from high energy physics experiments
Trusina, J.; Franc, J.; Kůs, V.
2017-12-01
Homogeneity tests are used in high energy physics for the verification of simulated Monte Carlo samples, i.e., whether they have the same distribution as measured data from the particle detector. The Kolmogorov-Smirnov, χ2, and Anderson-Darling tests are the techniques most used to assess the samples' homogeneity. Since MC generators produce plenty of entries from different models, each entry has to be re-weighted to obtain the same sample size as the measured data. One way of homogeneity testing is through binning. If we do not want to lose any information, we can apply generalized tests based on weighted empirical distribution functions. In this paper, we propose such generalized weighted homogeneity tests and introduce some of their asymptotic properties. We present results based on numerical analysis which focuses on estimation of the type-I error and power of the test. Finally, we present an application of our homogeneity tests to data from the DØ experiment at Fermilab.
Kucharik, Christopher J.; Foley, Jonathan A.; Delire, Christine; Fisher, Veronica A.; Coe, Michael T.; Lenters, John D.; Young-Molling, Christine; Ramankutty, Navin; Norman, John M.; Gower, Stith T.
2000-09-01
While a new class of Dynamic Global Ecosystem Models (DGEMs) has emerged in the past few years as an important tool for describing global biogeochemical cycles and atmosphere-biosphere interactions, these models are still largely untested. Here we analyze the behavior of a new DGEM and compare the results to global-scale observations of water balance, carbon balance, and vegetation structure. In this study, we use version 2 of the Integrated Biosphere Simulator (IBIS), which includes several major improvements and additions to the prototype model developed by Foley et al. [1996]. IBIS is designed to be a comprehensive model of the terrestrial biosphere; the model represents a wide range of processes, including land surface physics, canopy physiology, plant phenology, vegetation dynamics and competition, and carbon and nutrient cycling. The model generates global simulations of the surface water balance (e.g., runoff), the terrestrial carbon balance (e.g., net primary production, net ecosystem exchange, soil carbon, aboveground and belowground litter, and soil CO2 fluxes), and vegetation structure (e.g., biomass, leaf area index, and vegetation composition). In order to test the performance of the model, we have assembled a wide range of continental and global-scale data, including measurements of river discharge, net primary production, vegetation structure, root biomass, soil carbon, litter carbon, and soil CO2 flux. Using these field data and model results for the contemporary biosphere (1965-1994), our evaluation shows that simulated patterns of runoff, NPP, biomass, leaf area index, soil carbon, and total soil CO2 flux agree reasonably well with measurements that have been compiled from numerous ecosystems. These results also compare favorably to other global model results.
Harlander, John M.; Englert, Christoph R.; Brown, Charles M.; Marr, Kenneth D.; Miller, Ian J.; Zastera, Vaz; Bach, Bernhard W.; Mende, Stephen B.
2017-10-01
The design and laboratory tests of the interferometers for the Michelson Interferometer for Global High-resolution Thermospheric Imaging (MIGHTI) instrument which measures thermospheric wind and temperature for the NASA-sponsored Ionospheric Connection (ICON) Explorer mission are described. The monolithic interferometers use the Doppler Asymmetric Spatial Heterodyne (DASH) Spectroscopy technique for wind measurements and a multi-element photometer approach to measure thermospheric temperatures. The DASH technique and overall optical design of the MIGHTI instrument are described in an overview followed by details on the design, element fabrication, assembly, laboratory tests and thermal control of the interferometers that are the heart of MIGHTI.
Jokhio, Gul A.; Syed Mohsin, Sharifah M.; Gul, Yasmeen
2018-04-01
It has been established that Adobe provides, in addition to being sustainable and economical, better indoor air quality without extensive energy expenditure, as opposed to modern synthetic materials. The material, however, suffers from weak structural behaviour when subjected to adverse loading conditions. A wide range of mechanical properties has been reported in the literature owing to a lack of research and standardization. The present paper presents a statistical analysis of results obtained through compressive and flexural tests on Adobe samples. Adobe specimens with and without wire mesh reinforcement were tested and the results reported. It was found that the compressive strength of Adobe increases by about 43% after adding a single layer of wire mesh reinforcement, and this increase is statistically significant. The flexural response of Adobe also shows improvement with the addition of wire mesh reinforcement; however, the statistical significance of this improvement cannot be established.
DEFF Research Database (Denmark)
Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik
1995-01-01
Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate and the growth of the biomass are described by the Monod model, consisting of two nonlinear coupled first-order differential equations. The objective of this study was to estimate the kinetic parameters in the Monod model and to test whether the parameters from the three identical experiments have the same values. Estimation of the parameters was obtained using an iterative maximum likelihood method, and the test used was an approximative likelihood ratio test. The test showed that the three sets of parameters were identical only at a 4% significance level.
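A hedged sketch of fitting the two coupled Monod equations; the paper uses an iterative maximum likelihood method, for which nonlinear least squares is a simpler stand-in here. States: S = substrate, X = biomass; the parameters mu_max, Ks and yield Y, and all numbers below, are made up.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def monod_rhs(t, y, mu_max, Ks, Y):
    S, X = y
    growth = mu_max * S / (Ks + S) * X     # Monod growth rate
    return [-growth / Y, growth]           # dS/dt, dX/dt

t_obs = np.linspace(0, 10, 15)
true = (0.5, 2.0, 0.4)
sol = solve_ivp(monod_rhs, (0, 10), [10.0, 0.5], t_eval=t_obs, args=true)
data = sol.y + np.random.default_rng(5).normal(0, 0.05, sol.y.shape)

def residuals(theta):
    fit = solve_ivp(monod_rhs, (0, 10), [10.0, 0.5], t_eval=t_obs,
                    args=tuple(theta))
    return (fit.y - data).ravel()

result = least_squares(residuals, x0=[0.3, 1.0, 0.3], bounds=(1e-6, 10))
print("estimated (mu_max, Ks, Y):", result.x)
```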
The HepTestContest: a global innovation contest to identify approaches to hepatitis B and C testing.
Tucker, Joseph D; Meyers, Kathrine; Best, John; Kaplan, Karyn; Pendse, Razia; Fenton, Kevin A; Andrieux-Meyer, Isabelle; Figueroa, Carmen; Goicochea, Pedro; Gore, Charles; Ishizaki, Azumi; Khwairakpam, Giten; Miller, Veronica; Mozalevskis, Antons; Ninburg, Michael; Ocama, Ponsiano; Peeling, Rosanna; Walsh, Nick; Colombo, Massimo G; Easterbrook, Philippa
2017-11-01
...support targeted testing (n = 8); decentralization (n = 8); and task shifting (n = 7). The global innovation contest identified a range of local hepatitis testing approaches that can be used to inform the development of testing strategies in different settings and populations. Further implementation and evaluation of different testing approaches is needed.
The HepTestContest: a global innovation contest to identify approaches to hepatitis B and C testing
Directory of Open Access Journals (Sweden)
Joseph D. Tucker
2017-11-01
...support targeted testing (n = 8); decentralization (n = 8); and task shifting (n = 7). Conclusion: The global innovation contest identified a range of local hepatitis testing approaches that can be used to inform the development of testing strategies in different settings and populations. Further implementation and evaluation of different testing approaches is needed.
Shah, Munawar; Jin, Shuanggen
2015-12-01
Pre-earthquake ionospheric anomalies are still challenging to detect and understand, particularly across different earthquake magnitudes, focal depths, and fault types. In this paper, the seismo-ionospheric disturbances (SID) related to 1492 global earthquakes with Mw ≥ 5.0 from 1998 to 2014 are investigated using the total electron content (TEC) of GPS global ionosphere maps (GIM). Statistical analysis of 10-day TEC data before global Mw ≥ 5.0 earthquakes shows significant enhancement 5 days before earthquakes of Mw ≥ 6.0 at a 95% confidence level. Earthquakes with a focal depth of less than 60 km and Mw ≥ 6.0 are presumably the source of the ionospheric TEC deviations, because earthquake breeding zones release large quantities of energy at shallower focal depths. Cumulative percentages of anomalous TEC increase beyond Mw = 5.5 and sharpen prior to Mw ≥ 6.0 earthquakes. Seismo-ionospheric disturbances related to strike-slip and thrust earthquakes are noticeable for Mw 6.0-7.0 earthquakes. The relative values reveal high ratios (up to 2) and low ratios (down to -0.5) within 5 days prior to global earthquakes for positive and negative anomalies, respectively. The anomalous patterns in TEC related to earthquakes are possibly due to the coupling of large amounts of energy from the breeding zones of earthquakes of higher magnitude and shallower focal depth.
Statistical reliability assessment of UT round-robin test data for piping welds
International Nuclear Information System (INIS)
Kim, H.M.; Park, I.K.; Park, U.S.; Park, Y.W.; Kang, S.C.; Lee, J.H.
2004-01-01
Ultrasonic NDE is one of the important technologies in the lifetime maintenance of nuclear power plants. An ultrasonic inspection system consists of the operator, the equipment, and the procedure, and its reliability is determined by the capabilities of these components. A performance demonstration round robin was conducted to quantify the capability of ultrasonic in-service inspection. Several teams, employing procedures that met or exceeded ASME Section XI code requirements, inspected nuclear power plant piping containing various cracks in order to evaluate detection and sizing capability. In this paper, a statistical reliability assessment of the ultrasonic nondestructive inspection data using the probability of detection (POD) is presented. The POD results obtained with a logistic model proved useful for the reliability assessment of NDE hit/miss data. (orig.)
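A logistic POD curve of the kind named in this record is commonly fitted by regressing hit/miss outcomes on (log) flaw size. The sketch below is a hedged illustration with synthetic data, not the round-robin data or the authors' code; it fits POD(a) = logistic(b0 + b1 ln a) and extracts the flaw size detected with 90% probability:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
size = rng.uniform(1.0, 20.0, 200)                    # crack depth, mm (synthetic)
true_pod = 1 / (1 + np.exp(-(-3.0 + 2.0 * np.log(size))))
hit = (rng.random(200) < true_pod).astype(float)      # simulated hit/miss outcomes

X = sm.add_constant(np.log(size))                     # POD(a) = logistic(b0 + b1*ln a)
fit = sm.Logit(hit, X).fit(disp=0)
a90 = np.exp((np.log(9.0) - fit.params[0]) / fit.params[1])   # POD = 0.90 at logit ln(9)
print(fit.params, a90)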
A Statistical Test of Correlations and Periodicities in the Geological Records
Yabushita, S.
1997-09-01
Matsumoto & Kubotani argued that there is a positive and statistically significant correlation between cratering and mass extinction. This argument is critically examined by adopting the method of Ertel used by Matsumoto & Kubotani, but applying it more directly to the extinction and cratering records. It is shown that, under the null hypothesis of randomly distributed crater ages, the observed correlation has a probability of occurrence of 13%. However, when the large craters whose ages agree with the times of peaks in the extinction rate of marine fauna are excluded, one obtains a negative correlation. This result strongly indicates that mass extinctions are not due to an accumulation of impacts but to isolated gigantic impacts.
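The null hypothesis of randomly distributed crater ages lends itself to a Monte Carlo formulation: redraw crater ages uniformly, recompute the correlation with the extinction series each time, and report the exceedance probability. A minimal sketch with synthetic series (not the geological records used in the paper):

import numpy as np

rng = np.random.default_rng(1)
extinction = rng.random(50)                   # extinction-rate proxy on a 5-Myr grid (synthetic)
crater_ages = rng.uniform(0, 250, 20)         # "observed" crater ages, Myr (synthetic)
edges = np.arange(0, 255, 5.0)

def crater_series(ages):
    return np.histogram(ages, bins=edges)[0]  # craters per 5-Myr bin

obs = np.corrcoef(extinction, crater_series(crater_ages))[0, 1]
null = np.array([np.corrcoef(extinction, crater_series(rng.uniform(0, 250, 20)))[0, 1]
                 for _ in range(10000)])
print(obs, np.mean(null >= obs))              # one-sided probability of occurrence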
Belley, Philippe; Havet, Nathalie; Lacroix, Guy
2012-01-01
The paper focuses on the early career patterns of young male and female workers. It investigates potential dynamic links between statistical discrimination, mobility, tenure and wage profiles. The model assumes that it is more costly for an employer to assess female workers' productivity and that the noise/signal ratio tapers off more rapidly for male workers. These two assumptions yield numerous theoretical predictions pertaining to gender wage gaps. These predictions are tested using data f...
Hu, Y.; Vaughan, M.; McClain, C.; Behrenfeld, M.; Maring, H.; Anderson, D.; Sun-Mack, S.; Flittner, D.; Huang, J.; Wielicki, B.; Minnis, P.; Weimer, C.; Trepte, C.; Kuehn, R.
2007-06-01
This study presents an empirical relation that links the volume extinction coefficients of water clouds, the layer integrated depolarization ratios measured by lidar, and the effective radii of water clouds derived from collocated passive sensor observations. Based on Monte Carlo simulations of CALIPSO lidar observations, this method combines the cloud effective radius reported by MODIS with the lidar depolarization ratios measured by CALIPSO to estimate both the liquid water content and the effective number concentration of water clouds. The method is applied to collocated CALIPSO and MODIS measurements obtained during July and October of 2006, and January 2007. Global statistics of the cloud liquid water content and effective number concentration are presented.
Homogeneity testing of the global ESA CCI multi-satellite soil moisture climate data record
Preimesberger, Wolfgang; Su, Chun-Hsu; Gruber, Alexander; Dorigo, Wouter
2017-04-01
ESA's Climate Change Initiative (CCI) creates a global, long-term data record by merging multiple available earth observation products, with the goal of providing a product for climate studies, trend analysis, and risk assessments. The blending of soil moisture (SM) time series derived from different active and passive remote sensing instruments with varying sensor characteristics, such as microwave frequency, signal polarization, or radiometric accuracy, could potentially introduce inhomogeneities into the merged long-term data series, undercutting the usefulness of the product. To detect the spatio-temporal extent of contiguous periods without inhomogeneities, and subsequently to minimize their negative impact on the data records, different relative homogeneity tests (namely the Fligner-Killeen test of homogeneity of variances and the Wilcoxon rank-sums test) are implemented and applied to the combined active-passive ESA CCI SM data set. Inhomogeneities are detected by comparing the data against reference data from in-situ observations from the ISMN and model-based estimates from GLDAS-Noah and MERRA-Land. Inhomogeneity testing is performed over the ESA CCI SM time frame of 38 years (1978 to 2015), on a global quarter-degree grid, and with regard to six alterations in the combination of observation systems used in the data-blending process. This study describes and explains observed variations in the spatial and temporal patterns of inhomogeneities in the combined products. In addition, we propose methodologies for measuring and reducing the impact of inhomogeneities on trends derived from the ESA CCI SM data set, and suggest the use of inhomogeneity-corrected data for future trend studies. This study is supported by the European Union's FP7 EartH2Observe "Global Earth Observation for Integrated Water Resource Assessment" project (grant agreement number 603608).
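Both of the relative homogeneity tests named above are available in SciPy, so a single candidate break point can be tested in a few lines. A hedged sketch on synthetic anomaly series (in the study the tested series would be differences against the ISMN, GLDAS-Noah, or MERRA-Land reference):

import numpy as np
from scipy.stats import fligner, ranksums

rng = np.random.default_rng(2)
before = rng.normal(0.0, 1.0, 400)     # anomalies before a change in the sensor blend
after = rng.normal(0.3, 1.5, 400)      # shifted mean and inflated variance afterwards

print(fligner(before, after))          # H0: equal variances
print(ranksums(before, after))         # H0: equal locations/distributions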
Weibull statistics effective area and volume in the ball-on-ring testing method
DEFF Research Database (Denmark)
Frandsen, Henrik Lund
2014-01-01
The ball-on-ring method is, together with other biaxial bending methods, often used for measuring the strength of plates of brittle materials, because machining defects are remote from the high stresses causing the failure of the specimens. In order to scale the measured Weibull strength...... to geometries relevant for the application of the material, the effective area or volume of the test specimen must be evaluated. In this work, analytical expressions for the effective area and volume of the ball-on-ring test specimen are derived. In the derivation, the multiaxial stress field has been accounted......
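Once an effective area has been evaluated, Weibull strengths transfer between the test geometry and the component geometry through the usual size-scaling relation sigma_2 = sigma_1 * (A_eff,1 / A_eff,2)^(1/m). A small illustrative sketch (all values are made up, not taken from the paper):

def scale_weibull_strength(sigma_1, a_eff_1, a_eff_2, m):
    """Characteristic strength in geometry 2, given geometry 1 and Weibull modulus m."""
    return sigma_1 * (a_eff_1 / a_eff_2) ** (1.0 / m)

# e.g. from a ball-on-ring effective area of 12 mm^2 to a component area of 1200 mm^2
print(scale_weibull_strength(sigma_1=350.0, a_eff_1=12.0, a_eff_2=1200.0, m=10.0))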
Addressing Barriers to the Development and Adoption of Rapid Diagnostic Tests in Global Health
Directory of Open Access Journals (Sweden)
Eric Miller
2015-06-01
Full Text Available Immunochromatographic rapid diagnostic tests (RDTs) have demonstrated significant potential for use as point-of-care diagnostic tests in resource-limited settings. Most notably, RDTs for malaria have reached an unparalleled level of technological maturity and market penetration, and are now considered an important complement to standard microscopic methods of malaria diagnosis. However, the technical development of RDTs for other infectious diseases, and their uptake within the global health community as a core diagnostic modality, has been hindered by a number of extant challenges. These range from technical and biological issues, such as the need for better affinity agents and biomarkers of disease, to social, infrastructural, regulatory and economic barriers, which have all served to slow their adoption and diminish their impact. In order for the immunochromatographic RDT format to be successfully adapted to other disease targets, to see widespread distribution, and to improve clinical outcomes for patients on a global scale, these challenges must be identified and addressed, and the global health community must be engaged in championing the broader use of RDTs.
Addressing Barriers to the Development and Adoption of Rapid Diagnostic Tests in Global Health.
Miller, Eric; Sikes, Hadley D
Immunochromatographic rapid diagnostic tests (RDTs) have demonstrated significant potential for use as point-of-care diagnostic tests in resource-limited settings. Most notably, RDTs for malaria have reached an unparalleled level of technological maturity and market penetration, and are now considered an important complement to standard microscopic methods of malaria diagnosis. However, the technical development of RDTs for other infectious diseases, and their uptake within the global health community as a core diagnostic modality, has been hindered by a number of extant challenges. These range from technical and biological issues, such as the need for better affinity agents and biomarkers of disease, to social, infrastructural, regulatory and economic barriers, which have all served to slow their adoption and diminish their impact. In order for the immunochromatographic RDT format to be successfully adapted to other disease targets, to see widespread distribution, and to improve clinical outcomes for patients on a global scale, these challenges must be identified and addressed, and the global health community must be engaged in championing the broader use of RDTs.
A statistical characterization of the finger tapping test: modeling, estimation, and applications.
Austin, Daniel; McNames, James; Klein, Krystal; Jimison, Holly; Pavel, Misha
2015-03-01
Sensory-motor performance is indicative of both cognitive and physical function. The Halstead-Reitan finger tapping test is a measure of sensory-motor speed commonly used to assess function as part of a neuropsychological evaluation. Despite the widespread use of this test, the underlying motor and cognitive processes driving tapping behavior during the test are not well characterized or understood. This lack of understanding may make clinical inferences from test results about health or disease state less accurate because important aspects of the task such as variability or fatigue are unmeasured. To overcome these limitations, we enhanced the tapper with a sensor that enables us to more fully characterize all the aspects of tapping. This modification enabled us to decompose the tapping performance into six component phases and represent each phase with a set of parameters having clear functional interpretation. This results in a set of 29 total parameters for each trial, including change in tapping over time, and trial-to-trial and tap-to-tap variability. These parameters can be used to more precisely link different aspects of cognition or motor function to tapping behavior. We demonstrate the benefits of this new instrument with a simple hypothesis-driven trial comparing single and dual-task tapping.
Godleski, Stephanie A.; Ostrov, Jamie M.
2010-01-01
The present study used both categorical and dimensional approaches to test the association between relational and physical aggression and hostile intent attributions for both relational and instrumental provocation situations using the National Institute of Child Health and Human Development longitudinal Study of Early Child Care and Youth…
Statistical Indexes for Monitoring Item Behavior under Computer Adaptive Testing Environment.
Zhu, Renbang; Yu, Feng; Liu, Su
A computerized adaptive test (CAT) administration usually requires a large supply of items with accurately estimated psychometric properties, such as item response theory (IRT) parameter estimates, to ensure the precision of examinee ability estimation. However, an estimated IRT model of a given item in any given pool does not always correctly…
DEFF Research Database (Denmark)
Paisley, Larry
2002-01-01
The evolution of monitoring and surveillance for bovine spongiform encephalopathy (BSE) from the phase of passive surveillance that began in the United Kingdom in 1988 until the present is described. Currently, surveillance for BSE in Europe consists of mass testing of cattle slaughtered for human...
What is the ARM Climate Research Facility: Is Global Warming a Real Bias or a Statistical Anomaly?
Energy Technology Data Exchange (ETDEWEB)
Egami, Takeshi [U of Tennessee and ORNL]; Sisterson, Douglas L.
2010-03-10
The Atmospheric Radiation Measurement (ARM) Climate Research Facility (ACRF) is a U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research national user facility. With multi-laboratory management of distributed facilities worldwide, the ACRF does not fit the mold of a traditional user facility located at a national laboratory. The ACRF provides the world's most comprehensive 24/7 observational capabilities for obtaining atmospheric data specifically for climate change research. Serving nearly 5,000 registered users from 15 federal and state agencies, 375 universities, and 67 countries, the ACRF Data Archive collects and delivers over 5 terabytes of data per month to its users. ACRF data provide users with critical information about cloud formation processes, water vapor, and aerosols, and their influence on radiative transfer in the atmosphere. This information is used to improve global climate model predictions of climate change.
Crandall, Philip G; Mauromoustakos, Andy; O'Bryan, Corliss A; Thompson, Kevin C; Yiannas, Frank; Bridges, Kerry; Francois, Catherine
2017-10-01
In 2000, the Consumer Goods Forum established the Global Food Safety Initiative (GFSI) to increase the safety of the world's food supply and to harmonize food safety regulations worldwide. In 2013, a university research team, in conjunction with Diversey Consulting (Sealed Air), the Consumer Goods Forum, and officers of GFSI, solicited input from more than 15,000 GFSI-certified food producers worldwide to determine whether GFSI certification had lived up to these expectations. A total of 828 usable questionnaires were analyzed, representing about 2,300 food manufacturing facilities and food suppliers in 21 countries, mainly across Western Europe, Australia, New Zealand, and North America. Nearly 90% of these certified suppliers perceived GFSI as being beneficial for addressing their food safety concerns, and respondents were eight times more likely to repeat the certification process knowing what it entailed. Nearly three-quarters (74%) of these food manufacturers would choose to go through the certification process again even if certification were not required by one of their current retail customers. Important drivers for becoming GFSI certified included continuing to do business with an existing customer, starting to do business with a new customer, reducing the number of third-party food safety audits, and continuously improving their food safety program. Although 50% or fewer respondents stated that they saw actual increases in sales, customers, suppliers, or employees, significantly more companies agreed than disagreed that there was an increase in these key performance indicators in the year following GFSI certification. A majority of respondents (81%) agreed that there had been a substantial investment of staff time since certification, and 50% agreed there had been a significant capital investment. This survey is the largest and most representative survey of global food manufacturers conducted to date.
Imhoff, M.; Bounoua, L.
2004-12-01
A unique combination of satellite and socio-economic data was used to explore the relationship between human consumption and the carbon cycle. Biophysical models were applied to consumption data to estimate the annual amount of Earth's terrestrial net primary production (NPP) humans require for food, fiber and fuel, using the same modeling architecture as satellite-supported NPP measurements. The amount of Earth's NPP required to support human activities is a powerful measure of the aggregate human impact on the biosphere and an indicator of societal vulnerability to climate change. Equations were developed to estimate the amount of landscape-level NPP required to generate all the products consumed by 230 countries, including vegetal foods, meat, milk, eggs, wood, fuel-wood, paper and fiber. The amount of NPP required was calculated on a per capita basis and projected onto a global map of population to create a spatially explicit map of NPP-carbon demand in units of elemental carbon. NPP demand was compared to a map of Earth's average annual net primary production, or supply, created using 17 years (1982-1998) of AVHRR vegetation index data, to produce a geographically accurate balance sheet of terrestrial NPP-carbon supply and demand. Globally, humans consume 20 percent of Earth's total net primary production on land. Regionally, the NPP-carbon balance percentage varies from 6 to over 70 percent, and locally from near 0 to over 30,000 percent in major urban areas. The uneven distribution of NPP-carbon supply and demand indicates the degree to which various human populations rely on NPP imports and are vulnerable to climate change, and suggests policy options for slowing future growth in NPP demand.
International Nuclear Information System (INIS)
Baghaee Moghaddam, Taher; Soltani, Mehrtash; Karim, Mohamed Rehan
2015-01-01
Highlights:
• Effect of PET modification on the stiffness of asphalt mixture was examined.
• Different temperatures and loading levels were designated.
• Statistical analysis was used to find interactions between the selected variables.
• A good agreement between experimental results and predicted values was obtained.
• The optimal amount of PET was calculated to achieve the highest mixture performance.
Abstract: Stiffness of asphalt mixture is a fundamental design parameter of flexible pavement. According to the literature, the stiffness value is very susceptible to environmental and loading conditions. In this paper, the effects of applied stress and temperature on the stiffness modulus of unmodified and Polyethylene Terephthalate (PET) modified asphalt mixtures were evaluated using Response Surface Methodology (RSM). A quadratic model was successfully fitted to the experimental data. Based on the results achieved in this study, temperature variation had the highest impact on the mixture's stiffness, while PET content and amount of stress showed almost the same effect on the stiffness of the mixtures. The optimal amount of PET was found to be 0.41% by weight of aggregate particles to reach the highest stiffness value.
Callegaro, Giulia; Malkoc, Kasja; Corvi, Raffaella; Urani, Chiara; Stefanini, Federico M
2017-12-01
The identification of the carcinogenic risk of chemicals is currently based mainly on animal studies. The in vitro Cell Transformation Assays (CTAs) are a promising alternative to be considered in an integrated approach. CTAs measure the induction of foci of transformed cells. They model key stages of the in vivo neoplastic process and are able to detect both genotoxic and some non-genotoxic compounds, being the only in vitro method able to deal with the latter. Despite their favorable features, CTAs can be further improved, especially by reducing the possible subjectivity arising from the last phase of the protocol, namely the visual scoring of foci using coded morphological features. Taking advantage of digital image analysis, the aim of our work is to translate morphological features into statistical descriptors of foci images and to use them to mimic the classification performance of the visual scorer in discriminating between transformed and non-transformed foci. Here we present a classifier based on five descriptors, trained on a dataset of 1364 foci obtained with different compounds and concentrations. Our classifier showed accuracy, sensitivity and specificity equal to 0.77 and an area under the curve (AUC) of 0.84, outperforming a previously published model.
Relationship between the COI test and other sensory profiles by statistical procedures
Directory of Open Access Journals (Sweden)
Calvente, J. J.
1994-04-01
Full Text Available Relationships between 139 sensory attributes evaluated on 32 samples of virgin olive oil were analysed by a statistical sensory wheel that guarantees the objectiveness and predictive power of its conclusions concerning the best clusters of attributes: green, bitter-pungent, ripe fruit, fruity, sweet fruit, undesirable attributes, and two miscellanies. The procedure allows the sensory notes perceived by potential consumers of this edible oil to be understood from the point of view of its habitual consumers, with special reference to European Communities Regulation No. 2568/91. Five different panels (Spanish, Greek, Italian, Dutch and British) were used to evaluate the samples. Analysis of the relationships between stimuli perceived by aroma, flavour, smell, mouthfeel and taste, together with Linear Sensory Profiles based on fuzzy logic, is provided. A three-dimensional plot indicates the usefulness of the proposed procedure in the authentication of different varieties of virgin olive oil. An analysis of the volatile compounds responsible for most of the attributes lends weight to the conclusions. Directions which promise to improve the E.C. Regulation on the sensory quality of olive oil are also given.
Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore
2014-04-01
Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these tests, for example the power to detect an environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in the environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block designs, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and includes repeated measures in time following a constant, linear or quadratic pattern, possibly with some form of autocorrelation. The model also allows a set of reference varieties to be added to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
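One central ingredient of such a framework is the generation of overdispersed counts with excess zeros under a chosen design. A minimal hedged sketch (illustrative parameter values, not those of the published model): zero-inflated negative binomial counts for a randomized block layout with fixed variety effects.

import numpy as np

rng = np.random.default_rng(11)
n_blocks = 8
block = rng.normal(0.0, 0.3, n_blocks)                 # random block effects (log scale)
variety = np.array([0.0, 0.1, -0.2])                   # fixed variety effects (log scale)

mu = np.exp(2.0 + block[:, None] + variety[None, :])   # mean count per plot
k = 1.5                                                # NB dispersion parameter
counts = rng.negative_binomial(k, k / (k + mu))        # NB with mean mu, size k
counts *= rng.random(mu.shape) >= 0.2                  # add 20% excess zeros
print(counts)

Repeating such draws under difference and equivalence hypotheses, and counting rejections, gives the prospective power estimates the paper discusses.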
DEFF Research Database (Denmark)
Paulsen, Rasmus Reinhold; Larsen, Rasmus; Ersbøll, Bjarne Kjær
2002-01-01
surface models are built by using the anatomical landmarks to warp a template mesh onto all shapes in the training set. Testing the gender related differences is done by initially reducing the dimensionality using principal component analysis of the vertices of the warped meshes. The number of components...... to retain is chosen using Horn's parallel analysis. Finally a multivariate analysis of variance is performed on these components....
Austin, Peter C; Mamdani, Muhammad M; Juurlink, David N; Hux, Janet E
2006-09-01
Objective: to illustrate how multiple hypothesis testing can produce associations with no clinical plausibility. We conducted a study of all 10,674,945 residents of Ontario aged between 18 and 100 years in 2000. Residents were randomly assigned to equally sized derivation and validation cohorts and classified according to their astrological sign. Using the derivation cohort, we searched through 223 of the most common diagnoses for hospitalization until we identified, for each astrological sign, two for which subjects born under that sign had a significantly higher probability of hospitalization compared to subjects born under the remaining signs combined (P<0.05). We then tested these 24 associations in the independent validation cohort. Residents born under Leo had a higher probability of gastrointestinal hemorrhage (P=0.0447), while Sagittarians had a higher probability of humerus fracture (P=0.0123), compared to all other signs combined. After adjusting the significance level to account for multiple comparisons, none of the identified associations remained significant in either the derivation or the validation cohort. Our analyses illustrate how the testing of multiple, non-prespecified hypotheses increases the likelihood of detecting implausible associations. Our findings have important implications for the analysis and interpretation of clinical studies.
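The arithmetic behind this demonstration is easy to reproduce: screening many true-null hypotheses at p < 0.05 yields spurious "discoveries" at roughly a 5% rate, which a multiplicity adjustment suppresses. A minimal sketch with pure noise:

import numpy as np

rng = np.random.default_rng(3)
p = rng.uniform(size=223)          # 223 null hypotheses: p-values ~ Uniform(0, 1)
print((p < 0.05).sum())            # expect about 223 * 0.05 ~ 11 false positives
print((p < 0.05 / 223).sum())      # Bonferroni-adjusted threshold: almost always 0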
Kipiński, Lech; König, Reinhard; Sielużycki, Cezary; Kordecki, Wojciech
2011-10-01
Stationarity is a crucial yet rarely questioned assumption in the analysis of time series of magneto- (MEG) or electroencephalography (EEG). One key drawback of the commonly used tests for stationarity of encephalographic time series is the fact that conclusions on stationarity are only indirectly inferred, either from the Gaussianity (e.g. the Shapiro-Wilk test or Kolmogorov-Smirnov test) or from the randomness of the time series and the absence of trend using very simple time-series models (e.g. the sign and trend tests by Bendat and Piersol). We present a novel approach to the analysis of the stationarity of MEG and EEG time series by applying modern statistical methods which were specifically developed in econometrics to verify the hypothesis that a time series is stationary. We report our findings from the application of three different tests of stationarity--the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test for trend or mean stationarity, the Phillips-Perron (PP) test for the presence of a unit root, and the White test for homoscedasticity--on an illustrative set of MEG data. For five stimulation sessions, we found, already for short epochs of 250 and 500 ms duration, that although the majority of the studied epochs of single MEG trials were usually mean-stationary (KPSS test and PP test), they were classified as nonstationary due to their heteroscedasticity (White test). We also observed that the presence of external auditory stimulation did not significantly affect the findings regarding the stationarity of the data. We conclude that the combination of these tests allows a refined analysis of the stationarity of MEG and EEG time series.
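Two of the three tests are available in statsmodels; a Phillips-Perron implementation can be found in the arch package. A hedged sketch on a synthetic heteroscedastic epoch (not MEG data):

import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import kpss
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n) * np.linspace(1.0, 3.0, n)   # mean-stationary but heteroscedastic

stat, p_kpss, _, _ = kpss(x, regression="c", nlags="auto")
print("KPSS:", stat, p_kpss)                        # H0: mean-stationary

t = sm.add_constant(np.arange(n, dtype=float))
resid = sm.OLS(x, t).fit().resid
lm, p_white, _, _ = het_white(resid, t)
print("White:", lm, p_white)                        # H0: homoscedastic

This mirrors the paper's finding pattern: an epoch can pass a mean-stationarity test yet be flagged as nonstationary through its variance.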
Directory of Open Access Journals (Sweden)
Rafdzah Zaki
2013-06-01
Full Text Available Objective(s): Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify statistical methods used to measure the reliability of equipment measuring continuous variables. The study also aims to highlight inappropriate uses of statistical methods in reliability analysis and their implications for medical practice. Materials and Methods: In 2010, five electronic databases were searched for reliability studies published between 2007 and 2009. A total of 5,795 titles were initially identified; 282 were potentially related, and 42 finally fitted the inclusion criteria. Results: The intra-class correlation coefficient (ICC) is the most popular method, used in 25 (60%) of the studies, followed by comparison of means (8, or 19%). Of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and the type of ICC used. Most studies (71%) also tested the agreement of instruments. Conclusion: This study finds that the intra-class correlation coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue and to perform reliability analyses correctly.
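For reference, a two-way random-effects single-measures ICC(2,1) in the sense of Shrout & Fleiss can be computed directly from the ANOVA mean squares. A hedged sketch with synthetic repeated measurements (n subjects x k trials):

import numpy as np

def icc_2_1(y):
    n, k = y.shape
    grand = y.mean()
    row, col = y.mean(axis=1), y.mean(axis=0)
    msr = k * ((row - grand) ** 2).sum() / (n - 1)    # between-subjects mean square
    msc = n * ((col - grand) ** 2).sum() / (k - 1)    # between-trials mean square
    sse = ((y - row[:, None] - col[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(5)
subject_level = rng.normal(50, 10, size=(30, 1))      # true subject values
y = subject_level + rng.normal(0, 2, size=(30, 3))    # 3 noisy trials per subject
print(icc_2_1(y))                                     # expected near 100 / (100 + 4) ~ 0.96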
Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh K
2016-08-01
Age-associated changes in the surface electromyogram (sEMG) of the Tibialis Anterior (TA) muscle can be attributed to neuromuscular alterations that precede strength loss. We have used our sEMG model of the Tibialis Anterior to interpret these age-related changes and compared the model output with experimental sEMG. Eighteen young (20-30 years) and 18 older (60-85 years) participants performed isometric dorsiflexion at six different percentage levels of maximum voluntary contraction (MVC), and their sEMG from the TA muscle was recorded. Six different age-related changes in the neuromuscular system were simulated using the sEMG model at the same MVC levels as in the experiment. The maximal power of the spectrum and the Gaussianity and linearity test statistics were computed from the simulated and experimental sEMG. A correlation analysis at α = 0.05 was performed between the simulated and experimental age-related changes in the sEMG features. The results show that the loss of motor units was distinguished by the Gaussianity and linearity test statistics, while the maximal power of the PSD distinguished between the muscular factors. The simulated condition of a 40% loss of motor units with the number of fast fibres halved correlated best with the age-related changes observed in the higher-order statistical features of the experimental sEMG. This simulated aging condition corresponds to the moderate motor unit remodelling and negligible strength loss reported in the literature for cohorts aged 60-70 years.
Rana, Santosh; Dhanotia, Jitendra; Bhatia, Vimal; Prakash, Shashi
2018-04-01
In this paper, we propose a simple, fast, and accurate technique for detecting the collimation position of an optical beam using the self-imaging phenomenon and correlation analysis. Herrera-Fernandez et al. [J. Opt. 18, 075608 (2016), DOI: 10.1088/2040-8978/18/7/075608] proposed an experimental arrangement for collimation testing by comparing the periods of two different self-images produced by a single diffraction grating. Following their approach, we propose a testing procedure based on the correlation coefficient (CC) for efficient detection of variation in the size and fringe width of the Talbot self-images, and thereby of the collimation position. When the beam is collimated, the physical properties of the self-images of the grating, such as size and fringe width, do not vary from one Talbot plane to the other and are identical; the CC is maximum in this situation. For a de-collimated position, the size and fringe width of the self-images vary, and correspondingly the CC decreases. Hence, the magnitude of the CC is a measure of the degree of collimation. Using the method, we could set the collimation position to a resolution of 1 μm, which corresponds to ±0.25 μrad of collimation angle (for testing a collimating lens of diameter 46 mm and focal length 300 mm). In contrast to most collimation techniques reported to date, the proposed technique does not require translation/rotation of the grating, complicated phase evaluation algorithms, or an intricate method for determining the period of the grating or its self-images. The technique is fully automated and provides high resolution and precision.
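The decision criterion reduces to a single correlation coefficient between two recorded self-images. A toy sketch with synthetic one-dimensional fringe patterns (not the experimental data): identical periods emulate the collimated case, a slight period mismatch the de-collimated one.

import numpy as np

x = np.linspace(0.0, 1.0, 2000)                 # detector coordinate (arbitrary units)

def fringes(period):
    return 0.5 * (1 + np.cos(2 * np.pi * x / period))

def cc(a, b):
    return np.corrcoef(a, b)[0, 1]

print(cc(fringes(0.0100), fringes(0.0100)))     # collimated: CC = 1
print(cc(fringes(0.0100), fringes(0.0102)))     # de-collimated: CC drops well below 1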
On the Integrity of Online Testing for Introductory Statistics Courses: A Latent Variable Approach
Directory of Open Access Journals (Sweden)
Alan Fask
2015-04-01
Full Text Available There has been remarkable growth in distance-learning courses in higher education. Despite indications that distance-learning courses are more vulnerable to cheating behavior than traditional courses, there has been little research on whether online exams facilitate a relatively greater level of cheating. This article examines this issue by developing an approach that uses a latent variable to measure student cheating. The latent variable is linked both to known student-mastery-related variables and to variables unrelated to student mastery. Grade scores from a proctored final exam and an unproctored final exam are used to test for increased cheating behavior in the unproctored exam.
Davis-Sharts, J
1986-10-01
Maslow's hierarchy of basic human needs provides a major theoretical framework in nursing science. The purpose of this study was to empirically test Maslow's need theory, specifically at the levels of physiological and security needs, using a hologeistic comparative method. Thirty cultures taken from the 60 cultural units in the Human Relations Area Files (HRAF) Probability Sample were found to have data available for examining hypotheses about thermoregulatory (physiological) and protective (security) behaviors practiced prior to sleep onset. The findings provide initial worldwide empirical evidence to support Maslow's need hierarchy.
International Nuclear Information System (INIS)
Coleman, S.Y.; Nicholls, J.R.
2006-01-01
Cyclic oxidation testing at elevated temperatures requires careful experimental design and the adoption of standard procedures to ensure reliable data. This is a major aim of the 'COTEST' research programme. Further, as such tests are time consuming and costly in terms of the human effort needed to take measurements over a large number of cycles, it is important to gain maximum information from a minimum number of tests (trials). This search for standardisation of cyclic oxidation conditions led to a series of tests to determine the relative effects of the cyclic parameters on the oxidation process. Following a review of the available literature, databases and the experience of partners to the COTEST project, the most influential parameters, namely upper dwell temperature (oxidation temperature) and time (hot time), lower dwell time (cold time) and environment, were investigated in the partners' laboratories. It was decided to test upper dwell temperature at 3 levels, at and equidistant from a reference temperature; upper dwell time at a reference, a higher and a lower time; lower dwell time at a reference and a higher time; and wet and dry environments. Thus an experiment consisting of nine trials was designed according to statistical criteria. The results of the trials were analysed statistically to test the main linear and quadratic effects of upper dwell temperature and hot time and the main effects of lower dwell time (cold time) and environment. The nine trials are a quarter fraction of the 36 possible combinations of parameter levels that could have been studied. The results have been analysed by half-Normal plots, as there are only 2 degrees of freedom for the experimental error variance, which is rather low for a standard analysis of variance; half-Normal plots give a visual indication of which factors are statistically significant. In this experiment each trial has 3 replications, and the data are analysed in terms of mean mass change, oxidation kinetics
Category (CAT) IIIb Level 1 Test Plan for Global Positioning System (GPS)
1993-09-01
CAT IIIb is defined in Advisory Circular (AC) 120-28C [1] as "a precision instrument approach and landing with no decision height (DH), or..." Related documents include FAA AC 20-57A (Automatic Landing Systems) [3] and AC 120-28C (Criteria for Approval of CAT III Landing Weather Minima) [1]. (Report Nos. DOT/FAA/RD-93/21 and MTR 93W0000102.)
Cohn, T.A.; England, J.F.; Berenbrock, C.E.; Mason, R.R.; Stedinger, J.R.; Lamontagne, J.R.
2013-01-01
The Grubbs-Beck test is recommended by the federal guidelines for the detection of low outliers in flood flow frequency computation in the United States. This paper presents a generalization of the Grubbs-Beck test for normal data (similar to the Rosner (1983) test; see also Spencer and McCuen (1996)) that can provide a consistent standard for identifying multiple potentially influential low flows. In cases where low outliers have been identified, they can be represented as "less-than" values, and a frequency distribution can be developed using censored-data statistical techniques, such as the Expected Moments Algorithm. This approach can improve the fit of the right-hand tail of a frequency distribution and provide protection from lack of fit due to unimportant but potentially influential low flows (PILFs) in a flood series, thus making the flood frequency analysis procedure more robust.
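For orientation, the classical single-outlier Grubbs-Beck screen that the paper generalizes works on log-transformed flows with a 10%-level critical value; the K_N approximation below follows the form used in Bulletin 17B (stated here from memory, so treat it as an assumption). Flows below the threshold would be recoded as "less-than" values for censored-data fitting such as EMA. Synthetic data:

import numpy as np

def grubbs_beck_low_outliers(flows):
    x = np.log10(np.asarray(flows, dtype=float))
    n = x.size
    k_n = -0.9043 + 3.345 * np.sqrt(np.log10(n)) - 0.4046 * np.log10(n)  # assumed 10% one-sided value
    x_crit = x.mean() - k_n * x.std(ddof=1)
    return np.asarray(flows)[x < x_crit]          # flagged potentially influential low flows

rng = np.random.default_rng(6)
peaks = np.append(rng.lognormal(mean=8.0, sigma=0.5, size=49), 60.0)  # one suspect low peak
print(grubbs_beck_low_outliers(peaks))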
Karzmark, Peter; Deutsch, Gayle K
2018-01-01
This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological test battery and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics, including measures of sensitivity, specificity, positive and negative predictive power, and the positive likelihood ratio, were calculated for both types of battery. The sample was drawn from a general neurological population of adults (n = 117) that included a number of older participants (age > 55; n = 38). Standardized neuropsychological assessments, comprising the Halstead-Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III, were administered to all participants. The comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery: although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
Nielsen, Allan A.; Conradsen, Knut; Skriver, Henning
2016-10-01
Test statistics for the comparison of real (as opposed to complex) variance-covariance matrices exist in the statistics literature [1]. In earlier publications we described a test statistic for the equality of two variance-covariance matrices following the complex Wishart distribution, with an associated p-value [2]. We showed their application to bitemporal change detection and to edge detection [3] in multilook, polarimetric synthetic aperture radar (SAR) data in the covariance matrix representation [4]. The test statistic and the associated p-value are described in [5] also. In [6] we focussed on the block-diagonal case, elaborated on some computer implementation issues, and gave examples of the application to change detection in both full and dual polarization bitemporal, bifrequency, multilook SAR data. In [7] we described an omnibus test statistic Q for the equality of k variance-covariance matrices following the complex Wishart distribution, together with a factorization Q = R2 R3 … Rk, where Q and the Rj determine if and when a difference occurs, and gave p-values for Q and the Rj. Finally, we demonstrated the use of Q, the Rj and the p-values for change detection in truly multitemporal, full polarization SAR data. Here we illustrate the methods by means of airborne L-band SAR data (EMISAR) [8,9]. The methods may be applied to other polarimetric SAR data, such as data from Sentinel-1, COSMO-SkyMed, TerraSAR-X, ALOS and RadarSat-2, and also to single-pol data. The account given here closely follows that given in our recent IEEE TGRS paper [7]. Selected References [1] Anderson, T. W., An Introduction to Multivariate Statistical Analysis, John Wiley, New York, third ed. (2003). [2] Conradsen, K., Nielsen, A. A., Schou, J., and Skriver, H., "A test statistic in the complex Wishart distribution and its application to change detection in polarimetric SAR data," IEEE Transactions on Geoscience and Remote Sensing 41(1): 4-19, 2003. [3] Schou, J
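In the form given in [7], the omnibus statistic for k multilook covariance matrices (each a p x p Hermitian matrix built from n looks, X_i = n * Sigma_i) is lnQ = n (p k ln k + sum_i ln|X_i| - k ln|sum_i X_i|). A hedged numerical sketch (the chi-square approximation with its correction factors from [7] is omitted):

import numpy as np

def ln_q_omnibus(sigmas, n_looks):
    """lnQ for equality of k complex covariance matrices (X_i = n_looks * Sigma_i)."""
    k, p = len(sigmas), sigmas[0].shape[0]
    xs = [n_looks * s for s in sigmas]
    logdets = [np.linalg.slogdet(x)[1] for x in xs]     # log|X_i|, complex-safe
    logdet_sum = np.linalg.slogdet(sum(xs))[1]          # log|X_1 + ... + X_k|
    return n_looks * (p * k * np.log(k) + sum(logdets) - k * logdet_sum)

rng = np.random.default_rng(7)
a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
sigma = a @ a.conj().T + 3 * np.eye(3)                  # Hermitian positive definite
print(ln_q_omnibus([sigma, sigma, sigma], n_looks=13))  # equal matrices give lnQ = 0

lnQ is always non-positive, with strongly negative values indicating a change somewhere in the series; the factors Rj localize it in time.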
Saccomani, Maria Pia; Audoly, Stefania; Bellu, Giuseppina; D'Angiò, Leontina
2010-04-01
DAISY (Differential Algebra for Identifiability of SYstems) is a recently developed computer algebra software tool which can be used to automatically check the global identifiability of (linear and) nonlinear dynamic models described by differential equations involving polynomial or rational functions. Global identifiability is a fundamental prerequisite for model identification, important not only for biological and medical systems but also for many physical and engineering systems derived from first principles. Lack of identifiability implies that the parameter estimation techniques may not fail, but any obtained numerical estimates will be meaningless. The software does not require understanding of the underlying mathematical principles and can be used by researchers in applied fields with a minimum of mathematical background. We illustrate the DAISY software by checking the a priori global identifiability of two benchmark nonlinear models taken from the literature. The analysis of these two examples includes comparison with other methods and demonstrates how identifiability analysis is simplified by this tool. We then illustrate the identifiability analysis of two further examples, including discussion of some specific aspects related to the role of observability and knowledge of initial conditions in testing identifiability, and to the computational complexity of the software. The main focus of this paper is not the description of the mathematical background of the algorithm, which has been presented elsewhere, but an illustration of its use and of some of its more interesting features. DAISY is available on the web site http://www.dei.unipd.it/~pia/.
State-trait decomposition of Name Letter Test scores and relationships with global self-esteem.
Perinelli, Enrico; Alessandri, Guido; Donnellan, M Brent; Łaguna, Mariola
2018-06-01
The Name Letter Test (NLT) assesses the degree to which participants show a preference for their own initials. The NLT was often thought to measure implicit self-esteem, but recent literature reviews do not unequivocally support this hypothesis. Several authors have argued that the NLT is most strongly associated with the state component of self-esteem. The current research uses a modified STARTS model to (a) estimate the percentage of stable and transient components of the NLT and (b) estimate the covariances between stable/transient components of the NLT and stable/transient components of self-esteem and positive and negative affect. Two longitudinal studies were conducted with different time lags: in Study 1, participants were assessed daily for 7 consecutive days, whereas in Study 2, participants were assessed weekly for 8 consecutive weeks. Participants also completed a battery of questionnaires including global self-esteem, positive affect, and negative affect. In both studies, the NLT showed (a) high stability across time, (b) a high percentage of stable variance, (c) no significant covariance with stable and transient factors for global self-esteem, and (d) a different pattern of correlations with stable and transient factors of affect than global self-esteem. Collectively, these results further undermine the claim that the NLT is a valid measure of implicit self-esteem. Future work is needed to identify theoretically grounded correlates of the NLT.
Directory of Open Access Journals (Sweden)
Yun-shil Cha
2013-01-01
Full Text Available Birnbaum (2011, 2012) questioned the iid (independent and identically distributed) sampling assumptions used by state-of-the-art statistical tests in Regenwetter, Dana and Davis-Stober's (2010, 2011) analysis of the "linear order model". Birnbaum (2012) cited, but did not use, a test of iid by Smith and Batchelder (2008) with analytically known properties. Instead, he created two new test statistics with unknown sampling distributions. Our rebuttal has five components: (1) We demonstrate that the Regenwetter et al. data pass Smith and Batchelder's test of iid with flying colors. (2) We provide evidence from Monte Carlo simulations that Birnbaum's (2012) proposed tests have unknown Type-I error rates, which depend on the actual choice probabilities and on how data are coded, as well as on the null hypothesis of iid sampling. (3) Birnbaum analyzed only a third of Regenwetter et al.'s data. We show that his two new tests fail to replicate on the other two-thirds of the data, within participants. (4) Birnbaum selectively picked data from one respondent to suggest that choice probabilities may have changed partway into the experiment. Such nonstationarity could potentially cause a seemingly good fit to be a Type-II error. We show that the linear order model fits equally well if we allow for warm-up effects. (5) Using hypothetical data, Birnbaum (2012) claimed to show that "true-and-error" models for binary pattern probabilities overcome the alleged shortcomings of Regenwetter et al.'s approach. We disprove this claim on the same data.
Statistical analysis on the fluence factor of surveillance test data of Korean nuclear power plants
Energy Technology Data Exchange (ETDEWEB)
Lee, Gyeong Geun; Kim, Min Chul; Yoon, Ji Hyun; Lee, Bong Sang; Lim, Sang Yeob; Kwon, Jun Hyun [Nuclear Materials Safety Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]
2017-06-15
The transition temperature shift (TTS) of reactor pressure vessel materials is an important factor that determines the lifetime of a nuclear power plant. The prediction of the TTS at the end of a plant's lifespan is calculated based on the equation of Regulatory Guide 1.99 revision 2 (RG1.99/2) from the US. The fluence factor in this equation is expressed as a power function, with an exponent determined from early surveillance data in the US. Recently, advanced approaches to estimating the TTS have been proposed in various countries, and Korea is considering the development of a new TTS model. In this study, the TTS trend of the Korean surveillance test results was analyzed using a nonlinear regression model and a mixed-effect model, both based on the power function. The nonlinear regression model yielded a fluence exponent similar to that of RG1.99/2, whereas the mixed-effect model had a higher exponent and showed superior goodness of fit. Compared with RG1.99/2 and RG1.99/3, the mixed-effect model provided a more accurate prediction of the TTS.
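The fixed-curve part of such an analysis is a nonlinear least-squares fit of TTS = CF * f^m against fluence; a mixed-effect variant would add random per-material effects on top. A minimal sketch with synthetic surveillance points (illustrative values, not the Korean data; note that RG1.99/2 itself uses a fluence-dependent exponent):

import numpy as np
from scipy.optimize import curve_fit

def tts(f, cf, m):
    return cf * f ** m          # CF: chemistry factor, f: fluence, m: exponent

rng = np.random.default_rng(8)
f = rng.uniform(0.1, 6.0, 40)                            # fluence (synthetic, 10^19 n/cm^2)
y = tts(f, 35.0, 0.28) + rng.normal(0.0, 3.0, f.size)    # shifts in degC (synthetic)

(cf_hat, m_hat), _ = curve_fit(tts, f, y, p0=(30.0, 0.3))
print(cf_hat, m_hat)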
Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert
2016-01-01
The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008–2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. PMID:27892471
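A hedged sketch of the two-step idea (not the released GWASpi implementation, and with a simple Bonferroni threshold standing in for the calibrated threshold correction): a linear SVM ranks SNPs by absolute weight, then association tests are run only on the screened subset.

import numpy as np
from scipy.stats import chi2_contingency
from sklearn.svm import LinearSVC

rng = np.random.default_rng(9)
n, p_snps = 600, 1000
geno = rng.integers(0, 3, size=(n, p_snps)).astype(float)         # 0/1/2 genotypes (synthetic)
pheno = (geno[:, 7] + rng.normal(0.0, 2.0, n) > 2.0).astype(int)  # SNP 7 carries the signal

w = np.abs(LinearSVC(C=0.1, dual=False).fit(geno, pheno).coef_[0])
candidates = np.argsort(w)[-20:]                                  # step 1: SVM screening

alpha = 0.05 / candidates.size                                    # step 2: tests on subset only
for j in sorted(candidates):
    table = np.array([[np.sum((geno[:, j] == g) & (pheno == c))
                       for g in (0, 1, 2)] for c in (0, 1)])
    p_val = chi2_contingency(table)[1]
    if p_val < alpha:
        print(f"SNP {j}: p = {p_val:.2e}")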
Holzhütter, H G; Genschow, E; Diener, W; Schlede, E
2003-05-01
The acute toxic class (ATC) methods were developed for determining LD(50)/LC(50) estimates of chemical substances with significantly fewer animals than needed when applying conventional LD(50)/LC(50) tests. The ATC methods are sequential stepwise procedures with fixed starting doses/concentrations and a maximum of six animals used per dose/concentration. The numbers of dead/moribund animals determine whether further testing is necessary or whether the test is terminated. In recent years we have developed classification procedures for the oral, dermal and inhalation routes of administration by using biometric methods. The biometric approach assumes a probit model for the mortality probability of a single animal and assigns the chemical to that toxicity class for which the best concordance is achieved between the statistically expected and the observed numbers of dead/moribund animals at the various steps of the test procedure. In previous publications we have demonstrated the validity of the biometric ATC methods on the basis of data obtained for the oral ATC method in two-animal ring studies with 15 participants from six countries. Although the test procedures and biometric evaluations for the dermal and inhalation ATC methods have already been published, there was a need for an adaptation of the classification schemes to the starting doses/concentrations of the Globally Harmonized Classification System (GHS) recently adopted by the Organization for Economic Co-operation and Development (OECD). Here we present the biometric evaluation of the dermal and inhalation ATC methods for the starting doses/concentrations of the GHS and of some other international classification systems still in use. We have developed new test procedures and decision rules for the dermal and inhalation ATC methods, which require significantly fewer animals to provide predictions of toxicity classes, that are equally good or even better than those achieved by using the conventional LD(50)/LC
Global testing under sparse alternatives: ANOVA, multiple comparisons and the higher criticism
Arias-Castro, Ery; Candès, Emmanuel J.; Plan, Yaniv
2011-01-01
Testing for the significance of a subset of regression coefficients in a linear model, a staple of statistical analysis, goes back at least to the work of Fisher who introduced the analysis of variance (ANOVA). We study this problem under the assumption that the coefficient vector is sparse, a common situation in modern high-dimensional settings. Suppose we have $p$ covariates and that under the alternative, the response only depends upon the order of $p^{1-\\alpha}$ of those, $0\\le\\alpha\\le1$...
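The higher criticism statistic referenced in the title compares the ordered p-values with their expectation under the global null. A compact sketch of the standard form HC* = max over the smaller order statistics of sqrt(n) (i/n - p_(i)) / sqrt(p_(i)(1 - p_(i))):

import numpy as np

def higher_criticism(p_values, alpha0=0.5):
    p = np.sort(np.asarray(p_values, dtype=float))
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    return hc[: int(alpha0 * n)].max()      # maximize over the smallest alpha0*n p-values

rng = np.random.default_rng(10)
null_p = rng.uniform(size=1000)             # global null: modest HC values
sparse_p = null_p.copy()
sparse_p[:10] = rng.uniform(0, 1e-4, 10)    # a few strong signals: HC blows up
print(higher_criticism(null_p), higher_criticism(sparse_p))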
International Nuclear Information System (INIS)
Ohm, H.
1982-01-01
Using the example of the delayed neutron spectrum of 24-s 137I, the statistical model is tested with regard to its applicability. A computer code was developed which simulates delayed neutron spectra by the Monte Carlo method under the assumption that the transition probabilities of the β and neutron decays obey the Porter-Thomas distribution, while the spacings of the neutron-emitting levels follow a Wigner distribution. Gamow-Teller β-transitions and first-forbidden β-transitions from the precursor nucleus to the emitting nucleus were considered. (orig./HSI) [de]
Stockburger, D W
1999-05-01
Active server pages permit a software developer to customize the Web experience for users by inserting server-side script and database access into Web pages. This paper describes applications of these techniques and provides a primer on the use of these methods. Applications include a system that generates and grades individualized homework assignments and tests for statistics students. The student accesses the system as a Web page, prints out the assignment, does the assignment, and enters the answers on the Web page. The server, running on NT Server 4.0, grades the assignment, updates the grade book (on a database), and returns the answer key to the student.
International Nuclear Information System (INIS)
Yurkov, M.V.
2002-01-01
This paper presents an experimental study of the statistical properties of the radiation from a SASE FEL. The experiments were performed at the TESLA Test Facility VUV SASE FEL at DESY, operating in the high-gain linear regime with a gain of about 10^6. It is shown that the fluctuations of the output radiation energy follow a gamma distribution. We also measured, for the first time, the probability distribution of the SASE radiation energy after a narrow-band monochromator. The experimental results are in good agreement with theoretical predictions: the energy fluctuations after the monochromator follow a negative exponential distribution.
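Checking a measured shot-to-shot energy distribution against a gamma law is straightforward; the fitted shape parameter plays the role of the number of longitudinal modes M. A hedged sketch with synthetic energies in place of measured ones (the KS p-value is optimistic here because the parameters are fitted on the same data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
energies = rng.gamma(shape=4.0, scale=0.25, size=5000)    # synthetic pulse energies, mean 1

m, loc, scale = stats.gamma.fit(energies, floc=0.0)        # fit with location fixed at 0
ks = stats.kstest(energies, "gamma", args=(m, loc, scale))
print("M =", m, "KS p-value =", ks.pvalue)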
Sandurska, Elżbieta; Szulc, Aleksandra
2016-01-01
Sandurska Elżbieta, Szulc Aleksandra. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated. Journal of Education Health and Sport. 2016;6(13):275-287. eISSN 2391-8306. DOI: http://dx.doi.org/10.5281/zenodo.293762; http://ojs.ukw.edu.pl/index.php/johs/article/view/4278
The Comprehensive Nuclear-Test-Ban Treaty and Its Relevance for the Global Security
Directory of Open Access Journals (Sweden)
Dáša ADAŠKOVÁ
2013-06-01
Full Text Available The Comprehensive Nuclear-Test-Ban Treaty (CTBT) is one of the important international nuclear non-proliferation and disarmament measures. One of its pillars is the verification mechanism, built as an international system of nuclear-test detection to enable verification of compliance with the obligations anchored in the CTBT. Despite its great relevance to global non-proliferation and disarmament efforts, the CTBT is still not in force. The main aim of the article is to summarize the importance of the CTBT and of its entry into force, not only from the international relations perspective but also from the perspective of the technical implementation of the monitoring system.
Kilborn, Joshua P; Jones, David L; Peebles, Ernst B; Naar, David F
2017-04-01
Clustering data continues to be a highly active area of data analysis, and resemblance profiles are being incorporated into ecological methodologies as a hypothesis-testing-based approach to clustering multivariate data. However, these new clustering techniques have not been rigorously tested to determine the performance variability based on the algorithm's assumptions or any underlying data structures. Here, we use simulation studies to estimate the statistical error rates for the hypothesis test for multivariate structure based on dissimilarity profiles (DISPROF). We concurrently tested a widely used algorithm that employs the unweighted pair group method with arithmetic mean (UPGMA) to estimate the proficiency of clustering with DISPROF as a decision criterion. We simulated unstructured multivariate data from different probability distributions with increasing numbers of objects and descriptors, and grouped data with increasing overlap, overdispersion (typical of ecological data), and correlation among descriptors within groups. Using simulated data, we measured the resolution and correspondence of clustering solutions achieved by DISPROF with UPGMA against the reference grouping partitions used to simulate the structured test datasets. Our results highlight the dynamic interactions between dataset dimensionality, group overlap, and the properties of the descriptors within a group (i.e., overdispersion or correlation structure) that are relevant to resemblance profiles as a clustering criterion for multivariate data. These methods are particularly useful for multivariate ecological datasets that benefit from distance-based statistical analyses. We propose guidelines for using DISPROF as a clustering decision tool that will help future users avoid potential pitfalls during the application of methods and the interpretation of results.
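The UPGMA half of the pipeline is available in SciPy; DISPROF itself is not, so the sketch below only shows average-linkage clustering of simulated count data (the group sizes and the Bray-Curtis dissimilarity are assumptions for the toy example):

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
# Two simulated groups of multivariate "sites" with count-like descriptors.
data = np.vstack([rng.poisson(3.0, (20, 5)),
                  rng.poisson(8.0, (20, 5))]).astype(float)

d = pdist(data, metric="braycurtis")  # a common ecological dissimilarity
tree = linkage(d, method="average")   # UPGMA = average linkage
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)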
Global Environmental Micro Sensors Test Operations in the Natural Environment (GEMSTONE)
Directory of Open Access Journals (Sweden)
Mark ADAMS
2007-10-01
Full Text Available ENSCO, Inc. is developing an innovative atmospheric observing system known as Global Environmental Micro Sensors (GEMS). The GEMS concept features an integrated system of miniaturized in situ, airborne probes measuring temperature, relative humidity, pressure, and vector wind velocity. In order for the probes to remain airborne for long periods of time, their design is based on a helium-filled super-pressure balloon. The GEMS probes are neutrally buoyant and carried passively by the wind at predetermined levels. Each probe contains on-board satellite communication, power generation, processing, and geolocation capabilities. ENSCO has partnered with the National Aeronautics and Space Administration’s Kennedy Space Center (KSC) Weather Office for a project called GEMS Test Operations in the Natural Environment (GEMSTONE). The goal of the GEMSTONE project was to build and field-test a small system of prototype probes in the Earth’s atmosphere. This paper summarizes the 9-month GEMSTONE project (Sep 2006 – May 2007), including probe and system engineering as well as experiment design and data analysis from laboratory and field tests. These tests revealed issues with reliability, sensor accuracy, electronics miniaturization, and sub-system optimization. Nevertheless, the success of the third and final free-flight test provides a solid foundation to move forward in follow-on projects addressing these issues as highlighted in the technology roadmap for future GEMS development.
Seismic waveform inversion best practices: regional, global and exploration test cases
Modrak, Ryan; Tromp, Jeroen
2016-09-01
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
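The optimization comparison can be reproduced in miniature with SciPy on a toy misfit function; this sketch only contrasts L-BFGS and nonlinear conjugate gradient behaviour, not the seismic problems themselves (the exponential model and noise level are invented):

import numpy as np
from scipy.optimize import minimize

# Toy nonlinear least-squares "misfit": fit model parameters to noisy data.
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 50)
true = np.array([1.0, 3.0])
data = true[0] * np.exp(-true[1] * t) + 0.01 * rng.normal(size=t.size)

def misfit(m):
    r = m[0] * np.exp(-m[1] * t) - data
    return 0.5 * r @ r

for method in ("L-BFGS-B", "CG"):
    res = minimize(misfit, x0=[0.5, 1.0], method=method)
    print(method, res.x, "function evaluations:", res.nfev)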
A test of a global seismic system for monitoring earthquakes and underground nuclear explosions
International Nuclear Information System (INIS)
Bowman, J.R.; Muirhead, K.; Spiliopoulos, S.; Jepsen, D.; Leonard, M.
1993-01-01
Australia is a member of the Group of Scientific Experts (GSE), an ad hoc group of the United Nations Conference on Disarmament established to consider international cooperative measures to detect and identify seismic events. The GSE conducted a large-scale technical test (GSETT-2) from 22 April to 9 June 1991 that focused on the exchange and analysis of seismic parameter and waveform data. Thirty-four countries participated in GSETT-2, and data were contributed from 60 stations on all continents. GSETT-2 demonstrated the feasibility of collecting and transmitting large volumes (around 1 gigabyte) of digital data around the world, and of producing a preliminary bulletin of global seismicity within 48 hours and a final bulletin within 7 days. However, the experiment also revealed the difficulty of keeping up with the flow of data and analysis with existing resources. The Final Event Bulletins listed 3715 events for the 42 recording days of the test, about twice the number reported routinely by another international agency 5 months later. The quality of the Final Event Bulletin was limited by the uneven spatial distribution of seismic stations that contributed to GSETT-2 and by the ambiguity of associating phases detected by widely separated stations to form seismic events. A monitoring system similar to that used in GSETT-2 could provide timely and accurate reporting of global seismicity. It would need an improved distribution of stations, application of more conservative event formation rules and further development of analysis software. 8 refs., 9 figs
Bjøntegaard, Øyvind; Krauss, Matias; Budelmann, Harald
2015-01-01
This report presents the Round-Robin (RR) program and test results including a statistical evaluation of the RILEM TC195-DTD committee named “Recommendation for test methods for autogenous deformation (AD) and thermal dilation (TD) of early age concrete”. The task of the committee was to investigate the linear test set-up for AD and TD measurements (Dilation Rigs) in the period from setting to the end of the hardening phase some weeks after. These are the stress-inducing deformations in a hardening concrete structure subjected to restraint conditions. The main task was to carry out an RR program on testing of AD of one concrete at 20 °C isothermal conditions in Dilation Rigs. The concrete part materials were distributed to 10 laboratories (Canada, Denmark, France, Germany, Japan, The Netherlands, Norway, Sweden and USA), and in total 30 tests on AD were carried out. Some supporting tests were also performed, as well as a smaller RR on cement paste. The committee has worked out a test procedure recommenda...
International Nuclear Information System (INIS)
Foray, G.; Descamps-Mandine, A.; R’Mili, M.; Lamon, J.
2012-01-01
The present paper investigates glass fibre flaw size distributions. Two commercial fibre grades (HP and HD) mainly used in cement-based composite reinforcement were studied. Glass fibre fractography is a difficult and time consuming exercise, and thus is seldom carried out. An approach based on tensile tests on multifilament bundles and examination of the fibre surface by atomic force microscopy (AFM) was used. Bundles of more than 500 single filaments each were tested. Thus a statistically significant database of failure data was built up for the HP and HD glass fibres. Gaussian flaw distributions were derived from the filament tensile strength data or extracted from the AFM images. The two distributions were compared. Defect sizes computed from raw AFM images agreed reasonably well with those derived from tensile strength data. Finally, the pertinence of a Gaussian distribution was discussed. The alternative Pareto distribution provided a fair approximation when dealing with AFM flaw size.
Directory of Open Access Journals (Sweden)
Y. Hu
2007-06-01
Full Text Available This study presents an empirical relation that links the volume extinction coefficients of water clouds, the layer integrated depolarization ratios measured by lidar, and the effective radii of water clouds derived from collocated passive sensor observations. Based on Monte Carlo simulations of CALIPSO lidar observations, this method combines the cloud effective radius reported by MODIS with the lidar depolarization ratios measured by CALIPSO to estimate both the liquid water content and the effective number concentration of water clouds. The method is applied to collocated CALIPSO and MODIS measurements obtained during July and October of 2006, and January 2007. Global statistics of the cloud liquid water content and effective number concentration are presented.
Maina, Fadji Zaouna; Guadagnini, Alberto
2018-01-01
We study the contribution of typically uncertain subsurface flow parameters to gravity changes that can be recorded during pumping tests in unconfined aquifers. We do so in the framework of a Global Sensitivity Analysis and quantify the effects of uncertainty of such parameters on the first four statistical moments of the probability distribution of gravimetric variations induced by the operation of the well. System parameters are grouped into two main categories, respectively, governing groundwater flow in the unsaturated and saturated portions of the domain. We ground our work on the three-dimensional analytical model proposed by Mishra and Neuman (2011), which fully takes into account the richness of the physical process taking place across the unsaturated and saturated zones and storage effects in a finite radius pumping well. The relative influence of model parameter uncertainties on drawdown, moisture content, and gravity changes are quantified through (a) the Sobol' indices, derived from a classical decomposition of variance and (b) recently developed indices quantifying the relative contribution of each uncertain model parameter to the (ensemble) mean, skewness, and kurtosis of the model output. Our results document (i) the importance of the effects of the parameters governing the unsaturated flow dynamics on the mean and variance of local drawdown and gravity changes; (ii) the marked sensitivity (as expressed in terms of the statistical moments analyzed) of gravity changes to the employed water retention curve model parameter, specific yield, and storage, and (iii) the influential role of hydraulic conductivity of the unsaturated and saturated zones to the skewness and kurtosis of gravimetric variation distributions. The observed temporal dynamics of the strength of the relative contribution of system parameters to gravimetric variations suggest that gravity data have a clear potential to provide useful information for estimating the key hydraulic
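First-order Sobol' indices of the kind used here can be estimated with the standard Saltelli pick-freeze scheme. A minimal sketch on the Ishigami test function, which stands in for the flow-and-gravity model of the paper:

import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # Standard global-sensitivity-analysis test function (stand-in model).
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(5)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    # First-order Sobol' index S_i via the Saltelli estimator.
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S_i = np.mean(fB * (ishigami(ABi) - fA)) / var
    print(f"S_{i + 1} = {S_i:.3f}")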
Matsui, Toshihisa; Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Satoh, Masaki; Hashino, Tempei; Kubota, Takuji
2016-01-01
A 14-year climatology of Tropical Rainfall Measuring Mission (TRMM) collocated multi-sensor signal statistics reveals a distinct land-ocean contrast as well as geographical variability of precipitation type, intensity, and microphysics. Microphysics information inferred from the TRMM precipitation radar and Microwave Imager (TMI) shows a large land-ocean contrast for the deep category, suggesting continental convective vigor. Over land, TRMM shows higher echo-top heights and larger maximum echoes, suggesting taller storms and more intense precipitation, as well as larger microwave scattering, suggesting the presence of more and larger frozen convective hydrometeors. This strong land-ocean contrast in deep convection is invariant over seasonal and multi-year time scales. Consequently, relatively short-term simulations from two global storm-resolving models can be evaluated in terms of their land-ocean statistics using the TRMM Triple-sensor Three-step Evaluation via a satellite simulator. The models evaluated are the NASA Multi-scale Modeling Framework (MMF) and the Non-hydrostatic Icosahedral Cloud Atmospheric Model (NICAM). While both simulations can represent convective land-ocean contrasts in warm precipitation to some extent, near-surface conditions over land are relatively moister in NICAM than in MMF, which appears to be the key driver of the divergent warm-precipitation results between the two models. Both the MMF and NICAM produced similar frequencies of large CAPE between land and ocean. The dry MMF boundary layer enhanced microwave scattering signals over land, but only NICAM had an enhanced deep convection frequency over land. Neither model could reproduce a realistic land-ocean contrast in deep convective precipitation microphysics. A realistic contrast between land and ocean remains an issue in global storm-resolving modeling.
Nacelle Chine Installation Based on Wind-Tunnel Test Using Efficient Global Optimization
Kanazaki, Masahiro; Yokokawa, Yuzuru; Murayama, Mitsuhiro; Ito, Takeshi; Jeong, Shinkyu; Yamamoto, Kazuomi
Design exploration of a nacelle chine installation was carried out. The nacelle chine improves stall performance when deploying multi-element high-lift devices. This study proposes an efficient design process using a Kriging surrogate model to determine the nacelle chine installation point in wind-tunnel tests. The design exploration was conducted in a wind-tunnel using the JAXA high-lift aircraft model at the JAXA Large-scale Low-speed Wind Tunnel. The objective was to maximize the maximum lift. The chine installation points were designed on the engine nacelle in the axial and chord-wise direction, while the geometry of the chine was fixed. In the design process, efficient global optimization (EGO) which includes Kriging model and genetic algorithm (GA) was employed. This method makes it possible both to improve the accuracy of the response surface and to explore the global optimum efficiently. Detailed observations of flowfields using the Particle Image Velocimetry method confirmed the chine effect and design results.
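A minimal sketch of the EGO loop's core: a Kriging (Gaussian-process) surrogate plus the expected-improvement criterion. The sample points are invented, the design variable is reduced to one dimension, and a grid search stands in for the genetic algorithm used in the paper:

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical wind-tunnel samples: chine position (normalised) -> lift metric.
X = np.array([[0.1], [0.35], [0.6], [0.9]])
y = np.array([0.2, 0.8, 0.5, 0.1])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6).fit(X, y)

def expected_improvement(x):
    # EI for maximization: how much a new sample is expected to beat the best so far.
    mu, sd = gp.predict(x, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-12)
    return (mu - best) * norm.cdf(z) + sd * norm.pdf(z)

grid = np.linspace(0, 1, 201).reshape(-1, 1)
ei = expected_improvement(grid)
print("next test point:", grid[ei.argmax()][0])  # candidate for the next run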
International Nuclear Information System (INIS)
Zhang Shuangnan; Xie Yi
2012-01-01
We test models for the evolution of neutron star (NS) magnetic fields (B). Our model for the evolution of the NS spin is taken from an analysis of pulsar timing noise presented by Hobbs et al. We first test the standard model of a pulsar's magnetosphere in which B does not change with time and magnetic dipole radiation is assumed to dominate the pulsar's spin-down. We find that this model fails to predict both the magnitudes and signs of the second derivatives of the spin frequencies (ν-double dot). We then construct a phenomenological model of the evolution of B, which contains a long-term decay (LTD) modulated by short-term oscillations; a pulsar's spin is thus modified by its B-evolution. We find that an exponential LTD is not favored by the observed statistical properties of ν-double dot for young pulsars and fails to explain the fact that ν-double dot is negative for roughly half of the old pulsars. A simple power-law LTD can explain all the observed statistical properties of ν-double dot. Finally, we discuss some physical implications of our results for models of the B-decay of NSs and suggest that reliable determination of the true ages of many young NSs is needed in order to further constrain the physical mechanisms of their B-decay. Our model can be further tested with the measured evolutions of ν-dot and ν-double dot for an individual pulsar; the decay index, oscillation amplitude, and period can also be determined this way for the pulsar.
International Nuclear Information System (INIS)
Kraemer, H.A.
1982-01-01
Using the data of 339 patients, the following parameters of thyroid function were statistically evaluated: the in vitro parameters ET₃U, TT₄(D), FT₄ index and PB¹²⁷I, and the radioiodine test with determination of PB¹³¹I before i.v. injection of 400 μg protirelin and 120 minutes after the injection. There was no correlation between the percentage change of the PB¹³¹I level 120 min after protirelin administration and the percentage change of the TSH level 30 min after protirelin administration. The accuracies of the in vitro parameters ET₃U, TT₄(D) and FT₄ index on the one hand and of the extended protirelin test on the other were compared. (orig./MG) [de
Some tests of wet tropospheric calibration for the CASA Uno Global Positioning System experiment
Dixon, T. H.; Wolf, S. Kornreich
1990-01-01
Wet tropospheric path delay can be a major error source for Global Positioning System (GPS) geodetic experiments. Strategies for minimizing this error are investigated using data from CASA Uno, the first major GPS experiment in Central and South America, where wet path delays may be both high and variable. Wet path delay calibration using water vapor radiometers (WVRs) and residual delay estimation is compared with strategies where the entire wet path delay is estimated stochastically without prior calibration, using data from a 270-km test baseline in Costa Rica. Both approaches yield centimeter-level baseline repeatability and similar tropospheric estimates, suggesting that WVR calibration is not critical for obtaining high precision results with GPS in the CASA region.
Understanding Statistics - Cancer Statistics
Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.
Koolen, B.B.; Pijnenburg, M.W.; Brackel, H.J.; Landstra, A.M.; Berg, N.J. van den; Merkus, P.J.F.M.; Hop, W.C.J.; Vaessen-Verberne, A.A.
2011-01-01
Several tools are useful in detecting uncontrolled asthma in children. The aim of this study was to compare Global Initiative for Asthma (GINA) guidelines with the Childhood Asthma Control Test (C-ACT) and the Asthma Control Test (ACT) in detecting uncontrolled asthma in children. 145 children with
Evans, Mark
2016-12-01
A new parametric approach, termed the Wilshire equations, offers the realistic potential of accurately predicting the in-service life of materials from accelerated test results lasting no more than 5000 hours. The success of this approach can be attributed to a well-defined linear relationship that appears to exist between various creep properties and a log transformation of the normalized stress. However, these linear trends are subject to discontinuities, the number of which appears to differ from material to material. These discontinuities have until now been (1) treated as abrupt in nature and (2) identified by eye from an inspection of simple graphical plots of the data. This article puts forward a statistical test for determining the correct number of discontinuities present within a creep data set, together with a method for allowing these discontinuities to occur more gradually, so that the methodology is more in line with the accepted view of how creep mechanisms evolve with changing test conditions. These two developments are fully illustrated using creep data sets on two steel alloys. When these new procedures are applied to these steel alloys, not only do they produce more accurate and realistic-looking long-term predictions of the minimum creep rate, but they also lead to conclusions about the mechanisms determining the rates of creep that differ from those originally put forward by Wilshire.
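One simple way to choose the number of discontinuities, sketched below, is to fit piecewise-linear models with 0..K breakpoints and compare them by BIC; this is a generic stand-in for the article's specific test, and the data and candidate break locations are invented:

import numpy as np
from itertools import combinations

def piecewise_rss(x, y, breaks):
    # Residual sum of squares for independent linear fits between break points.
    edges = [x.min() - 1e-9, *breaks, x.max() + 1e-9]
    rss = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (x > lo) & (x <= hi)
        if m.sum() < 3:
            return np.inf  # segment too small to fit a line
        rss += np.sum((y[m] - np.polyval(np.polyfit(x[m], y[m], 1), x[m])) ** 2)
    return rss

def best_n_breaks(x, y, candidates, max_breaks=2):
    # Choose the number of discontinuities by BIC over all candidate placements.
    n = len(x)
    scores = {}
    for k in range(max_breaks + 1):
        best = min(piecewise_rss(x, y, list(c)) for c in combinations(candidates, k))
        n_params = 2 * (k + 1) + k  # slopes/intercepts plus break locations
        scores[k] = n * np.log(best / n) + n_params * np.log(n)
    return min(scores, key=scores.get)

x = np.linspace(0.0, 10.0, 80)
y = np.where(x < 4, 0.5 * x, 2.0 + 0.1 * (x - 4))
y += 0.05 * np.random.default_rng(9).normal(size=x.size)
print(best_n_breaks(x, y, candidates=list(np.linspace(1, 9, 17))))  # expect 1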
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
DEFF Research Database (Denmark)
Christensen, Rune Haubo Bojesen; Ennis, John M.; Ennis, Daniel M.
2014-01-01
/preference responses or ties in choice experiments. Food Quality and Preference, 23, 13–17) noted that this proportion can depend on the product category, have proposed that the expected proportion of preference responses within a given category be called an identicality norm, and have argued that knowledge... of such norms is valuable for more complete interpretation of 2-Alternative Choice (2-AC) data. For instance, these norms can be used to indicate consumer segmentation even with non-replicated data. In this paper, we show that the statistical test suggested by Ennis and Ennis (2012a) behaves poorly and has too... when ingredient changes are considered for cost-reduction or health initiative purposes...
Directory of Open Access Journals (Sweden)
J. Q. Zhao
2016-06-01
Full Text Available Accurate and timely change detection of Earth’s surface features is extremely important for understanding relationships and interactions between people and natural phenomena. Many traditional methods of change detection use only part of the polarization information and rely on supervised threshold selection; they are insufficient and time-consuming. In this paper, we present a novel unsupervised change-detection method based on quad-polarimetric SAR data and automatic threshold selection. First, speckle noise is removed from the two registered SAR images. Second, a similarity measure is calculated from the test statistic, and automatic threshold selection based on the Kittler-Illingworth (KI) criterion is introduced to obtain the change map. The efficiency of the proposed method is demonstrated on quad-pol SAR images acquired by Radarsat-2 over Wuhan, China.
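Assuming KI refers to the Kittler-Illingworth minimum-error criterion, as is common in SAR change detection, here is a sketch of automatic threshold selection on a histogram of similarity values (the two-Gaussian mixture data are synthetic):

import numpy as np

def kittler_illingworth(values, nbins=256):
    # Minimum-error threshold: model the histogram as two Gaussians and
    # pick the threshold minimising the KI criterion J(T).
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    mids = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_j = None, np.inf
    for t in range(1, nbins - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 < 1e-6 or p2 < 1e-6:
            continue
        m1 = (p[:t] * mids[:t]).sum() / p1
        m2 = (p[t:] * mids[t:]).sum() / p2
        v1 = (p[:t] * (mids[:t] - m1) ** 2).sum() / p1  # class variances
        v2 = (p[t:] * (mids[t:] - m2) ** 2).sum() / p2
        if v1 <= 0 or v2 <= 0:
            continue
        j = 1 + p1 * np.log(v1) + p2 * np.log(v2) \
            - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_t, best_j = mids[t], j
    return best_t

rng = np.random.default_rng(10)
vals = np.concatenate([rng.normal(0, 1, 4000), rng.normal(4, 1, 1000)])
print(kittler_illingworth(vals))  # threshold near the valley between the modes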
International Nuclear Information System (INIS)
DeYoung, P.A.; Gelderloos, C.J.; Kortering, D.; Sarafa, J.; Zienert, K.; Gordon, M.S.; Fineman, B.J.; Gilfoyle, G.P.; Lu, X.; McGrath, R.L.; de Castro Rizzo, D.M.; Alexander, J.M.; Auger, G.; Kox, S.; Vaz, L.C.; Beck, C.; Henderson, D.J.; Kovar, D.G.; Vineyard, M.F.; Department of Physics, State University of New York at Stony Brook, Stony Brook, New York 11794; Department of Chemistry, State University of New York at Stony Brook, Stony Brook, New York 11794; Argonne National Laboratory, Argonne, Illinois 60439)
1990-01-01
We present data for small-angle particle-particle correlations from the reactions 80, 140, 215, and 250 MeV ¹⁶O + ²⁷Al → p-p or p-d. The main features of these data are anticorrelations for small relative momenta (≤25 MeV/c) that strengthen with increasing bombarding energy. Statistical model calculations have been performed to predict the mean lifetimes for each step of evaporative decay, and then simulate the trajectories of the particle pairs and the resulting particle correlations. This simulation accounts very well for the trends of the data and can provide an important new test for the hypothesis of equilibration on which the model is built.
Forbes, Valery E; Aufderheide, John; Warbritton, Ryan; van der Hoeven, Nelly; Caspers, Norbert
2007-03-01
This study presents results of the effects of bisphenol A (BPA) on adult egg production, egg hatchability, egg development rates and juvenile growth rates in the freshwater gastropod, Marisa cornuarietis. We observed no adult mortality, substantial inter-snail variability in reproductive output, and no effects of BPA on reproduction during 12 weeks of exposure to 0, 0.1, 1.0, 16, 160 or 640 µg/L BPA. We observed no effects of BPA on egg hatchability or timing of egg hatching. Juveniles showed good growth in the control and all treatments, and there were no significant effects of BPA on this endpoint. Our results do not support previous claims of enhanced reproduction in Marisa cornuarietis in response to exposure to BPA. Statistical power analysis indicated high levels of inter-snail variability in the measured endpoints and highlighted the need for sufficient replication when testing treatment effects on reproduction in M. cornuarietis with adequate power.
Baseline Testing of the EV Global E-Bike with Ultracapacitors
Eichenberg, Dennis J.; Kolacz, John S.; Tavernelli, Paul F.
2001-01-01
The NASA John H. Glenn Research Center initiated baseline testing of the EV Global E-Bike SX with ultracapacitors as a way to reduce pollution in urban areas, reduce fossil fuel consumption, and reduce operating costs for transportation systems. The E-Bike provides an inexpensive approach to advance the state of the art in hybrid technology in a practical application. The project transfers space technology to terrestrial use via nontraditional partners, and provides power system data valuable for future space applications. The work was done under the Hybrid Power Management (HPM) Program, which includes the Hybrid Electric Transit Bus (HETB). The E-Bike is a state-of-the-art, ground-up hybrid electric bicycle. Unique features of the vehicle's power system include the use of an efficient, 400 W electric hub motor, and a seven-speed derailleur system that permits operation as fully electric, fully pedal, or a combination of the two. Other innovative features, such as regenerative braking through ultracapacitor energy storage, are planned. Regenerative braking recovers much of the kinetic energy of the vehicle during deceleration. A description of the E-Bike, the results of performance testing, and future vehicle development plans are given in this report. The report concludes that the E-Bike provides excellent performance, and that the implementation of ultracapacitors in the power system can provide significant performance improvements.
Meijer, Rob R.; van Krimpen-Stoop, Edith M. L. A.
In this study a cumulative-sum (CUSUM) procedure from the theory of Statistical Process Control was modified and applied in the context of person-fit analysis in a computerized adaptive testing (CAT) environment. Six person-fit statistics were proposed using the CUSUM procedure, and three of them could be used to investigate the CAT in online test…
International Nuclear Information System (INIS)
CAP, JEROME S.; TRACEY, BRIAN
1999-01-01
Aerospace payloads, such as satellites, are subjected to vibroacoustic excitation during launch. Sandia's MTI satellite has recently been certified to this environment using a combination of base input random vibration and reverberant acoustic noise. The initial choices for the acoustic and random vibration test specifications were obtained from the launch vehicle Interface Control Document (ICD). In order to tailor the random vibration levels for the laboratory certification testing, it was necessary to determine whether vibration energy was flowing across the launch vehicle interface from the satellite to the launch vehicle or the other direction. For frequencies below 120 Hz this issue was addressed using response limiting techniques based on results from the Coupled Loads Analysis (CLA). However, since the CLA Finite Element Analysis FEA model was only correlated for frequencies below 120 Hz, Statistical Energy Analysis (SEA) was considered to be a better choice for predicting the direction of the energy flow for frequencies above 120 Hz. The existing SEA model of the launch vehicle had been developed using the VibroAcoustic Payload Environment Prediction System (VAPEPS) computer code[1]. Therefore, the satellite would have to be modeled using VAPEPS as well. As is the case for any computational model, the confidence in its predictive capability increases if one can correlate a sample prediction against experimental data. Fortunately, Sandia had the ideal data set for correlating an SEA model of the MTI satellite--the measured response of a realistic assembly to a reverberant acoustic test that was performed during MTI's qualification test series. The first part of this paper will briefly describe the VAPEPS modeling effort and present the results of the correlation study for the VAPEPS model. The second part of this paper will present the results from a study that used a commercial SEA software package[2] to study the effects of in-plane modes and to evaluate
Goldie, Fraser C; Fulton, Rachael L; Dawson, Jesse; Bluhmki, Erich; Lees, Kennedy R
2014-08-01
Clinical trials for acute ischemic stroke treatment require large numbers of participants and are expensive to conduct. Methods that enhance statistical power are therefore desirable. We explored whether this can be achieved by a measure incorporating both early and late measures of outcome (e.g. seven-day NIH Stroke Scale combined with 90-day modified Rankin scale). We analyzed sensitivity to treatment effect, using proportional odds logistic regression for ordinal scales and the generalized estimating equation method for global outcomes, with all analyses adjusted for baseline severity and age. We ran simulations to assess relations between sample size and power for ordinal scales and corresponding global outcomes. We used R version 2.12.1 (R Development Core Team, R Foundation for Statistical Computing, Vienna, Austria) for simulations and SAS 9.2 (SAS Institute Inc., Cary, NC, USA) for all other analyses. Each scale considered for combination was sensitive to treatment effect in isolation. The mRS90 and NIHSS90 had adjusted odds ratios of 1.56 and 1.62, respectively. The adjusted odds ratios for global outcomes of the combination of mRS90 with NIHSS7 and of NIHSS90 with NIHSS7 were 1.69 and 1.73, respectively. The smallest sample sizes required to generate statistical power ≥80% for mRS90, NIHSS7, and the global outcomes of mRS90 and NIHSS7 combined and NIHSS90 and NIHSS7 combined were 500, 490, 400, and 380, respectively. When data concerning both early and late outcomes are combined into a global measure, there is increased sensitivity to treatment effect compared with solitary ordinal scales. This delivers a 20% reduction in required sample size at 80% power. Combining early with late outcomes merits further consideration. © 2013 The Authors. International Journal of Stroke © 2013 World Stroke Organization.
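The power gain from a more sensitive outcome can be explored by simulation. A rough sketch, using a Wilcoxon-Mann-Whitney test on a 0-6 ordinal scale as a simplified stand-in for the proportional-odds and GEE analyses of the paper (the effect size and outcome model are invented):

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(6)

def power(n_per_arm, shift=0.4, n_sims=2000):
    # Fraction of simulated trials in which the treatment effect is detected.
    hits = 0
    for _ in range(n_sims):
        control = np.clip(np.round(rng.normal(3.0, 1.5, n_per_arm)), 0, 6)
        treated = np.clip(np.round(rng.normal(3.0 - shift, 1.5, n_per_arm)), 0, 6)
        if mannwhitneyu(treated, control, alternative="two-sided").pvalue < 0.05:
            hits += 1
    return hits / n_sims

for n in (150, 200, 250):
    print(n, power(n))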
Directory of Open Access Journals (Sweden)
Iksoo Huh
Full Text Available Recent advances in genotyping methodologies have allowed genome-wide association studies (GWAS) to accurately identify genetic variants that associate with common or pathological complex traits. Although most GWAS have focused on associations with single genetic variants, joint identification of multiple genetic variants, and how they interact, is essential for understanding the genetic architecture of complex phenotypic traits. Here, we propose an efficient stepwise method based on the Cochran-Mantel-Haenszel (CMH) test for stratified categorical data to identify causal joint multiple genetic variants in GWAS. This method combines the CMH statistic with a stepwise procedure to detect multiple genetic variants associated with specific categorical traits, using a series of associated I × J contingency tables and a null hypothesis of no phenotype association. Through a new stratification scheme based on the sum of minor allele count criteria, we make the method more feasible for GWAS data having sample sizes of several thousands. We also examine the properties of the proposed stepwise method via simulation studies, and show that the stepwise CMH test performs better than other existing methods (e.g., logistic regression and detection of associations by Markov blanket) for identifying multiple genetic variants. Finally, we apply the proposed approach to two genomic sequencing datasets to detect linked genetic variants associated with bipolar disorder and obesity, respectively.
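The CMH statistic for stratified 2×2 tables is short to implement directly; a sketch with invented strata (the paper's I × J generalization and stepwise search are not reproduced here):

import numpy as np

def cmh_statistic(tables):
    # Cochran-Mantel-Haenszel chi-square for K stratified 2x2 tables
    # ((a, b), (c, d)), with continuity correction; compare to chi2 with 1 df.
    num, den = 0.0, 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        num += a - (a + b) * (a + c) / n
        den += (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
    return (abs(num) - 0.5) ** 2 / den

# Two hypothetical strata (e.g. defined by a minor-allele-count criterion).
tables = [((12, 30), (8, 50)), ((25, 40), (10, 60))]
print(cmh_statistic(tables))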
Page, G P; Amos, C I; Boerwinkle, E
1998-04-01
We present a test statistic, the quantitative LOD (QLOD) score, for testing both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (θ) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1-2θ)⁴.
International Nuclear Information System (INIS)
Kleva, R.G.
1980-01-01
The first part of this work is concerned with test particle transport in a stochastic magnetic field. In the absence of collisions, the test particle self-diffusion coefficient is given by D = D_m V (in the zero-gyroradius limit), where D_m is the magnetic diffusion coefficient due to a given spectrum of magnetic fluctuations and V is the particle velocity along a field line. The effect of collisions, either classical or turbulent, on this result is considered. The second part of this work is concerned with the evolution of the collisionless tearing mode in the presence of a stochastic magnetic field. A statistical closure approximation, obtained from the DIA by neglecting a mode-coupling term, is used to derive a nonlinear dispersion relation. For L_0 < L_K the dominant nonlinear effect is shown to be a turbulent broadening of the perturbed current layer. Saturation occurs when the perturbed current layer broadens to the point where Δ' = 0, where Δ' is the jump in the logarithmic derivative of the vector potential across the perturbed current layer.
Energy Technology Data Exchange (ETDEWEB)
Werth, D.; Chen, K. F.
2013-08-22
The ability of water managers to maintain adequate supplies in coming decades depends, in part, on future weather conditions, as climate change has the potential to alter river flows from their current values, possibly rendering them unable to meet demand. Reliable climate projections are therefore critical to predicting the future water supply for the United States. These projections cannot be provided solely by global climate models (GCMs), however, as their resolution is too coarse to resolve the small-scale climate changes that can affect hydrology, and hence water supply, at regional to local scales. A process is needed to ‘downscale’ the GCM results to the smaller scales and feed this into a surface hydrology model to help determine the ability of rivers to provide adequate flow to meet future needs. We apply a statistical downscaling to GCM projections of precipitation and temperature through the use of a scaling method. This technique involves the correction of the cumulative distribution functions (CDFs) of the GCM-derived temperature and precipitation results for the 20th century, and the application of the same correction to 21st century GCM projections. This is done for three meteorological stations located within the Coosa River basin in northern Georgia, and is used to calculate future river flow statistics for the upper Coosa River. Results are compared to the historical Coosa River flow upstream from Georgia Power Company’s Hammond coal-fired power plant and to flows calculated with the original, unscaled GCM results to determine the impact of potential changes in meteorology on future flows.
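A minimal sketch of the CDF-correction (quantile-mapping) step described above, with synthetic gamma-distributed "observations" and biased "GCM" output standing in for the station and model data:

import numpy as np

def quantile_map(gcm_hist, obs_hist, gcm_future):
    # Empirical CDF correction: map each future GCM value to the observed
    # value at the same quantile of the historical GCM distribution.
    quantiles = np.linspace(0, 1, 101)
    gcm_q = np.quantile(gcm_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)
    u = np.interp(gcm_future, gcm_q, quantiles)  # position in historical GCM CDF
    return np.interp(u, quantiles, obs_q)        # mapped into observation units

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 3.0, 5000)      # station record (synthetic)
gcm20 = rng.gamma(2.0, 2.2, 5000)    # biased 20th-century GCM output
gcm21 = rng.gamma(2.2, 2.2, 5000)    # 21st-century projection
corrected = quantile_map(gcm20, obs, gcm21)
print(corrected.mean(), gcm21.mean())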
DEFF Research Database (Denmark)
Korneliussen, Thorfinn Sand; Moltke, Ida; Albrechtsen, Anders
2013-01-01
A number of different statistics are used for detecting natural selection from DNA sequencing data, including statistics that are summaries of the frequency spectrum, such as Tajima's D. These statistics are now often being applied in the analysis of Next Generation Sequencing (NGS) data. However, estimates of frequency spectra from NGS data are strongly affected by low sequencing coverage; the inherent technology-dependent variation in sequencing depth causes systematic differences in the value of the statistic among genomic regions.
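For reference, Tajima's D is a closed-form function of the sample size n, the number of segregating sites S, and the mean pairwise diversity π; a sketch using the standard constants from Tajima (1989), with invented input values:

import numpy as np

def tajimas_d(n, S, pi):
    # Tajima's D with the standard normalising constants.
    i = np.arange(1, n)
    a1, a2 = np.sum(1.0 / i), np.sum(1.0 / i ** 2)
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1, e2 = c1 / a1, c2 / (a1 ** 2 + a2)
    return (pi - S / a1) / np.sqrt(e1 * S + e2 * S * (S - 1))

print(tajimas_d(n=20, S=30, pi=7.5))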
Biskaborn, B. K.; Lanckman, J.-P.; Lantuit, H.; Elger, K.; Streletskiy, D. A.; Cable, W. L.; Romanovsky, V. E.
2015-03-01
The Global Terrestrial Network for Permafrost (GTN-P) provides the first dynamic database associated with the Thermal State of Permafrost (TSP) and the Circumpolar Active Layer Monitoring (CALM) programs, which extensively collect permafrost temperature and active layer thickness data from Arctic, Antarctic and Mountain permafrost regions. The purpose of the database is to establish an "early warning system" for the consequences of climate change in permafrost regions and to provide standardized thermal permafrost data to global models. In this paper we perform statistical analysis of the GTN-P metadata aiming to identify the spatial gaps in the GTN-P site distribution in relation to climate-effective environmental parameters. We describe the concept and structure of the Data Management System in regard to user operability, data transfer and data policy. We outline data sources and data processing including quality control strategies. Assessment of the metadata and data quality reveals 63% metadata completeness at active layer sites and 50% metadata completeness for boreholes. Voronoi Tessellation Analysis on the spatial sample distribution of boreholes and active layer measurement sites quantifies the distribution inhomogeneity and provides potential locations of additional permafrost research sites to improve the representativeness of thermal monitoring across areas underlain by permafrost. The depth distribution of the boreholes reveals that 73% are shallower than 25 m and 27% are deeper, reaching a maximum of 1 km depth. Comparison of the GTN-P site distribution with permafrost zones, soil organic carbon contents and vegetation types exhibits different local to regional monitoring situations on maps. Preferential slope orientation at the sites most likely causes a bias in the temperature monitoring and should be taken into account when using the data for global models. The distribution of GTN-P sites within zones of projected temperature change show a high
International Nuclear Information System (INIS)
Aslan, B.; Zech, G.
2005-01-01
We introduce the novel concept of statistical energy as a statistical tool. We define the statistical energy of statistical distributions in a way analogous to the energy of electric charge distributions. Charges of opposite sign are in a state of minimum energy if they are equally distributed. This property is used to check whether two samples belong to the same parent distribution, to define goodness-of-fit tests, and to unfold distributions distorted by measurement. The approach is binning-free and especially powerful in multidimensional applications.
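In one dimension the energy distance is available in SciPy, so the two-sample test described above can be sketched as a permutation test (the multidimensional, binning-free case requires the full pairwise-distance form, not shown here):

import numpy as np
from scipy.stats import energy_distance

def energy_test(x, y, n_perm=999, seed=0):
    # Permutation two-sample test based on the (1-D) energy distance.
    rng = np.random.default_rng(seed)
    obs = energy_distance(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if energy_distance(pooled[:len(x)], pooled[len(x):]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)  # permutation p-value

rng = np.random.default_rng(8)
print(energy_test(rng.normal(0, 1, 100), rng.normal(0.3, 1, 100)))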
Analysis of statistical misconception in terms of statistical reasoning
Maryati, I.; Priatna, N.
2018-05-01
Reasoning skill is needed by everyone in the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. Developing this skill can be done through various levels of education. However, the skill remains low, because many people, students included, assume that statistics is just the ability to count and use formulas. Students still have a negative attitude toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample of this research was 32 students of a mathematics education department who had taken a descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If the minimum value required to meet the standard achievement of course competence is 65, the students' mean values are below the standard. The misconception results emphasize which subtopics should be reconsidered. Based on the assessment, students' misconceptions occur in: 1) writing mathematical sentences and symbols well, 2) understanding basic definitions, and 3) determining the concept to be used in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.
DAISY: a new software tool to test global identifiability of biological and physiological systems.
Bellu, Giuseppina; Saccomani, Maria Pia; Audoly, Stefania; D'Angiò, Leontina
2007-10-01
A priori global identifiability is a structural property of biological and physiological models. It is considered a prerequisite for well-posed estimation, since it concerns the possibility of recovering uniquely the unknown model parameters from measured input-output data, under ideal conditions (noise-free observations and error-free model structure). Of course, determining if the parameters can be uniquely recovered from observed data is essential before investing resources, time and effort in performing actual biomedical experiments. Many interesting biological models are nonlinear but identifiability analysis for nonlinear system turns out to be a difficult mathematical problem. Different methods have been proposed in the literature to test identifiability of nonlinear models but, to the best of our knowledge, so far no software tools have been proposed for automatically checking identifiability of nonlinear models. In this paper, we describe a software tool implementing a differential algebra algorithm to perform parameter identifiability analysis for (linear and) nonlinear dynamic models described by polynomial or rational equations. Our goal is to provide the biological investigator a completely automatized software, requiring minimum prior knowledge of mathematical modelling and no in-depth understanding of the mathematical tools. The DAISY (Differential Algebra for Identifiability of SYstems) software will potentially be useful in biological modelling studies, especially in physiology and clinical medicine, where research experiments are particularly expensive and/or difficult to perform. Practical examples of use of the software tool DAISY are presented. DAISY is available at the web site http://www.dei.unipd.it/~pia/.
Franco-Giraldo, Alvaro; Alvarez-Dardet, Carlos
2009-06-01
This article comes from the intense international pressure that follows a near-catastrophe, such as the human influenza A H1N1 epidemic, and the limited resources for confronting such events. The analysis covers prevailing 20th century trends in the international public health arena and the change-induced challenges brought on by globalization, the transition set in motion by what has been deemed the "new" international public health and an ever-increasing focus on global health, in the context of an international scenario of shifting risks and opportunities and a growing number of multinational players. Global public health is defined as a public right, based on a new appreciation of the public, a new paradigm centered on human rights, and altruistic philosophy, politics, and ethics that undergird the changes in international public health on at least three fronts: redefining its theoretical foundation, improving world health, and renewing the international public health system, all of which is the byproduct of a new form of governance. A new world health system, directed by new global public institutions, would aim to make public health a global public right and face a variety of staggering challenges, such as working on public policy management on a global scale, renewing and democratizing the current global governing structure, and conquering the limits and weaknesses witnessed by international health.
Bismark, Andrew W; Thomas, Michael L; Tarasenko, Melissa; Shiluk, Alexandra L; Rackelmann, Sonia Y; Young, Jared W; Light, Gregory A
2018-04-12
Attentional dysfunction contributes to functional impairments in schizophrenia (SZ). Sustained attention is typically assessed via continuous performance tasks (CPTs), though many CPTs have limited cross-species translational validity and place demands on additional cognitive domains. A reverse-translated 5-Choice Continuous Performance Task (5C-CPT) for human testing, originally developed for use in rodents, was designed to minimize demands on perceptual, visual learning, processing speed, or working memory functions. To date, no studies have validated the 5C-CPT against gold-standard attentional measures nor evaluated how 5C-CPT scores relate to cognition in SZ. Here we examined the relationship between the 5C-CPT and the CPT-Identical Pairs (CPT-IP), an established and psychometrically robust measure of vigilance from the MATRICS Consensus Cognitive Battery (MCCB), in a sample of SZ patients (n = 35). Relationships to global and individual subdomains of cognition were also assessed. 5C-CPT and CPT-IP measures of performance (d-prime) were strongly correlated (r = 0.60). In a regression model, the 5C-CPT and CPT-IP collectively accounted for 54% of the total variance in MCCB total scores, and 27.6% of overall cognitive variance was shared between the 5C-CPT and CPT-IP. These results indicate that the reverse-translated 5C-CPT and the gold-standard CPT-IP index a common attentional construct that also significantly overlaps with variance in general cognitive performance. The use of simple, cross-species-validated behavioral indices of attentional/cognitive functioning such as the 5C-CPT could accelerate the development of novel generalized pro-cognitive therapeutics for SZ and related neuropsychiatric disorders.
Bazelet, Corinna S; Thompson, Aileen C; Naskrecki, Piotr
2016-01-01
The use of endemism and vascular plants only for biodiversity hotspot delineation has long been contested. Few studies have focused on the efficacy of global biodiversity hotspots for the conservation of insects, an important, abundant, and often ignored component of biodiversity. We aimed to test five alternative diversity measures for hotspot delineation and examine the efficacy of biodiversity hotspots for conserving a non-typical target organism, South African katydids. Using a 1° fishnet grid, we delineated katydid hotspots in two ways: (1) count-based: grid cells in the top 10% of total, endemic, threatened and/or sensitive species richness; vs. (2) score-based: grid cells with a mean value in the top 10% on a scoring system which scored each species on the basis of its IUCN Red List threat status, distribution, mobility and trophic level. We then compared katydid hotspots with each other and with recognized biodiversity hotspots. Grid cells within biodiversity hotspots had significantly higher count-based and score-based diversity than non-hotspot grid cells. There was a significant association between the three types of hotspots. Of the count-based measures, endemic species richness was the best surrogate for the others. However, the score-based measure out-performed all count-based diversity measures. Species richness was the least successful surrogate of all. The strong performance of the score-based method for hotspot prediction emphasizes the importance of including species' natural history information for conservation decision-making, and is easily adaptable to other organisms. Furthermore, these results add empirical support for the efficacy of biodiversity hotspots in conserving non-target organisms.
Richard V. Pouyat; Ian D. Yesilonis; Miklos Dombos; Katalin Szlavecz; Heikki Setala; Sarel Cilliers; Erzsebet Hornung; D. Johan Kotze; Stephanie Yarwood
2015-01-01
As part of the Global Urban Soil Ecology and Education Network and to test the urban ecosystem convergence hypothesis, we report on soil pH, organic carbon (OC), total nitrogen (TN), phosphorus (P), and potassium (K) measured in four soil habitat types (turfgrass, ruderal, remnant, and reference) in five metropolitan areas (Baltimore, Budapest,...
Norris, Peter M.; da Silva, Arlindo M.
2018-01-01
Part 1 of this series presented a Monte Carlo Bayesian method for constraining a complex statistical model of global circulation model (GCM) sub-gridcolumn moisture variability using high-resolution Moderate Resolution Imaging Spectroradiometer (MODIS) cloud data, thereby permitting parameter estimation and cloud data assimilation for large-scale models. This article performs some basic testing of this new approach, verifying that it does indeed reduce mean and standard deviation biases significantly with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud-top pressure and that it also improves the simulated rotational–Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the Ozone Monitoring Instrument (OMI). Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows non-gradient-based jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast, where the background state has a clear swath. This article also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in passive-radiometer-retrieved cloud observables on cloud vertical structure, beyond cloud-top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification from Riishojgaard provides some help in this respect, by
Vlahos, Loukas; Archontis, Vasilis; Isliker, Heinz
We consider 3D nonlinear MHD simulations of an emerging flux tube, from the convection zone into the corona, focusing on the coronal part of the simulations. We first analyze the statistical nature and spatial structure of the electric field, calculating histograms and making use of iso-contour visualizations. Test-particle simulations are then performed for electrons, in order to study heating and acceleration phenomena, as well as to determine HXR emission. This study is done by comparatively exploring quiet, turbulent explosive, and mildly explosive phases of the MHD simulations. The importance of collisional and relativistic effects is also assessed, and the role of the integration time is investigated. A particular aim of this project is to verify the quasi-linear assumptions made in standard transport models and to identify possible transport effects that cannot be captured by them. In order to relate our results to Fermi acceleration and Fokker-Planck modeling, we determine the standard transport coefficients. We find that the electric field of the MHD simulations must be downscaled in order to prevent an unphysically high degree of acceleration, and the value chosen for the scale factor strongly affects the results. In different MHD time instances we find that heating takes place, together with acceleration that depends on the level of MHD turbulence. Acceleration appears to be a transient phenomenon with a kind of saturation effect, and the parallel dynamics clearly dominate the energetics. The HXR spectra are not yet fully compatible with observations; the scaling of the electric field and the integration times used still need further exploration.
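The transport-coefficient step can be illustrated compactly. The sketch below estimates Fokker-Planck-style drift and diffusion coefficients in energy from an ensemble of test-particle energy changes, binned by initial energy; the synthetic dE values stand in for the output of an actual test-particle integrator and are entirely hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    n_particles, dt = 50000, 1e-3                 # integration step (arbitrary units)
    E0 = rng.uniform(1.0, 100.0, n_particles)     # initial particle energies
    # Stand-in for the per-particle energy change reported by the integrator.
    dE = 0.05 * E0 * dt + np.sqrt(0.1 * E0 * dt) * rng.standard_normal(n_particles)

    bins = np.logspace(0, 2, 21)
    idx = np.digitize(E0, bins)
    for k in range(1, len(bins)):
        sel = idx == k
        if sel.sum() < 100:
            continue
        drift = dE[sel].mean() / dt                # advection (systematic) coefficient
        diff = (dE[sel] ** 2).mean() / (2.0 * dt)  # diffusion coefficient
        print(f"E in [{bins[k-1]:7.2f}, {bins[k]:7.2f}): drift={drift:8.3f}  D={diff:8.3f}")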
A statistical approach to plasma profile analysis
International Nuclear Information System (INIS)
Kardaun, O.J.W.F.; McCarthy, P.J.; Lackner, K.; Riedel, K.S.
1990-05-01
A general statistical approach to the parameterisation and analysis of tokamak profiles is presented. The modelling of the profile dependence on both the radius and the plasma parameters is discussed, and pertinent methods of estimation, both classical and robust, are reviewed. Special attention is given to statistical tests for discriminating between the various models, and to the construction of confidence intervals for the parameterised profiles and the associated global quantities. The statistical approach is shown to provide a rigorous basis for the empirical testing of plasma profile invariance. (orig.)
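One standard ingredient of such model discrimination is the F-test between nested parameterisations. The sketch below fits two nested polynomial profile models to a noisy synthetic temperature profile and tests whether the richer model is statistically warranted; the profile shape and noise level are made up for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    r = np.linspace(0.0, 1.0, 40)                  # normalised radius
    Te = 2.0 * (1.0 - r**2) ** 1.5 + 0.05 * rng.standard_normal(r.size)

    def rss(deg):
        # Residual sum of squares of a polynomial profile fit of degree `deg`.
        coef = np.polyfit(r, Te, deg)
        return np.sum((Te - np.polyval(coef, r)) ** 2)

    rss2, rss4 = rss(2), rss(4)                    # nested models: 3 vs 5 parameters
    df1, df2 = 2, r.size - 5
    F = ((rss2 - rss4) / df1) / (rss4 / df2)
    p = stats.f.sf(F, df1, df2)
    print(f"F = {F:.2f}, p = {p:.3f}  (small p favours the richer model)")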
Ueno, Tamio; Matuda, Junichi; Yamane, Nobuhisa
2013-03-01
To evaluate the occurrence of out-of-acceptable-range results and the accuracy of antimicrobial susceptibility tests, we applied a new statistical tool to the Inter-Laboratory Quality Control Program established by the Kyushu Quality Control Research Group. First, we defined acceptable ranges of minimum inhibitory concentration (MIC) for broth microdilution tests and of inhibitory zone diameter for disk diffusion tests on the basis of Clinical and Laboratory Standards Institute (CLSI) document M100-S21. In the analysis, more than two out-of-acceptable-range results in the 20 tests were considered not allowable according to the CLSI document. Of the 90 participating laboratories, 46 (51%) experienced one or more out-of-acceptable-range results. A binomial test was then applied to each participating laboratory. The results indicated that the occurrences of out-of-acceptable-range results in 11 laboratories were significantly higher than the allowable rate recommended by CLSI. Next, the mean deviation of the test results in each of these laboratories was statistically compared with zero using Student's t-test. The results revealed that 5 of the 11 laboratories reported erroneous test results that systematically drifted to the side of resistance. In conclusion, our statistical approach has enabled us to detect significantly higher occurrences and sources of interpretive errors in antimicrobial susceptibility tests; this approach can therefore provide additional information to improve the accuracy of test results in clinical microbiology laboratories.
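Both tests are easy to reproduce. The sketch below applies a one-sided binomial test to a laboratory's count of out-of-acceptable-range results and a one-sample t-test to its per-test deviations; the counts, the 5% allowable rate and the deviations (in log2 dilution steps) are hypothetical stand-ins, since the abstract does not give the individual numbers.

    import numpy as np
    from scipy import stats

    out_of_range, n_tests, allowable_rate = 4, 20, 0.05   # hypothetical laboratory
    res = stats.binomtest(out_of_range, n_tests, allowable_rate, alternative="greater")
    print(f"binomial test p = {res.pvalue:.4f}")

    # Hypothetical per-test deviations from the target value (log2 dilution steps):
    dev = np.array([1, 0, 1, 2, 1, 0, 1, 1, 2, 0, 1, 1, 0, 2, 1, 1, 0, 1, 1, 1])
    t_res = stats.ttest_1samp(dev, 0.0)
    print(f"t = {t_res.statistic:.2f}, p = {t_res.pvalue:.4f}  "
          "(a positive mean indicates drift toward resistance)")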
Natrella, Mary Gibbons
1963-01-01
Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations
Nuclear Test Depth Determination with Synthetic Modelling: Global Analysis from PNEs to DPRK-2016
Rozhkov, Mikhail; Stachnik, Joshua; Baker, Ben; Epiphansky, Alexey; Bobrov, Dmitry
2016-04-01
retrieval and pre-processing. After the event database is compiled, control is passed to the driver software, which runs the external processing and plotting toolboxes, controls the final stage, and produces the final result. The modules are mostly coded in Python, with C code (Raysynth3D, complex-topography regional synthetics) and FORTRAN synthetics from the CPS330 software package by Robert Herrmann of Saint Louis University. An extension of this single-station depth determination method is under development and uses joint information from all stations participating in processing. It is based on simultaneous depth and moment tensor determination for both short- and long-period seismic phases. A novel approach recently developed for microseismic event location, utilizing only phase waveform information, was migrated to the global scale. It should provide faster computation, as it does not require intensive synthetic modelling, and might benefit the processing of noisy signals. A consistent depth estimate for all recent nuclear tests was produced for the large number of IMS stations (primary and auxiliary) used in processing.
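The core of single-station depth determination can be sketched as a grid search over trial depths for the synthetic that best matches the observed waveform. In the toy version below, make_synthetic() is a hypothetical stand-in for an external synthetics engine such as CPS330, with depth controlling the delay of a surface-reflected arrival.

    import numpy as np

    def make_synthetic(depth_km, t):
        # Toy stand-in: depth delays the surface-reflected (pP-like) phase.
        direct = np.exp(-((t - 5.0) ** 2))
        reflected = -0.6 * np.exp(-((t - 5.0 - 0.25 * depth_km) ** 2))
        return direct + reflected

    t = np.linspace(0.0, 20.0, 2000)
    observed = make_synthetic(1.2, t) + 0.05 * np.random.default_rng(3).standard_normal(t.size)

    def misfit(depth_km):
        s = make_synthetic(depth_km, t)
        # One minus the normalised cross-correlation at zero lag.
        return 1.0 - np.dot(s, observed) / (np.linalg.norm(s) * np.linalg.norm(observed))

    depths = np.arange(0.2, 5.0, 0.1)
    print(f"best-fitting depth: {min(depths, key=misfit):.1f} km")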
Oliveira, Jorge; Gamito, Pedro; Alghazzawi, Daniyal M; Fardoun, Habib M; Rosa, Pedro J; Sousa, Tatiana; Picareli, Luís Felipe; Morais, Diogo; Lopes, Paulo
2017-08-14
This investigation sought to understand whether performance in naturalistic virtual reality tasks for cognitive assessment relates to the cognitive domains they are supposed to measure. The Shoe Closet Test (SCT) was developed as a simple visual search task involving attention skills, in which participants match each pair of shoes with the colors of the compartments in a virtual shoe closet. Interaction within the virtual environment was performed using the Microsoft Kinect. The measures consisted of concurrent paper-and-pencil neurocognitive tests of global cognitive functioning, executive functions, attention and psychomotor ability, together with the outcomes of the SCT. The results showed that the SCT correlated with global cognitive performance as measured with the Montreal Cognitive Assessment (MoCA): the SCT explained one third of the total variance of that test and showed good sensitivity and specificity in discriminating scores below one standard deviation on this screening tool. These findings suggest that performance of such functional tasks involves a broad range of cognitive processes that are associated with global cognitive functioning and that may be difficult to isolate with paper-and-pencil neurocognitive tests.
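For readers who want to see the shape of these analyses, the sketch below computes the correlation and shared variance between simulated SCT and MoCA scores, and the sensitivity/specificity of an SCT cut-off for flagging MoCA scores below one standard deviation; all scores and the cut-off are simulated, not the study's data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n = 120
    moca = np.clip(rng.normal(24, 3, n).round(), 0, 30)
    sct = 0.6 * stats.zscore(moca) + 0.8 * rng.standard_normal(n)   # simulated VR score

    r, p = stats.pearsonr(sct, moca)
    print(f"r = {r:.2f}, shared variance r^2 = {r**2:.2f}, p = {p:.2g}")

    impaired = moca < (moca.mean() - moca.std())    # below one SD on the MoCA
    flagged = sct < np.percentile(sct, 20)          # hypothetical SCT cut-off
    sensitivity = np.mean(flagged[impaired])
    specificity = np.mean(~flagged[~impaired])
    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")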
de Araujo, Georgia Véras; Leite, Débora F B; Rizzo, José A; Sarinho, Emanuel S C
2016-08-01
The aim of this study was to identify a possible association between the assessment of clinical asthma control using the Asthma Control Test (ACT) and the Global Initiative for Asthma (GINA) classification, and to perform comparisons with spirometry values. In this cross-sectional study, 103 pregnant women with asthma were assessed between October 2010 and October 2013 in the asthma pregnancy clinic at the Clinical Hospital of the Federal University of Pernambuco. Questionnaires concerning the level of asthma control were administered using the GINA classification and the ACT validated for asthmatic expectant mothers, together with spirometry; all three methods of assessing asthma control were applied during the same visit, between the twenty-first and twenty-seventh weeks of pregnancy. There was a significant association between the clinical asthma control assessments obtained with the ACT and the GINA classification, whereas no such association was found with the spirometry values. This study shows that both the GINA classification and the ACT can be used in asthmatic expectant mothers to assess the clinical control of asthma, especially at the end of the second trimester, which is assumed to be the period of worsening asthma exacerbations during pregnancy. We highlight the importance of the ACT as a subjective instrument with easy application, easy interpretation and good reproducibility that does not require spirometry to assess the level of asthma control and can be used in the primary care of asthmatic expectant mothers.
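An association between two categorical control classifications of this kind is typically assessed with a chi-square test of independence on the cross-tabulation. The sketch below does exactly that; the cell counts are invented (they merely sum to the study's 103 patients), since the abstract does not report the table.

    import numpy as np
    from scipy import stats

    # Rows: ACT controlled / not controlled; columns: GINA controlled /
    # partly controlled / uncontrolled (hypothetical counts).
    table = np.array([[38, 14, 3],
                      [6, 17, 25]])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")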
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N > 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful for avoiding such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes of around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
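The algebraic sample size adjustment can be sketched as a linear rescaling of the chi-square fit statistic from the actual sample size n to a nominated size n_adj, followed by a Bonferroni-corrected significance check. Treat the chi2 * n_adj / n rescaling below as an assumption about the adjustment's form, not a quotation of the RUMM internals; all numbers are hypothetical.

    from scipy import stats

    def adjusted_chi2_p(chi2, dof, n, n_adj, n_items, alpha=0.05):
        chi2_adj = chi2 * n_adj / n               # assumed linear rescaling of the statistic
        p = stats.chi2.sf(chi2_adj, dof)
        return chi2_adj, p, p < alpha / n_items   # Bonferroni correction over the items

    chi2_adj, p, misfit = adjusted_chi2_p(chi2=28.4, dof=9, n=2500, n_adj=500, n_items=25)
    print(f"adjusted chi2 = {chi2_adj:.2f}, p = {p:.3f}, flags misfit: {misfit}")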
Skill Testing a Three-Dimensional Global Tide Model to Historical Current Meter Records
2013-12-17
breaking internal gravity waves generated over rough topography. The strength of the globally averaged wave drag is tuned to minimize the RMS...
[Table fragment, headers not recoverable: regional current meter record counts for the Ross Sea, Drake Passage, Weddell Sea, Antarctic Circumpolar Current, East Auckland Current, and other regions.]
Developing and testing a global-scale regression model to quantify mean annual streamflow
Barbarossa, Valerio; Huijbregts, Mark A. J.; Hendriks, A. Jan; Beusen, Arthur H. W.; Clavreul, Julie; King, Henry; Schipper, Aafke M.
2017-01-01
Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF based on a dataset unprecedented in size, using observations of discharge and catchment characteristics from 1885 catchments worldwide, measuring between 2 and 10⁶ km². In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area and catchment averaged mean annual precipitation and air temperature, slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error (RMSE) values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement (d) was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally to estimate MAF at any point of the river network, thus providing a feasible alternative to spatially explicit process-based global hydrological models.
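The two evaluation pieces, an ordinary least-squares regression of log-transformed MAF on catchment predictors and the modified index of agreement d, are easy to state in code. The sketch below uses synthetic predictors and coefficients: the real model's variables are named in the abstract, but its coefficients are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 500
    X = np.column_stack([
        rng.normal(size=n),   # log catchment area
        rng.normal(size=n),   # mean annual precipitation
        rng.normal(size=n),   # mean air temperature
        rng.normal(size=n),   # mean slope (elevation omitted for brevity)
    ])
    log_maf = X @ np.array([1.0, 0.8, -0.3, 0.2]) + 0.3 * rng.standard_normal(n)

    A = np.column_stack([np.ones(n), X])          # add intercept column
    coef, *_ = np.linalg.lstsq(A, log_maf, rcond=None)
    pred = A @ coef
    r2 = 1.0 - np.sum((log_maf - pred) ** 2) / np.sum((log_maf - log_maf.mean()) ** 2)

    def modified_index_of_agreement(obs, sim):
        # Willmott's modified d: based on absolute rather than squared errors.
        num = np.sum(np.abs(obs - sim))
        den = np.sum(np.abs(sim - obs.mean()) + np.abs(obs - obs.mean()))
        return 1.0 - num / den

    print(f"R^2 = {r2:.2f}, d = {modified_index_of_agreement(log_maf, pred):.2f}")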