A simplification of the likelihood ratio test statistic for testing ...
African Journals Online (AJOL)
The traditional likelihood ratio test statistic for testing hypotheses about the goodness of fit of multinomial probabilities in one-, two- and multi-dimensional contingency tables was simplified. Advantageously, using the simplified version of the statistic to test the null hypothesis is easier and faster because calculating the expected ...
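For reference, the traditional (unsimplified) likelihood ratio statistic for multinomial goodness of fit that this work simplifies can be sketched as follows; this is a generic textbook illustration with hypothetical counts, not the paper's simplified formula:

```python
import numpy as np
from scipy.stats import chi2

def g_statistic(observed, expected):
    """Classical likelihood ratio (G) statistic for multinomial goodness of fit:
    G = 2 * sum(O_i * ln(O_i / E_i)), asymptotically chi-square with k - 1 df."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    mask = observed > 0  # cells with O_i = 0 contribute 0 to the sum
    return 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))

# Hypothetical example: test a fair six-sided die against 120 observed rolls
observed = np.array([18, 22, 21, 17, 24, 18])
expected = np.full(6, observed.sum() / 6)  # 20 per face under H0
G = g_statistic(observed, expected)
p_value = chi2.sf(G, df=5)
```

Note that computing G requires the expected cell frequencies, which is exactly the step the simplified statistic avoids.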
Similar tests and the standardized log likelihood ratio statistic
DEFF Research Database (Denmark)
Jensen, Jens Ledet
1986-01-01
When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast to this there is a 'primitive' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n^-3...
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet
2005-01-01
The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.
Coelho, Carlos A.; Marques, Filipe J.
2013-09-01
In this paper the authors combine the equicorrelation and equivariance test introduced by Wilks [13] with the likelihood ratio test (l.r.t.) for independence of groups of variables to obtain the l.r.t. of block equicorrelation and equivariance. This test, or its single block version, may find applications in many areas, such as psychology, education, medicine and genetics, and is important "in many tests of multivariate analysis, e.g. in MANOVA, Profile Analysis, Growth Curve analysis, etc" [12, 9]. By decomposing the overall hypothesis into the hypothesis of independence of groups of variables and the hypothesis of equicorrelation and equivariance, we are able to obtain the expressions for the overall l.r.t. statistic and its moments. From these we obtain a suitable factorization of the characteristic function (c.f.) of the logarithm of the l.r.t. statistic, which enables us to develop highly manageable and precise near-exact distributions for the test statistic.
Weber, Benjamin; Lee, Sau L; Delvadia, Renishkumar; Lionberger, Robert; Li, Bing V; Tsong, Yi; Hochhaus, Guenther
2015-03-01
Equivalence testing of aerodynamic particle size distribution (APSD) through multi-stage cascade impactors (CIs) is important for establishing bioequivalence of orally inhaled drug products. Recent work demonstrated that the median of the modified chi-square ratio statistic (MmCSRS) is a promising metric for APSD equivalence testing of test (T) and reference (R) products as it can be applied to a reduced number of CI sites that are more relevant for lung deposition. This metric is also less sensitive to the increased variability often observed for low-deposition sites. A method to establish critical values for the MmCSRS is described here. This method considers the variability of the R product by employing a reference variance scaling approach that allows definition of critical values as a function of the observed variability of the R product. A stepwise CI equivalence test is proposed that integrates the MmCSRS as a method for comparing the relative shapes of CI profiles and incorporates statistical tests for assessing equivalence of single actuation content and impactor sized mass. This stepwise CI equivalence test was applied to 55 published CI profile scenarios, which were classified as equivalent or inequivalent by members of the Product Quality Research Institute working group (PQRI WG). The results of the stepwise CI equivalence test using a 25% difference in MmCSRS as an acceptance criterion provided the best matching with those of the PQRI WG as decisions of both methods agreed in 75% of the 55 CI profile scenarios.
Kanji, Gopal K
2006-01-01
This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.
Statistical moments of the Strehl ratio
Yaitskova, Natalia; Esselborn, Michael; Gladysz, Szymon
2012-07-01
Knowledge of the statistical characteristics of the Strehl ratio is essential for the performance assessment of the existing and future adaptive optics systems. For full assessment not only the mean value of the Strehl ratio but also higher statistical moments are important. Variance is related to the stability of an image and skewness reflects the chance to have in a set of short exposure images more or less images with the quality exceeding the mean. Skewness is a central parameter in the domain of lucky imaging. We present a rigorous theory for the calculation of the mean value, the variance and the skewness of the Strehl ratio. In our approach we represent the residual wavefront as being formed by independent cells. The level of the adaptive optics correction defines the number of the cells and the variance of the cells, which are the two main parameters of our theory. The deliverables are the values of the three moments as the functions of the correction level. We make no further assumptions except for the statistical independence of the cells.
Testing statistical hypotheses
Lehmann, E L
2005-01-01
The third edition of Testing Statistical Hypotheses updates and expands upon the classic graduate text, emphasizing optimality theory for hypothesis testing and confidence sets. The principal additions include a rigorous treatment of large sample optimality, together with the requisite tools. In addition, an introduction to the theory of resampling methods such as the bootstrap is developed. The sections on multiple testing and goodness of fit testing are expanded. The text is suitable for Ph.D. students in statistics and includes over 300 new problems out of a total of more than 760. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands and the University of Chicago. He is the author of Elements of Large-Sample Theory and (with George Casella) he is also the author of Theory of Point Estimat...
International Nuclear Information System (INIS)
Gouvea, Andre de; Murayama, Hitoshi
2003-01-01
'Anarchy' is the hypothesis that there is no fundamental distinction among the three flavors of neutrinos. It describes the mixing angles as random variables, drawn from well-defined probability distributions dictated by the group Haar measure. We perform a Kolmogorov-Smirnov (KS) statistical test to verify whether anarchy is consistent with all neutrino data, including the new result presented by KamLAND. We find a KS probability for Nature's choice of mixing angles equal to 64%, quite consistent with the anarchical hypothesis. In turn, assuming that anarchy is indeed correct, we compute lower bounds on |U_e3|^2, the remaining unknown 'angle' of the leptonic mixing matrix.
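The KS comparison of sampled mixing parameters against a reference distribution can be sketched generically. The actual Haar-measure densities for the angles are given in the paper and are not reproduced here; a uniform reference and a synthetic sample are used purely as placeholders:

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)

# Hypothetical sample of a mixing variable, assumed uniform on [0, 1]
# under the null (illustration only; the anarchy hypothesis prescribes
# specific Haar-measure distributions for each angle).
sample = rng.uniform(0.0, 1.0, size=200)

# KS test of the sample against the assumed null distribution
result = kstest(sample, "uniform")
ks_stat, ks_p = result.statistic, result.pvalue
```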
Branching ratios in sequential statistical multifragmentation
International Nuclear Information System (INIS)
Moretto, L.G.; Phair, L.; Tso, K.; Jing, K.; Wozniak, G.J.
1995-01-01
The energy dependence of the probability of producing n fragments follows a characteristic statistical law. Experimental intermediate-mass-fragment multiplicity distributions are shown to be binomial at all excitation energies. From these distributions a single binary event probability can be extracted that has the thermal dependence p=exp[-B/T]. Thus, it is inferred that multifragmentation is a sequence of thermal binary events. The increase of p with excitation energy implies a corresponding contraction of the time-scale and explains recently observed fragment-fragment and fragment-spectator Coulomb correlations. (authors). 22 refs., 5 figs
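The binomial multiplicity law with a thermal elementary probability p = exp(-B/T) described above can be sketched as follows; the barrier, temperature, and number of elementary chances below are illustrative values, not fitted parameters from the paper:

```python
import math

def fragment_multiplicity_pmf(n, m, B, T):
    """P(n fragments out of m elementary chances) for a binomial distribution
    whose single binary-event probability is p = exp(-B / T)
    (B: barrier, T: temperature, in the same energy units)."""
    p = math.exp(-B / T)
    return math.comb(m, n) * p**n * (1 - p) ** (m - n)

# Illustrative numbers: barrier B = 20 MeV, temperature T = 5 MeV,
# m = 10 elementary chances
probs = [fragment_multiplicity_pmf(n, 10, 20.0, 5.0) for n in range(11)]
```

As the abstract notes, raising T raises p and shifts the multiplicity distribution toward larger n.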
Particle ratios, quarks, and Chao-Yang statistics
Energy Technology Data Exchange (ETDEWEB)
Chew, C K; Low, G B; Lo, S Y [Nanyang Univ. (Singapore). Dept. of Physics; Phua, K K [Argonne National Lab., IL (USA)
1980-01-01
By introducing quarks into Chao-Yang statistics for 'violent' collisions, particle ratios are obtained which are consistent with the Chao-Yang results. The present method can also be extended to baryon-meson and baryon-antibaryon ratios.
Explorations in Statistics: The Analysis of Ratios and Normalized Data
Curran-Everett, Douglas
2013-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of "Explorations in Statistics" explores the analysis of ratios and normalized (or standardized) data. As researchers, we compute a ratio (a numerator divided by a denominator) to compute a…
Testing statistical hypotheses of equivalence
Wellek, Stefan
2010-01-01
Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment. With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the
Directory of Open Access Journals (Sweden)
Omar Chavez
2016-07-01
A method to improve the detection of seismo-magnetic signals is presented herein. Eight events registered for periods of 24 hours with seismic activity were analyzed and compared with non-seismic periods of the same duration. The distance between the earthquakes (EQs) and the ultra-low frequency detector is ρ = 1.8 × 10^(0.45M), where M is the magnitude of the EQ reported by the Seismological National Service of Mexico, in a period of three years. An improved fast Fourier transform analysis in the form of the ratio of the vertical magnetic field component to the horizontal one (Q = Bz/Bx) has been developed. There are important differences between the frequencies obtained during the days of seismic activity compared with those with no seismic activity.
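The spectral ratio Q = Bz/Bx can be sketched with a minimal FFT-based computation; the windowing, averaging, and other details of the improved analysis are omitted, and the synthetic signals below are assumptions for illustration only:

```python
import numpy as np

def polarization_ratio(bz, bx, fs):
    """Ratio Q(f) = |FFT(Bz)| / |FFT(Bx)| of the vertical to horizontal
    magnetic-field amplitude spectra (minimal sketch)."""
    freqs = np.fft.rfftfreq(len(bz), d=1.0 / fs)
    Bz = np.abs(np.fft.rfft(bz))
    Bx = np.abs(np.fft.rfft(bx))
    eps = 1e-12  # guard against division by zero in empty bins
    return freqs, Bz / (Bx + eps)

# Synthetic example: 1 Hz sampling, vertical component twice the horizontal
fs = 1.0
t = np.arange(0, 1024) / fs
bx = np.sin(2 * np.pi * 0.01 * t)
bz = 2.0 * np.sin(2 * np.pi * 0.01 * t)
freqs, Q = polarization_ratio(bz, bx, fs)
```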
Explorations in statistics: the analysis of ratios and normalized data.
Curran-Everett, Douglas
2013-09-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of Explorations in Statistics explores the analysis of ratios and normalized (or standardized) data. As researchers, we compute a ratio (a numerator divided by a denominator) to compute a proportion for some biological response or to derive some standardized variable. In each situation, we want to control for differences in the denominator when the thing we really care about is the numerator. But there is peril lurking in a ratio: only if the relationship between numerator and denominator is a straight line through the origin will the ratio be meaningful. If not, the ratio will misrepresent the true relationship between numerator and denominator. In contrast, regression techniques, which include analysis of covariance, are versatile: they can accommodate an analysis of the relationship between numerator and denominator when a ratio is useless.
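The peril described above is easy to demonstrate with synthetic, noiseless data: for a straight-line relationship that does not pass through the origin, the simple ratio varies systematically with the denominator, while regression recovers the relationship exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 10.0, size=200)  # the denominator variable
y = 3.0 + 2.0 * x                     # straight line, but NOT through the origin

ratio = y / x                           # the "simple ratio" approach: varies with x
slope, intercept = np.polyfit(x, y, 1)  # the regression approach: recovers 2 and 3
```

Here the ratio ranges from about 5 (at x near 1) down to about 2.3 (at x near 10) even though the underlying relationship is perfectly linear.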
Statistical validity of using ratio variables in human kinetics research.
Liu, Yuanlong; Schutz, Robert W
2003-09-01
The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It is recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
Statistical tests to compare motif count exceptionalities
Directory of Open Access Journals (Sweden)
Vandewalle Vincent
2007-03-01
Background: Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results: We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion: The exact binomial test is particularly adapted for small counts. For large counts, we advise use of the likelihood ratio test, which is asymptotic but strongly correlated with the exact binomial test and very simple to use.
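Under a Poisson-count model of this kind, an exact binomial comparison can be sketched as follows: conditional on the total count n1 + n2, the count in sequence 1 is binomial with success probability e1/(e1 + e2) under the null of equal exceptionality. The counts and expected values below are hypothetical, and the sequence-composition corrections and overlap handling of the paper are omitted:

```python
from scipy.stats import binomtest

def compare_motif_counts(n1, n2, e1, e2):
    """Exact binomial test that a motif is equally exceptional in two sequences:
    conditional on n1 + n2, n1 ~ Binomial(n1 + n2, e1 / (e1 + e2)) under H0,
    where e1, e2 are the expected counts in each sequence (Poisson model)."""
    p0 = e1 / (e1 + e2)
    return binomtest(n1, n1 + n2, p0).pvalue

# Hypothetical example: 30 vs 5 occurrences, equal expected counts of 12
p = compare_motif_counts(30, 5, 12.0, 12.0)
```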
The Laplace Likelihood Ratio Test for Heteroscedasticity
Directory of Open Access Journals (Sweden)
J. Martin van Zyl
2011-01-01
It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test, assuming normality, is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data is partitioned into groups.
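A two-group Laplace-based likelihood ratio test for equal scale can be sketched as follows. This is a sketch under stated assumptions, not necessarily the paper's exact formulation: the Laplace scale MLE is the mean absolute deviation from the group median, and the statistic is referred to a chi-square with 1 df:

```python
import numpy as np
from scipy.stats import chi2

def laplace_lrt_two_groups(x1, x2):
    """LRT for equal Laplace scale in two groups. At the MLE the Laplace
    log-likelihood is -n*log(2b) - n with b = mean |x - median|, so
    LRT = 2 * (n*log(b0) - n1*log(b1) - n2*log(b2)), b0 the pooled scale."""
    b1 = np.mean(np.abs(x1 - np.median(x1)))
    b2 = np.mean(np.abs(x2 - np.median(x2)))
    n1, n2 = len(x1), len(x2)
    n = n1 + n2
    b0 = (n1 * b1 + n2 * b2) / n  # pooled scale under H0
    lrt = 2.0 * (n * np.log(b0) - n1 * np.log(b1) - n2 * np.log(b2))
    return lrt, chi2.sf(lrt, df=1)

# Synthetic check: two Laplace samples with very different scales
rng = np.random.default_rng(2)
x1 = rng.laplace(0.0, 1.0, size=300)
x2 = rng.laplace(0.0, 4.0, size=300)
stat, p = laplace_lrt_two_groups(x1, x2)
```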
DEFF Research Database (Denmark)
Stoica, Iuliana-Madalina; Babamoradi, Hamid; van den Berg, Frans
2017-01-01
• A statistical strategy combining fluorescence spectroscopy, multivariate analysis and Wilks' ratio is proposed.
• The method was tested both off-line and on-line, with riboflavin as a (controlled) contaminant.
• Wilks' ratio signals unusual recordings based on shifts in the variance and covariance structure described by in-control data.
The behavior of the likelihood ratio test for testing missingness
Hens, Niel; Aerts, Marc; Molenberghs, Geert; Thijs, Herbert
2003-01-01
To assess the sensitivity of conclusions to model choices in the context of selection models for non-random dropout, one can oppose the different missing mechanisms to each other, e.g., by likelihood ratio tests. The finite sample behavior of the null distribution and the power of the likelihood ratio test are studied under a variety of missingness mechanisms. Keywords: missing data; sensitivity analysis; likelihood ratio test; missing mechanisms.
A Hybrid Joint Moment Ratio Test for Financial Time Series
Groenendijk, Patrick A.; Lucas, André; Vries, de Casper G.
1998-01-01
We advocate the use of absolute moment ratio statistics in conjunction with standard variance ratio statistics in order to disentangle linear dependence, non-linear dependence, and leptokurtosis in financial time series. Both statistics are computed for multiple return horizons simultaneously, and the
Statistical hypothesis testing with SAS and R
Taeger, Dirk
2014-01-01
A comprehensive guide to statistical hypothesis testing with examples in SAS and R. When analyzing datasets the following questions often arise: Is there a shorthand procedure for a statistical test available in SAS or R? If so, how do I use it? If not, how do I program the test myself? This book answers these questions and provides an overview of the most common statistical test problems in a comprehensive way, making it easy to find and perform an appropriate statistical test. A general summary of statistical test theory is presented, along with a basic description for each test, including the
Nonparametric Statistics Test Software Package.
1983-09-01
... the user's entries. Its purpose is to write two types of files needed by the program Crunch: the data file and the option file. ... data file and communicate the choice of test and test parameters to Crunch. After a data file is written, Lochinvar prompts the writing of the
Multiple Improvements of Multiple Imputation Likelihood Ratio Tests
Chan, Kin Wai; Meng, Xiao-Li
2017-01-01
Multiple imputation (MI) inference handles missing data by first properly imputing the missing values $m$ times, and then combining the $m$ analysis results from applying a complete-data procedure to each of the completed datasets. However, the existing method for combining likelihood ratio tests has multiple defects: (i) the combined test statistic can be negative in practice when the reference null distribution is a standard $F$ distribution; (ii) it is not invariant to re-parametrization; ...
The insignificance of statistical significance testing
Johnson, Douglas H.
1999-01-01
Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.
Polarimetric Segmentation Using Wishart Test Statistic
DEFF Research Database (Denmark)
Skriver, Henning; Schou, Jesper; Nielsen, Allan Aasbjerg
2002-01-01
A newly developed test statistic for equality of two complex covariance matrices following the complex Wishart distribution, and an associated asymptotic probability for the test statistic, has been used in a segmentation algorithm. The segmentation algorithm is based on the MUM (merge using moments) approach, which is a merging algorithm for single channel SAR images. The polarimetric version described in this paper uses the above-mentioned test statistic for merging. The segmentation algorithm has been applied to polarimetric SAR data from the Danish dual-frequency, airborne polarimetric SAR, EMISAR...
Teaching Statistics in Language Testing Courses
Brown, James Dean
2013-01-01
The purpose of this article is to examine the literature on teaching statistics for useful ideas that teachers of language testing courses can draw on and incorporate into their teaching toolkits as they see fit. To those ends, the article addresses eight questions: What is known generally about teaching statistics? Why are students so anxious…
A Hybrid Joint Moment Ratio Test for Financial Time Series
P.A. Groenendijk (Patrick); A. Lucas (André); C.G. de Vries (Casper)
1998-01-01
We advocate the use of absolute moment ratio statistics in conjunction with standard variance ratio statistics in order to disentangle linear dependence, non-linear dependence, and leptokurtosis in financial time series. Both statistics are computed for multiple return horizons
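Both ratio statistics can be sketched for a single horizon q; these are illustrative forms only, and the paper's standardizations and joint distribution theory are omitted. For i.i.d. Gaussian returns both ratios are approximately 1:

```python
import numpy as np

def variance_ratio(returns, q):
    """Variance ratio VR(q) = Var(q-period return) / (q * Var(1-period return));
    approximately 1 for i.i.d. returns."""
    rq = np.convolve(returns, np.ones(q), mode="valid")  # overlapping q-period returns
    return rq.var() / (q * returns.var())

def abs_moment_ratio(returns, q):
    """Absolute-moment analogue: E|q-period return| / (sqrt(q) * E|return|);
    approximately 1 for i.i.d. Gaussian returns (illustrative form)."""
    rq = np.convolve(returns, np.ones(q), mode="valid")
    return np.mean(np.abs(rq)) / (np.sqrt(q) * np.mean(np.abs(returns)))

# Synthetic i.i.d. Gaussian "returns": both ratios should be close to 1
rng = np.random.default_rng(3)
r = rng.normal(0.0, 1.0, size=20000)
vr = variance_ratio(r, 5)
amr = abs_moment_ratio(r, 5)
```

Dependence or fat tails move the two ratios away from 1 in different ways, which is what makes the pair informative.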
SPSS for applied sciences basic statistical testing
Davis, Cole
2013-01-01
This book offers a quick and basic guide to using SPSS and provides a general approach to solving problems using statistical tests. It is both comprehensive in terms of the tests covered and the applied settings it refers to, and yet is short and easy to understand. Whether you are a beginner or an intermediate level test user, this book will help you to analyse different types of data in applied settings. It will also give you the confidence to use other statistical software and to extend your expertise to more specific scientific settings as required.The author does not use mathematical form
Safeguarding a Lunar Rover with Wald's Sequential Probability Ratio Test
Furlong, Michael; Dille, Michael; Wong, Uland; Nefian, Ara
2016-01-01
The virtual bumper is a safeguarding mechanism for autonomous and remotely operated robots. In this paper we take a new approach to the virtual bumper system by using an old statistical test. By using a modified version of Wald's sequential probability ratio test we demonstrate that we can reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time. We use the concept of the sequential probability ratio to control vehicle speed in the presence of possible obstacles in order to increase certainty about whether or not obstacles are present. Our new algorithm reduces the chances of collision by approximately 98% relative to traditional virtual bumper safeguarding without speed control.
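Wald's sequential probability ratio test for Bernoulli sensor readings can be sketched in its generic textbook form; the paper's modifications and speed-control logic are not reproduced, and the obstacle probabilities below are assumptions for illustration:

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for Bernoulli observations: H0: P(positive reading) = p0
    vs H1: P = p1 (> p0). Accumulate the log-likelihood ratio and stop as
    soon as it crosses either of Wald's boundaries. Returns a decision
    ("H0", "H1", or "continue") and the number of samples consumed."""
    lower = math.log(beta / (1 - alpha))   # accept-H0 boundary
    upper = math.log((1 - beta) / alpha)   # accept-H1 boundary
    llr = 0.0
    for i, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= lower:
            return "H0", i
        if llr >= upper:
            return "H1", i
    return "continue", len(samples)

# A run of all-positive readings should quickly favour H1 (obstacle present)
decision, n_used = sprt([1] * 20, p0=0.1, p1=0.5)
```

The appeal for a rover is that the test stops as soon as the evidence is strong enough, so clear cases consume very few sensor readings.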
Statistical Analysis of the Grid Connected Photovoltaic System Performance Ratio
Directory of Open Access Journals (Sweden)
Javier Vilariño-García
2017-05-01
A methodology is presented based on the application of analysis of variance and Tukey's method to a data set of solar radiation in the plane of the photovoltaic modules and the corresponding values of power delivered to the grid, recorded at 10-minute intervals from sunrise to sunset during the 52 weeks of the year 2013. These data were obtained through a monitoring system located in a photovoltaic plant of 10 MW of rated power located in Cordoba, consisting of 16 transformers and 98 inverters. Analysis of variance is applied to the mean performance indices of the transformer centers to detect, at a 5% significance level, whether at least one differs significantly from the rest; Tukey's test then identifies which center or centers fall below average due to a fault to be detected and corrected.
Statistical treatment of fatigue test data
International Nuclear Information System (INIS)
Raske, D.T.
1980-01-01
This report discusses several aspects of fatigue data analysis in order to provide a basis for the development of statistically sound design curves. Included is a discussion of the choice of the dependent variable, the assumptions associated with least squares regression models, the variability of fatigue data, the treatment of data from suspended tests and outlying observations, and various strain-life relations.
Statistical test theory for the behavioral sciences
de Gruijter, Dato N M
2007-01-01
Since the development of the first intelligence test in the early 20th century, educational and psychological tests have become important measurement techniques to quantify human behavior. Focusing on this ubiquitous yet fruitful area of research, Statistical Test Theory for the Behavioral Sciences provides both a broad overview and a critical survey of assorted testing theories and models used in psychology, education, and other behavioral science fields. Following a logical progression from basic concepts to more advanced topics, the book first explains classical test theory, covering true score, measurement error, and reliability. It then presents generalizability theory, which provides a framework to deal with various aspects of test scores. In addition, the authors discuss the concept of validity in testing, offering a strategy for evidence-based validity. In the two chapters devoted to item response theory (IRT), the book explores item response models, such as the Rasch model, and applications, incl...
Simplified Freeman-Tukey test statistics for testing probabilities in ...
African Journals Online (AJOL)
This paper presents the simplified version of the Freeman-Tukey test statistic for testing hypothesis about multinomial probabilities in one, two and multidimensional contingency tables that does not require calculating the expected cell frequencies before test of significance. The simplified method established new criteria of ...
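For comparison, the classical (unsimplified) Freeman-Tukey statistic, which does require the expected cell frequencies, can be sketched with hypothetical counts; this is the generic form, not the paper's simplified version:

```python
import numpy as np
from scipy.stats import chi2

def freeman_tukey_stat(observed, expected):
    """Freeman-Tukey statistic T = sum (sqrt(O) + sqrt(O + 1) - sqrt(4E + 1))^2,
    asymptotically chi-square like the Pearson and likelihood ratio statistics."""
    O = np.asarray(observed, dtype=float)
    E = np.asarray(expected, dtype=float)
    return np.sum((np.sqrt(O) + np.sqrt(O + 1) - np.sqrt(4 * E + 1)) ** 2)

# Hypothetical example: a fair six-sided die, 120 observed rolls
observed = np.array([18, 22, 21, 17, 24, 18])
expected = np.full(6, observed.sum() / 6)  # 20 per face under H0
T = freeman_tukey_stat(observed, expected)
p_value = chi2.sf(T, df=5)
```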
New Graphical Methods and Test Statistics for Testing Composite Normality
Directory of Open Access Journals (Sweden)
Marc S. Paolella
2015-07-01
Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.
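The general idea of an empirical-CDF-based check of composite normality can be sketched with a probability integral transform; this is a generic illustration, not the paper's MSP test, and because the parameters are estimated the standard KS p-value would need recalibration to be size-correct:

```python
import numpy as np
from scipy.stats import norm, kstest

def pit_normality_stat(x):
    """Generic empirical-CDF check of composite normality: estimate mean and
    sd, apply the probability integral transform u = Phi((x - mu) / sigma),
    and measure the KS distance of u to the uniform. Larger distance means
    stronger evidence against normality (statistic only; the usual KS
    p-value is invalid here because the parameters are estimated)."""
    u = norm.cdf(x, loc=np.mean(x), scale=np.std(x, ddof=1))
    return kstest(u, "uniform").statistic

# Synthetic comparison: a normal sample vs. a skewed (exponential) sample
rng = np.random.default_rng(4)
d_normal = pit_normality_stat(rng.normal(size=500))
d_exp = pit_normality_stat(rng.exponential(size=500))
```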
Statistical analysis of the ratio of electric and magnetic fields in random fields generators
Serra, R.; Nijenhuis, J.
2013-01-01
In this paper we present statistical models of the ratio of random electric and magnetic fields in mode-stirred reverberation chambers. This ratio is based on the electric and magnetic field statistics derived for ideal reverberation conditions. It provides a further performance indicator for
Analysis of Preference Data Using Intermediate Test Statistic Abstract
African Journals Online (AJOL)
PROF. O. E. OSUAGWU
2013-06-01
Jun 1, 2013 ... West African Journal of Industrial and Academic Research Vol. 7, No. 1, June ... Keywords: preference data, Friedman statistic, multinomial test statistic, intermediate test statistic. ... new method and consequently a new statistic ...
Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data
Hu, Zongliang; Tong, Tiejun; Genton, Marc G.
2017-01-01
We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
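The contrast with diagonal Hotelling's tests can be sketched schematically: the proposed statistic is a summation of log-transformed squared t-statistics rather than a direct summation of the t-statistics' squares. Scaling constants from the paper are omitted, and the data below are synthetic:

```python
import numpy as np

def diagonal_lrt_stat(X, mu0):
    """One-sample diagonal likelihood-ratio-type statistic (schematic form):
    sum_j log(1 + t_j^2 / (n - 1)) over the p component t-statistics,
    instead of the direct sum of t_j^2 used by diagonal Hotelling tests."""
    n, p = X.shape
    t = (X.mean(axis=0) - mu0) / (X.std(axis=0, ddof=1) / np.sqrt(n))
    return np.sum(np.log1p(t**2 / (n - 1)))

# High-dimensional synthetic data: n = 50 samples, p = 200 variables
rng = np.random.default_rng(5)
X = rng.normal(0.0, 1.0, size=(50, 200))
stat_null = diagonal_lrt_stat(X, np.zeros(200))     # true mean vector
stat_alt = diagonal_lrt_stat(X, np.full(200, 1.0))  # wrong mean vector
```

The log transform damps the influence of any single extreme component, which is part of what distinguishes this statistic from a plain sum of squared t-statistics.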
Kepler Planet Detection Metrics: Statistical Bootstrap Test
Jenkins, Jon M.; Burke, Christopher J.
2016-01-01
This document describes the data produced by the Statistical Bootstrap Test over the final three Threshold Crossing Event (TCE) deliveries to NExScI: SOC 9.1 (Q1-Q16) (Tenenbaum et al. 2014), SOC 9.2 (Q1-Q17), aka DR24 (Seader et al. 2015), and SOC 9.3 (Q1-Q17), aka DR25 (Twicken et al. 2016). The last few years have seen significant improvements in the SOC science data processing pipeline, leading to higher quality light curves and more sensitive transit searches. The statistical bootstrap analysis results presented here and the numerical results archived at NASA's Exoplanet Science Institute (NExScI) bear witness to these software improvements. This document attempts to introduce and describe the main features and differences between these three data sets as a consequence of the software changes.
Statistical modeling and MAP estimation for body fat quantification with MRI ratio imaging
Wong, Wilbur C. K.; Johnson, David H.; Wilson, David L.
2008-03-01
We are developing small animal imaging techniques to characterize the kinetics of lipid accumulation/reduction of fat depots in response to genetic/dietary factors associated with obesity and metabolic syndromes. Recently, we developed an MR ratio imaging technique that approximately yields lipid/(lipid + water). In this work, we develop a statistical model for the ratio distribution that explicitly includes a partial volume (PV) fraction of fat and a mixture of a Rician and multiple Gaussians. Monte Carlo hypothesis testing showed that our model was valid over a wide range of coefficient of variation of the denominator distribution (c.v.: 0-0.20) and correlation coefficient among the numerator and denominator (ρ: 0-0.95), which cover the typical values that we found in MRI data sets (c.v.: 0.027-0.063, ρ: 0.50-0.75). Then a maximum a posteriori (MAP) estimate for the fat percentage per voxel is proposed. Using a digital phantom with many PV voxels, we found that ratio values were not linearly related to PV fat content and that our method accurately described the histogram. In addition, the new method estimated the ground truth within +1.6% vs. +43% for an approach using an uncorrected ratio image, when we simply threshold the ratio image. On the six genetically obese rat data sets, the MAP estimate gave total fat volumes of 279 ± 45 mL, values 21% smaller than those from the uncorrected ratio images, principally due to the non-linear PV effect. We conclude that our algorithm can increase the accuracy of fat volume quantification even in regions having many PV voxels, e.g. ectopic fat depots.
Statistical tests for person misfit in computerized adaptive testing
Glas, Cornelis A.W.; Meijer, R.R.; van Krimpen-Stoop, Edith
1998-01-01
Recently, several person-fit statistics have been proposed to detect nonfitting response patterns. This study is designed to generalize an approach followed by Klauer (1995) to an adaptive testing system using the two-parameter logistic model (2PL) as a null model. The approach developed by Klauer
A Statistical Perspective on Highly Accelerated Testing
Energy Technology Data Exchange (ETDEWEB)
Thomas, Edward V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-02-01
Highly accelerated life testing has been heavily promoted at Sandia (and elsewhere) as a means to rapidly identify product weaknesses caused by flaws in the product's design or manufacturing process. During product development, a small number of units are forced to fail at high stress. The failed units are then examined to determine the root causes of failure. The identification of the root causes of product failures exposed by highly accelerated life testing can instigate changes to the product's design and/or manufacturing process that result in a product with increased reliability. It is widely viewed that this qualitative use of highly accelerated life testing (often associated with the acronym HALT) can be useful. However, highly accelerated life testing has also been proposed as a quantitative means for "demonstrating" the reliability of a product where unreliability is associated with loss of margin via an identified and dominating failure mechanism. It is assumed that the dominant failure mechanism can be accelerated by changing the level of a stress factor that is assumed to be related to the dominant failure mode. In extreme cases, a minimal number of units (often from a pre-production lot) are subjected to a single highly accelerated stress relative to normal use. If no (or, sufficiently few) units fail at this high stress level, some might claim that a certain level of reliability has been demonstrated (relative to normal use conditions). Underlying this claim are assumptions regarding the level of knowledge associated with the relationship between the stress level and the probability of failure. The primary purpose of this document is to discuss (from a statistical perspective) the efficacy of using accelerated life testing protocols (and, in particular, "highly accelerated" protocols) to make quantitative inferences concerning the performance of a product (e.g., reliability) when in fact there is lack-of-knowledge and uncertainty concerning
Explorations in Statistics: Hypothesis Tests and P Values
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…
Directory of Open Access Journals (Sweden)
Chen Cao
2016-09-01
This study focused on producing flash flood hazard susceptibility maps (FFHSM) using frequency ratio (FR) and statistical index (SI) models in the Xiqu Gully (XQG) of Beijing, China. First, a total of 85 flash flood hazard locations (n = 85) were surveyed in the field and plotted using geographic information system (GIS) software. Based on the flash flood hazard locations, a flood hazard inventory map was built. Seventy percent (n = 60) of the flood hazard locations were randomly selected for building the models. The remaining 30% (n = 25) of the flood hazard locations were used for validation. Considering that the XQG used to be a coal mining area, coal-mine caves and subsidence caused by coal mining exist in this catchment, as well as many ground fissures. Thus, this study took the subsidence risk level into consideration for the FFHSM. The ten conditioning parameters were elevation, slope, curvature, land use, geology, soil texture, subsidence risk area, stream power index (SPI), topographic wetness index (TWI), and short-term heavy rain. This study also tested different classification schemes for the values of each conditioning parameter and checked their impacts on the results. The accuracy of the FFHSM was validated using area under the curve (AUC) analysis. Classification accuracies were 86.61%, 83.35%, and 78.52% using the frequency ratio (FR)-natural breaks, statistical index (SI)-natural breaks, and FR-manual classification schemes, respectively. Associated prediction accuracies were 83.69%, 81.22%, and 74.23%, respectively. It was found that FR modeling using a natural breaks classification method was more appropriate for generating FFHSM for the Xiqu Gully.
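The frequency-ratio score at the heart of this kind of susceptibility mapping can be sketched generically; the function below is a standard FR calculation, not the authors' code, and the array names are hypothetical.

```python
import numpy as np

def frequency_ratio(class_map, flood_mask):
    """Frequency ratio per class of one conditioning parameter:
    FR(c) = (% of flood cells falling in class c) / (% of all cells in class c).
    class_map: integer class label per grid cell; flood_mask: boolean per cell."""
    fr = {}
    n_total = class_map.size
    n_flood = flood_mask.sum()
    for c in np.unique(class_map):
        in_c = class_map == c
        pct_flood = (flood_mask & in_c).sum() / n_flood   # share of flood cells
        pct_area = in_c.sum() / n_total                   # share of study area
        fr[c] = pct_flood / pct_area                      # FR > 1: flood-prone class
    return fr
```

A susceptibility index per cell is then typically the sum of the FR values of that cell's classes over all conditioning parameters.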
Nearly Efficient Likelihood Ratio Tests for Seasonal Unit Roots
DEFF Research Database (Denmark)
Jansson, Michael; Nielsen, Morten Ørregaard
In an important generalization of zero frequency autoregressive unit root tests, Hylleberg, Engle, Granger, and Yoo (1990) developed regression-based tests for unit roots at the seasonal frequencies in quarterly time series. We develop likelihood ratio tests for seasonal unit roots and show...... that these tests are "nearly efficient" in the sense of Elliott, Rothenberg, and Stock (1996), i.e. that their local asymptotic power functions are indistinguishable from the Gaussian power envelope. Currently available nearly efficient testing procedures for seasonal unit roots are regression-based and require...... the choice of a GLS detrending parameter, which our likelihood ratio tests do not....
Filipiak, Katarzyna; Klein, Daniel; Roy, Anuradha
2017-01-01
The problem of testing the separability of a covariance matrix against an unstructured variance-covariance matrix is studied in the context of multivariate repeated measures data using Rao's score test (RST). The RST statistic is developed with the first component of the separable structure as a first-order autoregressive (AR(1)) correlation matrix or an unstructured (UN) covariance matrix under the assumption of multivariate normality. It is shown that the distribution of the RST statistic under the null hypothesis of any separability does not depend on the true values of the mean or the unstructured components of the separable structure. A significant advantage of the RST is that it can be performed for small samples, even smaller than the dimension of the data, where the likelihood ratio test (LRT) cannot be used, and it outperforms the standard LRT in a number of contexts. Monte Carlo simulations are then used to study the comparative behavior of the null distribution of the RST statistic, as well as that of the LRT statistic, in terms of sample size considerations, and for the estimation of the empirical percentiles. Our findings are compared with existing results where the first component of the separable structure is a compound symmetry (CS) correlation matrix. It is also shown by simulations that the empirical null distribution of the RST statistic converges faster than the empirical null distribution of the LRT statistic to the limiting χ² distribution. The tests are implemented on a real dataset from medical studies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Obtaining reliable Likelihood Ratio tests from simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed param...
Scaling images using their background ratio. An application in statistical comparisons of images
International Nuclear Information System (INIS)
Kalemis, A; Binnie, D; Bailey, D L; Flower, M A; Ott, R J
2003-01-01
Comparison of two medical images often requires image scaling as a pre-processing step. This is usually done with the scaling-to-the-mean or scaling-to-the-maximum techniques which, under certain circumstances, in quantitative applications may contribute a significant amount of bias. In this paper, we present a simple scaling method which assumes only that the most predominant values in the corresponding images belong to their background structure. The ratio of the two images to be compared is calculated and its frequency histogram is plotted. The scaling factor is given by the position of the peak in this histogram which belongs to the background structure. The method was tested against the traditional scaling-to-the-mean technique on simulated planar gamma-camera images which were compared using pixelwise statistical parametric tests. Both sensitivity and specificity for each condition were measured over a range of different contrasts and sizes of inhomogeneity for the two scaling techniques. The new method was found to preserve sensitivity in all cases while the traditional technique resulted in significant degradation of sensitivity in certain cases
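The described scaling step can be sketched minimally, assuming (as the abstract does) that the background dominates the ratio histogram; the bin count and zero-handling below are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def background_scale_factor(img_a, img_b, bins=100):
    """Scale factor between two images, taken as the mode of the
    voxelwise ratio histogram. Assumes the most frequent ratio values
    come from the background structure, and that img_b has no zeros
    (mask them out first otherwise)."""
    ratio = img_a / img_b
    counts, edges = np.histogram(ratio, bins=bins)
    k = np.argmax(counts)                     # peak bin of the ratio histogram
    return 0.5 * (edges[k] + edges[k + 1])    # bin centre of the peak
```

Dividing `img_a` by this factor then matches the two backgrounds, rather than matching means or maxima that can be biased by foreground differences.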
Testing for Statistical Discrimination based on Gender
DEFF Research Database (Denmark)
Lesner, Rune Vammen
This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market...... It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure but increases in job transitions and that the fraction of women in high-ranking positions within a firm does...... not affect the level of statistical discrimination by gender....
Understanding the properties of diagnostic tests - Part 2: Likelihood ratios.
Ranganathan, Priya; Aggarwal, Rakesh
2018-01-01
Diagnostic tests are used to identify subjects with and without disease. In a previous article in this series, we examined some attributes of diagnostic tests - sensitivity, specificity, and predictive values. In this second article, we look at likelihood ratios, which are useful for the interpretation of diagnostic test results in everyday clinical practice.
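The likelihood ratios discussed here follow directly from sensitivity and specificity, and they update a pre-test probability to a post-test probability via odds:

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity): how much a positive result
    raises the odds of disease. LR- = (1 - sensitivity) / specificity:
    how much a negative result lowers them."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test_probability(pre_test_prob, lr):
    """Convert probability to odds, multiply by the likelihood ratio,
    and convert back to a probability."""
    odds = pre_test_prob / (1 - pre_test_prob) * lr
    return odds / (1 + odds)
```

For example, a test with 90% sensitivity and 80% specificity has LR+ = 4.5, so a positive result moves a 20% pre-test probability to roughly 53%.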
Statistical Decision Theory Estimation, Testing, and Selection
Liese, Friedrich
2008-01-01
Suitable for advanced graduate students and researchers in mathematical statistics and decision theory, this title presents an account of the concepts and a treatment of the major results of classical finite sample size decision theory and modern asymptotic decision theory
Testing for Statistical Discrimination based on Gender
Lesner, Rune Vammen
2016-01-01
This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure...
Further comments on the sequential probability ratio testing methods
Energy Technology Data Exchange (ETDEWEB)
Kulacsy, K. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics
1997-05-23
The Bayesian method for belief updating proposed in Racz (1996) is examined. The interpretation of the belief function introduced therein is found, and the method is compared to the classical binary Sequential Probability Ratio Testing method (SPRT). (author).
Distinguish Dynamic Basic Blocks by Structural Statistical Testing
DEFF Research Database (Denmark)
Petit, Matthieu; Gotlieb, Arnaud
Statistical testing aims at generating random test data that respect selected probabilistic properties. A probability distribution is associated with the program input space in order to achieve the statistical test purpose: to test the most frequent usage of the software or to maximize the probability of...... control flow path) during the test data selection. We implemented this algorithm in a statistical test data generator for Java programs. A first experimental validation is presented...
Extending the Reach of Statistical Software Testing
National Research Council Canada - National Science Library
Weber, Robert
2004-01-01
.... In particular, as system complexity increases, the matrices required to generate test cases and perform model analysis can grow dramatically, even exponentially, overwhelming the test generation...
Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis
DEFF Research Database (Denmark)
Jansson, Michael; Nielsen, Morten Ørregaard
Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We...... show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations....
A flexible spatial scan statistic with a restricted likelihood ratio for detecting disease clusters.
Tango, Toshiro; Takahashi, Kunihiko
2012-12-30
Spatial scan statistics are widely used tools for the detection of disease clusters. In particular, the circular spatial scan statistic proposed by Kulldorff (1997) has been utilized in a wide variety of epidemiological studies and in disease surveillance. However, as it cannot detect noncircular, irregularly shaped clusters, many authors have proposed different spatial scan statistics, including the elliptic version of Kulldorff's scan statistic. The flexible spatial scan statistic proposed by Tango and Takahashi (2005) has also been used for detecting irregularly shaped clusters. However, this method imposes a practical limit of at most 30 nearest neighbors when searching candidate clusters because of its heavy computational load. In this paper, we show that a flexible spatial scan statistic implemented with the restricted likelihood ratio proposed by Tango (2008) can (1) eliminate the 30-nearest-neighbor limitation and (2) require far less computation time than the original flexible spatial scan statistic. As a side effect, Monte Carlo simulation shows that it detects clusters of any shape reasonably well as the relative risk of the cluster becomes large. We illustrate the proposed spatial scan statistic with data on mortality from cerebrovascular disease in the Tokyo Metropolitan area, Japan. Copyright © 2012 John Wiley & Sons, Ltd.
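The log-likelihood ratio that scan statistics of this kind maximize over candidate zones has, under the usual Poisson model, the following form. This is the generic Kulldorff-style expression, not the restricted version of Tango (2008):

```python
import math

def poisson_llr(c, E, C):
    """Log-likelihood ratio for one candidate zone under a Poisson model:
    c observed and E expected cases inside the zone, C total cases.
    Returns 0 unless the zone shows an excess (c > E)."""
    if c <= E:
        return 0.0
    inside = c * math.log(c / E)
    # avoid log(0) when all cases fall inside the zone
    outside = 0.0 if c == C else (C - c) * math.log((C - c) / (C - E))
    return inside + outside
```

A scan statistic evaluates this over many candidate zones (circles, ellipses, or flexible neighbor sets), takes the maximum, and calibrates its significance by Monte Carlo replication.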
Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment
Carpenter, James R.; Markley, F Landis
2014-01-01
This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
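Wald's sequential probability ratio test, to which the compound hypothesis test reduces, can be sketched as follows; the thresholds are Wald's standard approximations from the targeted false alarm rate α and missed detection rate β.

```python
import math

def sprt_bounds(alpha, beta):
    """Wald's thresholds: accept H1 when the likelihood ratio exceeds
    A = (1 - beta) / alpha, accept H0 when it drops below
    B = beta / (1 - alpha); keep sampling in between."""
    return math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)

def sprt(log_lr_increments, alpha=0.05, beta=0.05):
    """Run the SPRT on a stream of per-observation log-likelihood-ratio
    increments; returns the decision and the sample number at stopping."""
    lo, hi = sprt_bounds(alpha, beta)
    s = 0.0
    for i, inc in enumerate(log_lr_increments, start=1):
        s += inc
        if s >= hi:
            return "accept H1", i
        if s <= lo:
            return "accept H0", i
    return "continue", len(log_lr_increments)
```

Working on the log scale turns the ratio test into a cumulative sum crossing one of two fixed boundaries, which is convenient in an operational decision timeline.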
Energy Technology Data Exchange (ETDEWEB)
Silva Filho, Severino Higino da; Bieseki, Lindiane; Pergher, Sibele Berenice Castella, E-mail: sibelepergher@gmail.com [Universidade Federal do Rio Grande do Norte (LABPEMOL/UFRN), Natal, RN (Brazil). Lab. de Peneiras Moleculares; Maia, Ana Aurea B.; Angelica, Romulo Simoes [Universidade Federal do Para (UFPA), Belem PA (Brazil); Treichel, Helen [Universidade Federal da Fronteira Sul (UFFS), Erechim, RS (Brazil)
2017-05-15
The NaOH/metakaolin ratio and crystallization time were studied for the synthesis of zeolite NaA from a sample of kaolin from a Capim mine. The tests were carried out by using a statistical design with axial points and replication of the central point. The samples obtained were characterized by X-ray diffraction (XRD), scanning electron microscopy and chemical analysis using an EPMA microprobe. The results showed that there is a relationship between the amount of NaOH added and the crystallization time. The tests carried out using the lowest NaOH/metakaolin ratio (0.5) and the shortest time (4 h) produced a non-crystalline material. On the other hand, increasing the NaOH/metakaolin ratio and the crystallization time led to the formation of a NaA phase with a high degree of structural order, but with the presence of a sodalite phase as an impurity. (author)
[The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].
Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel
2017-01-01
The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference is the drawing of conclusions from tests performed on data obtained from a sample of a population. Statistical tests are used to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was obtained. However, choosing the appropriate statistical test generally poses a challenge for novice researchers. Choosing a statistical test requires taking three aspects into account: the research design, the number of measurements, and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can be used only if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.
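The three-criteria selection described in the abstract can be caricatured as a toy decision rule for the common two-group comparison; the function and its boolean inputs are illustrative assumptions, not the article's flowchart.

```python
def choose_two_group_test(normal_a, normal_b, paired=False):
    """Toy rule for comparing two groups: parametric tests only when
    both groups look normally distributed (e.g. per a normality test),
    otherwise their nonparametric counterparts; the design (paired or
    independent) picks the variant."""
    if normal_a and normal_b:
        return "paired t-test" if paired else "independent t-test"
    return "Wilcoxon signed-rank" if paired else "Mann-Whitney U"
```

A real selection would also weigh the measurement scale (nominal, ordinal, interval) and the number of groups and measurements, as the abstract notes.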
A more powerful test based on ratio distribution for retention noninferiority hypothesis.
Deng, Ling; Chen, Gang
2013-03-11
Rothmann et al. (2003) proposed a method for the statistical inference of the fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect versus the control effect in the context of a time to event endpoint. One of the major concerns using this method in the design of an NI trial is that with a limited sample size, the power of the study is usually very low. This can make an NI trial infeasible, particularly when using a time to event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption used in Rothmann's test, that the observed control effect is always positive, that is, the observed hazard ratio for placebo over the control is greater than 1, is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced if using the proposed ratio test for a fraction retention NI hypothesis.
statistical tests for frequency distribution of mean gravity anomalies
African Journals Online (AJOL)
ES Obe
1980-03-01
Mar 1, 1980 ... STATISTICAL TESTS FOR FREQUENCY DISTRIBUTION OF MEAN GRAVITY ANOMALIES. By ... approach. Kaula [1,2] discussed the method of applying statistical techniques in the ... mathematical foundation of physical ...
Statistical Tests for Mixed Linear Models
Khuri, André I; Sinha, Bimal K
2011-01-01
An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a
International Nuclear Information System (INIS)
Onchi, T; Fujisawa, A; Sanpei, A; Himura, H; Masamune, S
2017-01-01
Permutation entropy and statistical complexity are measures for complex time series. The Bandt–Pompe methodology evaluates the probability distribution using ordinal permutations; the method is robust and effective for quantifying the information content of time series data. Statistical complexity is the product of the Jensen–Shannon divergence and the permutation entropy. These physical parameters are introduced to analyse time series of emission and magnetic fluctuations in low-aspect-ratio reversed-field pinch (RFP) plasma. The observed time-series data aggregate in a region of the plane, the so-called C–H plane, determined by entropy versus complexity. The C–H plane is a representation space used for distinguishing periodic, chaotic, stochastic and noisy processes in time series data. The characteristics of the emissions and magnetic fluctuations change under different RFP-plasma conditions. The statistical complexities of soft x-ray emissions and magnetic fluctuations depend on the relationships between reversal and pinch parameters. (paper)
Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data
Hu, Zongliang
2017-10-27
We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
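The "summation of log-transformed squared t-statistics" can be sketched for the one-sample case under a diagonal covariance. The per-coordinate form below is the standard univariate LRT summed over coordinates; treat the exact scaling as an assumption, not the paper's precise formula.

```python
import numpy as np

def diagonal_lrt_statistic(X, mu0):
    """One-sample statistic under a diagonal covariance assumption:
    per coordinate j, compute the squared t-statistic t_j^2 for
    H0: mu_j = mu0_j, then sum n * log(1 + t_j^2 / (n - 1)) over j.
    (Standard per-coordinate LRT scaling, assumed here for illustration.)"""
    n, p = X.shape
    t2 = n * (X.mean(axis=0) - mu0) ** 2 / X.var(axis=0, ddof=1)
    return n * np.sum(np.log1p(t2 / (n - 1)))
```

Unlike a direct sum of t²-components, the log transform damps the influence of any single extreme coordinate, which is part of what makes such statistics better behaved in high dimensions.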
Testing for statistical discrimination in health care.
Balsa, Ana I; McGuire, Thomas G; Meredith, Lisa S
2005-02-01
To examine the extent to which doctors' rational reactions to clinical uncertainty ("statistical discrimination") can explain racial differences in the diagnosis of depression, hypertension, and diabetes. Main data are from the Medical Outcomes Study (MOS), a 1986 study conducted by RAND Corporation in three U.S. cities. The study compares the processes and outcomes of care for patients in different health care systems. Complementary data from National Health And Examination Survey III (NHANES III) and National Comorbidity Survey (NCS) are also used. Across three systems of care (staff health maintenance organizations, multispecialty groups, and solo practices), the MOS selected 523 health care clinicians. A representative cross-section (21,480) of patients was then chosen from a pool of adults who visited any of these providers during a 9-day period. We analyzed a subsample of the MOS data consisting of patients of white family physicians or internists (11,664 patients). We obtain variables reflecting patients' health conditions and severity, demographics, socioeconomic status, and insurance from the patients' screener interview (administered by MOS staff prior to the patient's encounter with the clinician). We used the reports made by the clinician after the visit to construct indicators of doctors' diagnoses. We obtained prevalence rates from NHANES III and NCS. We find evidence consistent with statistical discrimination for diagnoses of hypertension, diabetes, and depression. In particular, we find that if clinicians act like Bayesians, plausible priors held by the physician about the prevalence of the disease across racial groups could account for racial differences in the diagnosis of hypertension and diabetes. In the case of depression, we find evidence that race affects decisions through differences in communication patterns between doctors and white and minority patients. To contend effectively with inequities in health care, it is necessary to understand
Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu
2015-06-01
Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
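The empirical correction principle can be sketched in two lines: simulate the statistic under the null model, then rescale it so its simulated mean matches the nominal degrees of freedom. This is only the generic Bartlett-type rescaling idea, not the paper's fitted formulation.

```python
import numpy as np

def empirical_bartlett_factor(simulated_null_stats, df):
    """Rescaling factor chosen so that the corrected statistic's mean
    under the null matches the chi-square degrees of freedom df."""
    return df / np.mean(simulated_null_stats)

def corrected_statistic(t_ml, factor):
    """Apply the empirical Bartlett-type correction to an observed statistic."""
    return factor * t_ml
```

The classical Bartlett correction derives this factor analytically; the empirical variant estimates it from Monte Carlo replications of the fitted null model.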
CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY
Directory of Open Access Journals (Sweden)
ILEANA BRUDIU
2009-05-01
Parameter estimation with confidence intervals and the testing of statistical hypotheses are used in statistical analysis to draw conclusions about a population from a sample extracted from it. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained when using confidence intervals and significance testing. Whereas statistical hypothesis testing gives only a "yes" or "no" answer to certain questions, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and from findings in the "marginally significant" or "almost significant" range (p very close to 0.05).
A statistical procedure for testing financial contagion
Directory of Open Access Journals (Sweden)
Attilio Gardini
2013-05-01
The aim of the paper is to provide an analysis of contagion through the measurement of the risk premia disequilibria dynamics. In order to discriminate among several disequilibrium situations we propose to test contagion on the basis of a two-step procedure: in the first step we estimate the preference parameters of the consumption-based asset pricing model (CCAPM) to control for fundamentals and to measure the equilibrium risk premia in different countries; in the second step we measure the differences between empirical risk premia and equilibrium risk premia in order to test cross-country disequilibrium situations due to contagion. Disequilibrium risk premium measures are modelled by the multivariate DCC-GARCH model including a deterministic crisis variable. The model describes simultaneously the risk premia dynamics due to endogenous amplifications of volatility and to exogenous idiosyncratic shocks (contagion), having controlled for fundamentals effects in the first step. Our approach allows us to achieve two goals: (i) to identify the disequilibria generated by irrational behaviours of the agents, which cause increases in volatility that are not explained by the economic fundamentals but are endogenous to financial markets, and (ii) to assess the existence of a contagion effect defined by exogenous shifts in cross-country return correlations during crisis periods. Our results show evidence of contagion from the United States to the United Kingdom, Japan, France, and Italy during the financial crisis which started in 2007-08.
Statistical hypothesis tests of some micrometeorological observations
International Nuclear Information System (INIS)
SethuRaman, S.; Tichler, J.
1977-01-01
Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g1 has a good correlation with the chi-square values; events with |g1| < 0.43 were approximately normal. Intermittency associated with the formation and breaking of internal gravity waves in surface-based inversions over water is thought to be the reason for the non-normality
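As an illustrative aside, the basic procedure in this abstract, a chi-square goodness-of-fit check against a fitted normal together with the sample skewness coefficient g1, can be sketched in a few lines. This is a minimal self-contained version (equal-width bins over the sample range, tails ignored), not the authors' code:

```python
import math, random

def normal_cdf(x, mu, sigma):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def chi_square_normality(data, n_bins=8):
    """Chi-square goodness-of-fit statistic against a fitted normal.
    Rough sketch: equal-width bins over the sample range; tail mass outside
    [min, max] is ignored."""
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / (n - 1))
    lo, hi = min(data), max(data)
    edges = [lo + (hi - lo) * i / n_bins for i in range(n_bins + 1)]
    stat = 0.0
    for a, b in zip(edges, edges[1:]):
        expected = n * (normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma))
        observed = sum(1 for x in data if a <= x < b or (b == hi and x == b))
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
    return stat

def skewness(data):
    """Sample coefficient of skewness g1."""
    n = len(data)
    mu = sum(data) / n
    m2 = sum((x - mu) ** 2 for x in data) / n
    m3 = sum((x - mu) ** 3 for x in data) / n
    return m3 / m2 ** 1.5

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(500)]
print(round(chi_square_normality(sample), 2), round(skewness(sample), 3))
```

For truly normal data the statistic stays near its degrees of freedom and |g1| stays close to zero, mirroring the |g1| < 0.43 screening rule reported above.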
HOW TO SELECT APPROPRIATE STATISTICAL TEST IN SCIENTIFIC ARTICLES
Directory of Open Access Journals (Sweden)
Vladimir TRAJKOVSKI
2016-09-01
Full Text Available Statistics is the mathematical science dealing with the collection, analysis, interpretation, and presentation of masses of numerical data in order to draw relevant conclusions. It is a form of mathematical analysis that uses quantified models, representations and synopses for a given set of experimental data or real-life studies. Students and young researchers in biomedical sciences and in special education and rehabilitation often declare that they chose to enroll in those study programs because they lack knowledge of, or interest in, mathematics. This is a sad statement, but there is much truth in it. The aim of this editorial is to help young researchers select the statistical techniques and statistical software appropriate for the purposes and conditions of a particular analysis. The most important statistical tests are reviewed in the article. Knowing how to choose the right statistical test is an important asset in research data processing and in the writing of scientific papers. Young researchers and authors should know how to choose and how to use statistical methods. The competent researcher will need knowledge of statistical procedures; that might include an introductory statistics course, and it most certainly includes using a good statistics textbook. For this purpose, Statistics should be restored as a mandatory subject in the curriculum of the Institute of Special Education and Rehabilitation at the Faculty of Philosophy in Skopje. Young researchers need additional courses in statistics, and they need to train themselves to use statistical software appropriately.
Comments on the sequential probability ratio testing methods
Energy Technology Data Exchange (ETDEWEB)
Racz, A. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics
1996-07-01
In this paper the classical sequential probability ratio testing method (SPRT) is reconsidered. Every individual boundary-crossing event of the SPRT is regarded as a new piece of evidence about the problem under hypothesis testing. The Bayes method is applied for belief updating, i.e. for integrating these individual decisions. The procedure is recommended when the user (1) would like to be informed about the tested hypothesis continuously and (2) would like to reach a final conclusion with a high confidence level. (Author).
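The classical SPRT that this paper builds on can be sketched for Bernoulli observations: accumulate the log-likelihood ratio and stop when it crosses Wald's approximate boundaries derived from the target error rates. This is a textbook sketch of the base method, not the paper's Bayesian extension:

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on a stream of 0/1 data.
    Returns ('accept H0' | 'accept H1' | 'continue', samples consumed)."""
    upper = math.log((1 - beta) / alpha)   # crossing above -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing below -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(samples)

decision, n = sprt([1, 1, 1, 1, 1, 1, 1, 1], p0=0.5, p1=0.9)
print(decision, n)  # a run of successes crosses the upper boundary early
```

Each boundary crossing here yields one decision; the paper's proposal treats a sequence of such decisions as evidence to be combined by Bayesian updating.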
Testing the Difference of Correlated Agreement Coefficients for Statistical Significance
Gwet, Kilem L.
2016-01-01
This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…
Hayslett, H T
1991-01-01
Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the
Corrections of the NIST Statistical Test Suite for Randomness
Kim, Song-Ju; Umeno, Ken; Hasegawa, Akio
2004-01-01
It is well known that the NIST statistical test suite was used for the evaluation of AES candidate algorithms. We have found that the test settings of the Discrete Fourier Transform test and the Lempel-Ziv test of this suite are wrong. We give four corrections of mistakes in the test settings. This suggests that re-evaluation of the test results may be needed.
An Intersection–Union Test for the Sharpe Ratio
Directory of Open Access Journals (Sweden)
Gabriel Frahm
2018-04-01
Full Text Available An intersection–union test for supporting the hypothesis that a given investment strategy is optimal among a set of alternatives is presented. It compares the Sharpe ratio of the benchmark with that of each other strategy. The intersection–union test takes serial dependence into account and does not presume that asset returns are multivariate normally distributed. An empirical study based on the G–7 countries demonstrates that it is hard to find significant results due to the lack of data, which confirms a general observation in empirical finance.
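As a minimal illustration of the quantity being compared, a sample Sharpe ratio can be computed as below. Note this is only the point estimate; the intersection-union test described above additionally requires inference that is robust to serial dependence and non-normality:

```python
import math

def sharpe_ratio(returns, rf=0.0):
    """Sample Sharpe ratio: mean excess return divided by its sample
    standard deviation (per period, unannualized)."""
    excess = [r - rf for r in returns]
    n = len(excess)
    mu = sum(excess) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in excess) / (n - 1))
    return mu / sd

# Hypothetical monthly returns for a benchmark strategy and an alternative
benchmark = [0.02, 0.01, 0.03, -0.01, 0.02]
other = [0.01, 0.00, 0.02, -0.02, 0.01]
print(sharpe_ratio(benchmark) > sharpe_ratio(other))
```

The intersection-union logic then rejects in favor of the benchmark only if its Sharpe ratio significantly exceeds that of *every* alternative, which is why, as the abstract notes, significance is hard to reach with short return histories.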
Directory of Open Access Journals (Sweden)
J. Sunil Rao
2007-01-01
Full Text Available In gene selection for cancer classification using microarray data, we define an eigenvalue-ratio statistic to measure a gene's contribution to the joint discriminability when this gene is included in a set of genes. Based on this eigenvalue-ratio statistic, we define a novel hypothesis test for gene statistical redundancy and propose two gene selection methods. Simulation studies illustrate the agreement between statistical redundancy testing and gene selection methods. Real data examples show that the proposed gene selection methods can select a compact gene subset which can not only be used to build high-quality cancer classifiers but also shows biological relevance.
Caveats for using statistical significance tests in research assessments
DEFF Research Database (Denmark)
Schneider, Jesper Wiborg
2013-01-01
This article raises concerns about the advantages of using statistical significance tests in research assessments, as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly controversial, and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed include the ritual practice of such tests. We argue that applying statistical significance tests and mechanically adhering to their results are highly problematic and detrimental to critical thinking. We claim that the use of such tests does not provide any advantages in relation to deciding whether differences between citation indicators...
Statistical analysis and planning of multihundred-watt impact tests
International Nuclear Information System (INIS)
Martz, H.F. Jr.; Waterman, M.S.
1977-10-01
Modular multihundred-watt (MHW) radioisotope thermoelectric generators (RTGs) are used as a power source for spacecraft. Due to possible environmental contamination by radioactive materials, numerous tests are required to determine and verify the safety of the RTG. Results are available from 27 fueled MHW impact tests regarding hoop failure, fingerprint failure, and fuel failure. Data from the 27 tests are statistically analyzed for relationships that exist between the test design variables and the failure types. Next, these relationships are used to develop a statistical procedure for planning and conducting either future MHW impact tests or similar tests on other RTG fuel sources. Finally, some conclusions are given.
Scale invariant for one-sided multivariate likelihood ratio tests
Directory of Open Access Journals (Sweden)
Samruam Chongcharoen
2010-07-01
Full Text Available Suppose X1, X2, ..., Xn is a random sample from an Np(mu, V) distribution. Consider H0: mu1 = mu2 = ... = mup = 0 and H1: mui >= 0 for i = 1, 2, ..., p; let H1 - H0 denote the hypothesis that H1 holds but H0 does not, and let ~H0 denote the hypothesis that H0 does not hold. Because the likelihood ratio test (LRT) of H0 versus H1 - H0 is complicated, several ad hoc tests have been proposed. Tang, Gnecco and Geller (1989) proposed an approximate LRT, Follmann (1996) suggested rejecting H0 if the usual test of H0 versus ~H0 rejects H0 with significance level 2*alpha and a weighted sum of the sample means is positive, and Chongcharoen, Singh and Wright (2002) modified Follmann's test to include information about the correlation structure in the sum of the sample means. Chongcharoen and Wright (2007, 2006) give versions of the Tang-Gnecco-Geller tests and Follmann-type tests, respectively, with invariance properties. With the LRT's desired scale-invariance property, we investigate its power by using Monte Carlo techniques and compare it with the tests recommended in Chongcharoen and Wright (2007, 2006).
A Note on Three Statistical Tests in the Logistic Regression DIF Procedure
Paek, Insu
2012-01-01
Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…
Kleibergen, F.R.
2002-01-01
We extend the novel pivotal statistics for testing the parameters in the instrumental variables regression model. We show that these statistics result from a decomposition of the Anderson-Rubin statistic into two independent pivotal statistics. The first statistic is a score statistic that tests
Directory of Open Access Journals (Sweden)
Brayan Alexander Fonseca Martinez
2017-11-01
Full Text Available One of the most commonly employed observational study designs in veterinary science is the cross-sectional study with binary outcomes. To measure an association with exposure, the use of prevalence ratios (PR) or odds ratios (OR) is possible. In human epidemiology, much has been discussed about reserving the OR exclusively for case-control studies, and some authors have reported that there is no good justification for fitting logistic regression when the prevalence of the disease is high, in which case the OR overestimates the PR. Nonetheless, interpretation of the OR is difficult, since confusion between risk and odds can lead to incorrect quantitative interpretation of data such as "the risk is X times greater," commonly reported in studies that use the OR. The aims of this study were (1) to review articles with cross-sectional designs to assess the statistical method used and the appropriateness of the interpretation of the estimated measure of association and (2) to illustrate the use of alternative statistical methods that estimate the PR directly. An overview of statistical methods and their interpretation using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted and included a diverse set of peer-reviewed journals in the veterinary science field using PubMed as the search engine. From each article, the statistical method used and the appropriateness of the interpretation of the estimated measure of association were registered. Additionally, four alternative models for logistic regression that estimate the PR directly were tested using our own dataset from a cross-sectional study on bovine viral diarrhea virus. The initial search strategy found 62 articles, of which 6 were excluded, and therefore 56 studies were used for the overall analysis. The review showed that, independent of the level of prevalence reported, 96% of articles employed logistic regression, thus estimating the OR. Results of the multivariate models
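The OR-versus-PR distinction discussed above can be made concrete with a 2x2 table. The sketch below (hypothetical counts, not the study's data) computes the PR with a Wald-type confidence interval on the log scale and shows how the OR exaggerates the PR when the outcome is common:

```python
import math

def prevalence_ratio(a, b, c, d, z=1.96):
    """Prevalence ratio with an approximate 95% CI from a 2x2 table:
    exposed: a diseased / b healthy; unexposed: c diseased / d healthy."""
    p1 = a / (a + b)
    p0 = c / (c + d)
    pr = p1 / p0
    se = math.sqrt(b / (a * (a + b)) + d / (c * (c + d)))  # SE of log(PR)
    lo = math.exp(math.log(pr) - z * se)
    hi = math.exp(math.log(pr) + z * se)
    return pr, (lo, hi)

def odds_ratio(a, b, c, d):
    """Cross-product odds ratio from the same table."""
    return (a * d) / (b * c)

# With a common outcome (prevalence 60% vs 30%) the OR is far from the PR:
pr, ci = prevalence_ratio(60, 40, 30, 70)
print(round(pr, 2), round(odds_ratio(60, 40, 30, 70), 2))
```

Here the PR is 2.0 ("twice the prevalence") while the OR is 3.5, which, if read as a risk, would be the misinterpretation the authors warn about.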
Kolmogorov complexity, pseudorandom generators and statistical models testing
Czech Academy of Sciences Publication Activity Database
Šindelář, Jan; Boček, Pavel
2002-01-01
Roč. 38, č. 6 (2002), s. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords : Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002
Common pitfalls in statistical analysis: The perils of multiple testing
Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc
2016-01-01
Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
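One standard remedy for the inflated false-positive probability described above is a step-down p-value adjustment. The sketch below implements the Holm-Bonferroni procedure, one of the simplest of the correction methods such articles survey:

```python
def holm_bonferroni(pvalues, alpha=0.05):
    """Holm step-down multiple-testing adjustment.
    Returns a list of reject (True) / keep (False) flags, one per hypothesis."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ascending p-values
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one comparison fails, all larger p-values are kept
    return reject

print(holm_bonferroni([0.001, 0.04, 0.03, 0.2]))
```

Unlike plain Bonferroni, Holm's procedure relaxes the threshold at each step (alpha/m, alpha/(m-1), ...) while still controlling the family-wise error rate.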
Significance levels for studies with correlated test statistics.
Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S
2008-07-01
When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
The efficiency of the crude oil markets: Evidence from variance ratio tests
Energy Technology Data Exchange (ETDEWEB)
Charles, Amelie, E-mail: acharles@audencia.co [Audencia Nantes, School of Management, 8 route de la Joneliere, 44312 Nantes (France); Darne, Olivier, E-mail: olivier.darne@univ-nantes.f [LEMNA, University of Nantes, IEMN-IAE, Chemin de la Censive du Tertre, 44322 Nantes (France)
2009-11-15
This study examines the random walk hypothesis for the crude oil markets, using daily data over the period 1982-2008. The weak-form efficient market hypothesis for two crude oil markets (UK Brent and US West Texas Intermediate) is tested with the non-parametric variance ratio tests developed by [Wright J.H., 2000. Alternative variance-ratio tests using ranks and signs. Journal of Business and Economic Statistics, 18, 1-9] and [Belaire-Franch J. and Contreras D., 2004. Ranks and signs-based multiple variance ratio tests. Working paper, Department of Economic Analysis, University of Valencia] as well as the wild-bootstrap variance ratio tests suggested by [Kim, J.H., 2006. Wild bootstrapping variance ratio tests. Economics Letters, 92, 38-43]. We find that the Brent crude oil market is weak-form efficient, while the WTI crude oil market seems to be inefficient over the 1994-2008 sub-period, suggesting that deregulation has not improved the efficiency of the WTI crude oil market in the sense of making returns less predictable.
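The core quantity behind all the tests cited above is the variance ratio itself: under a random walk, the variance of q-period returns is q times the variance of one-period returns, so the ratio should be close to 1. A minimal sketch of the basic (Lo-MacKinlay-style, unstudentized) statistic on simulated random-walk prices, not the rank/sign or wild-bootstrap variants used in the paper:

```python
import math, random

def variance_ratio(prices, q):
    """Variance of overlapping q-period log returns over q times the
    variance of 1-period log returns; ~1 under a random walk."""
    r = [math.log(prices[t] / prices[t - 1]) for t in range(1, len(prices))]
    n = len(r)
    mu = sum(r) / n
    var1 = sum((x - mu) ** 2 for x in r) / n
    rq = [sum(r[t:t + q]) for t in range(n - q + 1)]   # overlapping q-sums
    varq = sum((x - q * mu) ** 2 for x in rq) / len(rq)
    return varq / (q * var1)

random.seed(1)
prices = [100.0]
for _ in range(2000):
    prices.append(prices[-1] * math.exp(random.gauss(0.0, 0.01)))
print(round(variance_ratio(prices, 4), 2))  # close to 1 for a random walk
```

Ratios persistently above (below) 1 indicate positive (negative) return autocorrelation, i.e. predictability inconsistent with weak-form efficiency.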
Comparing statistical tests for detecting soil contamination greater than background
International Nuclear Information System (INIS)
Hardin, J.W.; Gilbert, R.O.
1993-12-01
The Washington State Department of Ecology (WSDE) recently issued a report that provides guidance on statistical issues regarding investigation and cleanup of soil and groundwater contamination under the Model Toxics Control Act Cleanup Regulation. Included in the report are procedures for determining a background-based cleanup standard and for conducting a 3-step statistical test procedure to decide if a site is contaminated above the background standard. The guidance specifies that the State test should only be used if the background and site data are lognormally distributed, and allows for using alternative tests on a site-specific basis if prior approval is obtained from WSDE. This report presents the results of a Monte Carlo computer simulation study conducted to evaluate the performance of the State test and several alternative tests for various contamination scenarios (background and site data distributions). The primary test performance criteria are (1) the probability the test will indicate that a contaminated site is indeed contaminated, and (2) the probability that the test will indicate an uncontaminated site is contaminated. The simulation study was conducted assuming the background concentrations were drawn from lognormal or Weibull distributions. The site data were drawn from distributions selected to represent various contamination scenarios. The statistical tests studied are the State test, the t test, Satterthwaite's t test, five distribution-free tests, and several tandem tests (wherein two or more tests are conducted using the same data set).
Statistical inferences for bearings life using sudden death test
Directory of Open Access Journals (Sweden)
Morariu Cristin-Olimpiu
2017-01-01
Full Text Available In this paper we propose a calculation method for the estimation of reliability indicators and complete statistical inference for the three-parameter Weibull distribution of bearing life. Using experimental values for the durability of bearings tested on stands by sudden death testing involves a series of particularities in estimation by the maximum likelihood method and in carrying out the statistical inference. The paper details these features and also provides an example calculation.
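Maximum likelihood estimation for the Weibull distribution, the core computation this paper builds on, can be sketched for the simpler two-parameter case (the paper treats the three-parameter version and sudden-death censoring, which this sketch omits). The profile likelihood equation for the shape parameter is solved by bisection:

```python
import math, random

def weibull_mle(data, lo=0.05, hi=50.0, iters=80):
    """ML estimates (shape k, scale lam) for the two-parameter Weibull,
    solving the profile score equation for k by bisection."""
    logs = [math.log(x) for x in data]
    mean_log = sum(logs) / len(data)

    def g(k):
        # Profile equation: weighted mean of log x minus 1/k minus mean log x
        s = sum(x ** k for x in data)
        sl = sum((x ** k) * math.log(x) for x in data)
        return sl / s - 1.0 / k - mean_log

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(x ** k for x in data) / len(data)) ** (1.0 / k)
    return k, lam

random.seed(2)
# random.weibullvariate(alpha, beta) takes scale alpha and shape beta
sample = [random.weibullvariate(2.0, 1.5) for _ in range(3000)]
k_hat, lam_hat = weibull_mle(sample)
print(round(k_hat, 2), round(lam_hat, 2))  # near the true (1.5, 2.0)
```

With censored sudden-death data the likelihood gains survival-function terms for the unfailed bearings, which is one of the particularities the paper details.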
Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions
Carpenter, J. Russell; Markley, F. Landis
2013-01-01
A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and the other filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well in a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie
2013-01-01
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
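The minimum p-value idea can be sketched for two samples: compute a permutation p-value for each candidate statistic, take the minimum, and calibrate that minimum against the same permutations. A minimal sketch with two illustrative statistics (mean difference and a crude median difference), not the paper's survival-trial implementation:

```python
import random

def perm_minp(x, y, stats, n_perm=500, seed=0):
    """Min-p permutation test for two samples over several candidate
    statistics. Returns the overall p-value for H0: groups exchangeable."""
    rng = random.Random(seed)
    pooled = x + y
    n = len(x)
    obs = [abs(s(x, y)) for s in stats]
    perm = []
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm.append([abs(s(pooled[:n], pooled[n:])) for s in stats])

    def pval(t, j):
        # Proportion of permuted statistics at least as extreme as t
        return sum(1 for row in perm if row[j] >= t) / n_perm

    obs_minp = min(pval(obs[j], j) for j in range(len(stats)))
    null_minp = [min(pval(row[j], j) for j in range(len(stats))) for row in perm]
    return sum(1 for p in null_minp if p <= obs_minp) / n_perm

mean_diff = lambda a, b: sum(a) / len(a) - sum(b) / len(b)
median_diff = lambda a, b: sorted(a)[len(a) // 2] - sorted(b)[len(b) // 2]

x = [5.1, 6.2, 5.8, 6.0, 5.5, 6.3]  # hypothetical treatment group
y = [4.0, 4.2, 3.9, 4.4, 4.1, 4.3]  # hypothetical control group
print(perm_minp(x, y, [mean_diff, median_diff]))
```

Because the minimum is recomputed on every permutation, the multiplicity of candidate statistics is accounted for automatically, which is what keeps the type I error at its designated value.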
Log-concave Probability Distributions: Theory and Statistical Testing
DEFF Research Database (Denmark)
An, Mark Yuing
1996-01-01
This paper studies the broad class of log-concave probability distributions that arise in the economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics...
DEFF Research Database (Denmark)
Conradsen, Knut; Nielsen, Allan Aasbjerg; Schou, Jesper
2003-01-01
...Based on this distribution, a test statistic for equality of two such matrices and an associated asymptotic probability for obtaining a smaller value of the test statistic are derived and applied successfully to change detection in polarimetric SAR data. In a case study, EMISAR L-band data from April 17... When applied to HH, VV, or HV data alone, the derived test statistic reduces to the well-known gamma likelihood-ratio test statistic. The derived test statistic and the associated significance value can also be applied as a line or edge detector in fully polarimetric SAR data.
A likelihood ratio test for species membership based on DNA sequence data
DEFF Research Database (Denmark)
Matz, Mikhail V.; Nielsen, Rasmus
2005-01-01
DNA barcoding as an approach for species identification is rapidly increasing in popularity. However, it remains unclear which statistical procedures should accompany the technique to provide a measure of uncertainty. Here we describe a likelihood ratio test which can be used to test whether a sampled sequence is a member of an a priori specified species. We investigate the performance of the test using coalescence simulations, as well as using real data from butterflies and frogs representing two kinds of challenge for DNA barcoding: extremely low and extremely high levels of sequence variability.
Accelerated testing statistical models, test plans, and data analysis
Nelson, Wayne B
2009-01-01
The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. ". . . a goldmine of knowledge on accelerated life testing principles and practices . . . one of the very few capable of advancing the science of reliability. It definitely belongs on every bookshelf on engineering." -Dev G.
Statistical Estimation of Heterogeneities: A New Frontier in Well Testing
Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.
2001-12-01
Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.
Your Chi-Square Test Is Statistically Significant: Now What?
Sharpe, Donald
2015-01-01
Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
Statistical test for the distribution of galaxies on plates
International Nuclear Information System (INIS)
Garcia Lambas, D.
1985-01-01
A statistical test for the distribution of galaxies on plates is presented. We apply the test to synthetic astronomical plates obtained by means of numerical simulation (Garcia Lambas and Sersic 1983) with three different models for the 3-dimensional distribution. Comparison with an observational plate suggests the presence of filamentary structure. (author)
CUSUM-based person-fit statistics for adaptive testing
van Krimpen-Stoop, Edith; Meijer, R.R.
1999-01-01
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT),
CUSUM-based person-fit statistics for adaptive testing
van Krimpen-Stoop, Edith; Meijer, R.R.
2001-01-01
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be inaccurately estimated. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT),
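The CUSUM idea behind these person-fit statistics can be sketched for a fixed test: accumulate the residuals between observed item scores and the probabilities an item response model predicts, and flag examinees whose cumulative sums drift far from zero. A minimal sketch under the Rasch model with hypothetical item difficulties, not the authors' CAT-specific statistics:

```python
import math

def rasch_prob(theta, b):
    """Rasch model probability of a correct response for ability theta
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def cusum_person_fit(scores, difficulties, theta):
    """Upper and lower CUSUMs of item residuals (observed - expected).
    Large excursions in either direction flag nonfitting score patterns."""
    upper = lower = 0.0
    max_up, max_dn = 0.0, 0.0
    for x, b in zip(scores, difficulties):
        resid = x - rasch_prob(theta, b)
        upper = max(0.0, upper + resid)   # drift toward unexpected successes
        lower = min(0.0, lower + resid)   # drift toward unexpected failures
        max_up, max_dn = max(max_up, upper), min(max_dn, lower)
    return max_up, max_dn

# An aberrant examinee: misses easy items, answers hard items correctly
scores = [0, 0, 0, 1, 1, 1]
difficulties = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
print(cusum_person_fit(scores, difficulties, theta=0.0))
```

For a well-fitting examinee both excursions stay small; the pattern above drives both CUSUMs past 2, the kind of signal these statistics are designed to detect (in CAT, critical values must account for the adaptive item selection).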
Modified Distribution-Free Goodness-of-Fit Test Statistic.
Chun, So Yeon; Browne, Michael W; Shapiro, Alexander
2018-03-01
Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.
Yuan, Ke-Hai
2008-01-01
In the literature of mean and covariance structure analysis, noncentral chi-square distribution is commonly used to describe the behavior of the likelihood ratio (LR) statistic under alternative hypothesis. Due to the inaccessibility of the rather technical literature for the distribution of the LR statistic, it is widely believed that the…
THE ATKINSON INDEX, THE MORAN STATISTIC, AND TESTING EXPONENTIALITY
Mimoto, Nao; Zitikis, Ricardas; Department of Statistics and Probability, Michigan State University; Department of Statistical and Actuarial Sciences, University of Western Ontario
2008-01-01
Constructing tests for exponentiality has been an active and fruitful research area, with numerous applications in engineering, biology and other sciences concerned with lifetime data. In the present paper, we construct and investigate powerful tests for exponentiality based on two well-known quantities: the Atkinson index and the Moran statistic. We provide an extensive study of the performance of the tests and compare them with those already available in the literature.
[Clinical research IV. Relevancy of the statistical test chosen].
Talavera, Juan O; Rivas-Ruiz, Rodolfo
2011-01-01
When we look at the difference between two therapies or the association of a risk factor or prognostic indicator with its outcome, we need to evaluate the accuracy of the result. This assessment is based on a judgment that uses information about the study design and statistical management of the information. This paper specifically addresses the relevance of the statistical test selected. Statistical tests are chosen mainly on the basis of two characteristics: the objective of the study and the type of variables. The objective can be divided into three test groups: a) those in which you want to show differences between groups or within a group before and after a maneuver, b) those that seek to show the relationship (correlation) between variables, and c) those that aim to predict an outcome. The types of variables are divided into two: quantitative (continuous and discontinuous) and qualitative (ordinal and dichotomous). For example, if we seek to demonstrate differences in age (quantitative variable) among patients with systemic lupus erythematosus (SLE) with and without neurological disease (two groups), the appropriate test is the "Student t test for independent samples." But if the comparison is about the frequency of females (binomial variable), then the appropriate statistical test is the χ² test.
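The decision logic described above can be sketched with SciPy; the two comparisons mirror the abstract's SLE example, but all patient counts and ages below are invented for illustration:

```python
# Hypothetical example of test selection by objective and variable type.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Quantitative outcome, two independent groups -> Student's t test.
age_with_neuro = rng.normal(38, 8, size=40)   # simulated ages, SLE with neurological disease
age_without = rng.normal(42, 8, size=45)      # simulated ages, SLE without
t, p_t = stats.ttest_ind(age_with_neuro, age_without)

# Dichotomous outcome, two groups -> chi-squared test on a 2x2 table.
table = np.array([[28, 12],    # females / males, with neurological disease
                  [30, 15]])   # females / males, without
chi2_val, p_chi2, dof, _ = stats.chi2_contingency(table)

print(f"t = {t:.2f}, p = {p_t:.3f}")
print(f"chi2 = {chi2_val:.2f} (df = {dof}), p = {p_chi2:.3f}")
```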
Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test
International Nuclear Information System (INIS)
Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik
2015-01-01
A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles.
Testing the statistical compatibility of independent data sets
International Nuclear Information System (INIS)
Maltoni, M.; Schwetz, T.
2003-01-01
We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ² minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistics is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.
A comparison of test statistics for the recovery of rapid growth-based enumeration tests
van den Heuvel, Edwin R.; IJzerman-Boon, Pieta C.
This paper considers five test statistics for comparing the recovery of a rapid growth-based enumeration test with respect to the compendial microbiological method using a specific nonserial dilution experiment. The finite sample distributions of these test statistics are unknown, because they are …
Statistical approach for collaborative tests, reference material certification procedures
International Nuclear Information System (INIS)
Fangmeyer, H.; Haemers, L.; Larisse, J.
1977-01-01
The first part introduces the different aspects of organizing and executing intercomparison tests of chemical or physical quantities. This is followed by a description of a statistical procedure for handling the data collected in a circular analysis. Finally, an example demonstrates how the tool can be applied and what conclusions can be drawn from the results obtained.
Use of run statistics to validate tensile tests
International Nuclear Information System (INIS)
Eatherly, W.P.
1981-01-01
In tensile testing of irradiated graphites, it is difficult to assure alignment of sample and train for tensile measurements. By recording location of fractures, run (sequential) statistics can readily detect lack of randomness. The technique is based on partitioning binomial distributions.
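The run-statistics idea can be sketched as a Wald-Wolfowitz runs test under a normal approximation; the binary coding of fracture locations below (e.g. 1 = fracture inside the gauge section, 0 = outside) is a hypothetical stand-in for the paper's recorded data:

```python
# Minimal Wald-Wolfowitz runs test for randomness of a 0/1 sequence.
import math

def runs_test(seq):
    """Return (number of runs, z statistic, two-sided p) for a 0/1 sequence."""
    n1 = sum(seq)
    n2 = len(seq) - n1
    runs = 1 + sum(a != b for a, b in zip(seq, seq[1:]))
    mean = 1 + 2 * n1 * n2 / (n1 + n2)
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mean) / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p under the normal approximation
    return runs, z, p

# A strictly alternating pattern has far too many runs to be random.
runs, z, p = runs_test([0, 1] * 10)
print(runs, round(z, 2), round(p, 5))
```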
Conducting tests for statistically significant differences using forest inventory data
James A. Westfall; Scott A. Pugh; John W. Coulston
2013-01-01
Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...
Oestrus Detection in Dairy Cows Using Likelihood Ratio Tests
DEFF Research Database (Denmark)
Jónsson, Ragnar Ingi; Björgvinssin, Trausti; Blanke, Mogens
2008-01-01
This paper addresses detection of oestrus in dairy cows using methods from statistical change detection. The activity of the cows was measured by a necklace attached sensor. Statistical properties of the activity measure were investigated. Using data sets from 17 cows, diurnal activity variations...
Test for the statistical significance of differences between ROC curves
International Nuclear Information System (INIS)
Metz, C.E.; Kronman, H.B.
1979-01-01
A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions.
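The core computation — a two-degree-of-freedom chi-square statistic built from the differences of the fitted binormal ROC parameters and their estimated covariances — can be sketched as follows; all parameter values and covariances below are invented for illustration:

```python
# Chi-square test on the difference of two fitted (a, b) ROC parameter pairs.
import numpy as np
from scipy.stats import chi2

theta1 = np.array([1.20, 0.90])          # (a, b) estimates for ROC curve 1
theta2 = np.array([0.95, 1.05])          # (a, b) estimates for ROC curve 2
cov1 = np.array([[0.020, 0.004],
                 [0.004, 0.015]])        # estimated covariance of theta1
cov2 = np.array([[0.018, 0.003],
                 [0.003, 0.012]])        # estimated covariance of theta2

d = theta1 - theta2
cov_d = cov1 + cov2                      # independent samples -> covariances add
x2 = float(d @ np.linalg.inv(cov_d) @ d)
p = chi2.sf(x2, df=2)
print(f"chi2 = {x2:.2f}, p = {p:.3f}")
```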
International Nuclear Information System (INIS)
Dordevic, N.; Wehrens, R.; Postma, G.J.; Buydens, L.M.C.; Camin, F.
2012-01-01
Highlights: ► The assessment of claims of origin is of enormous economic importance for DOC and DOCG wines. ► The official method is based on univariate statistical tests of H, C and O isotopic ratios. ► We consider 5220 Italian wine samples collected in the period 2000–2010. ► Multivariate statistical analysis leads to much better specificity and easier detection of false claims of origin. ► In the case of multi-modal data, mixture modelling provides additional improvements. - Abstract: Wine derives its economic value to a large extent from geographical origin, which has a significant impact on the quality of the wine. According to the food legislation, wines may be without geographical origin (table wine) or with origin. Wines with origin must have characteristics which are essentially due to their region of production and must be produced, processed and prepared exclusively within that region. The development of fast and reliable analytical methods for the assessment of claims of origin is very important. The current official method is based on the measurement of stable isotope ratios of water and alcohol in wine, which are influenced by climatic factors. The results in this paper are based on 5220 Italian wine samples collected in the period 2000–2010. We evaluate the univariate approach underlying the official method to assess claims of origin and propose several new methods to get better geographical discrimination between samples. It is shown that multivariate methods are superior to univariate approaches in that they show increased sensitivity and specificity. In cases where data are non-normally distributed, an approach based on mixture modelling provides additional improvements.
688,112 statistical results: Content mining psychology articles for statistical test results
Hartgerink, C.H.J.
2016-01-01
In this data deposit, I describe a dataset that is the result of content mining 167,318 published articles for statistical test results reported according to the standards prescribed by the American Psychological Association (APA). Articles published by the APA, Springer, Sage, and Taylor & Francis were included (mining from Wiley and Elsevier was actively blocked). As a result of this content mining, 688,112 results from 50,845 articles were extracted. In order to provide a comprehensive set...
Testing statistical isotropy in cosmic microwave background polarization maps
Rath, Pranati K.; Samal, Pramoda Kumar; Panda, Srikanta; Mishra, Debesh D.; Aluri, Pavan K.
2018-04-01
We apply our symmetry-based Power tensor technique to test conformity of PLANCK polarization maps with statistical isotropy. On a wide range of angular scales (l = 40 - 150), our preliminary analysis detects many statistically anisotropic multipoles in the foreground-cleaned full-sky PLANCK polarization maps, viz. COMMANDER and NILC. We also study the effect of residual foregrounds that may still be present in the Galactic plane, using both the common UPB77 polarization mask and the individual component-separation-method-specific polarization masks. However, some of the statistically anisotropic modes still persist, most significantly in the NILC map. We further probe the data for any coherent alignments across multipoles in several bins from the chosen multipole range.
A critique of statistical hypothesis testing in clinical research
Directory of Open Access Journals (Sweden)
Somik Raha
2011-01-01
Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are that of the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability on an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. Because a major reason for the prevalence of RCTs in academia is legislation requiring them, the ethics of legislating the use of statistical methods for clinical research is also examined.
Obtaining reliable likelihood ratio tests from simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
2014-01-01
Mixed models: Models allowing for continuous heterogeneity by assuming that the value of one or more parameters follows a specified distribution have become increasingly popular. This is known as ‘mixing’ parameters, and it is standard practice by researchers - and the default option in many statistic...
Total Protein and Albumin/Globulin Ratio Test
... of the various types of proteins in the liquid (serum or plasma) portion of the blood. Two ...
Using Relative Statistics and Approximate Disease Prevalence to Compare Screening Tests.
Samuelson, Frank; Abbey, Craig
2016-11-01
Schatzkin et al. and other authors demonstrated that the ratios of some conditional statistics such as the true positive fraction are equal to the ratios of unconditional statistics, such as disease detection rates, and therefore we can calculate these ratios between two screening tests on the same population even if negative test patients are not followed with a reference procedure and the true and false negative rates are unknown. We demonstrate that this same property applies to an expected utility metric. We also demonstrate that while simple estimates of relative specificities and relative areas under ROC curves (AUC) do depend on the unknown negative rates, we can write these ratios in terms of disease prevalence, and the dependence of these ratios on a posited prevalence is often weak particularly if that prevalence is small or the performance of the two screening tests is similar. Therefore we can estimate relative specificity or AUC with little loss of accuracy, if we use an approximate value of disease prevalence.
Jet-Surface Interaction - High Aspect Ratio Nozzle Test: Test Summary
Brown, Clifford A.
2016-01-01
The Jet-Surface Interaction High Aspect Ratio Nozzle Test was conducted in the Aero-Acoustic Propulsion Laboratory at the NASA Glenn Research Center in the fall of 2015. There were four primary goals specified for this test: (1) extend the current noise database for rectangular nozzles to higher aspect ratios, (2) verify data previously acquired at small scale with data from a larger model, (3) acquire jet-surface interaction noise data suitable for creating and verifying empirical noise models, and (4) investigate the effect of nozzle septa on the jet-mixing and jet-surface interaction noise. These slides give a summary of the test with representative results for each goal.
DEFF Research Database (Denmark)
Zhai, Weiwei; Nielsen, Rasmus; Slatkin, Montgomery
2009-01-01
In this report, we investigate the statistical power of several tests of selective neutrality based on patterns of genetic diversity within and between species. The goal is to compare tests based solely on population genetic data with tests using comparative data or a combination of comparative...... and population genetic data. We show that in the presence of repeated selective sweeps on relatively neutral background, tests based on the dN/dS ratios in comparative data almost always have more power to detect selection than tests based on population genetic data, even if the overall level of divergence...... selection. The Hudson-Kreitman-Aguadé test is the most powerful test for detecting positive selection among the population genetic tests investigated, whereas the McDonald-Kreitman test typically has more power to detect negative selection. We discuss our findings in the light of the discordant results obtained......
Testing and qualification of confidence in statistical procedures
Energy Technology Data Exchange (ETDEWEB)
Serghiuta, D.; Tholammakkil, J.; Hammouda, N. [Canadian Nuclear Safety Commission (Canada); O' Hagan, A. [Sheffield Univ. (United Kingdom)
2014-07-01
This paper discusses a framework for designing artificial test problems, evaluation criteria, and two of the benchmark tests developed under a research project initiated by the Canadian Nuclear Safety Commission to investigate the approaches for qualification of tolerance limit methods and algorithms proposed for application in optimization of CANDU regional/neutron overpower protection trip setpoints for aged conditions. A significant component of this investigation has been the development of a series of benchmark problems of gradually increased complexity, from simple 'theoretical' problems up to complex problems closer to the real application. The first benchmark problem discussed in this paper is a simplified scalar problem which does not involve extremal, maximum or minimum, operations, typically encountered in the real applications. The second benchmark is a high dimensional, but still simple, problem for statistical inference of maximum channel power during normal operation. Bayesian algorithms have been developed for each benchmark problem to provide an independent way of constructing tolerance limits from the same data and allow assessing how well different methods make use of those data and, depending on the type of application, evaluating what the level of 'conservatism' is. The Bayesian method is not, however, used as a reference method, or 'gold' standard, but simply as an independent review method. The approach and the tests developed can be used as a starting point for developing a generic suite (generic in the sense of potentially applying whatever the proposed statistical method) of empirical studies, with clear criteria for passing those tests. Some lessons learned, in particular concerning the need to assure the completeness of the description of the application and the role of completeness of input information, are also discussed. It is concluded that a formal process which includes extended and detailed benchmark
Lloyd-Jones, Luke R; Robinson, Matthew R; Yang, Jian; Visscher, Peter M
2018-04-01
Genome-wide association studies (GWAS) have identified thousands of loci that are robustly associated with complex diseases. The use of linear mixed model (LMM) methodology for GWAS is becoming more prevalent due to its ability to control for population structure and cryptic relatedness and to increase power. The odds ratio (OR) is a common measure of the association of a disease with an exposure (e.g., a genetic variant) and is readily available from logistic regression. However, when the LMM is applied to all-or-none traits it provides estimates of genetic effects on the observed 0-1 scale, a different scale from that of logistic regression. This limits the comparability of results across studies, for example in a meta-analysis, and makes the interpretation of the magnitude of an effect from an LMM GWAS difficult. In this study, we derived transformations from the genetic effects estimated under the LMM to the OR that only rely on summary statistics. To test the proposed transformations, we used real genotypes from two large, publicly available data sets to simulate all-or-none phenotypes for a set of scenarios that differ in underlying model, disease prevalence, and heritability. Furthermore, we applied these transformations to GWAS summary statistics for type 2 diabetes generated from 108,042 individuals in the UK Biobank. In both simulation and real-data application, we observed very high concordance between the transformed OR from the LMM and either the simulated truth or estimates from logistic regression. The transformations derived and validated in this study improve the comparability of results from prospective and already performed LMM GWAS on complex diseases by providing a reliable transformation to a common comparative scale for the genetic effects. Copyright © 2018 by the Genetics Society of America.
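The paper's exact transformations are not reproduced here, but a common first-order approximation conveys the idea: for small effects, an effect b estimated on the observed 0-1 scale maps to the log-odds scale roughly via b / (K(1 − K)), where K is the disease prevalence. The function and numbers below are illustrative only:

```python
# First-order sketch (NOT the paper's derivation) of observed-scale -> OR.
import math

def approx_or(b_obs, prevalence):
    """Approximate odds ratio from an observed 0-1 scale effect, small-effect limit."""
    k = prevalence
    return math.exp(b_obs / (k * (1.0 - k)))

# A small observed-scale effect at 10% prevalence.
print(round(approx_or(0.002, 0.10), 3))
```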
Pearce element ratios: A paradigm for testing hypotheses
Russell, J. K.; Nicholls, Jim; Stanley, Clifford R.; Pearce, T. H.
Science moves forward with the development of new ideas that are encapsulated by hypotheses whose aim is to explain the structure of data sets or to expand existing theory. These hypotheses remain conjecture until they have been tested. In fact, Karl Popper advocated that a scientist's job does not finish with the creation of an idea but, rather, begins with the testing of the related hypotheses. Implicit in Popper's [1959] view is that there must be tools with which we can test our hypotheses. Consequently, the development of rigorous tests for conceptual models plays a major role in maintaining the integrity of scientific endeavor [e.g., Greenwood, 1989].
A statistical test for outlier identification in data envelopment analysis
Directory of Open Access Journals (Sweden)
Morteza Khodabin
2010-09-01
In the use of peer group data to assess individual, typical or best practice performance, the effective detection of outliers is critical for achieving useful results. In these “deterministic” frontier models, statistical theory is now mostly available. This paper deals with the statistical pared sample method and its capability of detecting outliers in data envelopment analysis. In the presented method, each observation is deleted from the sample once and the resulting linear program is solved, leading to a distribution of efficiency estimates. Based on the achieved distribution, a pared test is designed to identify the potential outlier(s). We illustrate the method through a real data set. The method could be used in a first step, as an exploratory data analysis, before using any frontier estimation.
Statistical testing of association between menstruation and migraine.
Barra, Mathias; Dahl, Fredrik A; Vetvik, Kjersti G
2015-02-01
To repair and refine a previously proposed method for statistical analysis of association between migraine and menstruation. Menstrually related migraine (MRM) affects about 20% of female migraineurs in the general population. The exact pathophysiological link from menstruation to migraine is hypothesized to be through fluctuations in female reproductive hormones, but the exact mechanisms remain unknown. Therefore, the main diagnostic criterion today is concurrency of migraine attacks with menstruation. Methods aiming to exclude spurious associations are wanted, so that further research into these mechanisms can be performed on a population with a true association. The statistical method is based on a simple two-parameter null model of MRM (which allows for simulation modeling), and Fisher's exact test (with mid-p correction) applied to standard 2 × 2 contingency tables derived from the patients' headache diaries. Our method is a corrected version of a previously published flawed framework. To our best knowledge, no other published methods for establishing a menstruation-migraine association by statistical means exist today. The probabilistic methodology shows good performance when subjected to receiver operator characteristic curve analysis. Quick reference cutoff values for the clinical setting were tabulated for assessing association given a patient's headache history. In this paper, we correct a proposed method for establishing association between menstruation and migraine by statistical methods. We conclude that the proposed standard of 3-cycle observations prior to setting an MRM diagnosis should be extended with at least one perimenstrual window to obtain sufficient information for statistical processing. © 2014 American Headache Society.
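The building block of the method above — a one-sided Fisher exact test with mid-p correction on a 2 × 2 contingency table — can be sketched as follows; the function name and the attack counts are hypothetical, not taken from the paper:

```python
# One-sided Fisher exact mid-p via the hypergeometric distribution.
from scipy.stats import hypergeom

def fisher_midp_greater(a, b, c, d):
    """One-sided (greater) Fisher exact mid-p for the table [[a, b], [c, d]].

    The mid-p variant counts the probability of the observed table itself
    with weight 1/2 instead of 1.
    """
    n = a + b + c + d
    rv = hypergeom(n, a + b, a + c)  # population n, 'successes' a+b, draws a+c
    p_ge = rv.sf(a - 1)              # P(X >= a): ordinary one-sided exact p
    return p_ge - 0.5 * rv.pmf(a)

# Migraine attacks concentrated in perimenstrual windows (invented counts).
print(round(fisher_midp_greater(8, 2, 3, 12), 4))
```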
Measures of effect size for chi-squared and likelihood-ratio goodness-of-fit tests.
Johnston, Janis E; Berry, Kenneth J; Mielke, Paul W
2006-10-01
A fundamental shift in editorial policy for psychological journals was initiated when the fourth edition of the Publication Manual of the American Psychological Association (1994) placed emphasis on reporting measures of effect size. This paper presents measures of effect size for the chi-squared and the likelihood-ratio goodness-of-fit statistic tests.
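The paper's specific proposals are not reproduced here, but one standard effect size for a chi-squared goodness-of-fit test, Cohen's w = sqrt(χ²/n), alongside the likelihood-ratio statistic G², can be computed as follows (category counts invented):

```python
# Chi-squared and likelihood-ratio goodness-of-fit statistics with Cohen's w.
import numpy as np
from scipy.stats import chisquare

observed = np.array([42, 31, 27])           # invented category counts
expected = np.full(3, observed.sum() / 3)   # uniform null hypothesis
chi2_stat, p = chisquare(observed, expected)
g2 = 2.0 * np.sum(observed * np.log(observed / expected))  # likelihood-ratio G^2
w = np.sqrt(chi2_stat / observed.sum())     # Cohen's w effect size
print(f"chi2 = {chi2_stat:.2f}, G2 = {g2:.2f}, p = {p:.3f}, w = {w:.3f}")
```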
21 CFR 862.1455 - Lecithin/sphingomyelin ratio in amniotic fluid test system.
2010-04-01
Clinical Chemistry Test Systems § 862.1455 Lecithin/sphingomyelin ratio in amniotic fluid test system. (a) Identification. A lecithin/sphingomyelin ratio in amniotic fluid test system is a device intended to measure the...
Quantum Statistical Testing of a Quantum Random Number Generator
Energy Technology Data Exchange (ETDEWEB)
Humble, Travis S [ORNL
2014-01-01
The unobservable elements in a quantum technology, e.g., the quantum state, complicate system verification against promised behavior. Using model-based system engineering, we present methods for verifying the operation of a prototypical quantum random number generator (QRNG). We begin with the algorithmic design of the QRNG followed by the synthesis of its physical design requirements. We next discuss how quantum statistical testing can be used to verify device behavior as well as detect device bias. We conclude by highlighting how system design and verification methods must influence efforts to certify future quantum technologies.
Evaluation of the Wishart test statistics for polarimetric SAR data
DEFF Research Database (Denmark)
Skriver, Henning; Nielsen, Allan Aasbjerg; Conradsen, Knut
2003-01-01
A test statistic for equality of two covariance matrices following the complex Wishart distribution has previously been used in new algorithms for change detection, edge detection and segmentation in polarimetric SAR images. Previously, the results for change detection and edge detection have been...... quantitatively evaluated. This paper deals with the evaluation of segmentation. A segmentation performance measure originally developed for single-channel SAR images has been extended to polarimetric SAR images, and used to evaluate segmentation for a merge-using-moment algorithm for polarimetric SAR data....
Development of modelling algorithm of technological systems by statistical tests
Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.
2018-03-01
The paper tackles the problem of economic assessment of design efficiency for various technological systems at the stage of their operation. The modelling algorithm of a technological system, implemented using statistical tests and taking account of the reliability index, allows estimating the level of machinery technical excellence and assessing the efficiency of design reliability against its performance. Economic feasibility of its application shall be determined on the basis of the service quality of a technological system, with further forecasting of volumes and the range of spare parts supply.
Reliability assessment for safety critical systems by statistical random testing
International Nuclear Information System (INIS)
Mills, S.E.
1995-11-01
In this report we present an overview of reliability assessment for software and focus on some basic aspects of assessing reliability for safety-critical systems by statistical random testing. We also discuss possible deviations from some essential assumptions on which the general methodology is based; these deviations appear quite likely in practical applications. We present and discuss possible remedies and adjustments, and then apply this methodology to a portion of the SDS1 software. We also indicate shortcomings of the methodology and possible avenues to follow to address these problems. (author). 128 refs., 11 tabs., 31 figs.
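The reliability argument behind statistical random testing can be illustrated with the standard zero-failure bound (a hedged sketch, not the report's SDS1 analysis; the function name and the 95% default are illustrative choices):

```python
import math

def failure_prob_upper_bound(n_tests: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on the per-demand failure probability after
    n_tests statistically random tests with zero observed failures."""
    # P(no failures in n tests | p) = (1 - p)^n; solve (1 - p)^n = 1 - C for p
    return 1.0 - (1.0 - confidence) ** (1.0 / n_tests)

# Roughly 3000 failure-free tests are needed to claim ~1e-3 at 95% confidence
bound = failure_prob_upper_bound(2995, 0.95)
```

This is why statistical random testing budgets grow quickly with the reliability target: each extra decade of claimed reliability multiplies the required number of failure-free tests by ten.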
Shirota, Yukari; Hashimoto, Takako; Fitri Sari, Riri
2018-03-01
Visualizing time-series big data has become very important. In this paper we discuss a new analysis method called "statistical shape analysis" or "geometry-driven statistics" applied to time-series statistical data in economics. We analyse the changes in agriculture value added and industry value added (as percentages of GDP) from 2000 to 2010 in Asia. We handle the data as a set of landmarks on a two-dimensional image to see the deformation using the principal components. The point of the analysis method is the principal components of the given formation, which are eigenvectors of its bending energy matrix. The local deformation can be expressed as a set of non-affine transformations, which give us information about the local differences between 2000 and 2010. Because a non-affine transformation can be decomposed into a set of partial warps, we present the partial warps visually. Statistical shape analysis is widely used in biology, but no application to economics can be found; in this paper, we investigate its potential to analyse economic data.
Statistical characteristics of mechanical heart valve cavitation in accelerated testing.
Wu, Changfu; Hwang, Ned H C; Lin, Yu-Kweng M
2004-07-01
Cavitation damage has been observed on mechanical heart valves (MHVs) undergoing accelerated testing. Cavitation itself can be modeled as a stochastic process, as it varies from beat to beat of the testing machine. This in-vitro study was undertaken to investigate the statistical characteristics of MHV cavitation. A 25-mm St. Jude Medical bileaflet MHV (SJM 25) was tested in an accelerated tester at various pulse rates, ranging from 300 to 1,000 bpm, with stepwise increments of 100 bpm. A miniature pressure transducer was placed near a leaflet tip on the inflow side of the valve, to monitor regional transient pressure fluctuations at instants of valve closure. The pressure trace associated with each beat was passed through a 70 kHz high-pass digital filter to extract the high-frequency oscillation (HFO) components resulting from the collapse of cavitation bubbles. Three intensity-related measures were calculated for each HFO burst: its time span; its local root-mean-square (LRMS) value; and the area enveloped by the absolute value of the HFO pressure trace and the time axis, referred to as cavitation impulse. These were treated as stochastic processes, of which the first-order probability density functions (PDFs) were estimated for each test rate. Both the LRMS value and the cavitation impulse were log-normally distributed, and the time span was normally distributed. These distribution laws were consistent at different test rates. The present investigation was directed at understanding MHV cavitation as a stochastic process. The results provide a basis for establishing further the statistical relationship between cavitation intensity and time-evolving cavitation damage on MHV surfaces. These data are required to assess and compare the performance of MHVs of different designs.
Statistical modeling of dental unit water bacterial test kit performance.
Cohen, Mark E; Harte, Jennifer A; Stone, Mark E; O'Connor, Karen H; Coen, Michael L; Cullum, Malford E
2007-01-01
While it is important to monitor dental water quality, it is unclear whether in-office test kits provide bacterial counts comparable to the gold standard method (R2A). Studies were conducted on specimens with known bacterial concentrations, and from dental units, to evaluate test kit accuracy across a range of bacterial types and loads. Colony forming units (CFU) were counted for samples from each source, using R2A and two types of test kits, and conformity to Poisson distribution expectations was evaluated. Poisson regression was used to test for effects of source and device, and to estimate rate ratios for kits relative to R2A. For all devices, distributions were Poisson for low CFU/mL when only beige-pigmented bacteria were considered. For higher counts, R2A remained Poisson, but kits exhibited over-dispersion. Both kits undercounted relative to R2A, but the degree of undercounting was reasonably stable. Kits did not grow pink-pigmented bacteria from dental-unit water identified as Methylobacterium rhodesianum. Only one of the test kits provided results with adequate reliability at higher bacterial concentrations. Undercount bias could be estimated for this device and used to adjust test kit results. Insensitivity to Methylobacterium spp. is problematic.
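The over-dispersion check described above can be sketched with the classical index-of-dispersion statistic (an illustration under a simple Poisson model; the function name and the normal approximation to the chi-square reference distribution are our assumptions, not the study's exact procedure):

```python
import math
import statistics

def poisson_dispersion_z(counts):
    """Index-of-dispersion test: under a Poisson model the statistic
    D = (n-1) * sample_variance / sample_mean is approximately
    chi-square with n-1 degrees of freedom.  Returns a z-score via the
    normal approximation; a large positive z signals over-dispersion."""
    n = len(counts)
    mean = statistics.fmean(counts)
    var = statistics.variance(counts)  # sample variance (ddof = 1)
    d = (n - 1) * var / mean
    return (d - (n - 1)) / math.sqrt(2 * (n - 1))
```

Counts whose variance greatly exceeds their mean (as seen with the test kits at higher loads) produce a large positive z, while genuinely Poisson counts stay near zero.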
Statistics for Ratios of Rayleigh, Rician, Nakagami-m, and Weibull Distributed Random Variables
Directory of Open Access Journals (Sweden)
Dragana Č. Pavlović
2013-01-01
Full Text Available The distributions of ratios of random variables are of interest in many areas of the sciences. In this brief paper, we present the joint probability density function (PDF) and the PDF of the maximum of the ratios μ1=R1/r1 and μ2=R2/r2 for the cases where R1, R2, r1, and r2 are Rayleigh, Rician, Nakagami-m, and Weibull distributed random variables. Random variables R1 and R2, as well as random variables r1 and r2, are correlated. Given the suitability of the Weibull distribution for describing fading in both indoor and outdoor environments, special attention is dedicated to the case of Weibull random variables. For this case, analytical expressions for the joint PDF, the PDF of the maximum, the PDF of the minimum, and the product moments of an arbitrary number of ratios μi=Ri/ri, i=1,…,L are obtained. The random variables in the numerator, Ri, as well as those in the denominator, ri, are exponentially correlated. To the best of the authors' knowledge, the analytical expressions for the PDF of the minimum and the product moments of {μi}, i=1,…,L are novel in the open technical literature. The proposed mathematical analysis is complemented by various numerical results. An application of the presented theoretical results is illustrated with respect to the performance assessment of wireless systems.
Statistical tests for power-law cross-correlated processes
Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene
2011-12-01
For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlation analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically verified the Cauchy inequality -1≤ρDCCA(T,n)≤1. Here we derive -1≤ρDCCA(T,n)≤1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine—and for nonoverlapping windows we derive—that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
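A minimal, pure-Python sketch of the detrended cross-correlation coefficient ρDCCA(T,n) with overlapping windows (helper and function names are ours; published implementations may differ in detrending and window conventions):

```python
import math

def _linfit_residuals(y):
    """Least-squares linear detrend of y against 0..m-1; return residuals."""
    m = len(y)
    xs = range(m)
    xbar = (m - 1) / 2
    ybar = sum(y) / m
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (yi - ybar) for x, yi in zip(xs, y))
    b = sxy / sxx
    a = ybar - b * xbar
    return [yi - (a + b * x) for x, yi in zip(xs, y)]

def rho_dcca(x, y, n):
    """Detrended cross-correlation coefficient rho_DCCA(T, n) for window
    size n, using overlapping windows of the integrated profiles."""
    t = len(x)
    mx, my = sum(x) / t, sum(y) / t
    px, py, sx, sy = [], [], 0.0, 0.0
    for xi, yi in zip(x, y):       # integrated (cumulative) profiles
        sx += xi - mx
        sy += yi - my
        px.append(sx)
        py.append(sy)
    f2xy = f2x = f2y = 0.0
    for start in range(t - n):     # overlapping windows of n+1 points
        rx = _linfit_residuals(px[start:start + n + 1])
        ry = _linfit_residuals(py[start:start + n + 1])
        f2xy += sum(a * b for a, b in zip(rx, ry))
        f2x += sum(a * a for a in rx)
        f2y += sum(b * b for b in ry)
    return f2xy / math.sqrt(f2x * f2y)
```

By construction the coefficient is 1 for a series against itself and -1 against its negation, consistent with the bound -1 ≤ ρDCCA ≤ 1 discussed in the abstract.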
Improved anomaly detection using multi-scale PLS and generalized likelihood ratio test
Madakyaru, Muddu
2017-02-16
Process monitoring has a central role in the process industry to enhance productivity, efficiency, and safety, and to avoid expensive maintenance. In this paper, a statistical approach that exploits the advantages of multiscale PLS models (MSPLS) and those of a generalized likelihood ratio (GLR) test to better detect anomalies is proposed. Specifically, to consider the multivariate and multi-scale nature of process dynamics, an MSPLS algorithm combining PLS and wavelet analysis is used as the modeling framework. Then, GLR hypothesis testing is applied using the uncorrelated residuals obtained from the MSPLS model to improve the anomaly detection abilities of these latent-variable-based fault detection methods even further. Applications to simulated distillation column data are used to evaluate the proposed MSPLS-GLR algorithm.
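The GLR step can be illustrated for the simplest case, a mean shift in Gaussian model residuals (a sketch only; the MSPLS residual modeling itself is not reproduced here, the function names are ours, and 3.84 is just the 95% point of the chi-square(1) distribution):

```python
def glr_mean_shift(residuals, sigma=1.0):
    """Generalized likelihood ratio statistic for a mean shift in Gaussian
    residuals with known sigma: 2*log(LR) = n * mean^2 / sigma^2, which is
    chi-square(1) distributed under the no-fault hypothesis."""
    n = len(residuals)
    mean = sum(residuals) / n
    return n * mean * mean / (sigma * sigma)

def glr_alarm(residuals, sigma=1.0, threshold=3.84):
    """Flag an anomaly when the GLR statistic exceeds the chosen
    chi-square(1) quantile (3.84 ~ 95%)."""
    return glr_mean_shift(residuals, sigma) > threshold
```

Because the statistic aggregates all n residuals, even a small sustained shift eventually crosses the threshold, which is the property the MSPLS-GLR combination exploits.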
Improved anomaly detection using multi-scale PLS and generalized likelihood ratio test
Madakyaru, Muddu; Harrou, Fouzi; Sun, Ying
2017-01-01
Process monitoring has a central role in the process industry to enhance productivity, efficiency, and safety, and to avoid expensive maintenance. In this paper, a statistical approach that exploits the advantages of multiscale PLS models (MSPLS) and those of a generalized likelihood ratio (GLR) test to better detect anomalies is proposed. Specifically, to consider the multivariate and multi-scale nature of process dynamics, an MSPLS algorithm combining PLS and wavelet analysis is used as the modeling framework. Then, GLR hypothesis testing is applied using the uncorrelated residuals obtained from the MSPLS model to improve the anomaly detection abilities of these latent-variable-based fault detection methods even further. Applications to simulated distillation column data are used to evaluate the proposed MSPLS-GLR algorithm.
Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry
2017-05-01
The United States Clean Water Act stipulates in section 303(d) that states must identify impaired water bodies, for which total maximum daily loads (TMDLs) of pollution inputs are developed. Decision-making procedures about how to list, or delist, water bodies as impaired, or not, per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to be measured to achieve Type I and Type II error rates comparable to those of the current fixed-sample binomial test. Policymakers might consider efficient alternatives, such as the SPRT, to the current procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
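California's fixed-sample procedure rests on the exact binomial tail probability, which can be computed directly (a sketch; the function name and example parameters are illustrative, not the regulatory values):

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided exact binomial
    p-value for observing k or more exceedances among n samples."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))
```

The SPRT, by contrast, updates a likelihood ratio after each sample and stops as soon as either error boundary is crossed, which is why it needs fewer samples on average than a fixed-n test using this tail probability.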
International Nuclear Information System (INIS)
Gupta, S.S.; Panchapakesan, S.
1975-01-01
A quantile selection procedure in reliability problems pertaining to a restricted family of probability distributions is discussed. This family is assumed to be star-ordered with respect to the standard normal distribution folded at the origin. Motivation for this formulation of the problem is described. Both exact and asymptotic results dealing with the distribution of the maximum of ratios of order statistics from such a family are obtained, and tables of the appropriate constants, percentiles of this statistic, are given in order to facilitate the use of the selection procedure.
The diagnostic odds ratio: a single indicator of test performance
Glas, Afina S.; Lijmer, Jeroen G.; Prins, Martin H.; Bonsel, Gouke J.; Bossuyt, Patrick M. M.
2003-01-01
Diagnostic testing can be used to discriminate subjects with a target disorder from subjects without it. Several indicators of diagnostic performance have been proposed, such as sensitivity and specificity. Using paired indicators can be a disadvantage in comparing the performance of competing
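The diagnostic odds ratio collapses the paired indicators into a single number (a minimal sketch; the function names are ours):

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP*TN) / (FP*FN): odds of a positive test among the diseased
    divided by the odds of a positive test among the non-diseased."""
    return (tp * tn) / (fp * fn)

def dor_from_rates(sensitivity, specificity):
    """Equivalent form in terms of sensitivity and specificity."""
    return (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)
```

Both forms give the same value, which is the sense in which the DOR merges sensitivity and specificity into one indicator; a DOR of 1 means the test does not discriminate at all.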
Urine Test: Microalbumin-to-Creatinine Ratio (For Parents)
... could interfere with test results. Be sure to review all your child's medications with your doctor. The Procedure Your child will be asked to urinate (pee) into a clean sample cup in the doctor's office or at home. Collecting the specimen should only take a few minutes. If your child isn' ...
Thompson, William C; Newman, Eryn J
2015-08-01
Forensic scientists have come under increasing pressure to quantify the strength of their evidence, but it is not clear which of several possible formats for presenting quantitative conclusions will be easiest for lay people, such as jurors, to understand. This experiment examined the way that people recruited from Amazon's Mechanical Turk (n = 541) responded to 2 types of forensic evidence--a DNA comparison and a shoeprint comparison--when an expert explained the strength of this evidence 3 different ways: using random match probabilities (RMPs), likelihood ratios (LRs), or verbal equivalents of likelihood ratios (VEs). We found that verdicts were sensitive to the strength of DNA evidence regardless of how the expert explained it, but verdicts were sensitive to the strength of shoeprint evidence only when the expert used RMPs. The weight given to DNA evidence was consistent with the predictions of a Bayesian network model that incorporated the perceived risk of a false match from 3 causes (coincidence, a laboratory error, and a frame-up), but shoeprint evidence was undervalued relative to the same Bayesian model. Fallacious interpretations of the expert's testimony (consistent with the source probability error and the defense attorney's fallacy) were common and were associated with the weight given to the evidence and verdicts. The findings indicate that perceptions of forensic science evidence are shaped by prior beliefs and expectations as well as expert testimony and consequently that the best way to characterize and explain forensic evidence may vary across forensic disciplines. (c) 2015 APA, all rights reserved.
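The likelihood-ratio format maps onto Bayesian updating of odds, which also shows why the source probability error is a fallacy (illustrative functions; the numbers below are hypothetical):

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds of the prosecution
    hypothesis = prior odds x likelihood ratio of the evidence."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds to a probability."""
    return odds / (1.0 + odds)
```

A random match probability of one in a million (LR of 10^6) combined with prior odds of one in a million yields a posterior probability of only 0.5, not near-certain guilt; treating the match probability itself as the probability of innocence is exactly the source probability error.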
ALTMANN, BERTHOLD
After a brief summary of the test program (described more fully in LI 000 318), the statistical results, tabulated as overall "ABC (Approach by Concept) relevance ratios" and "ABC recall figures," are presented and reviewed. An abstract model developed in accordance with Max Weber's "Idealtypus" ("Die Objektivitaet…
Binary star statistics: the mass ratio distribution for very wide systems
International Nuclear Information System (INIS)
Trimble, V.
1987-01-01
The distribution of mass ratios for a sample of common proper motion (CPM) binaries is determined and compared with that of 798 visual binaries (VB's) studied earlier, in hopes of answering the question: Can the member stars of these systems have been drawn at random from the normal initial mass function for single stars? The observed distributions peak strongly toward q = 1.0 for both kinds of systems, but less strongly for the CPM's than for the VB's. Due allowance having been made for assorted observational selection effects, it seems quite probable that the CPM's represent the observed part of a population drawn at random from the normal IMF, while the VB's are much more difficult to interpret that way and could, perhaps, result from a formation mechanism that somewhat favors systems with roughly equal components. (author)
Why the null matters: statistical tests, random walks and evolution.
Sheets, H D; Mitchell, C E
2001-01-01
A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test', in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer-generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise-immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests shows that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
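The runs test mentioned above, which looks at the sequencing of steps rather than their excursion, can be sketched as follows (function name ours; zero steps are simply dropped in this illustration):

```python
import math

def runs_test_z(steps):
    """Wald-Wolfowitz runs test on the signs of successive changes.
    Under a random walk, the number of runs R among n1 positive and n2
    negative steps has mean 1 + 2*n1*n2/n and a known variance; returns
    the normal z-score (large |z| => non-random sequencing)."""
    signs = [1 if s > 0 else -1 for s in steps if s != 0]
    n1 = sum(1 for s in signs if s > 0)
    n2 = len(signs) - n1
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n = n1 + n2
    mu = 1 + 2 * n1 * n2 / n
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n * n * (n - 1))
    return (runs - mu) / math.sqrt(var)
```

Strictly alternating steps (too many runs, as under strong stabilizing dynamics) give a large positive z, while a long monotone trend (too few runs) gives a large negative z.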
Determination of Geometrical REVs Based on Volumetric Fracture Intensity and Statistical Tests
Directory of Open Access Journals (Sweden)
Ying Liu
2018-05-01
Full Text Available This paper presents a method to estimate a representative element volume (REV) of a fractured rock mass based on the volumetric fracture intensity P32 and statistical tests. A 150 m × 80 m × 50 m 3D fracture network model was generated based on field data collected at the Maji dam site by using the rectangular window sampling method. The volumetric fracture intensity P32 of each cube was calculated by varying the cube location in the generated 3D fracture network model and varying the cube side length from 1 to 20 m, and the distribution of the P32 values was described. The size effect and spatial effect of the fractured rock mass were studied; the P32 values from the same cube sizes and different locations were significantly different, and the fluctuation in P32 values clearly decreases as the cube side length increases. In this paper, a new method that comprehensively considers the anisotropy of rock masses, simplicity of calculation and differences between different methods was proposed to estimate the geometrical REV size. The geometrical REV size of the fractured rock mass was determined based on the volumetric fracture intensity P32 and two statistical test methods, namely, the likelihood ratio test and the Wald–Wolfowitz runs test. The results of the two statistical tests were substantially different; critical cube sizes of 13 m and 12 m were estimated by the Wald–Wolfowitz runs test and the likelihood ratio test, respectively. Because the different test methods emphasize different considerations and impact factors, and considering a cube size accepted by both tests, the larger value, 13 m, was selected as the geometrical REV size of the fractured rock mass at the Maji dam site in China.
International Nuclear Information System (INIS)
Kolitsch, S.; Gänser, H.-P.; Maierhofer, J.; Pippan, R.
2016-01-01
Cracks in components reduce the endurable stress, so that the endurance limit obtained from common smooth fatigue specimens can no longer be used as a design criterion. In such cases, the Kitagawa-Takahashi diagram can be used to predict the admissible stress range for infinite life at a given crack length and stress range. This diagram is constructed for a single load ratio R. However, in typical mechanical engineering applications, the load ratio R varies widely due to the applied load spectra and residual stresses. In the present work an extended Kitagawa-Takahashi diagram accounting for crack length, crack extension and load ratio is constructed. To describe the threshold behaviour of short cracks, a master resistance curve valid for a wide range of steels is developed using a statistical approach. (paper)
15N/14N isotopic ratio and statistical analysis: an efficient way of linking seized Ecstasy tablets
International Nuclear Information System (INIS)
Palhol, Fabien; Lamoureux, Catherine; Chabrillat, Martine; Naulet, Norbert
2004-01-01
In this study, the 15N/14N isotopic ratios of 106 samples of 3,4-methylenedioxymethamphetamine (MDMA) extracted from Ecstasy tablets are presented. These ratios, measured using gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS), show a large discrimination between samples, with a range of δ15N values between -17 and +19‰, depending on the precursors and the method used in clandestine laboratories. Thus, δ15N values can be used in a statistical analysis carried out in order to link Ecstasy tablets prepared with the same precursors and synthetic pathway. The similarity index obtained after principal component analysis and hierarchical cluster analysis appears to be an efficient way to group tablets seized in different places.
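The δ15N notation used here is the standard per-mil deviation from atmospheric N2 (a sketch; 0.0036765 is the commonly quoted 15N/14N ratio of air, and the function name is ours):

```python
AIR_N2_RATIO = 0.0036765  # commonly quoted 15N/14N ratio of atmospheric N2

def delta_15n_permil(sample_ratio):
    """delta-15N in per mil (percent-of-a-tenth) relative to air N2:
    delta = (R_sample / R_standard - 1) * 1000."""
    return (sample_ratio / AIR_N2_RATIO - 1.0) * 1000.0
```

The -17 to +19‰ spread reported above thus corresponds to measured 15N/14N ratios roughly 1.7% below to 1.9% above the atmospheric reference, which is ample separation for clustering tablets by precursor and synthetic route.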
Transfer of drug dissolution testing by statistical approaches: Case study
AL-Kamarany, Mohammed Amood; EL Karbane, Miloud; Ridouan, Khadija; Alanazi, Fars K.; Hubert, Philippe; Cherrah, Yahia; Bouklouze, Abdelaziz
2011-01-01
Analytical transfer is a complete process that consists in transferring an analytical procedure from a sending laboratory to a receiving laboratory, after it has been experimentally demonstrated that the receiving laboratory also masters the procedure, in order to avoid problems in the future. Method transfer is now commonplace during the life cycle of an analytical method in the pharmaceutical industry. No official guideline exists for a transfer methodology in pharmaceutical analysis, and the regulatory wording on transfer is more ambiguous than that for validation. Therefore, in this study, gauge repeatability and reproducibility (R&R) studies associated with other appropriate multivariate statistics were successfully applied to the transfer of the dissolution test of diclofenac sodium, as a case study, from a sending laboratory A (accredited laboratory) to a receiving laboratory B. The HPLC method for the determination of the percent release of diclofenac sodium in solid pharmaceutical forms (one the originator product and the other a generic) was validated using the accuracy profile (total error) approach in the sending laboratory A. The results showed that the receiving laboratory B masters the dissolution test process, using the same HPLC analytical procedure developed in laboratory A. In conclusion, if the sender used the total error approach to validate its analytical method, the dissolution test can be successfully transferred without the receiving laboratory B repeating the analytical method validation, and the state of the pharmaceutical analysis method should be maintained to ensure the same reliable results in the receiving laboratory. PMID:24109204
Comparison of Statistical Methods for Detector Testing Programs
Energy Technology Data Exchange (ETDEWEB)
Rennie, John Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Abhold, Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-10-14
A typical goal for any detector testing program is to ascertain not only the performance of the detector systems under test, but also the confidence that systems accepted using that testing program’s acceptance criteria will exceed a minimum acceptable performance (which is usually expressed as the minimum acceptable success probability, p). A similar problem often arises in statistics, where we would like to ascertain the fraction, p, of a population of items that possess a property that may take one of two possible values. Typically, the problem is approached by drawing a fixed sample of size n, with the number of items out of n that possess the desired property, x, being termed successes. The sample mean gives an estimate of the population mean p ≈ x/n, although usually it is desirable to accompany such an estimate with a statement concerning the range within which p may fall and the confidence associated with that range. Procedures for establishing such ranges and confidence limits are described in detail by Clopper, Brown, and Agresti for two-sided symmetric confidence intervals.
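The exact (Clopper-Pearson) interval referred to above can be computed by bisection on the binomial tail probabilities, with no beta-quantile library needed (a sketch; function names and the tolerance are ours):

```python
from math import comb

def _binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05, tol=1e-10):
    """Exact two-sided Clopper-Pearson confidence interval for a binomial
    proportion with x successes in n trials, found by bisection."""
    def solve(f, lo, hi):
        # f is True below the boundary, False above; bisect to the switch
        while hi - lo > tol:
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if f(mid) else (lo, mid)
        return (lo + hi) / 2
    lower = 0.0 if x == 0 else solve(
        lambda p: 1 - _binom_cdf(x - 1, n, p) < alpha / 2, 0.0, 1.0)
    upper = 1.0 if x == n else solve(
        lambda p: _binom_cdf(x, n, p) > alpha / 2, 0.0, 1.0)
    return lower, upper
```

For a detector program this gives, e.g., a 95% upper bound of about 0.116 on the failure probability after 0 failures in 30 trials, which is the kind of statement an acceptance criterion can be built on.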
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Conradsen, Knut; Skriver, Henning
2016-01-01
Based on an omnibus likelihood ratio test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution, with an associated p-value, and a factorization of this test statistic, change analysis in a short sequence of multilook, polarimetric SAR data in the covariance matrix representation is carried out. The omnibus test statistic and its factorization detect if and when change(s) occur. The technique is demonstrated on airborne EMISAR L-band data but may be applied to Sentinel-1, Cosmo-SkyMed, TerraSAR-X, ALOS and RadarSat-2 or other dual- and quad-polarisation SAR data.
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
Rivoirard, Romain; Duplay, Vianney; Oriol, Mathieu; Tinquaut, Fabien; Chauvin, Franck; Magne, Nicolas; Bourmaud, Aurelie
2016-01-01
Quality of reporting for Randomized Clinical Trials (RCTs) in oncology has been analyzed in several systematic reviews, but there is a paucity of data on outcome definitions and on the consistency of reporting of statistical tests in RCTs and Observational Studies (OBS). The objective of this review was to describe these two reporting aspects for OBS and RCTs in oncology. From a list of 19 medical journals, three were retained for analysis after random selection: British Medical Journal (BMJ), Annals of Oncology (AoO) and British Journal of Cancer (BJC). All original articles published between March 2009 and March 2014 were screened. Only studies whose main outcome was accompanied by a corresponding statistical test were included in the analysis; studies based on censored data were excluded. The primary outcome was to assess the quality of reporting of the primary outcome measure in RCTs and of the variables of interest in OBS. A logistic regression was performed to identify study covariates potentially associated with concordance of tests between the Methods and Results sections. 826 studies were included in the review, of which 698 were OBS. Variables were described in the Methods section for all OBS, and the primary endpoint was clearly detailed in the Methods section for 109 RCTs (85.2%). 295 OBS (42.2%) and 43 RCTs (33.6%) had perfect agreement between the statistical test reported in the Methods and Results sections. In multivariable analysis, the variable "number of included patients in study" was associated with test consistency: the adjusted odds ratio (aOR) for the third group compared to the first was aOR Grp3 = 0.52 [0.31-0.89] (P value = 0.009). Variables in OBS and the primary endpoint in RCTs are reported and described with high frequency. However, consistency of statistical tests between the Methods and Results sections of OBS is not always observed. We therefore encourage authors and peer reviewers to verify the consistency of statistical tests in oncology studies.
Monte Carlo simulation of the sequential probability ratio test for radiation monitoring
International Nuclear Information System (INIS)
Coop, K.L.
1984-01-01
A computer program simulates the Sequential Probability Ratio Test (SPRT) using Monte Carlo techniques. The program, SEQTEST, performs random-number sampling of either a Poisson or a normal distribution to simulate radiation monitoring data. The results are in terms of the detection probabilities and the average time required for a trial. The computed SPRT results can be compared with tabulated single interval test (SIT) values to determine the better statistical test for particular monitoring applications. Use of the SPRT in a hand-and-foot alpha monitor shows that the SPRT provides better detection probabilities while generally requiring less counting time. Calculations are also performed for a monitor where the SPRT is not permitted to take longer than the single interval test. Although the performance of the SPRT is degraded by this restriction, the detection probabilities are still similar to the SIT values, and average counting times are always less than 75% of the SIT time. Some optimal conditions for use of the SPRT are described. The SPRT should be the test of choice in many radiation monitoring situations. 6 references, 8 figures, 1 table
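A miniature version of such a Monte Carlo simulation for a Poisson-counting SPRT can be sketched as follows (an illustration, not the SEQTEST program; the rates, error targets, and the Poisson-by-exponential-gaps sampler are our illustrative choices):

```python
import math
import random

def sprt_trial(true_rate, h0_rate, h1_rate, alpha=0.05, beta=0.05,
               t=1.0, max_steps=10000, rng=random):
    """One Monte Carlo trial of Wald's SPRT for a Poisson count rate
    (background h0_rate vs. source-present h1_rate), counting in
    intervals of length t.  Returns (decided_h1, n_intervals)."""
    lower = math.log(beta / (1 - alpha))   # accept-H0 boundary
    upper = math.log((1 - beta) / alpha)   # accept-H1 boundary
    llr = 0.0
    for step in range(1, max_steps + 1):
        # sample a Poisson count by accumulating exponential arrival gaps
        k, acc = 0, rng.expovariate(true_rate)
        while acc < t:
            k += 1
            acc += rng.expovariate(true_rate)
        # log-likelihood-ratio increment for a Poisson observation
        llr += k * math.log(h1_rate / h0_rate) - (h1_rate - h0_rate) * t
        if llr <= lower:
            return False, step
        if llr >= upper:
            return True, step
    return llr >= 0, max_steps
```

Running many such trials at a given true rate estimates the detection probability and the average counting time, which is exactly the comparison against tabulated single-interval-test values described in the abstract.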
Statistical Analysis of the Polarimetric Cloud Analysis and Seeding Test (POLCAST) Field Projects
Ekness, Jamie Lynn
The North Dakota farming industry brings in more than $4.1 billion annually in cash receipts. Unfortunately, agriculture sales vary significantly from year to year, due in large part to weather events such as hail storms and droughts. One method to mitigate drought is to use hygroscopic seeding to increase the precipitation efficiency of clouds. The North Dakota Atmospheric Research Board (NDARB) sponsored the Polarimetric Cloud Analysis and Seeding Test (POLCAST) research project to determine the effectiveness of hygroscopic seeding in North Dakota. The POLCAST field projects obtained airborne and radar observations while conducting randomized cloud seeding. The Thunderstorm Identification Tracking and Nowcasting (TITAN) program is used to analyze radar data (33 usable cases) to determine differences in storm duration, rain rate and total rain amount between seeded and non-seeded clouds. The single ratio of seeded to non-seeded cases is 1.56 (0.28 mm/0.18 mm), a 56% increase in average hourly rainfall during the first 60 minutes after target selection. A seeding effect is indicated, with the lifetime of seeded storms increasing by 41% relative to non-seeded clouds during the first 60 minutes after the seeding decision. A double ratio statistic, comparing the radar-derived rain amount of the last 40 minutes of a case (seed/non-seed) to that of the first 20 minutes (seed/non-seed), is used to account for the natural variability of the cloud system and gives a double ratio of 1.85. The Mann-Whitney test on the double ratio of seeded to non-seeded cases (33 cases) gives a significance (p-value) of 0.063. Bootstrapping analysis of the POLCAST data set indicates that 50 cases would provide statistically significant results based on the Mann-Whitney test of the double ratio. All the statistical analyses conducted on the POLCAST data set indicate that hygroscopic seeding in North Dakota does increase precipitation. While an additional POLCAST field
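The double-ratio statistic and the Mann-Whitney test described above can be sketched in a few lines; this is a hypothetical re-implementation, not the TITAN analysis code, and it uses the normal approximation for the Mann-Whitney p-value:

```python
import math
from statistics import NormalDist

def double_ratio(seed_early, seed_late, noseed_early, noseed_late):
    """Double ratio: (seed/no-seed rain, late period) divided by
    (seed/no-seed rain, early period). The early-period ratio
    normalizes out natural seed/no-seed differences that exist
    before seeding can have any effect."""
    mean = lambda v: sum(v) / len(v)
    return (mean(seed_late) / mean(noseed_late)) / \
           (mean(seed_early) / mean(noseed_early))

def mann_whitney_p(xs, ys):
    """Two-sided Mann-Whitney p-value via the normal approximation."""
    n1, n2 = len(xs), len(ys)
    # U counts (x, y) pairs with x > y; ties count as 1/2
    u = sum(1.0 if x > y else 0.5 if x == y else 0.0
            for x in xs for y in ys)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    if sigma == 0.0:
        return 1.0
    z = (u - mu) / sigma
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))
```

With per-case double ratios in place of the toy inputs, `mann_whitney_p` yields the kind of borderline significance (p near 0.063 for 33 cases) quoted in the abstract.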
A statistical design for testing apomictic diversification through linkage analysis.
Zeng, Yanru; Hou, Wei; Song, Shuang; Feng, Sisi; Shen, Lin; Xia, Guohua; Wu, Rongling
2014-03-01
The capacity of apomixis to generate maternal clones through seed reproduction has made it a useful characteristic for the fixation of heterosis in plant breeding. It has been observed that apomixis displays pronounced intra- and interspecific diversification, but the genetic mechanisms underlying this diversification remain elusive, obstructing the exploitation of this phenomenon in practical breeding programs. By capitalizing on molecular information in mapping populations, we describe and assess a statistical design that deploys linkage analysis to estimate and test the pattern and extent of apomictic differences at various levels, from genotypes to species. The design is based on two reciprocal crosses between two individuals, each chosen from a hermaphrodite or monoecious species. A multinomial likelihood is constructed by combining marker information from the two crosses. The EM algorithm is implemented to estimate the rate of apomixis and to test its difference between the two parental plant populations or species. The design is validated by computer simulation. A real data analysis of two reciprocal crosses between hickory (Carya cathayensis) and pecan (C. illinoensis) demonstrates the utility of the design in practice. The design provides a tool to address fundamental and applied questions related to the evolution and breeding of apomixis.
Directory of Open Access Journals (Sweden)
Özlem TÜRKŞEN
2018-03-01
Some experimental designs consist of replicated response measures in which the replications cannot be identified exactly and may involve uncertainty beyond randomness. Classical regression analysis may then be improper for modeling the designed data because probabilistic modeling assumptions are violated. In this case, fuzzy regression analysis can be used as a modeling tool. In this study, the replicated response values are formed into fuzzy numbers by using descriptive statistics of the replications and the golden ratio. The main aim of the study is to obtain the most suitable fuzzy model for replicated response measures through fuzzification of the replicated values, taking into account the data structure of the replications in a statistical framework. Here, the response and the unknown model coefficients are considered as triangular type-1 fuzzy numbers (TT1FNs) whereas the inputs are crisp. Predicted fuzzy models are obtained according to the proposed fuzzification rules by using the Fuzzy Least Squares (FLS) approach. The performances of the predicted fuzzy models are compared by using the Root Mean Squared Error (RMSE) criterion. A data set from the literature, called the wheel cover component data set, is used to illustrate the performance of the proposed approach, and the obtained results are discussed. The calculation results show that the combined formulation of the descriptive statistics and the golden ratio is the most preferable fuzzification rule according to the well-known decision-making method TOPSIS.
To test photon statistics by atomic beam deflection
International Nuclear Information System (INIS)
Wang Yuzhu; Chen Yudan; Huang Weigang; Liu Liang
1985-02-01
There exists a simple relation between photon statistics in resonance fluorescence and the statistics of the momentum transferred to an atom by a plane travelling wave [Cook, R.J., Opt. Commun., 35, 347 (1980)]. Using atomic beam deflection by light pressure, we have observed sub-Poissonian statistics in the resonance fluorescence of two-level atoms. (author)
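Sub-Poissonian statistics are conventionally quantified by the Mandel Q parameter, Q = Var(n)/&lt;n&gt; - 1, which is negative for sub-Poissonian light. A minimal sketch for estimating Q from a record of photon (or deflection-derived) counts, illustrative rather than the experiment's actual analysis code:

```python
from statistics import mean, pvariance

def mandel_q(counts):
    """Mandel Q = Var(n)/<n> - 1.
    Q = 0 for Poissonian light, Q < 0 for sub-Poissonian (nonclassical),
    Q > 0 for super-Poissonian light."""
    m = mean(counts)
    return pvariance(counts, mu=m) / m - 1.0
```

For example, a perfectly regular record such as `[2, 2, 2, 2]` gives Q = -1 (zero variance), while Poisson-distributed counts give Q near 0.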
Development and testing of improved statistical wind power forecasting methods.
Energy Technology Data Exchange (ETDEWEB)
Mendes, J.; Bessa, R.J.; Keko, H.; Sumaili, J.; Miranda, V.; Ferreira, C.; Gama, J.; Botterud, A.; Zhou, Z.; Wang, J. (Decision and Information Sciences); (INESC Porto)
2011-12-06
Wind power forecasting (WPF) provides important inputs to power system operators and electricity market participants. It is therefore not surprising that WPF has attracted increasing interest within the electric power industry. In this report, we document our research on improving statistical WPF algorithms for point, uncertainty, and ramp forecasting. Below, we provide a brief introduction to the research presented in the following chapters. For a detailed overview of the state-of-the-art in wind power forecasting, we refer to [1]. Our related work on the application of WPF in operational decisions is documented in [2]. Point forecasts of wind power are highly dependent on the training criteria used in the statistical algorithms that are used to convert weather forecasts and observational data to a power forecast. In Chapter 2, we explore the application of information theoretic learning (ITL) as opposed to the classical minimum square error (MSE) criterion for point forecasting. In contrast to the MSE criterion, ITL criteria do not assume a Gaussian distribution of the forecasting errors. We investigate to what extent ITL criteria yield better results. In addition, we analyze time-adaptive training algorithms and how they enable WPF algorithms to cope with non-stationary data and, thus, to adapt to new situations without requiring additional offline training of the model. We test the new point forecasting algorithms on two wind farms located in the U.S. Midwest. Although there have been advancements in deterministic WPF, a single-valued forecast cannot provide information on the dispersion of observations around the predicted value. We argue that it is essential to generate, together with (or as an alternative to) point forecasts, a representation of the wind power uncertainty. Wind power uncertainty representation can take the form of probabilistic forecasts (e.g., probability density function, quantiles), risk indices (e.g., prediction risk index) or scenarios
A statistical test for the habitable zone concept
Checlair, J.; Abbot, D. S.
2017-12-01
Traditional habitable zone theory assumes that the silicate-weathering feedback regulates the atmospheric CO2 of planets within the habitable zone to maintain surface temperatures that allow for liquid water. There is some non-definitive evidence that this feedback has worked in Earth history, but it is untested in an exoplanet context. A critical prediction of the silicate-weathering feedback is that, on average, within the habitable zone planets that receive a higher stellar flux should have a lower CO2 in order to maintain liquid water at their surface. We can test this prediction directly by using a statistical approach involving low-precision CO2 measurements on many planets with future instruments such as JWST, LUVOIR, or HabEx. The purpose of this work is to carefully outline the requirements for such a test. First, we use a radiative-transfer model to compute the amount of CO2 necessary to maintain surface liquid water on planets for different values of insolation and planetary parameters. We run a large ensemble of Earth-like planets with different masses, atmospheric masses, inert atmospheric composition, cloud composition and level, and other greenhouse gases. Second, we post-process this data to determine the precision with which future instruments such as JWST, LUVOIR, and HabEx could measure the CO2. We then combine the variation due to planetary parameters and observational error to determine the number of planet measurements that would be needed to effectively marginalize over uncertainties and resolve the predicted trend in CO2 vs. stellar flux. The results of this work may influence the usage of JWST and will enhance mission planning for LUVOIR and HabEx.
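A toy version of the proposed statistical test illustrates the sample-size question: simulate surveys of planets whose CO2 (in arbitrary log units) decreases with stellar flux plus scatter, and count how often ordinary least squares detects the negative trend. The slope, scatter, and flux range below are placeholders, not outputs of the radiative-transfer model or the instrument simulations:

```python
import math
import random

def detection_power(n_planets, slope=-1.0, scatter=0.3, trials=500, seed=0):
    """Fraction of simulated surveys in which a decreasing CO2 vs
    stellar-flux trend is detected at roughly the 95% level (|t| > 2).
    All numerical values are illustrative placeholders."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(trials):
        flux = [rng.uniform(0.8, 1.2) for _ in range(n_planets)]
        co2 = [slope * f + rng.gauss(0.0, scatter) for f in flux]
        # ordinary least-squares slope and its standard error
        mx = sum(flux) / n_planets
        my = sum(co2) / n_planets
        sxx = sum((x - mx) ** 2 for x in flux)
        b = sum((x - mx) * (y - my) for x, y in zip(flux, co2)) / sxx
        a = my - b * mx
        resid = sum((y - (a + b * x)) ** 2 for x, y in zip(flux, co2))
        se = math.sqrt(resid / (n_planets - 2) / sxx)
        if b < 0 and abs(b) / se > 2.0:
            detected += 1
    return detected / trials
```

Raising the number of observed planets pushes the detection fraction from well under one half toward near-certainty, mirroring the marginalization-over-uncertainties argument in the abstract.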
Hayen, Andrew; Macaskill, Petra; Irwig, Les; Bossuyt, Patrick
2010-01-01
To explain which measures of accuracy and which statistical methods should be used in studies to assess the value of a new binary test as a replacement test, an add-on test, or a triage test. Selection and explanation of statistical methods, illustrated with examples. Statistical methods for
Decision Support Systems: Applications in Statistics and Hypothesis Testing.
Olsen, Christopher R.; Bozeman, William C.
1988-01-01
Discussion of the selection of appropriate statistical procedures by educators highlights a study conducted to investigate the effectiveness of decision aids in facilitating the use of appropriate statistics. Experimental groups and a control group using a printed flow chart, a computer-based decision aid, and a standard text are described. (11…
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
International Nuclear Information System (INIS)
Williams, O. R.; Bennett, K.; Much, R.; Schoenfelder, V.; Blom, J. J.; Ryan, J.
1997-01-01
The maximum likelihood-ratio method is frequently used in COMPTEL analysis to determine the significance of a point source at a given location. In this paper we do not consider whether the likelihood-ratio at a particular location indicates a detection, but rather whether distributions of likelihood-ratios derived from many locations depart from that expected for source-free data. We have constructed distributions of likelihood-ratios by reading values from standard COMPTEL maximum-likelihood ratio maps at positions corresponding to the locations of different categories of AGN. Distributions derived from the locations of Seyfert galaxies are indistinguishable, according to a Kolmogorov-Smirnov test, from those obtained from "random" locations, but differ slightly from those obtained from the locations of flat-spectrum radio-loud quasars, OVVs, and BL Lac objects. This difference is not due to known COMPTEL sources, since regions near these sources are excluded from the analysis. We suggest that it might arise from a number of sources with fluxes below the COMPTEL detection threshold.
International Nuclear Information System (INIS)
2005-01-01
For the years 2004 and 2005 the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics published in the Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time series over a longer period (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown on the back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption; Carbon dioxide emissions from fossil fuel use; Coal consumption; Consumption of natural gas; Peat consumption; Domestic oil deliveries; Import prices of oil; Consumer prices of principal oil products; Fuel prices in heat production; Fuel prices in electricity production; Price of electricity by type of consumer; Average monthly spot prices at the Nord Pool power exchange; Total energy consumption by source and CO2 emissions; Supplies and total consumption of electricity (GWh); Energy imports by country of origin in January-June 2003; Energy exports by recipient country in January-June 2003; Consumer prices of liquid fuels; Consumer prices of hard coal, natural gas and indigenous fuels; Price of natural gas by type of consumer; Price of electricity by type of consumer; Price of district heating by type of consumer; Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and Energy taxes, precautionary stock fees and oil pollution fees
International Nuclear Information System (INIS)
2001-01-01
For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption; Changes in the volume of GNP and electricity; Coal consumption; Natural gas consumption; Peat consumption; Domestic oil deliveries; Import prices of oil; Consumer prices of principal oil products; Fuel prices for heat production; Fuel prices for electricity production; Carbon dioxide emissions from the use of fossil fuels; Total energy consumption by source and CO2 emissions; Electricity supply; Energy imports by country of origin in 2000; Energy exports by recipient country in 2000; Consumer prices of liquid fuels; Consumer prices of hard coal, natural gas and indigenous fuels; Average electricity price by type of consumer; Price of district heating by type of consumer; Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and Energy taxes and precautionary stock fees on oil products
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption; Changes in the volume of GNP and electricity; Coal consumption; Natural gas consumption; Peat consumption; Domestic oil deliveries; Import prices of oil; Consumer prices of principal oil products; Fuel prices for heat production; Fuel prices for electricity production; Carbon dioxide emissions; Total energy consumption by source and CO2 emissions; Electricity supply; Energy imports by country of origin in January-March 2000; Energy exports by recipient country in January-March 2000; Consumer prices of liquid fuels; Consumer prices of hard coal, natural gas and indigenous fuels; Average electricity price by type of consumer; Price of district heating by type of consumer; Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and Energy taxes and precautionary stock fees on oil products
International Nuclear Information System (INIS)
1999-01-01
For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption; Changes in the volume of GNP and electricity; Coal consumption; Natural gas consumption; Peat consumption; Domestic oil deliveries; Import prices of oil; Consumer prices of principal oil products; Fuel prices for heat production; Fuel prices for electricity production; Carbon dioxide emissions; Total energy consumption by source and CO2 emissions; Electricity supply; Energy imports by country of origin in January-June 1999; Energy exports by recipient country in January-June 1999; Consumer prices of liquid fuels; Consumer prices of hard coal, natural gas and indigenous fuels; Average electricity price by type of consumer; Price of district heating by type of consumer; Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and Energy taxes and precautionary stock fees on oil products
Directory of Open Access Journals (Sweden)
Petersen Ann-Kristin
2012-06-01
Background: Genome-wide association studies (GWAS) with metabolic traits and metabolome-wide association studies (MWAS) with traits of biomedical relevance are powerful tools to identify the contribution of genetic, environmental and lifestyle factors to the etiology of complex diseases. Hypothesis-free testing of ratios between all possible metabolite pairs in GWAS and MWAS has proven to be an innovative approach in the discovery of new biologically meaningful associations. The p-gain statistic was introduced as an ad hoc measure to determine whether a ratio between two metabolite concentrations carries more information than the two corresponding metabolite concentrations alone. So far, only a rule of thumb was applied to determine the significance of the p-gain. Results: Here we explore the statistical properties of the p-gain through simulation of its density and by sampling of experimental data. We derive critical values of the p-gain for different levels of correlation between metabolite pairs and show that B/(2α) is a conservative critical value for the p-gain, where α is the level of significance and B the number of tested metabolite pairs. Conclusions: We show that the p-gain is a well-defined measure that can be used to identify statistically significant metabolite ratios in association studies, and we provide a conservative significance cut-off for the p-gain for use in future association studies with metabolic traits.
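The p-gain and its conservative critical value B/(2α) translate directly into code; this small sketch assumes the three association p-values for a metabolite pair have already been computed elsewhere:

```python
def p_gain(p_ratio, p_m1, p_m2):
    """p-gain: factor by which the ratio's association p-value improves
    on the better of the two single-metabolite p-values."""
    return min(p_m1, p_m2) / p_ratio

def p_gain_critical(n_pairs, alpha=0.05):
    """Conservative critical value B/(2*alpha) from the paper: a p-gain
    above this is significant at level alpha when B pairs are tested."""
    return n_pairs / (2.0 * alpha)

# hypothetical p-values for one metabolite pair, 100 pairs tested
gain = p_gain(p_ratio=1e-8, p_m1=1e-4, p_m2=1e-3)
threshold = p_gain_critical(n_pairs=100, alpha=0.05)
significant = gain > threshold  # ratio carries extra information
```

Here the ratio's p-value beats the best single-metabolite p-value by a factor of 10,000, well above the threshold of 1,000 for 100 tested pairs at α = 0.05.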
International Nuclear Information System (INIS)
2003-01-01
For the year 2002, part of the figures shown in the tables of the Energy Review are preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption; Carbon dioxide emissions from fossil fuel use; Coal consumption; Consumption of natural gas; Peat consumption; Domestic oil deliveries; Import prices of oil; Consumer prices of principal oil products; Fuel prices in heat production; Fuel prices in electricity production; Price of electricity by type of consumer; Average monthly spot prices at the Nord Pool power exchange; Total energy consumption by source and CO2 emissions; Supply and total consumption of electricity (GWh); Energy imports by country of origin in January-June 2003; Energy exports by recipient country in January-June 2003; Consumer prices of liquid fuels; Consumer prices of hard coal, natural gas and indigenous fuels; Price of natural gas by type of consumer; Price of electricity by type of consumer; Price of district heating by type of consumer; Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and Excise taxes, precautionary stock fees and oil pollution fees on energy products
International Nuclear Information System (INIS)
2004-01-01
For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption; Carbon dioxide emissions from fossil fuel use; Coal consumption; Consumption of natural gas; Peat consumption; Domestic oil deliveries; Import prices of oil; Consumer prices of principal oil products; Fuel prices in heat production; Fuel prices in electricity production; Price of electricity by type of consumer; Average monthly spot prices at the Nord Pool power exchange; Total energy consumption by source and CO2 emissions; Supplies and total consumption of electricity (GWh); Energy imports by country of origin in January-March 2004; Energy exports by recipient country in January-March 2004; Consumer prices of liquid fuels; Consumer prices of hard coal, natural gas and indigenous fuels; Price of natural gas by type of consumer; Price of electricity by type of consumer; Price of district heating by type of consumer; Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and Excise taxes, precautionary stock fees and oil pollution fees
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption; Changes in the volume of GNP and electricity; Coal consumption; Natural gas consumption; Peat consumption; Domestic oil deliveries; Import prices of oil; Consumer prices of principal oil products; Fuel prices for heat production; Fuel prices for electricity production; Carbon dioxide emissions; Total energy consumption by source and CO2 emissions; Electricity supply; Energy imports by country of origin in January-June 2000; Energy exports by recipient country in January-June 2000; Consumer prices of liquid fuels; Consumer prices of hard coal, natural gas and indigenous fuels; Average electricity price by type of consumer; Price of district heating by type of consumer; Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and Energy taxes and precautionary stock fees on oil products
Statistical Analysis of Geo-electric Imaging and Geotechnical Test ...
Indian Academy of Sciences (India)
On the other hand cost-effective geoelectric imaging methods provide 2-D / 3-D .... SPSS (Statistical Package for the Social Sciences) has been used to carry out linear ..... P W J 1997 Theory of ionic surface electrical conduction in porous media;
Testing for changes using permutations of U-statistics
Czech Academy of Sciences Publication Activity Database
Horvath, L.; Hušková, Marie
2005-01-01
Roč. 2005, č. 128 (2005), s. 351-371 ISSN 0378-3758 R&D Projects: GA ČR GA201/00/0769 Institutional research plan: CEZ:AV0Z10750506 Keywords : U-statistics * permutations * change-point * weighted approximation * Brownian bridge Subject RIV: BD - Theory of Information Impact factor: 0.481, year: 2005
Optimal allocation of testing resources for statistical simulations
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. It handles independent and correlated random variables, and a particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of these are "true zeros", indicating that the drug-adverse event pair cannot occur; they are distinguished from the modeled zero counts, which simply indicate that the drug-adverse event pair has not occurred yet or has not been reported yet. In this paper, a zero-inflated Poisson (ZIP) model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the ZIP model parameters are obtained using the expectation-maximization (EM) algorithm. The test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite-sample performance for signal detection is evaluated through a simulation study. The simulation results show that the ZIP model based likelihood ratio test performs similarly to the Poisson model based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
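A minimal sketch of the ZIP maximum likelihood estimation via EM, the building block of the proposed test rather than the full stratified likelihood ratio procedure, might look like this; the mixture weight and Poisson rate of the synthetic data are chosen arbitrarily for illustration:

```python
import math
import random

def zip_em(counts, iters=200):
    """EM estimates (pi, lam) for a zero-inflated Poisson:
    P(X=0) = pi + (1-pi)exp(-lam);  P(X=k) = (1-pi) * Pois(k; lam)."""
    n = len(counts)
    pi, lam = 0.5, max(sum(counts) / n, 1e-6)
    for _ in range(iters):
        # E-step: probability that each observed zero is a structural zero
        z = [pi / (pi + (1.0 - pi) * math.exp(-lam)) if x == 0 else 0.0
             for x in counts]
        # M-step: update mixture weight and Poisson rate
        pi = sum(z) / n
        lam = sum(counts) / (n - sum(z))
    return pi, lam

def rpois(lam, rng):
    """Poisson sample by CDF inversion."""
    x, p, u = 0, math.exp(-lam), rng.random()
    cdf = p
    while u > cdf:
        x += 1
        p *= lam / x
        cdf += p
    return x

# synthetic cell counts: 30% structural zeros, Poisson(3) otherwise
rng = random.Random(7)
data = [0 if rng.random() < 0.3 else rpois(3.0, rng) for _ in range(5000)]
pi_hat, lam_hat = zip_em(data)
```

The fitted (pi, lam) would feed the likelihood ratio between the signal and no-signal models; the EM split of observed zeros into structural and Poisson zeros is exactly the distinction the abstract draws between "true zeros" and not-yet-reported pairs.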
Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.
Breunig, Nancy A.
Despite increasing criticism of statistical significance testing by researchers, particularly following the publication of the 1994 American Psychological Association style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…
Statistical Analysis for Test Papers with Software SPSS
Institute of Scientific and Technical Information of China (English)
张燕君
2012-01-01
Test paper evaluation is an important part of test management, whose results are a significant basis for scientific summation of teaching and learning. Taking an English test paper from high school students' monthly examination as the object, this study focuses on the interpretation of SPSS output concerning item-level and whole-paper quantitative analysis. By analyzing and evaluating the papers, it can provide feedback for teachers to check the students' progress and adjust their teaching process.
Alvar engine. An engine with variable compression ratio. Experiments and tests
Energy Technology Data Exchange (ETDEWEB)
Erlandsson, Olof
1998-09-01
This report is focused on tests with Variable Compression Ratio (VCR) engines built according to the Alvar engine principle. Variable compression ratio means an engine design in which it is possible to change the nominal compression ratio. The purpose is to increase fuel efficiency at part load by increasing the compression ratio. At maximum load, possibly with supercharging by for example a turbocharger, it is not possible to keep a high compression ratio because of the knock phenomenon. Knock is a shock wave caused by self-ignition of the fuel-air mixture. If knock occurs, the engine will be exposed to a destructive load. For these reasons it would be advantageous if the compression ratio could be changed continuously as the load changes. The Alvar engine provides a solution for variable compression ratio based on well-known engine components. This paper provides information about efficiency and emission characteristics from tests with two Alvar engines. Results from tests with a phase shift mechanism (for automatic compression ratio control) for the Alvar engine are also reviewed. Examination paper. 5 refs, 23 figs, 2 tabs, 5 appendices
An evaluation of damping ratios for HVAC duct systems using vibration test data
International Nuclear Information System (INIS)
Gunyasu, K.; Horimizu, Y.; Kawakami, A.; Iokibe, H.; Yamazaki, T.
1988-01-01
The function of Heating, Ventilating and Air Conditioning (HVAC) systems, including HVAC duct systems, must be maintained to keep safety-related equipment operating in nuclear power plants during earthquakes. It is therefore important to carry out seismic design for HVAC duct systems. In previous aseismic design for HVAC duct systems, a 0.5% damping ratio has been used in Japan. In recent years, vibration tests were performed on actual duct systems in nuclear power plants and on mockup duct systems in order to investigate damping ratios for HVAC duct systems. Based on the results, it was confirmed that the damping ratios for HVAC duct systems evaluated from these tests were much greater than the 0.5% damping ratio used in previous aseismic design in Japan. A new damping ratio of 2.5% was proposed for aseismic design. The present paper describes the results of the above-mentioned investigation.
Statistics of sampling for microbiological testing of foodborne pathogens
Despite the many recent advances in protocols for testing for pathogens in foods, a number of challenges still exist. For example, the microbiological safety of food cannot be completely ensured by testing because microorganisms are not evenly distributed throughout the food. Therefore, since it i...
Statistical tests for equal predictive ability across multiple forecasting methods
DEFF Research Database (Denmark)
Borup, Daniel; Thyrsgaard, Martin
We develop a multivariate generalization of the Giacomini-White tests for equal conditional predictive ability. The tests are applicable to a mixture of nested and non-nested models, incorporate estimation uncertainty explicitly, and allow for misspecification of the forecasting model as well as ...
Statistical analysis of nematode counts from interlaboratory proficiency tests
Berg, van den W.; Hartsema, O.; Nijs, Den J.M.F.
2014-01-01
A series of proficiency tests on potato cyst nematode (PCN; n=29) and free-living stages of Meloidogyne and Pratylenchus (n=23) were investigated to determine the accuracy and precision of the nematode counts and to gain insights into possible trends and potential improvements. In each test, each
Sex ratios in the two Germanies: a test of the economic stress hypothesis.
Catalano, Ralph A
2003-09-01
Literature describing temporal variation in the secondary sex ratio among humans reports an association between population stressors and declines in the odds of male birth. Explanations of this phenomenon draw on reports that stressed females spontaneously abort male fetuses more often than female fetuses, and that stressed males exhibit reduced sperm motility. This work has led to the argument that population stress induced by a declining economy reduces the human sex ratio. No direct test of this hypothesis appears in the literature. Here, a test is offered based on a comparison of the sex ratio in East and West Germany for the years 1946 to 1999. The theory suggests that the East German sex ratio should be lower in 1991, when East Germany's economy collapsed, than would be expected from its own history and from the sex ratio in West Germany. The hypothesis is tested using time-series modelling methods. The data support the hypothesis: the sex ratio in East Germany was at its lowest in 1991. This first direct test supports the hypothesis that economic decline reduces the human sex ratio.
Jsub(Ic)-testing of A-533 B - statistical evaluation of some different testing techniques
International Nuclear Information System (INIS)
Nilsson, F.
1978-01-01
The purpose of the present study was to compare statistically some different methods for the evaluation of fracture toughness of the nuclear reactor material A-533 B. Since linear elastic fracture mechanics is not applicable to this material at the temperature of interest (275 °C), the so-called Jsub(Ic) testing method was employed. Two main difficulties are inherent in this type of testing. The first is to determine the quantity J as a function of the deflection of the three-point bend specimens used. Three different techniques were applied, the first two based on the experimentally observed input of energy to the specimen and the third employing finite element calculations. The second main problem is to determine the point at which crack growth begins. For this, two methods were used, a direct electrical method and the indirect R-curve method. A total of forty specimens were tested at two laboratories. No statistically significant differences were found between the results from the respective laboratories. The three methods of calculating J yielded somewhat different results, although the discrepancy was small. The two methods of determining the growth initiation point also yielded consistent results; the R-curve method, however, exhibited a larger uncertainty as measured by the standard deviation. The resulting Jsub(Ic) value also agreed well with earlier published results. The relative standard deviation was of the order of 25%, which is quite small for this type of experiment. (author)
Statistics applied to the testing of cladding tubes
International Nuclear Information System (INIS)
Perdijon, J.
1987-01-01
Cladding tubes, either steel or zircaloy, are generally given a 100% inspection through ultrasonic non-destructive testing. This inspection may be beneficially complemented with an eddy current test, as it is not sensitive to the same defects as those typically detected by ultrasonic testing. Unfortunately, the two methods (as with other non-destructive tests) exhibit poor precision; this means that a flaw whose size is close to the rejection limit may be either accepted or rejected. Currently, the rejection limit, i.e. the measurement above which a tube is rejected, is generally determined by measuring a calibration tube at regular time intervals, and the signal of a given tube is compared to that of the most recent calibration. This measurement is thus subject to variations attributable to actual drift of the adjustments as well as to poor precision. For this reason, monitoring instrument adjustments using the so-called control chart method is proposed
Statistics of software vulnerability detection in certification testing
Barabanov, A. V.; Markov, A. S.; Tsirlov, V. L.
2018-05-01
The paper discusses practical aspects of introduction of the methods to detect software vulnerability in the day-to-day activities of the accredited testing laboratory. It presents the approval results of the vulnerability detection methods as part of the study of the open source software and the software that is a test object of the certification tests under information security requirements, including software for communication networks. Results of the study showing the allocation of identified vulnerabilities by types of attacks, country of origin, programming languages used in the development, methods for detecting vulnerability, etc. are given. The experience of foreign information security certification systems related to the detection of certified software vulnerabilities is analyzed. The main conclusion based on the study is the need to implement practices for developing secure software in the development life cycle processes. The conclusions and recommendations for the testing laboratories on the implementation of the vulnerability analysis methods are laid down.
STATISTICAL EVALUATION OF EXAMINATION TESTS IN MATHEMATICS FOR ECONOMISTS
Directory of Open Access Journals (Sweden)
KASPŘÍKOVÁ, Nikola
2012-12-01
Examination results matter greatly to many students with regard to their future professional development. Results of exams should be carefully inspected by teachers to help improve the design and evaluation of tests and the education process in general. An analysis of examination papers in mathematics taken by students of the basic mathematics course at the University of Economics in Prague is reported. The first issue addressed is the identification of significant dependencies between performance in the particular problem areas covered in the test, and between particular items and the total test score or ability level as a latent trait. The assessment is first performed with the Spearman correlation coefficient; items in the test are then evaluated within the Item Response Theory framework. The second analytical task is a search for groups of students who are similar with respect to test performance. Cluster analysis is performed using the partitioning around medoids method, and the final model is selected according to average silhouette width. The results of clustering, which may also be considered in connection with setting the minimum score for passing the exam, show that two groups of students can be identified, of which the group that may be called "well-performers" is the more clearly defined.
International Nuclear Information System (INIS)
Lee, Eul Kyu; Choi, Kwan Woo; Jeong, Hoi Woun; Jang, Seo Goo; Kim, Ki Won; Son, Soon Yong; Min, Jung Whan; Son, Jin Hyun
2016-01-01
The purpose of this study was to establish a basis for MRI computer-aided diagnosis (CAD) development by analysing the signal-to-noise ratio (SNR) of pulse sequences in regions of interest (ROI) on contrast-enhanced brain magnetic resonance imaging (MRI). We examined contrast-enhanced brain MRI images of 117 patients, from January 2005 to December 2015, at a university-affiliated hospital in Seoul, Korea. Each patient was diagnosed with one of two brain diseases: meningioma or cysts. The SNR for each patient's brain MRI images was calculated using ImageJ. Differences in SNR between the two brain diseases were tested with an ANOVA test in SPSS Statistics 21, with statistical significance set at p < 0.05. We analysed socio-demographic variables, SNR by sequence and disease, 95% confidence intervals for the SNR of each sequence, and differences in mean SNR. For meningioma, image quality ranked, in decreasing order, T1CE, then T2 and T1, then FLAIR. For cysts, the order was T2 and T1, then T1CE and FLAIR. The SNR of brain MRI sequences would be useful for classifying these diseases. Therefore, this study will contribute to the evaluation of brain diseases and provide a foundation for enhancing the accuracy of CAD development
Energy Technology Data Exchange (ETDEWEB)
Lee, Eul Kyu [Inje Paik University Hospital Jeo-dong, Seoul (Korea, Republic of); Choi, Kwan Woo [Asan Medical Center, Seoul (Korea, Republic of); Jeong, Hoi Woun [The Baekseok Culture University, Cheonan (Korea, Republic of); Jang, Seo Goo [The Soonchunhyang University, Asan (Korea, Republic of); Kim, Ki Won [Kyung Hee University Hospital at Gang-dong, Seoul (Korea, Republic of); Son, Soon Yong [The Wonkwang Health Science University, Iksan (Korea, Republic of); Min, Jung Whan; Son, Jin Hyun [The Shingu University, Sungnam (Korea, Republic of)
2016-09-15
After statistics reform : Should we still teach significance testing?
A. Hak (Tony)
2014-01-01
In the longer term null hypothesis significance testing (NHST) will disappear because p-values are not informative and not replicable. Should we continue to teach in the future the procedures of the then-abolished routines (i.e., NHST)? Three arguments are discussed for not teaching NHST in
Statistical Tests for Frequency Distribution of Mean Gravity Anomalies
African Journals Online (AJOL)
The hypothesis that a very large number of 1° x 1° mean gravity anomalies are normally distributed has been rejected at the 5% significance level based on the χ² and the unit normal deviate tests. However, the 50 equal-area mean anomalies derived from the 1° x 1° data have been found to be normally distributed at the same ...
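The χ² goodness-of-fit test of normality used in this abstract can be sketched as follows. The gravity-anomaly data themselves are not available here, so the simulated samples, bin count, and mGal-scale spread are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

def chi2_normality_test(x, n_bins=10):
    """Chi-square goodness-of-fit test of normality.

    Maps the sample through the CDF of the fitted normal, so each of
    the n_bins equal-probability bins has the same expected count.
    """
    mu, sigma = x.mean(), x.std(ddof=1)
    u = stats.norm.cdf(x, mu, sigma)                 # fitted-normal CDF values
    observed, _ = np.histogram(u, bins=np.linspace(0.0, 1.0, n_bins + 1))
    expected = len(x) / n_bins
    chi2 = ((observed - expected) ** 2 / expected).sum()
    df = n_bins - 3     # degrees lost: total count plus two fitted parameters
    return chi2, stats.chi2.sf(chi2, df)

rng = np.random.default_rng(0)
normal_sample = rng.normal(0.0, 25.0, size=500)      # mGal-scale toy anomalies
skewed_sample = rng.exponential(25.0, size=500)      # clearly non-normal

chi2_n, p_n = chi2_normality_test(normal_sample)
chi2_s, p_s = chi2_normality_test(skewed_sample)
```

For the normal sample the p-value should be unremarkable, while the skewed sample is firmly rejected, mirroring the two outcomes reported in the abstract.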
Testing the performance of a blind burst statistic
Energy Technology Data Exchange (ETDEWEB)
Vicere, A [Istituto di Fisica, Universita di Urbino (Italy); Calamai, G [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Campagna, E [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Conforto, G [Istituto di Fisica, Universita di Urbino (Italy); Cuoco, E [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Dominici, P [Istituto di Fisica, Universita di Urbino (Italy); Fiori, I [Istituto di Fisica, Universita di Urbino (Italy); Guidi, G M [Istituto di Fisica, Universita di Urbino (Italy); Losurdo, G [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Martelli, F [Istituto di Fisica, Universita di Urbino (Italy); Mazzoni, M [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Perniola, B [Istituto di Fisica, Universita di Urbino (Italy); Stanga, R [Istituto Nazionale di Fisica Nucleare, Sez. Firenze/Urbino (Italy); Vetrano, F [Istituto di Fisica, Universita di Urbino (Italy)
2003-09-07
In this work, we estimate the performance of a method for the detection of burst events in the data produced by interferometric gravitational wave detectors. We compute the receiver operating characteristics in the specific case of a simulated noise having the spectral density expected for Virgo, using test signals taken from a library of possible waveforms emitted during the collapse of the core of type II supernovae.
Goegebeur, Y.; de Boeck, P.; Molenberghs, G.
2010-01-01
The local influence diagnostics, proposed by Cook (1986), provide a flexible way to assess the impact of minor model perturbations on key model parameters’ estimates. In this paper, we apply the local influence idea to the detection of test speededness in a model describing nonresponse in test data,
International Nuclear Information System (INIS)
Reid, B.D.; Gerlach, D.C.; Love, E.F.; McNeece, J.P.; Livingston, J.V.; Greenwood, L.R.; Petersen, S.L.; Morgan, W.C.
1999-01-01
This report describes an irradiation test designed to investigate the suitability of uranium as a graphite isotope ratio method (GIRM) low fluence indicator. GIRM is a demonstrated concept that gives a graphite-moderated reactor's lifetime production based on measuring changes in the isotopic ratio of elements known to exist in trace quantities within reactor-grade graphite. Appendix I of this report provides a tutorial on the GIRM concept
A weighted generalized score statistic for comparison of predictive values of diagnostic tests.
Kosinski, Andrzej S
2013-03-15
Positive and negative predictive values are important measures of the performance of a medical diagnostic test. We consider testing the equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen from the new re-formulation we present, the generalized score statistic does not always reduce to the commonly used score statistic in the independent-samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates the empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent-samples situation, and preserves type I error better than the other statistics, as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for the corresponding sample size computations. The new formulas for the Wald statistics may be useful for easy computation of confidence intervals for the difference of predictive values. The concepts introduced here have the potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.
David, André; Petrucciani, Giovanni
2015-01-01
Using the likelihood ratio test statistic, we present a method which can be employed to test the hypothesis of a single Higgs boson using the matrix of measured signal strengths. This method can be applied in the presence of censored data and takes into account uncertainties on the measurements. The p-value against the hypothesis of a single Higgs boson is defined from the expected distribution of the test statistic, generated using pseudo-experiments. The applicability of the likelihood-based test is demonstrated using numerical examples with uncertainties and missing matrix elements.
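The pseudo-experiment construction of the p-value described above can be sketched generically. The toy model below (Gaussian data with known unit variance and a hypothetical null mean) stands in for the Higgs signal-strength matrix, which is not reproduced here; only the mechanics of generating the null distribution of the test statistic are illustrated.

```python
import numpy as np

rng = np.random.default_rng(42)

def lr_statistic(x, mu0=0.0):
    """-2 log likelihood ratio for H0: mean = mu0 against a free mean,
    for Gaussian data with known unit variance (closed form)."""
    return len(x) * (x.mean() - mu0) ** 2

def mc_pvalue(observed_stat, n=50, mu0=0.0, n_pseudo=2000):
    """p-value from the distribution of the test statistic over
    pseudo-experiments generated under the null hypothesis."""
    null_stats = np.array([
        lr_statistic(rng.normal(mu0, 1.0, n), mu0) for _ in range(n_pseudo)
    ])
    return float((null_stats >= observed_stat).mean())

data = rng.normal(0.5, 1.0, 50)   # true mean differs from mu0 = 0
t_obs = lr_statistic(data)
p_value = mc_pvalue(t_obs)
```

The same two-step pattern (compute the observed statistic, then rank it against statistics from null pseudo-experiments) applies whatever the underlying likelihood is.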
Monte Carlo testing in spatial statistics, with applications to spatial residuals
DEFF Research Database (Denmark)
Mrkvička, Tomáš; Soubeyrand, Samuel; Myllymäki, Mari
2016-01-01
This paper reviews recent advances made in testing in spatial statistics and discussed at the Spatial Statistics conference in Avignon 2015. The rank and directional quantile envelope tests are discussed and practical rules for their use are provided. These tests are global envelope tests...... with an appropriate type I error probability. Two novel examples are given on their usage. First, in addition to the test based on a classical one-dimensional summary function, the goodness-of-fit of a point process model is evaluated by means of the test based on a higher dimensional functional statistic, namely...
Mathur, Sunil; Sadana, Ajit
2015-12-01
We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some commonly used methods, such as the paired t-test, the Wilcoxon signed-rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.
Normality Tests for Statistical Analysis: A Guide for Non-Statisticians
Ghasemi, Asghar; Zahediasl, Saleh
2012-01-01
Statistical errors are common in the scientific literature, and about 50% of published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to provide an overview of checking for normality in statistical analysis using SPSS. PMID:23843808
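In Python (rather than SPSS, which the commentary uses), the same normality checks can be sketched with SciPy; the data below are simulated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
normal_data = rng.normal(loc=10.0, scale=2.0, size=100)
skewed_data = rng.exponential(scale=2.0, size=100)   # clearly non-normal

# Shapiro-Wilk: a common recommendation for small-to-moderate samples
w_norm, p_norm = stats.shapiro(normal_data)
w_skew, p_skew = stats.shapiro(skewed_data)

# Kolmogorov-Smirnov against a normal fitted to the same data; note that
# estimating the parameters from the data invalidates the standard KS
# p-value (the Lilliefors correction addresses this)
ks_stat, ks_p = stats.kstest(
    normal_data, "norm",
    args=(normal_data.mean(), normal_data.std(ddof=1)),
)
```

As the commentary stresses, a small p-value here means the normality assumption is rejected, so a parametric test downstream would be on shaky ground.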
Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios
Juarez, Alfredo; Harper, Susana Tapia
2016-01-01
The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for the ignition susceptibility driven by exposure to external heat in an enriched oxygen environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes for inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during test, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper. A proposed improvement to the standard test protocol as a result of this evaluation is also presented. Execution of the final proposed improved test protocol outlines an incremental step method of determining optimal conditions using increased sample sizes while considering test system safety limits. The proposed improved test method increases confidence in results obtained by utilizing the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.
Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.
2017-06-01
In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, faulty data are detected by a classification method that separates fault data from normal data. A deep belief network (DBN), a deep learning technique, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme that is highly sensitive to small variations in the data. Since the classification method alone cannot identify which sensor is faulty, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, namely the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated using field data obtained from thermocouple sensors of the fast breeder test reactor.
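The generalized likelihood ratio step can be sketched for the simplest fault pattern, a step bias in Gaussian sensor noise with known standard deviation. The DBN classifier and the reactor field data are not reproduced; the window length, noise level, and fault magnitude below are assumptions for illustration.

```python
import numpy as np

def glr_mean_shift(window, sigma=1.0):
    """Generalized likelihood ratio statistic for a step change in the
    mean of Gaussian noise with known sigma: maximize -2 log LR over
    all candidate change points in the window."""
    best = 0.0
    common = window.mean()
    for k in range(1, len(window)):
        left, right = window[:k], window[k:]
        lr = (np.sum((window - common) ** 2)
              - np.sum((left - left.mean()) ** 2)
              - np.sum((right - right.mean()) ** 2)) / sigma ** 2
        best = max(best, lr)
    return best

rng = np.random.default_rng(7)
healthy = rng.normal(300.0, 1.0, 100)   # steady thermocouple reading, deg C
faulty = healthy.copy()
faulty[60:] += 5.0                      # bias fault appears at sample 60

g_healthy = glr_mean_shift(healthy)
g_faulty = glr_mean_shift(faulty)
```

A large statistic flags the window as faulty; the arg-max split point estimates when the bias appeared, which is the "fault pattern" information the test contributes.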
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
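Wald's sequential test itself is simple to implement. The sketch below consumes generic per-observation likelihood ratios with the textbook stopping thresholds; the collision-probability likelihood ratio from the paper is not reproduced, and the error rates are illustrative.

```python
import math

def wald_sprt(likelihood_ratios, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test.

    Accumulates log likelihood ratios (H1 vs H0) observation by
    observation and stops at the classic Wald thresholds. Returns
    ('H1' | 'H0' | 'continue', number_of_observations_used).
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    s = 0.0
    for n, lr in enumerate(likelihood_ratios, start=1):
        s += math.log(lr)
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "continue", len(likelihood_ratios)

# Each observation twice as likely under H1: evidence accumulates quickly.
decision, n_used = wald_sprt([2.0] * 20)
```

The appeal for conjunction assessment is exactly this early stopping: the test commits as soon as the accumulated evidence crosses a threshold, rather than after a fixed number of observations.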
Belley , Philippe; Havet , Nathalie; Lacroix , Guy
2012-01-01
The paper focuses on the early career patterns of young male and female workers. It investigates potential dynamic links between statistical discrimination, mobility, tenure and wage profiles. The model assumes that it is more costly for an employer to assess female workers' productivity and that the noise/signal ratio tapers off more rapidly for male workers. These two assumptions yield numerous theoretical predictions pertaining to gender wage gaps. These predictions are tested using data f...
A note on imperfect hedging: a method for testing stability of the hedge ratio
Directory of Open Access Journals (Sweden)
Michal Černý
2012-01-01
Companies producing, processing and consuming commodities in the production process often hedge their commodity exposures using derivative strategies based on different, highly correlated underlying commodities. Once the open position in a commodity is hedged using a derivative position with another underlying commodity, the appropriate hedge ratio must be determined so that the hedge relationship is as effective as possible. However, it is questionable whether the hedge ratio determined at the inception of the risk management strategy remains stable over the whole period for which the hedging strategy exists. Usually it is assumed that in the short run the relationship (say, correlation) between the two commodities remains stable, while in the long run it may vary. We propose a method, based on the statistical theory of stability, for online detection of whether market movements in the prices of the commodities involved in the hedge relationship indicate that the hedge ratio may have been subject to a recent change. A change in the hedge ratio decreases the effectiveness of the original hedge relationship and creates a new open position. The proposed method should inform the risk manager that it may be reasonable to adjust the derivative strategy to reflect the market conditions after the change in the hedge ratio.
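The quantity being monitored, the minimum-variance hedge ratio, is the OLS slope of the hedged commodity's returns on the hedging commodity's returns. The sketch below simulates returns whose true ratio shifts mid-sample and recovers the ratio in each half; the return processes and shift size are assumptions, and the authors' stability-theory detector itself is not reproduced.

```python
import numpy as np

def hedge_ratio(spot, futures):
    """Minimum-variance hedge ratio: OLS slope of spot returns on
    the returns of the hedging instrument."""
    cov = np.cov(spot, futures)
    return cov[0, 1] / cov[1, 1]

rng = np.random.default_rng(3)
n = 400
f = rng.normal(0.0, 1.0, n)                      # hedging-commodity returns
noise = rng.normal(0.0, 0.3, n)
beta = np.where(np.arange(n) < 200, 0.9, 0.4)    # true ratio shifts mid-sample
s = beta * f + noise                             # hedged-commodity returns

h_before = hedge_ratio(s[:200], f[:200])
h_after = hedge_ratio(s[200:], f[200:])
```

An online monitor along the lines the authors propose would compare such window estimates as new prices arrive and flag a divergence like the one between the two halves here.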
Purves, L.; Strang, R. F.; Dube, M. P.; Alea, P.; Ferragut, N.; Hershfeld, D.
1983-01-01
The software and procedures of a system of programs used to generate a report of the statistical correlation between NASTRAN modal analysis results and physical tests results from modal surveys are described. Topics discussed include: a mathematical description of statistical correlation, a user's guide for generating a statistical correlation report, a programmer's guide describing the organization and functions of individual programs leading to a statistical correlation report, and a set of examples including complete listings of programs, and input and output data.
Xu, Maoqi; Chen, Liang
2018-01-01
The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models for identifying differentially expressed genes between disease and control groups often overlook substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about the read-count distribution. It also incorporates a constraint for read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood ratio test when handling heterogeneous groups. It will significantly advance transcriptomics studies of cancers and other complex diseases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Jet-Surface Interaction: High Aspect Ratio Nozzle Test, Nozzle Design and Preliminary Data
Brown, Clifford; Dippold, Vance
2015-01-01
The Jet-Surface Interaction High Aspect Ratio (JSI-HAR) nozzle test is part of an ongoing effort to measure and predict the noise created when an aircraft engine exhausts close to an airframe surface. The JSI-HAR test is focused on parameters derived from the Turbo-electric Distributed Propulsion (TeDP) concept aircraft, which include a high-aspect-ratio mailslot exhaust nozzle, internal septa, and an aft deck. The size and mass flow rate limits of the test rig limited the test nozzle to a 16:1 aspect ratio, half the approximately 32:1 of the TeDP concept. Also, unlike the aircraft, the test nozzle must transition from a single round duct on the High Flow Jet Exit Rig, located in the AeroAcoustic Propulsion Laboratory at the NASA Glenn Research Center, to the rectangular shape at the nozzle exit. A parametric nozzle design method was developed to design three low-noise round-to-rectangular transitions, with 8:1, 12:1, and 16:1 aspect ratios, that minimize flow separations and shocks while providing a flat flow profile at the nozzle exit. These designs were validated using the WIND-US CFD code. A preliminary analysis of the test data shows that the actual flow profile is close to that predicted and that the noise results appear consistent with data from previous, smaller-scale tests. The JSI-HAR test is ongoing through October 2015. The results shown in the presentation are intended to provide an overview of the test and a first look at the preliminary results.
Testing Measurement Invariance Using MIMIC: Likelihood Ratio Test with a Critical Value Adjustment
Kim, Eun Sook; Yoon, Myeongsun; Lee, Taehun
2012-01-01
Multiple-indicators multiple-causes (MIMIC) modeling is often used to test a latent group mean difference while assuming the equivalence of factor loadings and intercepts over groups. However, this study demonstrated that MIMIC was insensitive to the presence of factor loading noninvariance, which implies that factor loading invariance should be…
Testing Genetic Pleiotropy with GWAS Summary Statistics for Marginal and Conditional Analyses.
Deng, Yangqing; Pan, Wei
2017-12-01
There is growing interest in testing genetic pleiotropy, which is when a single genetic variant influences multiple traits. Several methods have been proposed; however, these methods have some limitations. First, all the proposed methods are based on the use of individual-level genotype and phenotype data; in contrast, for logistical, and other, reasons, summary statistics of univariate SNP-trait associations are typically only available based on meta- or mega-analyzed large genome-wide association study (GWAS) data. Second, existing tests are based on marginal pleiotropy, which cannot distinguish between direct and indirect associations of a single genetic variant with multiple traits due to correlations among the traits. Hence, it is useful to consider conditional analysis, in which a subset of traits is adjusted for another subset of traits. For example, in spite of substantial lowering of low-density lipoprotein cholesterol (LDL) with statin therapy, some patients still maintain high residual cardiovascular risk, and, for these patients, it might be helpful to reduce their triglyceride (TG) level. For this purpose, in order to identify new therapeutic targets, it would be useful to identify genetic variants with pleiotropic effects on LDL and TG after adjusting the latter for LDL; otherwise, a pleiotropic effect of a genetic variant detected by a marginal model could simply be due to its association with LDL only, given the well-known correlation between the two types of lipids. Here, we develop a new pleiotropy testing procedure based only on GWAS summary statistics that can be applied for both marginal analysis and conditional analysis. Although the main technical development is based on published union-intersection testing methods, care is needed in specifying conditional models to avoid invalid statistical estimation and inference. In addition to the previously used likelihood ratio test, we also propose using generalized estimating equations under the
Energy Technology Data Exchange (ETDEWEB)
Cikota, Aleksandar [European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching b. München (Germany); Deustua, Susana [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Marleau, Francine, E-mail: acikota@eso.org [Institute for Astro- and Particle Physics, University of Innsbruck, Technikerstrasse 25/8, A-6020 Innsbruck (Austria)
2016-03-10
We investigate limits on the extinction values of Type Ia supernovae (SNe Ia) to statistically determine the most probable color excess, E(B – V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, R{sub V}, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-infrared Survey with Herschel (KINGFISH). We use SN Ia spectral templates to develop a Monte Carlo simulation of color excess E(B – V) with R{sub V} = 3.1 and investigate the color excess probabilities E(B – V) with projected radial galaxy center distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo, and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa–Sap, Sab–Sbp, Sbc–Scp, Scd–Sdm, S0, and irregular galaxy classes as a function of R/R{sub 25}. We find that the largest expected reddening probabilities are in Sab–Sb and Sbc–Sc galaxies, while S0 and irregular galaxies are very dust poor. We present a new approach for determining the absorption-to-reddening ratio R{sub V} using color excess probability functions and find values of R{sub V} = 2.71 ± 1.58 for 21 SNe Ia observed in Sab–Sbp galaxies, and R{sub V} = 1.70 ± 0.38, for 34 SNe Ia observed in Sbc–Scp galaxies.
An improved single sensor parity space algorithm for sequential probability ratio test
Energy Technology Data Exchange (ETDEWEB)
Racz, A. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.
1995-12-01
In our paper we propose a modification of the single sensor parity algorithm in order to make the statistical properties of the generated residual determinable in advance. The algorithm is tested via a computer-simulated ramp failure in the temperature readings of the pressurizer. (author).
"What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"
Ozturk, Elif
2012-01-01
The present paper reviews two motivations for conducting "what if" analyses using Excel and "R" to understand statistical significance tests in the context of sample size. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
Amalia, Junita; Purhadi, Otok, Bambang Widjanarko
2017-11-01
The Poisson distribution is a discrete distribution for count data with a single parameter that defines both its mean and its variance. Poisson regression therefore assumes that the mean and variance are equal (equidispersion). In practice, however, count data often violate this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, in turn, to incorrect decisions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution; when over-dispersion is present, simple bivariate Poisson regression is not sufficient for modeling them. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression model for paired count data with over-dispersion. The BPIGR model produces a single global model for all locations, yet each location differs in its geographic, social, cultural, and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function for each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimates of the GWBPIGR model are obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis tests are obtained by the Maximum Likelihood Ratio Test (MLRT) method.
Evidence Based Medicine; Positive and Negative Likelihood Ratios of Diagnostic Tests
Directory of Open Access Journals (Sweden)
Alireza Baratloo
2015-10-01
In the previous two parts of this educational series in Emergency, we explained some screening characteristics of diagnostic tests, including accuracy, sensitivity, specificity, and positive and negative predictive values. In this third part we aim to explain the positive and negative likelihood ratio (LR), one of the most reliable performance measures of a diagnostic test. To understand this characteristic of a test, it is first necessary to fully understand the concepts of sensitivity and specificity, so we strongly advise you to review the first part of this series again. In short, a likelihood ratio compares the probability of a given test result among people with the disease to the probability of the same result among people without it. The prevalence of a disease can directly influence the screening characteristics of a diagnostic test, especially its predictive values; the LR was developed to provide a measure free of this effect. The pre-test odds of a disease multiplied by the positive or negative LR give the post-test odds, from which the post-test probability can be estimated. Therefore, the LR is the most important characteristic of a test for ruling a diagnosis in or out. A positive likelihood ratio > 1 means a higher probability that the disease is present in a patient with a positive test. The further from 1, either higher or lower, the stronger the evidence to rule the disease in or out, respectively. Tests with an LR close to 1 are of little practical value, while tests with an LR far from 1 are more useful in medicine. Usually, tests with LR < 0.1 or LR > 10 are considered suitable for use in routine practice.
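The relationships described in the abstract above can be sketched in a few lines of Python. This is a generic textbook computation (not taken from the article itself); the function names and the example numbers are illustrative.

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity and specificity."""
    lr_pos = sensitivity / (1.0 - specificity)   # P(T+|D+) / P(T+|D-)
    lr_neg = (1.0 - sensitivity) / specificity   # P(T-|D+) / P(T-|D-)
    return lr_pos, lr_neg

def post_test_probability(pre_test_prob, lr):
    """Apply an LR via odds: post-test odds = pre-test odds x LR."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Example: a test with 90% sensitivity and 80% specificity,
# applied to a patient with a 30% pre-test probability of disease.
lr_pos, lr_neg = likelihood_ratios(0.90, 0.80)
print(round(lr_pos, 2))   # LR+ = 0.90 / 0.20 = 4.5
print(round(lr_neg, 3))   # LR- = 0.10 / 0.80 = 0.125
print(round(post_test_probability(0.30, lr_pos), 3))
```

A positive result raises the probability from 30% to about 66% here, illustrating why an LR+ well above 1 helps rule a diagnosis in.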
EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.
Tong, Xiaoxiao; Bentler, Peter M
2013-01-01
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ(2) test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures
Atar, Burcu; Kamata, Akihito
2011-01-01
The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
Sex Ratios, Economic Power, and Women's Roles: A Theoretical Extension and Empirical Test.
South, Scott J.
1988-01-01
Tested hypotheses concerning sex ratios, women's roles, and economic power with data from 111 countries. Found undersupply of women positively associated with proportion of women who marry and fertility rate; inversely associated with women's average age at marriage, literacy rate, and divorce rate. Suggests women's economic power may counteract…
Do exchange rates follow random walks? A variance ratio test of the ...
African Journals Online (AJOL)
The random-walk hypothesis in foreign-exchange rates market is one of the most researched areas, particularly in developed economies. However, emerging markets in sub-Saharan Africa have received little attention in this regard. This study applies Lo and MacKinlay's (1988) conventional variance ratio test and Wright's ...
The effects of multiple features of alternatively spliced exons on the KA/KS ratio test
Directory of Open Access Journals (Sweden)
Chen Feng-Chi
2006-05-01
Background: The evolution of alternatively spliced exons (ASEs) is of primary interest because these exons are suggested to be a major source of the functional diversity of proteins. Many exon features have been suggested to affect the evolution of ASEs. However, previous studies have relied on the KA/KS ratio test without taking into consideration the information sufficiency (i.e., exon length > 75 bp, cross-species divergence > 5%) of the studied exons, leading to potentially biased interpretations. Furthermore, which exon feature dominates the results of the KA/KS ratio test, and whether multiple exon features have additive effects, have remained unexplored. Results: In this study, we collect two different datasets for analysis: the ASE dataset (which includes lineage-specific ASEs and conserved ASEs) and the ACE dataset (which includes only conserved ASEs). We first show that information sufficiency can significantly affect the interpretation of the relationship between exon features and the KA/KS ratio test results. After discarding exons with insufficient information, we use a Boolean method to analyze the relationship between test results and four exon features (namely length, protein domain overlapping, inclusion level, and exonic splicing enhancer (ESE) frequency) for the ASE dataset. We demonstrate that length and protein domain overlapping are dominant factors, and they have similar impacts on the test results of ASEs. In addition, despite the weak impacts of inclusion level and ESE motif frequency when considered individually, the combination of these two factors still has minor additive effects on test results. However, the ACE dataset shows a slightly different result in that inclusion level has a marginally significant effect on test results. Lineage-specific ASEs may have contributed to the difference. Overall, in both ASEs and ACEs, protein domain overlapping is the most dominant exon feature while ESE frequency is the weakest one in affecting
Directory of Open Access Journals (Sweden)
Yingjun Jiang
2015-04-01
In order to better understand the mechanical properties of graded crushed rocks (GCRs) and to optimize the relevant design, a numerical test method based on the particle flow modeling technique PFC2D is developed for the California bearing ratio (CBR) test on GCRs. The effects of different testing conditions and of the micro-mechanical parameters used in the model on the CBR numerical results are systematically studied, and the reliability of the numerical technique is verified. The numerical results suggest that the influences of the loading rate and Poisson's ratio on the CBR test results are not significant. As such, a loading rate of 1.0–3.0 mm/min, a piston diameter of 5 cm, a specimen height of 15 cm and a specimen diameter of 15 cm are adopted for the CBR numerical test. The results also reveal that the CBR values increase with the friction coefficient at the contacts and with the shear modulus of the rocks, while the influence of Poisson's ratio on the CBR values is insignificant. The close agreement between the numerical and experimental CBR results suggests that numerical simulation of the CBR test is a promising way to assess the mechanical properties of GCRs and to optimize the grading design. The numerical study can also provide useful insights into the mesoscopic mechanism.
Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka
2015-01-01
The problem of establishing noninferiority between a new treatment and a standard (control) treatment is discussed for ordinal categorical data. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed, in which the variance estimate is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.
Xu, Kuan-Man
2006-01-01
A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
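The general idea described above — a distance statistic between normalized histograms, with a bootstrap under the null that both samples come from one distribution — can be sketched as follows. This is a simplified illustration, not the paper's cloud-object procedure: the pooled multinomial resampling scheme and all numbers here are assumptions for the sketch.

```python
import random

def euclidean_distance(h1, h2):
    """Euclidean distance between two histograms (same bins), each
    normalized to unit total count before comparison."""
    n1, n2 = float(sum(h1)), float(sum(h2))
    return sum((a / n1 - b / n2) ** 2 for a, b in zip(h1, h2)) ** 0.5

def _resample_counts(rng, pooled, n):
    """Draw n observations from the pooled bin distribution."""
    counts = [0] * len(pooled)
    for b in rng.choices(range(len(pooled)), weights=pooled, k=n):
        counts[b] += 1
    return counts

def bootstrap_significance(h1, h2, n_boot=500, seed=0):
    """Fraction of bootstrap replicates (both histograms redrawn from
    the pooled distribution) whose distance reaches the observed one;
    a small value suggests the histograms differ significantly."""
    rng = random.Random(seed)
    observed = euclidean_distance(h1, h2)
    pooled = [a + b for a, b in zip(h1, h2)]
    n1, n2 = sum(h1), sum(h2)
    hits = sum(
        euclidean_distance(_resample_counts(rng, pooled, n1),
                           _resample_counts(rng, pooled, n2)) >= observed
        for _ in range(n_boot))
    return hits / n_boot

h_a = [10, 30, 40, 20]   # toy summary histograms (counts per bin)
h_c = [40, 30, 20, 10]
print(bootstrap_significance(h_a, h_c))  # small: shapes clearly differ
```

The Jeffries-Matusita or Kuiper distance could be substituted for `euclidean_distance` without changing the bootstrap machinery, which is exactly the comparison the study performs.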
Spent fuel sabotage aerosol ratio program : FY 2004 test and data summary
International Nuclear Information System (INIS)
Brucher, Wenzel; Koch, Wolfgang; Pretzsch, Gunter Guido; Loiseau, Olivier; Mo, Tin; Billone, Michael C.; Autrusson, Bruno A.; Young, F. I.; Coats, Richard Lee; Burtseva, Tatiana; Luna, Robert Earl; Dickey, Roy R.; Sorenson, Ken Bryce; Nolte, Oliver; Thompson, Nancy Slater; Hibbs, Russell S.; Gregson, Michael Warren; Lange, Florentin; Molecke, Martin Alan; Tsai, Han-Chung
2005-01-01
This multinational, multi-phase spent fuel sabotage test program is quantifying the aerosol particles produced when the products of a high energy density device (HEDD) interact with and explosively particulate test rodlets that contain pellets of either surrogate materials or actual spent fuel. This program has been underway for several years. This program provides data that are relevant to some sabotage scenarios in relation to spent fuel transport and storage casks, and associated risk assessments. The program also provides significant technical and political benefits in international cooperation. We are quantifying the Spent Fuel Ratio (SFR), the ratio of the aerosol particles released from HEDD-impacted actual spent fuel to the aerosol particles produced from surrogate materials, measured under closely matched test conditions, in a contained test chamber. In addition, we are measuring the amounts, nuclide content, size distribution of the released aerosol materials, and enhanced sorption of volatile fission product nuclides onto specific aerosol particle size fractions. These data are the input for follow-on modeling studies to quantify respirable hazards, associated radiological risk assessments, vulnerability assessments, and potential cask physical protection design modifications. This document includes an updated description of the test program and test components for all work and plans made, or revised, during FY 2004. It also serves as a program status report as of the end of FY 2004. All available test results, observations, and aerosol analyses plus interpretations--primarily for surrogate material Phase 2 tests, series 2/5A through 2/9B, using cerium oxide sintered ceramic pellets are included. Advanced plans and progress are described for upcoming tests with unirradiated, depleted uranium oxide and actual spent fuel test rodlets. This spent fuel sabotage--aerosol test program is coordinated with the international Working Group for Sabotage Concerns of
Spent fuel sabotage aerosol ratio program : FY 2004 test and data summary.
Energy Technology Data Exchange (ETDEWEB)
Brucher, Wenzel (Gesellschaft fur Anlagen- und Reaktorsicherheit, Germany); Koch, Wolfgang (Fraunhofer Institut fur Toxikologie und Experimentelle Medizin, Germany); Pretzsch, Gunter Guido (Gesellschaft fur Anlagen- und Reaktorsicherheit, Germany); Loiseau, Olivier (Institut de Radioprotection et de Surete Nucleaire, France); Mo, Tin (U.S. Nuclear Regulatory Commission, Washington, DC); Billone, Michael C. (Argonne National Laboratory, Argonne, IL); Autrusson, Bruno A. (Institut de Radioprotection et de Surete Nucleaire, France); Young, F. I. (U.S. Nuclear Regulatory Commission, Washington, DC); Coats, Richard Lee; Burtseva, Tatiana (Argonne National Laboratory, Argonne, IL); Luna, Robert Earl; Dickey, Roy R.; Sorenson, Ken Bryce; Nolte, Oliver (Fraunhofer Institut fur Toxikologie und Experimentelle Medizin, Germany); Thompson, Nancy Slater (U.S. Department of Energy, Washington, DC); Hibbs, Russell S. (U.S. Department of Energy, Washington, DC); Gregson, Michael Warren; Lange, Florentin (Gesellschaft fur Anlagen- und Reaktorsicherheit, Germany); Molecke, Martin Alan; Tsai, Han-Chung (Argonne National Laboratory, Argonne, IL)
2005-07-01
This multinational, multi-phase spent fuel sabotage test program is quantifying the aerosol particles produced when the products of a high energy density device (HEDD) interact with and explosively particulate test rodlets that contain pellets of either surrogate materials or actual spent fuel. This program has been underway for several years. This program provides data that are relevant to some sabotage scenarios in relation to spent fuel transport and storage casks, and associated risk assessments. The program also provides significant technical and political benefits in international cooperation. We are quantifying the Spent Fuel Ratio (SFR), the ratio of the aerosol particles released from HEDD-impacted actual spent fuel to the aerosol particles produced from surrogate materials, measured under closely matched test conditions, in a contained test chamber. In addition, we are measuring the amounts, nuclide content, size distribution of the released aerosol materials, and enhanced sorption of volatile fission product nuclides onto specific aerosol particle size fractions. These data are the input for follow-on modeling studies to quantify respirable hazards, associated radiological risk assessments, vulnerability assessments, and potential cask physical protection design modifications. This document includes an updated description of the test program and test components for all work and plans made, or revised, during FY 2004. It also serves as a program status report as of the end of FY 2004. All available test results, observations, and aerosol analyses plus interpretations--primarily for surrogate material Phase 2 tests, series 2/5A through 2/9B, using cerium oxide sintered ceramic pellets are included. Advanced plans and progress are described for upcoming tests with unirradiated, depleted uranium oxide and actual spent fuel test rodlets. This spent fuel sabotage--aerosol test program is coordinated with the international Working Group for Sabotage Concerns of
Chloride accelerated test: influence of silica fume, water/binder ratio and concrete cover thickness
Directory of Open Access Journals (Sweden)
E. Pereira
In developed countries like the UK, France, Italy and Germany, it is estimated that spending on maintenance and repair is practically the same as investment in new construction. Therefore, this paper studies different ways of interfering in the corrosion kinetics using an accelerated corrosion test (CAIM) that simulates chloride attack. The three variables are: concrete cover thickness, use of silica fume, and the water/binder ratio. Analysis of variance of the weight loss of the steel bars and of the chloride content in the concrete cover showed that all three variables have a significant influence. The results also indicate that the addition of silica fume is the path to improved corrosion protection for low water/binder ratio concretes (such as 0.4), while increasing the concrete cover thickness is the most effective solution for high water/binder ratio concretes (above 0.5).
A new efficient statistical test for detecting variability in the gene expression data.
Mathur, Sunil; Dolo, Samuel
2008-08-01
DNA microarray technology allows researchers to monitor the expression of thousands of genes under different conditions. The detection of differential gene expression under two different conditions is very important in microarray studies. Microarray experiments are multi-step procedures, and each step is a potential source of variance. This makes the measurement of variability difficult, because an approach based on gene-by-gene estimation of variance has few degrees of freedom. It is quite possible that the assumption of equal variance for all expression levels does not hold, and the assumption of normality of gene expressions may not hold either. Thus it is essential to have a statistical procedure that is not based on the normality assumption and that can detect genes with differential variance efficiently. The detection of differential gene-expression variance will allow us to identify experimental variables that affect different biological processes and the accuracy of DNA microarray measurements. In this article, a new nonparametric test for scale is developed based on the arctangent of the ratio of two expression levels. Most of the tests available in the literature require the assumption of a normal distribution, which makes them inapplicable in many situations, and it is also hard to verify the suitability of the normal-distribution assumption for a given data set. The proposed test does not require an assumption about the distribution of the underlying population, which makes it more practical and widely applicable. The asymptotic relative efficiency is calculated under different distributions, showing that the proposed test is very powerful when the assumption of normality breaks down. Monte Carlo simulation studies are performed to compare the power of the proposed test with that of some existing procedures. The proposed test is found to be more powerful than commonly used tests under almost all the distributions considered in the study. A
Hall, Steven R.; Walker, Bruce K.
1990-01-01
A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.
Ensuring Positiveness of the Scaled Difference Chi-square Test Statistic.
Satorra, Albert; Bentler, Peter M
2010-06-01
A scaled difference test statistic T̃(d) that can be computed from the standard software output of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (2001). The statistic T̃(d) is asymptotically equivalent to the scaled difference test statistic T̄(d) introduced in Satorra (2000), which requires more involved computations beyond the standard output of SEM software. The test statistic T̃(d) has been widely used in practice, but in some applications it is negative due to negativity of its associated scaling correction. Using the implicit function theorem, this note develops an improved scaling correction leading to a new scaled difference statistic T̄(d) that avoids negative chi-square values.
Rigby, A S
2001-11-10
The odds ratio is an appropriate method of analysis for data in 2 x 2 contingency tables. However, other methods of analysis exist. One such method is based on the chi2 test of goodness-of-fit. Key players in the development of statistical theory include Pearson, Fisher and Yates. Data are presented in the form of 2 x 2 contingency tables and a method of analysis based on the chi2 test is introduced. There are many variations of the basic test statistic, one of which is the chi2 test with Yates' continuity correction. The usefulness (or not) of Yates' continuity correction is discussed. Problems of interpretation when the method is applied to k x m tables are highlighted. Some properties of the chi2 test are illustrated by taking examples from the author's teaching experiences. Journal editors should be encouraged to give both observed and expected cell frequencies so that better information comes out of the chi2 test statistic.
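The 2 x 2 chi2 statistic and Yates' continuity correction discussed above can be sketched as follows. This is the standard textbook computation, with a toy table rather than the author's teaching examples; note how the correction shrinks each |observed - expected| difference by 0.5 before squaring.

```python
def chi2_2x2(a, b, c, d, yates=False):
    """Pearson chi-squared statistic for the 2 x 2 table [[a, b], [c, d]],
    optionally with Yates' continuity correction."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = 0.0
    for obs, i, j in ((a, 0, 0), (b, 0, 1), (c, 1, 0), (d, 1, 1)):
        expected = rows[i] * cols[j] / n   # row total x column total / n
        diff = abs(obs - expected)
        if yates:
            diff = max(diff - 0.5, 0.0)    # continuity correction
        stat += diff ** 2 / expected
    return stat

# Toy table: all expected cell frequencies equal 15.
print(round(chi2_2x2(20, 10, 10, 20), 2))              # → 6.67
print(round(chi2_2x2(20, 10, 10, 20, yates=True), 2))  # → 5.4
```

Reporting both observed and expected frequencies, as the paper recommends, makes it possible to recompute and check exactly this statistic.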
Selecting the most appropriate inferential statistical test for your quantitative research study.
Bettany-Saltikov, Josette; Whittaker, Victoria Jane
2014-06-01
To discuss the issues and processes relating to the selection of the most appropriate statistical test. A review of the basic research concepts together with a number of clinical scenarios is used to illustrate this. Quantitative nursing research generally features the use of empirical data which necessitates the selection of both descriptive and statistical tests. Different types of research questions can be answered by different types of research designs, which in turn need to be matched to a specific statistical test(s). Discursive paper. This paper discusses the issues relating to the selection of the most appropriate statistical test and makes some recommendations as to how these might be dealt with. When conducting empirical quantitative studies, a number of key issues need to be considered. Considerations for selecting the most appropriate statistical tests are discussed and flow charts provided to facilitate this process. When nursing clinicians and researchers conduct quantitative research studies, it is crucial that the most appropriate statistical test is selected to enable valid conclusions to be made. © 2013 John Wiley & Sons Ltd.
A testing procedure for wind turbine generators based on the power grid statistical model
DEFF Research Database (Denmark)
Farajzadehbibalan, Saber; Ramezani, Mohammad Hossein; Nielsen, Peter
2017-01-01
In this study, a comprehensive test procedure is developed to test wind turbine generators with a hardware-in-loop setup. The procedure employs the statistical model of the power grid considering the restrictions of the test facility and system dynamics. Given the model in the latent space...
Common pitfalls in statistical analysis: Understanding the properties of diagnostic tests - Part 1.
Ranganathan, Priya; Aggarwal, Rakesh
2018-01-01
In this article in our series on common pitfalls in statistical analysis, we look at some of the attributes of diagnostic tests (i.e., tests which are used to determine whether an individual does or does not have disease). The next article in this series will focus on further issues related to diagnostic tests.
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.
Lin, Johnny; Bentler, Peter M
2012-01-01
Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
Directory of Open Access Journals (Sweden)
Shirin Iranfar
2013-12-01
Introduction: Test anxiety is a common phenomenon among students and one of the problems of the educational system. The present study was conducted to investigate test anxiety in the vital statistics course and its association with the academic performance of students at Kermanshah University of Medical Sciences. This descriptive-analytical study sampled, through a census method, the students of the nursing and midwifery, paramedical, and health faculties who had taken the vital statistics course. The Sarason questionnaire was used to measure test anxiety, and data were analyzed by descriptive and inferential statistics. The findings indicated no significant correlation between test anxiety and the score in the vital statistics course.
A Modified Jonckheere Test Statistic for Ordered Alternatives in Repeated Measures Design
Directory of Open Access Journals (Sweden)
Hatice Tül Kübra AKDUR
2016-09-01
In this article, a new test based on the Jonckheere test [1] is presented for randomized blocks with dependent observations within blocks. A weighted sum of the per-block statistics is used rather than the unweighted sum proposed by Jonckheere. For Jonckheere-type statistics, the main assumption is the independence of observations within each block; in a repeated measures design this assumption is violated. The weighted Jonckheere-type statistic is therefore studied under dependence, for different variance-covariance structures, with the ordered alternative hypothesis specified within each block of the design. The proposed statistic is also compared to the existing Jonckheere-based test in terms of type I error rates in a Monte Carlo simulation. For strong correlations, the circular bootstrap version of the proposed Jonckheere test provides lower type I error rates.
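The classical statistic that the weighted variant above builds on can be sketched directly from its definition: for ordered groups, count the pairs in which an observation from an earlier group is smaller than one from a later group, with ties counting one half. This shows only the independent-groups Jonckheere-Terpstra statistic, not the article's weighted repeated-measures extension.

```python
def jonckheere_statistic(groups):
    """Jonckheere-Terpstra statistic for an ordered alternative across
    independent groups (listed in hypothesized increasing order):
    counts pairs (x, y) with x from an earlier group, y from a later
    group, and x < y; ties contribute 1/2."""
    j = 0.0
    for i in range(len(groups)):
        for k in range(i + 1, len(groups)):
            for x in groups[i]:
                for y in groups[k]:
                    if x < y:
                        j += 1.0
                    elif x == y:
                        j += 0.5
    return j

# Perfectly increasing groups reach the maximum (all 12 cross-group
# pairs ordered); perfectly decreasing groups give 0.
print(jonckheere_statistic([[1, 2], [3, 4], [5, 6]]))  # → 12.0
print(jonckheere_statistic([[5, 6], [3, 4], [1, 2]]))  # → 0.0
```

Under the null, the statistic is centered at half the number of cross-group pairs (6 here); large values support the ordered alternative.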
Tay, Louis; Drasgow, Fritz
2012-01-01
Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…
The Statistic Test on Influence of Surface Treatment to Fatigue Lifetime with Limited Data
Suhartono, Agus
2009-01-01
Justifications of the influences of two or more parameters on fatigue strength are sometimes problematic due to the scattered nature of fatigue data. Statistical tests can facilitate the evaluation of whether a change in material characteristics resulting from a specific parameter of interest is significant. The statistical tests were applied to fatigue data of AISI 1045 steel specimens. The specimens consisted of an as-received specimen and shot-peened specimens with 15 and 16 Almen intensity as ...
Abraham, Arick Reed A.; Johnson, Kenneth L.; Nichols, Charles T.; Saulsberry, Regor L.; Waller, Jess M.
2012-01-01
Broadband modal acoustic emission (AE) data were acquired during intermittent load hold tensile test profiles on Toray T1000G carbon fiber-reinforced epoxy (C/Ep) single tow specimens. A novel trend seeking statistical method to determine the onset of significant AE was developed, resulting in more linear decreases in the Felicity ratio (FR) with load, potentially leading to more accurate failure prediction. The method developed uses an exponentially weighted moving average (EWMA) control chart. Comparison of the EWMA with previously used FR onset methods, namely the discrete (n), mean (n̄), normalized (n%) and normalized mean (n̄%) methods, revealed the EWMA method yields more consistently linear FR versus load relationships between specimens. Other findings include a correlation between AE data richness and FR linearity based on the FR methods discussed in this paper, and evidence of premature failure at lower than expected loads. Application of the EWMA method should be extended to other composite materials and, eventually, composite components such as composite overwrapped pressure vessels. Furthermore, future experiments should attempt to uncover the factors responsible for infant mortality in C/Ep strands.
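The EWMA control chart idea above can be sketched as a generic change-onset detector: smooth the signal exponentially and flag the first sample that exceeds the time-varying upper control limit. The abstract does not give the actual AE implementation details, so the smoothing weight `lam`, the limit width `L`, and the baseline handling here are assumptions for illustration.

```python
def ewma_onset(data, baseline, lam=0.2, L=3.0):
    """Return the index of the first sample whose EWMA exceeds the
    upper control limit mean + L * sigma_z(t), where (mean, std) are
    estimated from quiet baseline data; None if no onset is found.
    sigma_z(t) uses the standard EWMA variance formula."""
    mean, std = baseline
    z = mean  # start the EWMA at the baseline mean
    for t, x in enumerate(data, start=1):
        z = lam * x + (1.0 - lam) * z
        sigma_z = std * (lam / (2.0 - lam)
                         * (1.0 - (1.0 - lam) ** (2 * t))) ** 0.5
        if z > mean + L * sigma_z:
            return t - 1  # 0-based index of the onset sample
    return None

# Quiet signal for 20 samples, then a sustained shift: onset at index 20.
signal = [0.0] * 20 + [6.0] * 10
print(ewma_onset(signal, baseline=(0.0, 1.0)))  # → 20
```

Applied to cumulative AE activity per load step, the flagged index would mark the load at which significant emission begins, which is the quantity fed into the Felicity ratio.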
Effect of home testing of international normalized ratio on clinical events.
Matchar, David B; Jacobson, Alan; Dolor, Rowena; Edson, Robert; Uyeda, Lauren; Phibbs, Ciaran S; Vertrees, Julia E; Shih, Mei-Chiung; Holodniy, Mark; Lavori, Philip
2010-10-21
Warfarin anticoagulation reduces thromboembolic complications in patients with atrial fibrillation or mechanical heart valves, but effective management is complex, and the international normalized ratio (INR) is often outside the target range. As compared with venous plasma testing, point-of-care INR measuring devices allow greater testing frequency and patient involvement and may improve clinical outcomes. We randomly assigned 2922 patients who were taking warfarin because of mechanical heart valves or atrial fibrillation and who were competent in the use of point-of-care INR devices to either weekly self-testing at home or monthly high-quality testing in a clinic. The primary end point was the time to a first major event (stroke, major bleeding episode, or death). The patients were followed for 2.0 to 4.75 years, for a total of 8730 patient-years of follow-up. The time to the first primary event was not significantly longer in the self-testing group than in the clinic-testing group (hazard ratio, 0.88; 95% confidence interval, 0.75 to 1.04; P=0.14). The two groups had similar rates of clinical outcomes except that the self-testing group reported more minor bleeding episodes. Over the entire follow-up period, the self-testing group had a small but significant improvement in the percentage of time during which the INR was within the target range (absolute difference between groups, 3.8 percentage points; P<0.001). At 2 years of follow-up, the self-testing group also had a small but significant improvement in patient satisfaction with anticoagulation therapy (P=0.002) and quality of life (P<0.001). As compared with monthly high-quality clinic testing, weekly self-testing did not delay the time to a first stroke, major bleeding episode, or death to the extent suggested by prior studies. These results do not support the superiority of self-testing over clinic testing in reducing the risk of stroke, major bleeding episode, and death among patients taking warfarin
A statistical study of high coronal densities from X-ray line-ratios of Mg XI
Linford, G. A.; Lemen, J. R.; Strong, K. T.
1991-01-01
An X-ray line-ratio density diagnostic was applied to 50 Mg XI spectra of flaring active regions on the Sun recorded by the Flat Crystal Spectrometer on the SMM. The plasma density is derived from R, the flux ratio of the forbidden to intercombination lines of the He-like ion, Mg XI. The R ratio for Mg XI is only density sensitive when the electron density exceeds a critical value (about 10¹² cm⁻³), the low-density limit (LDL). This theoretical value of the low-density limit is uncertain as it depends on complex atomic theory. Reported coronal densities above 10¹² cm⁻³ are uncommon. In this study, the distribution of R ratio values about the LDL is estimated and empirical values are derived for the 1st and 2nd moments of this distribution from the 50 Mg XI spectra. From these derived parameters, the percentage of observations indicating densities above this limit is derived.
Liu, Xiaofeng
2003-01-01
This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
Comment on the asymptotics of a distribution-free goodness of fit test statistic.
Browne, Michael W; Shapiro, Alexander
2015-03-01
In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them.
Tests of Full-Scale Helicopter Rotors at High Advancing Tip Mach Numbers and Advance Ratios
Biggers, James C.; McCloud, John L., III; Stroub, Robert H.
2015-01-01
As a continuation of the studies of reference 1, three full-scale helicopter rotors have been tested in the Ames Research Center 40- by 80-foot wind tunnel. All three of them were two-bladed, teetering rotors. One of the rotors incorporated the NACA 0012 airfoil section over the entire length of the blade. This rotor was tested at advance ratios up to 1.05. Both of the other rotors were tapered in thickness and incorporated leading-edge camber over the outer 20 percent of the blade radius. The larger of these rotors was tested at advancing tip Mach numbers up to 1.02. Data were obtained for a wide range of lift and propulsive force, and are presented without discussion.
A test procedure for determining the influence of stress ratio on fatigue crack growth
Fitzgerald, J. H.; Wei, R. P.
1974-01-01
A test procedure is outlined by which the rate of fatigue crack growth over a range of stress ratios and stress intensities can be determined expeditiously using a small number of specimens. This procedure was developed to avoid or circumvent the effects of load interactions on fatigue crack growth, and was used to develop data on a mill annealed Ti-6Al-4V alloy plate. Experimental data suggest that the rates of fatigue crack growth among the various stress ratios may be correlated in terms of an effective stress intensity range at given values of Kmax. This procedure is not to be used, however, for determining the corrosion fatigue crack growth characteristics of alloys when nonsteady-state effects are significant.
International Nuclear Information System (INIS)
Ihara, Hitoshi; Nishimura, Hideo; Ikawa, Koji; Miura, Nobuyuki; Iwanaga, Masayuki; Kusano, Toshitsugu.
1988-03-01
A Near-Real-Time Materials Accountancy (NRTA) system had been developed as an advanced safeguards measure for the PNC Tokai Reprocessing Plant; a minicomputer system for NRTA data processing was designed and constructed. A full-scale field test was carried out as a JASPAS (Japan Support Program for Agency Safeguards) project with the Agency's participation, and the NRTA data processing system was used. Using this field test data, the detection power of statistical tests under real circumstances was investigated for five statistical tests, i.e., a significance test of MUF, the CUMUF test, the average loss test, the MUF residual test and Page's test on MUF residuals. The results show that the CUMUF test, the average loss test, the MUF residual test and Page's test on MUF residuals are useful for detecting a significant loss or diversion. An unmeasured inventory estimation model for the PNC reprocessing plant was developed in this study. Using this model, the field test data from the C-1 to 85-2 campaigns were re-analyzed. (author)
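Of the five tests, Page's test on the MUF residuals is the simplest to sketch. The following is the generic one-sided Page (CUSUM) scheme, not the plant-specific procedure; the reference value k and decision threshold h are illustrative assumptions.

```python
def page_test(residuals, k=0.5, h=4.0):
    """One-sided Page (CUSUM) test.

    Accumulates the sum of (x - k) since the last reset, clipping at
    zero, and signals when the cumulative statistic exceeds h.
    Returns the index of the first alarm, or None if no alarm occurs.
    """
    s = 0.0
    for i, x in enumerate(residuals):
        s = max(0.0, s + x - k)  # reset to zero keeps old noise from masking a new loss
        if s > h:
            return i
    return None
```

The reset at zero is what makes Page's test sensitive to a loss that begins partway through a campaign: earlier in-control residuals do not dilute the accumulated evidence.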
Comparison of small n statistical tests of differential expression applied to microarrays
Directory of Open Access Journals (Sweden)
Lee Anna Y
2009-02-01
Full Text Available Abstract Background DNA microarrays provide data for genome wide patterns of expression between observation classes. Microarray studies often have small sample sizes, however, due to cost constraints or specimen availability. This can lead to poor random error estimates and inaccurate statistical tests of differential expression. We compare the performance of the standard t-test, fold change, and four small n statistical test methods designed to circumvent these problems. We report results of various normalization methods for empirical microarray data and of various random error models for simulated data. Results Three Empirical Bayes methods (CyberT, BRB, and limma t-statistics were the most effective statistical tests across simulated and both 2-colour cDNA and Affymetrix experimental data. The CyberT regularized t-statistic in particular was able to maintain expected false positive rates with simulated data showing high variances at low gene intensities, although at the cost of low true positive rates. The Local Pooled Error (LPE test introduced a bias that lowered false positive rates below theoretically expected values and had lower power relative to the top performers. The standard two-sample t-test and fold change were also found to be sub-optimal for detecting differentially expressed genes. The generalized log transformation was shown to be beneficial in improving results with certain data sets, in particular high variance cDNA data. Conclusion Pre-processing of data influences performance and the proper combination of pre-processing and statistical testing is necessary for obtaining the best results. All three Empirical Bayes methods assessed in our study are good choices for statistical tests for small n microarray studies for both Affymetrix and cDNA data. Choice of method for a particular study will depend on software and normalization preferences.
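The Empirical Bayes idea behind regularized t-statistics of the CyberT type is to stabilise small-n variance estimates by shrinking them toward a prior variance. The sketch below is a deliberately simplified illustration of that idea, not the actual CyberT, BRB or limma implementation; the prior variance s0_sq and the pseudo-count n0 are assumed values.

```python
import statistics

def regularized_t(a, b, s0_sq=0.5, n0=5):
    """Two-sample t-like statistic with variance shrinkage.

    Per-gene pooled variance is blended with a prior variance s0_sq
    using n0 pseudo-observations, which stabilises the denominator
    when each group has only a handful of replicates.
    """
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    # Empirical-Bayes-style blend of prior and observed variance
    shrunk = (n0 * s0_sq + (na + nb - 2) * pooled) / (n0 + na + nb - 2)
    se = (shrunk * (1 / na + 1 / nb)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se
```

With three replicates per group and a tiny observed variance, the shrunken denominator keeps the statistic from exploding, which is exactly the false-positive control the abstract attributes to the regularized tests.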
The TL,NO/TL,CO ratio in pulmonary function test interpretation.
Hughes, J Michael B; van der Lee, Ivo
2013-02-01
The transfer factor of the lung for nitric oxide (T(L,NO)) is a new test for pulmonary gas exchange. The procedure is similar to the already well-established transfer factor of the lung for carbon monoxide (T(L,CO)). Physiologically, T(L,NO) predominantly measures the diffusion pathway from the alveoli to capillary plasma. In the Roughton-Forster equation, T(L,NO) acts as a surrogate for the membrane diffusing capacity (D(M)). The red blood cell resistance to carbon monoxide uptake accounts for ~50% of the total resistance from gas to blood, but it is much less for nitric oxide. T(L,NO) and T(L,CO) can be measured simultaneously with the single breath technique, and D(M) and pulmonary capillary blood volume (V(c)) can be estimated. T(L,NO), unlike T(L,CO), is independent of oxygen tension and haematocrit. The T(L,NO)/T(L,CO) ratio is weighted towards the D(M)/V(c) ratio and to α, where α is the ratio of physical diffusivities of NO to CO (α=1.97). The T(L,NO)/T(L,CO) ratio is increased in heavy smokers, with and without computed tomography evidence of emphysema, and reduced in the voluntary restriction of lung expansion; it is expected to be reduced in chronic heart failure. The T(L,NO)/T(L,CO) ratio is a new index of gas exchange that may give additional insights into pulmonary pathology beyond those offered by the derived values of D(M) and V(c), with their built-in assumptions.
Shaikh, Masood Ali
2017-09-01
Assessment of research articles in terms of study designs used, statistical tests applied and the use of statistical analysis programmes helps determine the research activity profile and trends in the country. In this descriptive study, all original articles published by the Journal of Pakistan Medical Association (JPMA) and the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) in the year 2015 were reviewed in terms of study designs used, application of statistical tests, and the use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in the year 2015. Results of this study indicate that the cross-sectional study design, bivariate inferential statistical analysis entailing comparison between two variables/groups, and the statistical software programme SPSS were the most common study design, inferential statistical analysis, and statistical analysis software programme, respectively. These results echo the previously published assessment of these two journals for the year 2014.
Impact of controlling the sum of error probability in the sequential probability ratio test
Directory of Open Access Journals (Sweden)
Bijoy Kumarr Pradhan
2013-05-01
Full Text Available A generalized modified method is proposed to control the sum of error probabilities in the sequential probability ratio test so as to minimize the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, with the restriction that the sum of error probabilities is a pre-assigned constant, in order to find the optimal sample size. Finally, a comparison is made with the optimal sample size found from the fixed sample size procedure. The results are applied to the cases in which the random variate follows a normal law as well as a Bernoulli law.
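The procedure being modified here is Wald's classical sequential probability ratio test. A minimal sketch for the Bernoulli case, using the usual Wald threshold approximations rather than the paper's generalized error-sum constraint:

```python
import math

def sprt_bernoulli(xs, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 versus H1: p = p1.

    Accumulates the log-likelihood ratio observation by observation and
    stops as soon as it crosses either Wald boundary.  Returns
    ('H0' or 'H1', samples used), or ('continue', n) if the data run out.
    """
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        # per-observation log-likelihood ratio contribution
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(xs)
```

The appeal of the sequential scheme, and the motivation for optimizing its average sample numbers, is visible even in this toy: a clear-cut stream of successes or failures terminates after only a handful of observations.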
Evaluating Two Models of Collaborative Tests in an Online Introductory Statistics Course
Björnsdóttir, Auðbjörg; Garfield, Joan; Everson, Michelle
2015-01-01
This study explored the use of two different types of collaborative tests in an online introductory statistics course. A study was designed and carried out to investigate three research questions: (1) What is the difference in students' learning between using consensus and non-consensus collaborative tests in the online environment?, (2) What is…
P-Value, a true test of statistical significance? a cautionary note ...
African Journals Online (AJOL)
While it's not the intention of the founders of significance testing and hypothesis testing to have the two ideas intertwined as if they are complementary, the inconvenient marriage of the two practices into one coherent, convenient, incontrovertible and misinterpreted practice has dotted our standard statistics textbooks and ...
DEFF Research Database (Denmark)
Steffen, J.H.; Ford, E.B.; Rowe, J.F.
2012-01-01
We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify...... several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies....
International Nuclear Information System (INIS)
Steffen, Jason H.; Ford, Eric B.; Rowe, Jason F.; Borucki, William J.; Bryson, Steve; Caldwell, Douglas A.; Jenkins, Jon M.; Koch, David G.; Sanderfer, Dwight T.; Seader, Shawn; Twicken, Joseph D.; Fabrycky, Daniel C.; Holman, Matthew J.; Welsh, William F.; Batalha, Natalie M.; Ciardi, David R.; Kjeldsen, Hans; Prša, Andrej
2012-01-01
We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.
Statistical alignment: computational properties, homology testing and goodness-of-fit
DEFF Research Database (Denmark)
Hein, J; Wiuf, Carsten; Møller, Martin
2000-01-01
The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical...... alignment algorithms several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity based alignment, to get good initial guesses of the evolutionary parameters and to apply an efficient numerical optimisation algorithm for finding the maximum...... analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test...
Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods
Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan
2016-01-01
The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
The Likelihood Ratio Test of Common Factors under Non-Ideal Conditions
Directory of Open Access Journals (Sweden)
Ana M. Angulo
2011-01-01
Full Text Available The spatial Durbin model occupies an interesting position in spatial econometrics. It is the reduced form of a cross-sectional model with dependence in the errors, and it can be used as the nesting equation in a more general model-selection approach. In particular, the Likelihood Ratio test known as the Common Factors test (LRCOM) can be obtained from this equation. As shown in Mur and Angulo (2006), this test has good properties if the model is correctly specified. However, to the best of our knowledge, there are no references in the literature on the behaviour of this test under non-ideal conditions. Specifically, we study the behaviour of the test under heteroscedasticity, non-normality, endogeneity, dense weight matrices and non-linearity. Our results offer a positive view of the Common Factors test, which appears to be a useful technique in the toolbox of contemporary spatial econometrics.
DEFF Research Database (Denmark)
Schneider, Jesper Wiborg
2012-01-01
In this paper we discuss and question the use of statistical significance tests in relation to university rankings as recently suggested. We outline the assumptions behind and interpretations of statistical significance tests and relate this to examples from the recent SCImago Institutions Rankin...
Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph
2013-11-07
The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the
Price limits and stock market efficiency: Evidence from rolling bicorrelation test statistic
International Nuclear Information System (INIS)
Lim, Kian-Ping; Brooks, Robert D.
2009-01-01
Using the rolling bicorrelation test statistic, the present paper compares the efficiency of stock markets from China, Korea and Taiwan in selected sub-periods with different price limits regimes. The statistical results do not support the claims that restrictive price limits and price limits per se are jeopardizing market efficiency. However, the evidence does not imply that price limits have no effect on the price discovery process, but rather suggests that market efficiency is not merely determined by price limits.
A NEW TEST OF THE STATISTICAL NATURE OF THE BRIGHTEST CLUSTER GALAXIES
International Nuclear Information System (INIS)
Lin, Yen-Ting; Ostriker, Jeremiah P.; Miller, Christopher J.
2010-01-01
A novel statistic is proposed to examine the hypothesis that all cluster galaxies are drawn from the same luminosity distribution (LD). In such a 'statistical model' of galaxy LD, the brightest cluster galaxies (BCGs) are simply the statistical extreme of the galaxy population. Using a large sample of nearby clusters, we show that BCGs in high luminosity clusters (e.g., L_tot ≳ 4 × 10¹¹ h₇₀⁻² L_sun) are unlikely (probability ≤ 3 × 10⁻⁴) to be drawn from the LD defined by all red cluster galaxies more luminous than M_r = -20. On the other hand, BCGs in less luminous clusters are consistent with being the statistical extreme. Applying our method to the second brightest galaxies, we show that they are consistent with being the statistical extreme, which implies that the BCGs are also distinct from non-BCG luminous, red, cluster galaxies. We point out some issues with the interpretation of the classical tests proposed by Tremaine and Richstone (TR) that are designed to examine the statistical nature of BCGs, investigate the robustness of both our statistical test and those of TR against difficulties in photometry of galaxies of large angular size, and discuss the implication of our findings on surveys that use the luminous red galaxies to measure the baryon acoustic oscillation features in the galaxy power spectrum.
Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose
2017-01-01
Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayesian estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches from a previous report and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate impacting on clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause for decrease in power or inflated false positive rate although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis compared to LRT. We proposed a three-step covariate modeling approach for population PK analysis to utilize the advantages of EBEs while overcoming their shortcomings, which allows not only markedly reducing the run time for population PK analysis, but also providing more accurate covariate tests.
Statistical test data selection for reliability evaluation of process computer software
International Nuclear Information System (INIS)
Volkmann, K.P.; Hoermann, H.; Ehrenberger, W.
1976-01-01
The paper presents a concept for converting knowledge about the characteristics of process states into practicable procedures for the statistical selection of test cases in testing process computer software. Process states are defined as vectors whose components consist of values of input variables lying in discrete positions or within given limits. Two approaches for test data selection, based on knowledge about cases of demand, are outlined referring to a purely probabilistic method and to the mathematics of stratified sampling. (orig.) [de
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
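Under binormality the closed-form c-statistic is Φ of the standardized difference of the marker between groups. A short sketch comparing that closed form with the empirical (Mann-Whitney) c-statistic; the sample sizes and distribution parameters below are chosen purely for illustration:

```python
import math
import random

def binormal_c(mu_diff, sd0, sd1):
    """Closed-form c-statistic (AUC) for a marker that is normal in both
    groups: c = Phi(mu_diff / sqrt(sd0^2 + sd1^2))."""
    z = mu_diff / math.sqrt(sd0 ** 2 + sd1 ** 2)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

def empirical_c(cases, controls):
    """Mann-Whitney estimate of the AUC: P(case > control) + 0.5 P(tie)."""
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in cases for y in controls)
    return wins / (len(cases) * len(controls))
```

For a mean difference of one within-group standard deviation the closed form gives c = Φ(1/√2) ≈ 0.76, which illustrates the paper's closing point: a sizeable odds ratio need not translate into strong discrimination once the heterogeneity of the marker is taken into account.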
Karzmark, Peter; Deutsch, Gayle K
2018-01-01
This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics that included measures of sensitivity, specificity, positive and negative predictive power and positive likelihood ratio were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age >55; n = 38). Standardized neuropsychological assessments were administered to all participants and were comprised of the Halstead Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. A comprehensive test battery yielded a moderate increase over base-rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery, for although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.
1986-01-01
The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.
Evaluating statistical tests on OLAP cubes to compare degree of disease.
Ordonez, Carlos; Chen, Zhibo
2009-09-01
Statistical tests represent an important technique used to formulate and validate hypotheses on a dataset. They are particularly useful in the medical domain, where hypotheses link disease with medical measurements, risk factors, and treatment. In this paper, we propose to compute parametric statistical tests treating patient records as elements in a multidimensional cube. We introduce a technique that combines dimension lattice traversal and statistical tests to discover significant differences in the degree of disease within pairs of patient groups. In order to understand a cause-effect relationship, we focus on patient group pairs differing in one dimension. We introduce several optimizations to prune the search space, to discover significant group pairs, and to summarize results. We present experiments showing important medical findings and evaluating scalability with medical datasets.
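The core loop, comparing cube cells whose keys differ in exactly one dimension, can be sketched as follows. This is a simplified illustration of the idea, not the paper's method: it uses a large-sample z approximation in place of the parametric tests and omits the lattice-pruning optimizations, and the dimension keys and 1.96 cutoff are illustrative assumptions.

```python
import statistics
from itertools import combinations

def z_two_sample(a, b):
    """Large-sample two-group z statistic (normal approximation to the
    unpaired t test, kept dependency-free for this sketch)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def significant_pairs(groups, z_crit=1.96):
    """Compare every pair of cube cells whose dimension keys differ in
    exactly one position; return pairs with |z| above z_crit.

    `groups` maps tuples of dimension values to lists of measurements,
    e.g. ("M", "drugA") -> [levels of a disease marker].
    """
    hits = []
    for (ka, va), (kb, vb) in combinations(sorted(groups.items()), 2):
        if sum(x != y for x, y in zip(ka, kb)) == 1:  # one-dimension difference
            z = z_two_sample(va, vb)
            if abs(z) > z_crit:
                hits.append((ka, kb, round(z, 2)))
    return hits
```

Restricting attention to pairs that differ in a single dimension is what lets a significant result be read as a cause-effect hint: only one factor changes between the two patient groups being compared.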
International normalized ratio self-testing and self-management: improving patient outcomes
Directory of Open Access Journals (Sweden)
Pozzi M
2016-10-01
Full Text Available Matteo Pozzi,1 Julia Mitchell,2 Anna Maria Henaine,3 Najib Hanna,4 Ola Safi,4 Roland Henaine2 1Department of Adult Cardiac Surgery, “Louis Pradel” Cardiologic Hospital, Lyon, France; 2Department of Congenital Cardiac Surgery, “Louis Pradel” Cardiologic Hospital, Lyon, France; 3Clinical Pharmacology Unit, Lebanese University, Beirut, Lebanon; 4Pediatric Unit, “Hotel Dieu de France” Hospital, Saint Joseph University, Beirut, Lebanon Abstract: Long-term oral anticoagulation with vitamin K antagonists is a risk factor for hemorrhagic or thromboembolic complications. Periodic laboratory testing of the international normalized ratio (INR) and subsequent dose adjustment are therefore mandatory. The use of home testing devices to measure INR has been suggested as a potential way to improve the comfort and compliance of patients and their families, the frequency of monitoring and, finally, the management and safety of long-term oral anticoagulation. In pediatric patients, the difficulties are highlighted by the increased doses needed to obtain and maintain the therapeutic target INR, more frequent adjustments and INR testing, multiple medications, inconstant nutritional intake, difficult venepunctures, and the need to go to the laboratory for testing (interrupting school and parents’ work attendance). After reviewing the most relevant published studies of INR self-testing and self-management for adult patients and children on oral anticoagulation, these appear to be valuable and effective strategies of INR control. Despite an unclear relationship between INR control and clinical effects, these self-strategies provide better control of the anticoagulant effect, improve the quality of life of patients and their families, and are an appealing solution in terms of cost-effectiveness. Structured education and knowledge evaluation by trained health care professionals are required for children to be able to adjust their dose treatment safely and accurately. However
Cho, Seon; Kim, Suyoung; Cho, Han-Ik
2017-01-01
Background Albuminuria is generally known as a sensitive marker of renal and cardiovascular dysfunction. It can be used to help predict the occurrence of nephropathy and cardiovascular disorders in diabetes. Individuals with prediabetes have a tendency to develop macrovascular and microvascular pathology, resulting in an increased risk of retinopathy, cardiovascular diseases, and chronic renal diseases. We evaluated the clinical value of a strip test for measuring the urinary albumin-to-creatinine ratio (ACR) in prediabetes and diabetes. Methods Spot urine samples were obtained from 226 prediabetic and 275 diabetic subjects during regular health checkups. Urinary ACR was measured by using strip and laboratory quantitative tests. Results The positive rates of albuminuria measured by using the ACR strip test were 15.5% (microalbuminuria, 14.6%; macroalbuminuria, 0.9%) and 30.5% (microalbuminuria, 25.1%; macroalbuminuria, 5.5%) in prediabetes and diabetes, respectively. In the prediabetic population, the sensitivity, specificity, positive predictive value, negative predictive value, and overall accuracy of the ACR strip method were 92.0%, 94.0%, 65.7%, 99.0%, and 93.8%, respectively; the corresponding values in the diabetic population were 80.0%, 91.6%, 81.0%, 91.1%, and 88.0%, respectively. The median [interquartile range] ACR values in the strip tests for measurement ranges of <30, 30-300, and >300 mg/g were 9.4 [6.3-15.4], 46.9 [26.5-87.7], and 368.8 [296.2-575.2] mg/g, respectively, using the laboratory method. Conclusions The ACR strip test showed high sensitivity, specificity, and negative predictive value, suggesting that the test can be used to screen for albuminuria in cases of prediabetes and diabetes. PMID:27834062
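The reported performance figures follow from the standard 2x2 screening-table formulas, sketched below. The example counts are illustrative, not the study's raw data.

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),               # true-positive rate
        "specificity": tn / (tn + fp),               # true-negative rate
        "ppv": tp / (tp + fp),                       # positive predictive value
        "npv": tn / (tn + fn),                       # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# hypothetical counts for a strip test checked against a laboratory method
m = screening_metrics(tp=92, fp=6, fn=8, tn=94)
print({k: round(v, 3) for k, v in m.items()})
```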
VanZante, Dale E.; Podboy, Gary G.; Miller, Christopher J.; Thorp, Scott A.
2009-01-01
A 1/5 scale model rotor representative of a current technology, high bypass ratio, turbofan engine was installed and tested in the W8 single-stage, high-speed, compressor test facility at NASA Glenn Research Center (GRC). The same fan rotor was tested previously in the GRC 9x15 Low Speed Wind Tunnel as a fan module consisting of the rotor and outlet guide vanes mounted in a flight-like nacelle. The W8 test verified that the aerodynamic performance and detailed flow field of the rotor as installed in W8 were representative of the wind tunnel fan module installation. Modifications to W8 were necessary to ensure that this internal flow facility would have a flow field at the test package that is representative of flow conditions in the wind tunnel installation. Inlet flow conditioning was designed and installed in W8 to lower the fan face turbulence intensity to less than 1.0 percent in order to better match the wind tunnel operating environment. Also, inlet bleed was added to thin the casing boundary layer to be more representative of a flight nacelle boundary layer. On the 100 percent speed operating line the fan pressure rise and mass flow rate agreed with the wind tunnel data to within 1 percent. Detailed hot film surveys of the inlet flow, inlet boundary layer and fan exit flow were compared to results from the wind tunnel. The effect of inlet casing boundary layer thickness on fan performance was quantified. Challenges and lessons learned from testing this high flow, low static pressure rise fan in an internal flow facility are discussed.
Operational statistical analysis of the results of computer-based testing of students
Directory of Open Access Journals (Sweden)
Виктор Иванович Нардюжев
2018-12-01
Full Text Available The article is devoted to the statistical analysis of computer-based testing results used to evaluate the educational achievements of students. These issues are relevant because computer-based testing has become an important method in Russian universities for evaluating both the educational achievements of students and the quality of the modern educational process. Applying modern methods and programs for statistical analysis of computer-based testing results, and assessing the quality of developed tests, is a relevant problem for every university teacher. The article shows how the authors solve this problem using their own program “StatInfo”. For several years the program has been successfully applied in a credit system of education at such technological stages as loading computer-based testing protocols into a database, forming queries, and generating reports, lists, and matrices of answers for statistical analysis of the quality of test items. The methodology, experience, and some results of its usage by university teachers are described in the article. Related topics, including test development, models, algorithms, technologies, and software for large-scale computer-based testing, have been discussed by the authors in their previous publications, which are presented in the reference list.
A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test
Becker, D.; Cain, S.
Space object detection is of great importance in the highly dependent yet competitive and congested space domain. The detection algorithms employed play a crucial role in the situational awareness mission to detect, track, characterize, and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel of a spatial image, based on the assumption that the data follow a Gaussian distribution. This paper explores the potential detection performance advantages of operating in the Fourier domain of long-exposure images of small and/or dim space objects taken from ground-based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image contains only background noise. The detection algorithm tests each pixel of the Fourier-transformed images and decides whether an object is present by comparing against the threshold criterion of the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the algorithm currently used in space situational awareness applications to evaluate its merits.
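For intuition, the binary-hypothesis machinery reduces, for a known signal in i.i.d. Gaussian noise, to thresholding a log-likelihood ratio that is linear in a correlation term. The sketch below works in the spatial domain for brevity, whereas the paper applies the same test to Fourier-transformed images; the signal shape, noise level, and seed are assumptions.

```python
import math, random

def log_lr(x, s, sigma):
    """Log-likelihood ratio for H1: x = s + n versus H0: x = n,
    with i.i.d. Gaussian noise n ~ N(0, sigma^2):
    ln L(x) = (s.x)/sigma^2 - (s.s)/(2 sigma^2).
    Declare a detection when this exceeds a threshold set by the
    desired false-alarm probability."""
    sx = sum(si * xi for si, xi in zip(s, x))
    ss = sum(si * si for si in s)
    return sx / sigma ** 2 - ss / (2 * sigma ** 2)

random.seed(1)
sigma = 1.0
s = [0.5] * 16                                   # assumed object signature
noise = [random.gauss(0, sigma) for _ in range(16)]
signal_plus_noise = [si + ni for si, ni in zip(s, noise)]

# with the same noise realization, the H1 data always scores higher
# than the H0 data by exactly s.s / sigma^2 = 4
print(log_lr(signal_plus_noise, s, sigma) > log_lr(noise, s, sigma))  # True
```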
Laboratory test on maximum and minimum void ratio of tropical sand matrix soils
Othman, B. A.; Marto, A.
2018-04-01
Sand is generally known as a loose granular material with a grain size finer than gravel and coarser than silt, and with particle shapes ranging from very angular to well rounded. The presence of various amounts of fines, which also influence the loosest and densest states of sand in its natural condition, is well known to contribute to the deformation and loss of shear strength of soil. This paper presents the effect of a range of fines contents on the minimum void ratio e_min and maximum void ratio e_max of sand matrix soils. Laboratory tests to determine e_min and e_max of sand matrix soil were conducted using a non-standard method introduced by previous researchers. Clean sand was obtained from a natural mining site in Johor, Malaysia. A set of three sand sizes (fine, medium, and coarse) was mixed with 0% to 40% by weight of low-plasticity fines (kaolin). Results showed that e_min and e_max generally decreased as fines content increased from 0% up to about 30%, reached a minimum, and then increased thereafter.
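In practice, e_min and e_max are used together through the relative density, a standard geotechnical index of packing state (a textbook formula, not part of the paper's testing program):

```python
def relative_density(e, e_min, e_max):
    """Relative density Dr = (e_max - e) / (e_max - e_min):
    0 at the loosest state (e = e_max), 1 at the densest (e = e_min)."""
    return (e_max - e) / (e_max - e_min)

# hypothetical in-situ void ratio halfway between the limiting values
print(relative_density(0.75, e_min=0.5, e_max=1.0))  # 0.5
```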
Directory of Open Access Journals (Sweden)
Caroline M Hammerschlag-Peyer
Full Text Available Ontogenetic niche shifts occur across diverse taxonomic groups, and can have critical implications for population dynamics, community structure, and ecosystem function. In this study, we provide a hypothesis-testing framework combining univariate and multivariate analyses to examine ontogenetic niche shifts using stable isotope ratios. This framework is based on three distinct ontogenetic niche shift scenarios, i.e., (1) no niche shift, (2) niche expansion/reduction, and (3) discrete niche shift between size classes. We developed criteria for identifying each scenario, as based on three important resource use characteristics, i.e., niche width, niche position, and niche overlap. We provide an empirical example for each ontogenetic niche shift scenario, illustrating differences in resource use characteristics among different organisms. The present framework provides a foundation for future studies on ontogenetic niche shifts, and also can be applied to examine resource variability among other population sub-groupings (e.g., by sex or phenotype).
Glass-surface area to solution-volume ratio and its implications to accelerated leach testing
International Nuclear Information System (INIS)
Pederson, L.R.; Buckwalter, C.Q.; McVay, G.L.; Riddle, B.L.
1982-10-01
The value of glass surface area to solution volume ratio (SA/V) can strongly influence the leaching rate of PNL 76-68 glass. The leaching rate is largely governed by silicon solubility constraints. Silicic acid in solution reduced the elemental release of all glass components. No components are leached to depths greater than that of silicon. The presence of the reaction layer had no measurable effect on the rate of leaching. Accelerated leach testing is possible since PNL 76-68 glass leaching is solubility-controlled (except at very low SA/V values). A series of glasses leached with SA/V x time = constant will yield identical elemental release
AXIAL RATIO OF EDGE-ON SPIRAL GALAXIES AS A TEST FOR BRIGHT RADIO HALOS
International Nuclear Information System (INIS)
Singal, J.; Jones, E.; Dunlap, H.; Kogut, A.
2015-01-01
We use surface brightness contour maps of nearby edge-on spiral galaxies to determine whether extended bright radio halos are common. In particular, we test a recent model of the spatial structure of the diffuse radio continuum by Subrahmanyan and Cowsik which posits that a substantial fraction of the observed high-latitude surface brightness originates from an extended Galactic halo of uniform emissivity. Measurements of the axial ratio of emission contours within a sample of normal spiral galaxies at 1500 MHz and below show no evidence for such a bright, extended radio halo. Either the Galaxy is atypical compared to nearby quiescent spirals or the bulk of the observed high-latitude emission does not originate from this type of extended halo. (letters)
Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.
Kim, Yuneung; Lim, Johan; Park, DoHwan
2015-11-01
In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic in terms of the expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and an application to an AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
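The uncensored building block is the exact concordant/discordant pair count; the paper's modification replaces these exact counts with their expected values under interval censoring. A plain tau-a sketch of the building block:

```python
def kendall_tau_a(xs, ys):
    """Kendall's tau-a from exact concordant (c) and discordant (d)
    pair counts over all n*(n-1)/2 pairs (no tie correction)."""
    n = len(xs)
    c = d = 0
    for i in range(n):
        for j in range(i + 1, n):
            sign = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if sign > 0:
                c += 1    # concordant: both coordinates ordered the same way
            elif sign < 0:
                d += 1    # discordant: orderings disagree
    return (c - d) / (n * (n - 1) / 2)

print(round(kendall_tau_a([1, 2, 3, 4], [1, 2, 4, 3]), 3))  # 0.667
```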
Directory of Open Access Journals (Sweden)
Chaeyoung Lee
2012-11-01
Full Text Available Epistasis, which may explain a large portion of the phenotypic variation for complex economic traits of animals, has been ignored in many genetic association studies. A Bayesian method was introduced to draw inferences about multilocus genotypic effects based on their marginal posterior distributions obtained by a Gibbs sampler. A simulation study was conducted to provide statistical powers under various unbalanced designs using this method. Data were simulated by combined designs of number of loci, within-genotype variance, and sample size in unbalanced designs with or without null combined-genotype cells. Mean empirical statistical power was estimated for testing the posterior mean estimate of the combined genotype effect. A practical example of obtaining empirical statistical power estimates with a given sample size was provided under unbalanced designs. The empirical statistical powers would be useful for determining an optimal design when interactive associations of multiple loci with complex phenotypes are examined.
Effect of non-normality on test statistics for one-way independent groups designs.
Cribbie, Robert A; Fiksenbaum, Lisa; Keselman, H J; Wilcox, Rand R
2012-02-01
The data obtained from one-way independent groups designs is typically non-normal in form and rarely equally variable across treatment populations (i.e., population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e., the analysis of variance F test) typically provides invalid results (e.g., too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non-normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied either to the usual least squares estimators of central tendency and variability, or the Welch test with robust estimators (i.e., trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non-normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non-normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non-normal. © 2011 The British Psychological Society.
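A sketch of the robust option the abstract recommends: a Welch-type statistic computed from trimmed means and Winsorized variances (Yuen's statistic). Degrees of freedom and p-values are omitted for brevity, and the example data are made up.

```python
import math

def yuen_t(a, b, trim=0.2):
    """Yuen's Welch-type t statistic on trimmed means and Winsorized variances."""
    def pieces(x):
        x = sorted(x)
        g = int(trim * len(x))                       # points trimmed per tail
        trimmed = x[g:len(x) - g]
        winsorized = ([x[g]] * g) + trimmed + ([x[len(x) - g - 1]] * g)
        tm = sum(trimmed) / len(trimmed)             # trimmed mean
        wm = sum(winsorized) / len(winsorized)
        wv = sum((v - wm) ** 2 for v in winsorized) / (len(winsorized) - 1)
        return tm, wv, len(trimmed), len(x)
    tma, wva, ha, na = pieces(a)
    tmb, wvb, hb, nb = pieces(b)
    da = (na - 1) * wva / (ha * (ha - 1))            # squared standard error terms
    db = (nb - 1) * wvb / (hb * (hb - 1))
    return (tma - tmb) / math.sqrt(da + db)

# the outlier 100 barely moves the statistic after 20% trimming
print(round(yuen_t([1, 2, 3, 4, 5, 6, 7, 8, 9, 100],
                   [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]), 2))  # -0.59
```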
Efficient statistical tests to compare Youden index: accounting for contingency correlation.
Chen, Fangyao; Xue, Yuqiang; Tan, Ming T; Chen, Pingyan
2015-04-30
Youden index is widely utilized in studies evaluating accuracy of diagnostic tests and performance of predictive, prognostic, or risk models. However, both one- and two-independent-sample tests on the Youden index have been derived ignoring the dependence (association) between sensitivity and specificity, resulting in potentially misleading findings. Moreover, a paired-sample test on the Youden index has not been available. This article develops efficient statistical inference procedures for one-sample, independent-sample, and paired-sample tests on the Youden index by accounting for contingency correlation, namely the associations between sensitivity and specificity and between paired samples, typically represented in contingency tables. For the one- and two-independent-sample tests, the variances are estimated by the delta method and the statistical inference is based on central limit theory; both are then verified by bootstrap estimates. For the paired-sample test, we show that the estimated covariance of the two sensitivities and specificities can be represented as a function of the kappa statistic, so the test can be readily carried out. We then show the remarkable accuracy of the estimated variance using a constrained optimization approach. Simulation is performed to evaluate the statistical properties of the derived tests. The proposed approaches yield more stable type I errors at the nominal level and substantially higher power (efficiency) than the original Youden's approach. Therefore, the simple explicit large-sample solution performs very well. Because the asymptotic and exact bootstrap computations can be readily implemented with common software like R, the method is broadly applicable to the evaluation of diagnostic tests and model performance. Copyright © 2015 John Wiley & Sons, Ltd.
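For reference, the quantity under test is J = sensitivity + specificity - 1; the sketch below pairs it with the classical large-sample standard error that treats sensitivity and specificity as independent, which is exactly the simplification this paper improves on by modeling the contingency correlation. The counts are hypothetical.

```python
import math

def youden_with_naive_se(tp, fn, tn, fp):
    """Youden's J = Se + Sp - 1 with the classical large-sample standard
    error that ignores the Se-Sp association (the naive approach)."""
    se = tp / (tp + fn)                              # sensitivity
    sp = tn / (tn + fp)                              # specificity
    j = se + sp - 1
    var = se * (1 - se) / (tp + fn) + sp * (1 - sp) / (tn + fp)
    return j, math.sqrt(var)

j, se_j = youden_with_naive_se(tp=80, fn=20, tn=90, fp=10)
print(round(j, 3), round(se_j, 3))  # 0.7 0.05
```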
Reliability Verification of DBE Environment Simulation Test Facility by using Statistics Method
International Nuclear Information System (INIS)
Jang, Kyung Nam; Kim, Jong Soeg; Jeong, Sun Chul; Kyung Heum
2011-01-01
In nuclear power plants, all safety-related equipment, including cables, that must operate in harsh environments should undergo equipment qualification (EQ) according to IEEE Std 323. There are three qualification methods: type testing, operating experience, and analysis. To environmentally qualify safety-related equipment using the type-testing method, rather than analysis or operating experience, a representative sample of the equipment, including interfaces, is subjected to a series of tests. Among these, the Design Basis Event (DBE) environment simulation test is the most important. The DBE simulation test is performed in a DBE simulation test chamber according to the postulated DBE conditions, including specified high-energy line break (HELB), loss of coolant accident (LOCA), and main steam line break (MSLB) conditions, after thermal and radiation aging. Because most DBE conditions involve 100% humidity, high-temperature steam must be used to trace the temperature and pressure of the DBE condition. During the DBE simulation test, if high-temperature steam under high pressure is injected into the DBE test chamber, the temperature and pressure in the chamber rapidly rise above the target values; the temperature and pressure in the chamber therefore keep fluctuating throughout the test as they are driven toward the targets. The fairness and accuracy of test results should be ensured by confirming the performance of the DBE environment simulation test facility. In this paper, statistical methods are used to verify the reliability of the DBE environment simulation test facility.
Page, Robert; Satake, Eiki
2017-01-01
While interest in Bayesian statistics has been growing in statistics education, the treatment of the topic is still inadequate in both textbooks and the classroom. Because so many fields of study lead to careers that involve a decision-making process requiring an understanding of Bayesian methods, it is becoming increasingly clear that Bayesian…
IEEE Std 101-1987: IEEE guide for the statistical analysis of thermal life test data
International Nuclear Information System (INIS)
Anon.
1992-01-01
This revision of IEEE Std 101-1972 describes statistical analyses for data from thermally accelerated aging tests. It explains the basis and use of statistical calculations for an engineer or scientist. Accelerated test procedures usually call for a number of specimens to be aged at each of several temperatures appreciably above normal operating temperatures. High temperatures are chosen to produce specimen failures (according to specified failure criteria) in typically one week to one year. The test objective is to determine the dependence of median life on temperature from the data, and to estimate, by extrapolation, the median life to be expected at service temperature. This guide presents methods for analyzing such data and for comparing test data on different materials
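The extrapolation the guide describes rests on the Arrhenius-type thermal endurance model, ln(life) = a + b/T with T in kelvin. A least-squares sketch under that model (the coefficients and aging temperatures below are made up for illustration, not taken from the standard):

```python
import math

def arrhenius_fit(temps_c, lives_h):
    """Fit ln(life) = a + b/T (T in kelvin) by ordinary least squares.
    Returns (a, b); b > 0 means life shortens as temperature rises."""
    xs = [1.0 / (t + 273.15) for t in temps_c]
    ys = [math.log(h) for h in lives_h]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def predicted_life(a, b, temp_c):
    """Extrapolated median life (hours) at a service temperature."""
    return math.exp(a + b / (temp_c + 273.15))

# synthetic aging data generated from known coefficients, then recovered by the fit
a_true, b_true = -30.0, 15000.0
temps = [180.0, 200.0, 220.0]
lives = [predicted_life(a_true, b_true, t) for t in temps]
a_fit, b_fit = arrhenius_fit(temps, lives)
print(round(predicted_life(a_fit, b_fit, 130.0)))  # service-temperature estimate
```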
A general statistical test for correlations in a finite-length time series.
Hanson, Jeffery A; Yang, Haw
2008-06-07
The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables has been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
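A minimal version of such a test: estimate lag-k autocorrelations with the moving-average estimator and flag lags falling outside the approximate i.i.d. null band of plus or minus z/sqrt(N). The band and the demo series are illustrative; the paper derives exact variance expressions instead of this rough approximation.

```python
import math

def autocorr(x, k):
    """Lag-k sample autocorrelation (moving-average estimator)."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + k] - m) for i in range(n - k))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def significant_lags(x, max_lag, z=1.96):
    """Lags whose autocorrelation falls outside the approximate
    i.i.d. null band of +-z/sqrt(N)."""
    band = z / math.sqrt(len(x))
    return [k for k in range(1, max_lag + 1) if abs(autocorr(x, k)) > band]

# deterministic period-8 series: lags 4 and 8 stand out, lag 2 does not
x = [math.sin(2 * math.pi * i / 8) for i in range(400)]
print(significant_lags(x, 10))
```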
Statistical tests for the Gaussian nature of primordial fluctuations through CBR experiments
International Nuclear Information System (INIS)
Luo, X.
1994-01-01
Information about the physical processes that generate the primordial fluctuations in the early Universe can be gained by testing the Gaussian nature of the fluctuations through cosmic microwave background radiation (CBR) temperature anisotropy experiments. One of the crucial aspects of density perturbations that are produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects left over from an early cosmic phase transition tend to be non-Gaussian. To carry out this test, sophisticated statistical tools are required. In this paper, we will discuss several such statistical tools, including multivariate skewness and kurtosis, Euler-Poincaré characteristics, the three-point temperature correlation function, and Hotelling's T² statistic defined through bispectral estimates of a one-dimensional data set. The effect of noise present in the current data is discussed in detail and the COBE 53 GHz data set is analyzed. Our analysis shows that, on the large angular scales to which COBE is sensitive, the statistics are probably Gaussian. On small angular scales, the importance of Hotelling's T² statistic is stressed, and the minimum sample size required to test Gaussianity is estimated. Although the current data set available from various experiments at half-degree scales is still too small, improvement of the data set by roughly a factor of 2 will be enough to test the Gaussianity statistically. On the arcminute scale, we analyze the recent RING data through bispectral analysis, and the result indicates possible deviation from Gaussianity. Effects of point sources are also discussed. It is pointed out that the Gaussianity problem can be resolved in the near future by ground-based or balloon-borne experiments.
Application of statistical methods to the testing of nuclear counting assemblies
International Nuclear Information System (INIS)
Gilbert, J.P.; Friedling, G.
1965-01-01
This report describes the application of hypothesis-testing theory to the control of the 'statistical purity' and the stability of the counting batteries used for measurements on activation detectors in research reactors. The principles involved and the experimental results obtained at Cadarache on batteries operating with the reactors PEGGY and AZUR are given. (authors) [fr]
Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.
Kieffer, Kevin M.; Thompson, Bruce
As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample-size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…
Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.
Deegear, James
This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and on literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…
Statistical Methods for the detection of answer copying on achievement tests
Sotaridona, Leonardo
2003-01-01
This thesis contains a collection of studies where statistical methods for the detection of answer copying on achievement tests in multiple-choice format are proposed and investigated. Although all methods are suited to detect answer copying, each method is designed to address specific
Pivotal statistics for testing subsets of structural parameters in the IV Regression Model
Kleibergen, F.R.
2000-01-01
We construct a novel statistic to test hypotheses on subsets of the structural parameters in an Instrumental Variables (IV) regression model. We derive the chi-squared limiting distribution of the statistic and show that it has a degrees-of-freedom parameter that is equal to the number of structural
A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.
Liu, Tung; Stone, Courtenay C.
1999-01-01
Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…
A Comparison of Several Statistical Tests of Reciprocity of Self-Disclosure.
Dindia, Kathryn
1988-01-01
Reports the results of a study that used several statistical tests of reciprocity of self-disclosure. Finds little evidence for reciprocity of self-disclosure, and concludes that either reciprocity is an illusion, or that different or more sophisticated methods are needed to detect it. (MS)
Statistical Requirements For Pass-Fail Testing Of Contraband Detection Systems
International Nuclear Information System (INIS)
Gilliam, David M.
2011-01-01
Contraband detection systems for homeland security applications are typically tested for probability of detection (PD) and probability of false alarm (PFA) using pass-fail testing protocols. Test protocols usually require specified values for PD and PFA to be demonstrated at a specified level of statistical confidence CL. Based on a recent more theoretical treatment of this subject [1], this summary reviews the definition of CL and provides formulas and spreadsheet functions for constructing tables of general test requirements and for determining the minimum number of tests required. The formulas and tables in this article may be generally applied to many other applications of pass-fail testing, in addition to testing of contraband detection systems.
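To give the flavor of such tables: in the zero-failure case, the smallest number of trials n that demonstrates PD >= p at confidence level CL satisfies p^n <= 1 - CL, a standard binomial bound (consult the cited treatment for the general case allowing r missed detections).

```python
import math

def min_tests_zero_failures(pd_required, confidence):
    """Smallest n such that n consecutive detections demonstrate
    PD >= pd_required at confidence CL, i.e. pd_required**n <= 1 - CL."""
    return math.ceil(math.log(1.0 - confidence) / math.log(pd_required))

# demonstrating PD >= 0.90 at 95% confidence with no misses allowed
print(min_tests_zero_failures(0.90, 0.95))  # 29
```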
International Nuclear Information System (INIS)
Doherty, W.
2013-01-01
A nebulizer-centric response function model of the analytical inductively coupled argon plasma ion source was used to investigate the statistical frequency distributions and noise reduction factors of simultaneously measured flicker noise limited isotope ion signals and their ratios. The response function model was extended by assuming i) a single gaussian distributed random noise source (nebulizer gas pressure fluctuations) and ii) the isotope ion signal response is a parabolic function of the nebulizer gas pressure. Model calculations of ion signal and signal ratio histograms were obtained by applying the statistical method of translation to the non-linear response function model of the plasma. Histograms of Ni, Cu, Pr, Tl and Pb isotope ion signals measured using a multi-collector plasma mass spectrometer were, without exception, negative skew. Histograms of the corresponding isotope ratios of Ni, Cu, Tl and Pb were either positive or negative skew. There was a complete agreement between the measured and model calculated histogram skew properties. The nebulizer-centric response function model was also used to investigate the effect of non-linear response functions on the effectiveness of noise cancellation by signal division. An alternative noise correction procedure suitable for parabolic signal response functions was derived and applied to measurements of isotope ratios of Cu, Ni, Pb and Tl. The largest noise reduction factors were always obtained when the non-linearity of the response functions was taken into account by the isotope ratio calculation. Possible applications of the nebulizer-centric response function model to other types of analytical instrumentation, large amplitude signal noise sources (e.g., lasers, pumped nebulizers) and analytical error in isotope ratio measurements by multi-collector plasma mass spectrometry are discussed. - Highlights: ► Isotope ion signal noise is modelled as a parabolic transform of a gaussian variable. ► Flicker
Testing statistical self-similarity in the topology of river networks
Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.
2010-01-01
Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN) that combines self-similarity and randomness has been introduced to replicate important topological features observed in real river networks. We investigate if the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distribution of interior and exterior generators are tested to be statistically different and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.
Assessment of the beryllium lymphocyte proliferation test using statistical process control.
Cher, Daniel J; Deubner, David C; Kelsh, Michael A; Chapman, Pamela S; Ray, Rose M
2006-10-01
Despite more than 20 years of surveillance and epidemiologic studies using the beryllium blood lymphocyte proliferation test (BeBLPT) as a measure of beryllium sensitization (BeS) and as an aid for diagnosing subclinical chronic beryllium disease (CBD), improvements in specific understanding of the inhalation toxicology of CBD have been limited. Although epidemiologic data suggest that BeS and CBD risks vary by process/work activity, it has proven difficult to reach specific conclusions regarding the dose-response relationship between workplace beryllium exposure and BeS or subclinical CBD. One possible reason for this uncertainty could be misclassification of BeS resulting from variation in BeBLPT testing performance. The reliability of the BeBLPT, a biological assay that measures beryllium sensitization, is unknown. To assess the performance of four laboratories that conducted this test, we used data from a medical surveillance program that offered testing for beryllium sensitization with the BeBLPT. The study population was workers exposed to beryllium at various facilities over a 10-year period (1992-2001). Workers with abnormal results were offered diagnostic workups for CBD. Our analyses used a standard statistical technique, statistical process control (SPC), to evaluate test reliability. The study design involved a repeated measures analysis of BeBLPT results generated from the company-wide, longitudinal testing. Analytical methods included use of (1) statistical process control charts that examined temporal patterns of variation for the stimulation index, a measure of cell reactivity to beryllium; (2) correlation analysis that compared prior perceptions of BeBLPT instability to the statistical measures of test variation; and (3) assessment of the variation in the proportion of missing test results and how time periods with more missing data influenced SPC findings. During the period of this study, all laboratories displayed variation in test results that
DEFF Research Database (Denmark)
Jones, Allan; Sommerlund, Bo
2007-01-01
The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power-analysis in estimating...... the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement to NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis...
Statistical power analysis a simple and general model for traditional and modern hypothesis tests
Murphy, Kevin R; Wolach, Allen
2014-01-01
Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and g
Testing the statistical isotropy of large scale structure with multipole vectors
International Nuclear Information System (INIS)
Zunckel, Caroline; Huterer, Dragan; Starkman, Glenn D.
2011-01-01
A fundamental assumption in cosmology is that of statistical isotropy - that the Universe, on average, looks the same in every direction in the sky. Statistical isotropy has recently been tested stringently using cosmic microwave background data, leading to intriguing results on large angular scales. Here we apply some of the same techniques used in the cosmic microwave background to the distribution of galaxies on the sky. Using the multipole vector approach, where each multipole in the harmonic decomposition of galaxy density field is described by unit vectors and an amplitude, we lay out the basic formalism of how to reconstruct the multipole vectors and their statistics out of galaxy survey catalogs. We apply the algorithm to synthetic galaxy maps, and study the sensitivity of the multipole vector reconstruction accuracy to the density, depth, sky coverage, and pixelization of galaxy catalog maps.
Observations in the statistical analysis of NBG-18 nuclear graphite strength tests
International Nuclear Information System (INIS)
Hindley, Michael P.; Mitchell, Mark N.; Blaine, Deborah C.; Groenwold, Albert A.
2012-01-01
Highlights: ► Statistical analysis of NBG-18 nuclear graphite strength tests. ► A Weibull distribution and a normal distribution are tested for all data. ► A bimodal distribution in the CS data is confirmed. ► The CS data set has the lowest variance. ► A combined data set is formed and has a Weibull distribution. - Abstract: The purpose of this paper is to report on the selection of a statistical distribution chosen to represent the experimental material strength of NBG-18 nuclear graphite. Three large sets of samples were tested during the material characterisation of the Pebble Bed Modular Reactor and Core Structure Ceramics materials. These sets of samples are tensile strength, flexural strength and compressive strength (CS) measurements. A relevant statistical fit is determined and the goodness of fit is also evaluated for each data set. The data sets are also normalised for ease of comparison, and combined into one representative data set. The validity of this approach is demonstrated. A second failure mode distribution is found in the CS test data. Identifying this failure mode supports similar observations made in the past. The success of fitting the Weibull distribution through the normalised data sets allows us to improve the basis for the estimates of the variability. This could also imply that the variability on the graphite strength for the different strength measures is based on the same flaw distribution and thus a property of the material.
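The distribution-selection workflow described above (fit a Weibull distribution, then evaluate goodness of fit) can be sketched with synthetic data; the shape and scale values below are illustrative stand-ins, not the NBG-18 measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "strength" sample standing in for the graphite data
# (shape=10, scale=25 are invented illustrative values).
strengths = stats.weibull_min.rvs(10, scale=25.0, size=300, random_state=rng)

# Fit a two-parameter Weibull (location fixed at 0, as is usual
# for material strength data).
shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)

# Goodness of fit: Kolmogorov-Smirnov test against the fitted law.
# Note: testing against parameters estimated from the same data makes
# this p-value approximate (anti-conservative).
ks_stat, p_value = stats.kstest(strengths, "weibull_min", args=(shape, loc, scale))
print(f"shape={shape:.2f} scale={scale:.2f} KS p={p_value:.3f}")
```

The same fit/test loop, repeated for a normal distribution, is the basis for the comparison the abstract reports.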
Testing effective quantum gravity with gravitational waves from extreme mass ratio inspirals
International Nuclear Information System (INIS)
Yunes, N; Sopuerta, C F
2010-01-01
Testing deviations from general relativity (GR) is one of the main goals of the proposed Laser Interferometer Space Antenna. For the first time, we consistently compute the generation of gravitational waves from extreme-mass ratio inspirals (stellar compact objects into supermassive black holes) in a well-motivated alternative theory of gravity, which to date remains weakly constrained by double binary pulsar observations. The theory we concentrate on is Chern-Simons (CS) modified gravity, a 4-D, effective theory that is motivated both from string theory and loop-quantum gravity, and which enhances the Einstein-Hilbert action through the addition of a dynamical scalar field and the parity-violating Pontryagin density. We show that although point particles continue to follow geodesics in the modified theory, the background about which they inspiral is a modification to the Kerr metric, which imprints a CS correction on the gravitational waves emitted. CS modified gravitational waves are sufficiently different from the General Relativistic expectation that they lead to significant dephasing after 3 weeks of evolution; such dephasing will probably not prevent detection of these signals, but will instead lead to a systematic error in the determination of parameters. We end with a study of radiation-reaction in the modified theory and show that, to leading-order, energy-momentum emission is not CS modified, except possibly for the subdominant effect of scalar-field emission. The inclusion of radiation-reaction will allow for tests of CS modified gravity with space-borne detectors that might be two orders of magnitude larger than current binary pulsar bounds.
Liu, Rong
2017-01-01
Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. Then we applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
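The accumulate-until-threshold mechanism behind SPRT can be sketched with Wald's classic test between two Gaussian means; the hypotheses and error rates below are illustrative, not the EEG features used in the study:

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test between two Gaussian means.
    Returns (decision, n_used); decision is 'H1', 'H0' or 'undecided'."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio increment for one Gaussian observation
        llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

decision, n = sprt([1.2, 0.8, 1.5, 0.9, 1.1])
print(decision, n)
```

The thresholds are fixed by the tolerated error rates (α, β), which is the explicit stopping-time/error trade-off the abstract highlights.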
Computer processing of 14C data; statistical tests and corrections of data
International Nuclear Information System (INIS)
Obelic, B.; Planinic, J.
1977-01-01
The described computer program calculates the age of samples and performs statistical tests and corrections of data. Data are obtained from the proportional counter, which measures anticoincident pulses per 20-minute interval. After every 9th interval the counter measures the total number of counts per interval. Input data are punched on cards. The output list contains the input data schedule and the following results: mean CPM value, correction of CPM for normal pressure and temperature (NTP), sample age calculation based on 14C half-lives of 5570 and 5730 years, age correction for NTP, dendrochronological corrections and the relative radiocarbon concentration. All results are given with one standard deviation. An input data test (Chauvenet's criterion), a gas purity test, a standard deviation test and a test of the data processor are also included in the program. (author)
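The age-calculation step of such a program follows the standard radiocarbon decay relation t = (T½ / ln 2) · ln(A₀/A); a minimal sketch of that single step (not the full correction pipeline) is:

```python
import math

def radiocarbon_age(rel_concentration, half_life=5570.0):
    """Age in years from the relative 14C concentration A/A0,
    using t = (T_half / ln 2) * ln(A0 / A)."""
    return (half_life / math.log(2)) * math.log(1.0 / rel_concentration)

# A sample retaining half of the modern 14C activity is one half-life old,
# for either of the two half-life conventions mentioned in the abstract:
print(radiocarbon_age(0.5))           # 5570-year half-life
print(radiocarbon_age(0.5, 5730.0))   # 5730-year half-life
```

The program's NTP and dendrochronological corrections would then be applied on top of this raw age.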
DEFF Research Database (Denmark)
Boonstra, Philip S; Gruber, Stephen B; Raymond, Victoria M
2010-01-01
Anticipation, manifested through decreasing age of onset or increased severity in successive generations, has been noted in several genetic diseases. Statistical methods for genetic anticipation range from a simple use of the paired t-test for age of onset restricted to affected parent-child pairs......, and this right truncation effect is more pronounced in children than in parents. In this study, we first review different statistical methods for testing genetic anticipation in affected parent-child pairs that address the issue of bias due to right truncation. Using affected parent-child pair data, we compare...... the issue of multiplex ascertainment and its effect on the different methods. We then focus on exploring genetic anticipation in Lynch syndrome and analyze new data on the age of onset in affected parent-child pairs from families seen at the University of Michigan Cancer Genetics clinic with a mutation...
Taroni, F; Biedermann, A; Bozza, S
2016-02-01
Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons about their advantages and drawbacks are widely available and continue to span over large debates in the literature. More recently, controversial discussion was initiated by an editorial decision of a scientific journal [1] to refuse any paper submitted for publication containing null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on the so called p-values, it is of interest to expose the discussion of this journal's decision within the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Testing a statistical method of global mean paleotemperature estimations in a long climate simulation
Energy Technology Data Exchange (ETDEWEB)
Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik
2001-07-01
Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level-pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods the global mean temperature of the last 600 years has recently been estimated. In this work this method of reconstruction is tested using data from a very long simulation with a climate model. This testing makes it possible to estimate the errors of the reconstructions as a function of the number of proxy records and the time scales at which the estimations are probably reliable. (orig.)
International Nuclear Information System (INIS)
Brodsky, A.
1979-01-01
Some recent reports of Mancuso, Stewart and Kneale claim findings of radiation-produced cancer in the Hanford worker population. These claims are based on statistical computations that use small differences in accumulated exposures between groups dying of cancer and groups dying of other causes; actual mortality and longevity were not reported. This paper presents a statistical method for evaluation of actual mortality and longevity longitudinally over time, as applied in a primary analysis of the mortality experience of the Hanford worker population. Although available, this method was not utilized in the Mancuso-Stewart-Kneale paper. The author's preliminary longitudinal analysis shows that the gross mortality experience of persons employed at Hanford during the 1943-70 interval did not differ significantly from that of certain controls, when both employees and controls were selected from families with two or more offspring and comparisons were matched by age, sex, race and year of entry into employment. This result is consistent with findings reported by Sanders (Health Phys. vol.35, 521-538, 1978). The method utilizes an approximate chi-square (1 D.F.) statistic for testing population subgroup comparisons, as well as the cumulation of chi-squares (1 D.F.) for testing the overall result of a particular type of comparison. The method is available for computer testing of the Hanford mortality data, and could also be adapted to morbidity or other population studies. (author)
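The cumulation step rests on a standard fact: a sum of k independent 1-d.f. chi-square statistics is chi-square distributed with k degrees of freedom. A sketch with invented per-subgroup statistics (not the Hanford data):

```python
from scipy import stats

# Hypothetical 1-d.f. chi-square statistics, one per subgroup comparison
subgroup_chi2 = [0.8, 2.1, 0.3, 1.5, 0.9]

# Each comparison tested on its own ...
p_each = [stats.chi2.sf(x, df=1) for x in subgroup_chi2]

# ... and the overall result: the cumulated statistic is referred to a
# chi-square distribution with as many d.f. as comparisons summed.
total = sum(subgroup_chi2)
p_overall = stats.chi2.sf(total, df=len(subgroup_chi2))
print(f"total={total:.1f}  overall p={p_overall:.3f}")
```

This is how a set of individually unremarkable subgroup comparisons can be combined into a single overall test.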
Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.
Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg
2009-11-01
G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
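For the single-sample correlation case, power can be approximated with the Fisher z transform; this is a textbook approximation for orientation, not G*Power's exact routine:

```python
import math
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    """Approximate power of the two-sided test of rho = 0 for a Pearson
    correlation, via the Fisher z transform: atanh(r) is roughly normal
    with standard error 1/sqrt(n - 3)."""
    zr = math.atanh(r) * math.sqrt(n - 3)
    zcrit = norm.ppf(1 - alpha / 2)
    return norm.cdf(zr - zcrit) + norm.cdf(-zr - zcrit)

# Power for a medium correlation (r = 0.3) at a conventional sample size
print(round(correlation_power(0.3, 84), 2))
```

A dedicated tool such as G*Power uses the exact sampling distribution, but the approximation above is usually close for moderate n.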
Testing statistical significance scores of sequence comparison methods with structure similarity
Directory of Open Access Journals (Sweden)
Leunissen Jack AM
2006-10-01
Background In recent years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B
2017-08-01
Objectives Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in the StatXact, SAS, and R PropCI package), and (4) the bootstrapping technique. Results The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease, and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0,0.073), StatXact (0,0.064), SAS Score method (0,0.057), R PropCI (0,0.062), and bootstrap (0,0.048). Similar trends were observed for other sample sizes. Conclusions When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and accompanying free open-source R package were developed to yield realistic estimates easily. This
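The core of the method can be sketched in a few lines: locate the lowest population sensitivity whose median sample sensitivity is 100%, then bootstrap the negative LR from binomial resamples. This is a simplified sketch of the idea, not the bootLR package itself; sample sizes and specificity mirror the worked example in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
n_disease, n_healthy, specificity = 100, 100, 0.60

# Lowest population sensitivity whose *median* sample sensitivity is 100%:
# the smallest p with P(Binomial(n, p) = n) >= 0.5, i.e. p = 0.5**(1/n).
sens_pop = 0.5 ** (1.0 / n_disease)   # ~0.9931 for n = 100, as in the abstract

# Bootstrap the negative likelihood ratio (1 - sensitivity) / specificity
B = 10_000
sens_star = rng.binomial(n_disease, sens_pop, size=B) / n_disease
spec_star = rng.binomial(n_healthy, specificity, size=B) / n_healthy
neg_lr = (1.0 - sens_star) / spec_star
lo, hi = np.quantile(neg_lr, [0.025, 0.975])
print(f"95% CI for negative LR: ({lo:.3f}, {hi:.3f})")
```

Because roughly half of the resamples reproduce 100% sensitivity, the interval's lower bound sits at zero while the upper bound stays finite, avoiding the overly wide CIs the abstract criticizes.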
DEFF Research Database (Denmark)
Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik
1995-01-01
Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate...... and the growth of the biomass are described by the Monod model consisting of two nonlinear coupled first-order differential equations. The objective of this study was to estimate the kinetic parameters in the Monod model and to test whether the parameters from the three identical experiments have the same values....... Estimation of the parameters was obtained using an iterative maximum likelihood method and the test used was an approximative likelihood ratio test. The test showed that the three sets of parameters were identical only on a 4% alpha level....
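The likelihood ratio test used to compare the three parameter sets follows the generic recipe: twice the log-likelihood difference between the full fit (separate parameters per experiment) and the pooled fit (common parameters), referred to a chi-square distribution. The log-likelihoods and parameter counts below are hypothetical placeholders, not values from the study:

```python
from scipy import stats

# Hypothetical maximised log-likelihoods (illustrative numbers only)
ll_full = -120.4     # separate Monod parameters for each experiment
ll_pooled = -126.1   # one common parameter set for all three

# Degrees of freedom: e.g. 2 Monod parameters x 3 experiments vs 2 shared
df = 3 * 2 - 2

# Approximate likelihood ratio test
lr = 2 * (ll_full - ll_pooled)
p = stats.chi2.sf(lr, df)
print(f"LR={lr:.1f} df={df} p={p:.4f}")
```

A p-value near the abstract's 4% level would, as reported, make the hypothesis of identical parameters borderline.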
Berti, Matteo; Corsini, Alessandro; Franceschini, Silvia; Iannacone, Jean Pascal
2013-04-01
The application of spaceborne synthetic aperture radar interferometry has progressed, over the last two decades, from the pioneering use of single interferograms for analyzing changes on the earth's surface to the development of advanced multi-interferogram techniques to analyze any sort of natural phenomenon which involves movements of the ground. The success of multi-interferogram techniques in the analysis of natural hazards such as landslides and subsidence is widely documented in the scientific literature and demonstrated by the consensus among the end-users. Despite the great potential of this technique, radar interpretation of slope movements is generally based on the sole analysis of average displacement velocities, while the information contained in multi-interferogram time series is often overlooked if not completely neglected. The underuse of PS time series is probably due to the detrimental effect of residual atmospheric errors, which leave the PS time series with erratic, irregular fluctuations that are often difficult to interpret, and also to the difficulty of performing a visual, supervised analysis of the time series for a large dataset. In this work we present a procedure for automatic classification of PS time series based on a series of statistical characterization tests. The procedure classifies the time series into six distinctive target trends (0=uncorrelated; 1=linear; 2=quadratic; 3=bilinear; 4=discontinuous without constant velocity; 5=discontinuous with change in velocity) and retrieves for each trend a series of descriptive parameters which can be efficiently used to characterize the temporal changes of ground motion. The classification algorithms were developed and tested using an ENVISAT dataset available in the frame of the EPRS-E project (Extraordinary Plan of Environmental Remote Sensing) of the Italian Ministry of Environment (track "Modena", Northern Apennines). This dataset was generated using standard processing, then the
Directory of Open Access Journals (Sweden)
Melissa Coulson
2010-07-01
A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.
International Nuclear Information System (INIS)
Gershgorin, B.; Majda, A.J.
2011-01-01
A statistically exactly solvable model for passive tracers is introduced as a test model for the authors' Nonlinear Extended Kalman Filter (NEKF) as well as other filtering algorithms. The model involves a Gaussian velocity field and a passive tracer governed by the advection-diffusion equation with an imposed mean gradient. The model has direct relevance to engineering problems such as the spread of pollutants in the air or contaminants in the water as well as climate change problems concerning the transport of greenhouse gases such as carbon dioxide with strongly intermittent probability distributions consistent with the actual observations of the atmosphere. One of the attractive properties of the model is the existence of the exact statistical solution. In particular, this unique feature of the model provides an opportunity to design and test fast and efficient algorithms for real-time data assimilation based on rigorous mathematical theory for a turbulence model problem with many active spatiotemporal scales. Here, we extensively study the performance of the NEKF which uses the exact first and second order nonlinear statistics without any approximations due to linearization. The role of partial and sparse observations, the frequency of observations and the observation noise strength in recovering the true signal, its spectrum, and fat tail probability distribution are the central issues discussed here. The results of our study provide useful guidelines for filtering realistic turbulent systems with passive tracers through partial observations.
Flynn, Clare; Pickering, Kenneth E.; Crawford, James H.; Lamsol, Lok; Krotkov, Nickolay; Herman, Jay; Weinheimer, Andrew; Chen, Gao; Liu, Xiong; Szykman, James;
2014-01-01
To investigate the ability of column (or partial column) information to represent surface air quality, results of linear regression analyses between surface mixing ratio data and column abundances for O3 and NO2 are presented for the July 2011 Maryland deployment of the DISCOVER-AQ mission. Data collected by the P-3B aircraft, ground-based Pandora spectrometers, Aura/OMI satellite instrument, and simulations for July 2011 from the CMAQ air quality model during this deployment provide a large and varied data set, allowing this problem to be approached from multiple perspectives. O3 columns typically exhibited a statistically significant and high degree of correlation with surface data (R(sup 2) > 0.64) in the P- 3B data set, a moderate degree of correlation (0.16 analysis.
Examining publication bias—a simulation-based evaluation of statistical tests on publication bias
Directory of Open Access Journals (Sweden)
Andreas Schneck
2017-11-01
Background Publication bias is a form of scientific misconduct. It threatens the validity of research results and the credibility of science. Although several tests on publication bias exist, no in-depth evaluations are available that examine which test performs best for different research settings. Methods Four tests on publication bias, Egger’s test (FAT), p-uniform, the test of excess significance (TES), as well as the caliper test, were evaluated in a Monte Carlo simulation. Two different types of publication bias and its degree (0%, 50%, 100%) were simulated. The type of publication bias was defined either as file-drawer, meaning the repeated analysis of new datasets, or p-hacking, meaning the inclusion of covariates in order to obtain a significant result. In addition, the underlying effect (β = 0, 0.5, 1, 1.5), effect heterogeneity, the number of observations in the simulated primary studies (N = 100, 500), and the number of observations for the publication bias tests (K = 100, 1,000) were varied. Results All tests evaluated were able to identify publication bias both in the file-drawer and p-hacking condition. The false positive rates were, with the exception of the 15%- and 20%-caliper test, unbiased. The FAT had the largest statistical power in the file-drawer conditions, whereas under p-hacking the TES was, except under effect heterogeneity, slightly better. The caliper tests were, however, inferior to the other tests under effect homogeneity and had a decent statistical power only in conditions with 1,000 primary studies. Discussion The FAT is recommended as a test for publication bias in standard meta-analyses with no or only small effect heterogeneity. If two-sided publication bias is suspected, as well as under p-hacking, the TES is the first alternative to the FAT. The 5%-caliper test is recommended under conditions of effect heterogeneity and a large number of primary studies, which may be found if publication bias is examined in a
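Egger's test (the FAT) is a weighted regression: the standardized effect is regressed on precision, and a nonzero intercept signals small-study asymmetry. A minimal sketch on a simulated unbiased meta-analytic data set (an illustration, not the paper's Monte Carlo design):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated meta-analysis with NO publication bias: K studies,
# true effect 0.5, study standard errors varying between studies.
K = 100
se = rng.uniform(0.05, 0.5, size=K)
effect = rng.normal(0.5, se)

# Egger's test: regress effect/se on 1/se. Under no small-study
# effects the intercept of this regression should be zero, and
# the slope estimates the pooled effect.
res = stats.linregress(1.0 / se, effect / se)
t_int = res.intercept / res.intercept_stderr
p_int = 2 * stats.t.sf(abs(t_int), K - 2)
print(f"slope (pooled effect) = {res.slope:.3f}, intercept p = {p_int:.3f}")
```

Injecting file-drawer bias (dropping non-significant simulated studies before the regression) shifts the intercept away from zero, which is what gives the FAT its power in those conditions.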
Association testing for next-generation sequencing data using score statistics
DEFF Research Database (Denmark)
Skotte, Line; Korneliussen, Thorfinn Sand; Albrechtsen, Anders
2012-01-01
computationally feasible due to the use of score statistics. As part of the joint likelihood, we model the distribution of the phenotypes using a generalized linear model framework, which works for both quantitative and discrete phenotypes. Thus, the method presented here is applicable to case-control studies...... of genotype calls into account have been proposed; most require numerical optimization which for large-scale data is not always computationally feasible. We show that using a score statistic for the joint likelihood of observed phenotypes and observed sequencing data provides an attractive approach...... to association testing for next-generation sequencing data. The joint model accounts for the genotype classification uncertainty via the posterior probabilities of the genotypes given the observed sequencing data, which gives the approach higher power than methods based on called genotypes. This strategy remains...
Statistical auditing and randomness test of lotto k/N-type games
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.
2008-11-01
One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
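A frequency audit of this kind can be sketched by simulating a draw history and testing the per-number counts against their expected value k/N per draw; the 6/49 parameters below are illustrative, and the simulated history stands in for real archives:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
N, k, draws = 49, 6, 2000

# Simulated history of a k/N lottery: k numbers per draw, no replacement.
# For the indicator X_i that number i is drawn: E[X_i] = k/N and, for
# i != j, Cov(X_i, X_j) = -k(N-k) / (N^2 (N-1)).
history = np.array([rng.choice(N, size=k, replace=False) for _ in range(draws)])

# Each of the N numbers should appear draws*k/N times on average.
# The chi-square here ignores the slight negative within-draw covariance,
# as is common in this kind of audit.
observed = np.bincount(history.ravel(), minlength=N)
expected = np.full(N, draws * k / N)
chi2, p = stats.chisquare(observed, expected)
print(f"chi2={chi2:.1f}  p={p:.3f}")
```

A very small p-value in such an audit would flag a draw history inconsistent with the hypergeometric model.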
Shan, HuanYuan; Liu, Xiangkun; Hildebrandt, Hendrik; Pan, Chuzhong; Martinet, Nicolas; Fan, Zuhui; Schneider, Peter; Asgari, Marika; Harnois-Déraps, Joachim; Hoekstra, Henk; Wright, Angus; Dietrich, Jörg P.; Erben, Thomas; Getman, Fedor; Grado, Aniello; Heymans, Catherine; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Puddu, Emanuella; Radovich, Mario; Wang, Qiao
2018-02-01
This paper is the first of a series of papers constraining cosmological parameters with weak lensing peak statistics using ~450 deg² of imaging data from the Kilo Degree Survey (KiDS-450). We measure high signal-to-noise ratio (SNR: ν) weak lensing convergence peaks in the range of 3 < ν < 5, and employ theoretical models to derive expected values. These models are validated using a suite of simulations. We take into account two major systematic effects, the boost factor and the effect of baryons on the mass-concentration relation of dark matter haloes. In addition, we investigate the impacts of other potential astrophysical systematics including the projection effects of large-scale structures, intrinsic galaxy alignments, as well as residual measurement uncertainties in the shear and redshift calibration. Assuming a flat Λ cold dark matter model, we find constraints of S_8 = σ_8(Ω_m/0.3)^{0.5} = 0.746^{+0.046}_{-0.107} according to the degeneracy direction of the cosmic shear analysis and Σ_8 = σ_8(Ω_m/0.3)^{0.38} = 0.696^{+0.048}_{-0.050} based on the derived degeneracy direction of our high-SNR peak statistics. The difference between the power indices of S_8 and Σ_8 indicates that combining cosmic shear with peak statistics has the potential to break the degeneracy in σ_8 and Ω_m. Our results are consistent with the cosmic shear tomographic correlation analysis of the same data set and ~2σ lower than the Planck 2016 results.
DEFF Research Database (Denmark)
Minty, Ross; Thomason, James L.; Petersen, Helga Nørgaard
2015-01-01
This paper focuses on an investigation into the role of the epoxy resin:curing agent ratio in the composite interfacial shear strength of glass fibre composites. The procedure involved changing the percentage of curing agent (triethylenetetramine [TETA]) used in the mixture, with several different percentages used, ranging from 4% up to 30%, including the stoichiometric ratio. Using the microbond test, it was found that there may exist a relationship between the epoxy resin to curing agent ratio and the level of adhesion between the reinforcing fibre and the polymer matrix of the composite.
A Note on Comparing the Power of Test Statistics at Low Significance Levels.
Morris, Nathan; Elston, Robert
2011-01-01
It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10^-8, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
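The shrinking relative cost of extra degrees of freedom at small alpha can be seen directly from chi-square critical values, which have closed forms for 1 and 2 degrees of freedom; a minimal sketch (helper name is ours):

```python
import math
from statistics import NormalDist

def chi2_crit(df, alpha):
    """Chi-square critical value; closed forms exist for df = 1 and df = 2."""
    if df == 1:
        return NormalDist().inv_cdf(1 - alpha / 2) ** 2   # square of a normal quantile
    if df == 2:
        return -2.0 * math.log(alpha)                     # exponential tail of chi2(2)
    raise ValueError("only df = 1 or 2 are handled in this sketch")

# The 2-df test needs a larger statistic to reject, but the penalty shrinks as
# alpha falls: crit(2)/crit(1) is about 1.56 at alpha = 0.05, only about 1.13
# at the genome-wide level alpha = 5e-8.
ratios = {a: chi2_crit(2, a) / chi2_crit(1, a) for a in (0.05, 5e-8)}
```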
Testing of January Anomaly at ISE-100 Index with Power Ratio Method
Directory of Open Access Journals (Sweden)
Şule Yüksel Yiğiter
2015-12-01
According to the Efficient Market Hypothesis, no investor with access to the same information as all others can earn consistently higher returns. However, several studies have demonstrated an effect of time on returns, reaching conclusions that conflict with the hypothesis. In this context, one of the most important documented anomalies is the January anomaly. In this study, we investigate whether there is a January effect in the BIST-100 index over the 2008-2014 period by using the power ratio method. The analysis results confirm the presence of the January anomaly in the BIST-100 index within the specified period. Keywords: Efficient Markets Hypothesis, January Month Anomaly, Power Ratio Method. JEL Classification Codes: G1, C22
IEEE Std 101-1972: IEEE guide for the statistical analysis of thermal life test data
International Nuclear Information System (INIS)
Anon.
1992-01-01
Procedures for estimating the thermal life of electrical insulation systems and materials call for life tests at several temperatures, usually well above the expected normal operating temperature. By the selection of high temperatures for the tests, life of the insulation samples will be terminated, according to some selected failure criterion or criteria, within relatively short times -- typically one week to one year. The result of these thermally accelerated life tests will be a set of data of life values for a corresponding set of temperatures. Usually the data consist of a set of life values for each of two to four (occasionally more) test temperatures, 10 C to 25 C apart. The objective then is to establish from these data the mean life values at each temperature and the functional dependence of life on temperature, as well as the statistical consistency and the confidence to be attributed to the mean life values and the functional life-temperature dependence. The purpose of this guide is to assist in this objective and to give guidance for comparing the results of tests on different materials and of different tests on the same materials
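The life-temperature dependence described above is conventionally modelled as Arrhenius-type, with ln(life) linear in inverse absolute temperature. A hedged sketch with made-up accelerated-test data (function name and numbers are ours, not from the guide):

```python
import math

def fit_arrhenius(temps_c, lives_h):
    """Least-squares fit of ln(life) = a + b / T with T in kelvin, the usual
    Arrhenius-type thermal endurance model."""
    xs = [1.0 / (t + 273.15) for t in temps_c]
    ys = [math.log(life) for life in lives_h]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical mean lives (hours) from three accelerated test temperatures.
temps = [180.0, 200.0, 220.0]
lives = [4000.0, 1000.0, 300.0]
a, b = fit_arrhenius(temps, lives)
predict = lambda t_c: math.exp(a + b / (t_c + 273.15))   # extrapolated mean life
```

Extrapolating `predict` down toward the operating temperature is exactly the step whose statistical confidence the guide addresses.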
Using the method of statistic tests for determining the pressure in the UNC-600 vacuum chamber
International Nuclear Information System (INIS)
Kiver, A.M.; Mirzoev, K.G.
1998-01-01
The aim of the paper is to simulate the process of pumping out the UNC-600 vacuum chamber. The simulation is carried out by the Monte Carlo statistical test method. It is shown that the pressure value in every liner of the chamber may be determined from the pressure in the pump branch pipe, which is in turn determined by the discharge current of that pump. Therefore, it is possible to specify more precisely the working pressure in the ion guide of the UNC-600 vacuum chamber
DEFF Research Database (Denmark)
Holbech, Henrik
...... was twofold. First, we refined the statistical analyses of reproduction data, accounting for mortality all along the test period. The variable "number of clutches/eggs produced per individual-day" was used for ECx modelling, as classically done in epidemiology in order to account for the time-contribution of each individual to the measured response. Furthermore, the combination of a Gamma-Poisson stochastic part with a Weibull concentration-response model allowed accounting for the inter-replicate variability. Second, we checked for the possibility of optimizing the initial experimental design through......
Selection of hidden layer nodes in neural networks by statistical tests
International Nuclear Information System (INIS)
Ciftcioglu, Ozer
1992-05-01
A statistical methodology for selection of the number of hidden layer nodes in feedforward neural networks is described. The method considers the network as an empirical model for the experimental data set subject to pattern classification, so that the selection process becomes model estimation through parameter identification. The solution is obtained from an overdetermined estimation problem using a nonlinear least-squares minimization technique. The number of hidden layer nodes is determined as a result of hypothesis testing. Accordingly, a network structure that is redundant with respect to the number of parameters is avoided, and the classification error is kept to a minimum. (author). 11 refs.; 4 figs.; 1 tab
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.
Person-fit research in the context of paper-and-pencil tests is reviewed, and some specific problems regarding person fit in the context of computerized adaptive testing (CAT) are discussed. Some new methods are proposed to investigate person fit in a CAT environment. These statistics are based on Statistical Process Control (SPC) theory. A…
Wu, Hao
2018-05-01
In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ2 distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotically scaled χ2 distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.
A test of the mean density approximation for Lennard-Jones mixtures with large size ratios
International Nuclear Information System (INIS)
Ely, J.F.
1986-01-01
The mean density approximation for mixture radial distribution functions plays a central role in modern corresponding-states theories. This approximation is reasonably accurate for systems that do not differ widely in size and energy ratios and which are nearly equimolar. As the size ratio increases, however, or if one approaches an infinite dilution of one of the components, the approximation becomes progressively worse, especially for the small molecule pair. In an attempt to better understand and improve this approximation, isothermal molecular dynamics simulations have been performed on a series of Lennard-Jones mixtures. Thermodynamic properties, including the mixture radial distribution functions, have been obtained at seven compositions ranging from 5 to 95 mol%. In all cases the size ratio was fixed at two, and three energy ratios were investigated: ε22/ε11 = 0.5, 1.0, and 1.5. The results of the simulations are compared with the mean density approximation, and a modification to integrals evaluated with the mean density approximation is proposed
Dynamic moduli and damping ratios of soil evaluated from pressuremeter test
International Nuclear Information System (INIS)
Yoshida, Yasuo; Ezashi, Yasuyuki; Kokusho, Takaji; Nishi, Yoshikazu
1984-01-01
Dynamic and static properties of soils are investigated using newly developed in-situ test equipment, which imposes dynamic repeated pressure on the borehole wall at any depth, covering a wide range of strain amplitude. This paper mainly describes the shear modulus and damping characteristics of soils obtained by using the equipment in several sites covering a wide variety of soils. The test results are compared with those obtained by other test methods such as the dynamic triaxial test, the simple shear test and the shear wave velocity test, and discussions are made with regard to their relationships to each other, which demonstrates the efficiency of this in-situ test. (author)
Statistical testing and power analysis for brain-wide association study.
Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng
2018-04-05
The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, multiple-comparison correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis tests using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR) control, it can reduce the false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need for non-parametric permutation to correct for multiple comparisons; thus, it can efficiently tackle large datasets with high-resolution fMRI images. The utility of our method is shown in a case-control study: our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Hopkins, Philip F.; Hernquist, Lars
2009-01-01
We use the observed distribution of Eddington ratios as a function of supermassive black hole (BH) mass to constrain models of quasar/active galactic nucleus (AGN) lifetimes and light curves. Given the observed (well constrained) AGN luminosity function, a particular model for AGN light curves L(t) or, equivalently, the distribution of AGN lifetimes (time above a given luminosity t(>L)) translates directly and uniquely (without further assumptions) to a predicted distribution of Eddington ratios at each BH mass. Models for self-regulated BH growth, in which feedback produces a self-regulating 'decay' or 'blowout' phase after the AGN reaches some peak luminosity/BH mass and begins to expel gas and shut down accretion, make specific predictions for the light curves/lifetimes, distinct from, e.g., the expected distribution if AGN simply shut down by gas starvation (without feedback) and very different from the prediction of simple phenomenological 'light bulb' scenarios. We show that the present observations of the Eddington ratio distribution, spanning nearly 5 orders of magnitude in Eddington ratio, 3 orders of magnitude in BH mass, and redshifts z = 0-1, agree well with the predictions of self-regulated models, and rule out phenomenological 'light bulb' or pure exponential models, as well as gas starvation models, at high significance (∼5σ). We also compare with observations of the distribution of Eddington ratios at a given AGN luminosity, and find similar good agreement (but show that these observations are much less constraining). We fit the functional form of the quasar lifetime distribution and provide these fits for use, and show how the Eddington ratio distributions place precise, tight limits on the AGN lifetimes at various luminosities, in agreement with model predictions. We compare with independent estimates of episodic lifetimes and use this to constrain the shape of the typical AGN light curve, and provide simple analytic fits to these for use in
Directory of Open Access Journals (Sweden)
Dominic Beaulieu-Prévost
2006-03-01
For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to one based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic undetermination). The demonstration includes a complete example.
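The three-value logic described above can be sketched as a small decision rule; this is a normal-approximation sketch with names and data of our choosing, not the article's own procedure:

```python
from statistics import NormalDist, mean, stdev

def ci_verdict(sample, minimal_effect, conf=0.95):
    """Three-value logic via a (normal-approximation) confidence interval:
    'present'      -- the whole CI clears the minimal substantial effect,
    'absent'       -- the whole CI lies below it,
    'undetermined' -- the CI straddles it."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * stdev(sample) / len(sample) ** 0.5
    lo, hi = mean(sample) - half, mean(sample) + half
    if lo >= minimal_effect:
        return "present"
    if hi < minimal_effect:
        return "absent"
    return "undetermined"

effects = [0.8, 1.2] * 50   # made-up effect measurements, mean 1.0
```

The same interval yields different verdicts depending on where the analyst sets the minimal substantial effect, which is exactly the contextual judgement the article argues NHT suppresses.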
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-08
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise: based on statistical hypothesis testing in the frequency domain, it evaluates the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitiveness of symptom parameters (SPs) for condition diagnosis. In this way, good SPs that have high sensitiveness for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.
Estimation of In Situ Stresses with Hydro-Fracturing Tests and a Statistical Method
Lee, Hikweon; Ong, See Hong
2018-03-01
At great depths, where borehole-based field stress measurements such as hydraulic fracturing are challenging due to difficult downhole conditions or prohibitive costs, in situ stresses can be indirectly estimated using wellbore failures such as borehole breakouts and/or drilling-induced tensile failures detected by an image log. As part of such efforts, a statistical method has been developed in which borehole breakouts detected on an image log are used for this purpose (Song et al. in Proceedings on the 7th international symposium on in situ rock stress, 2016; Song and Chang in J Geophys Res Solid Earth 122:4033-4052, 2017). The method employs a grid-searching algorithm in which the least and maximum horizontal principal stresses (Sh and SH) are varied, and the corresponding simulated depth-related breakout width distribution as a function of the breakout angle (θB = 90° − half of breakout width) is compared to that observed along the borehole to determine the set of Sh and SH having the lowest misfit between them. An important advantage of the method is that Sh and SH can be estimated simultaneously in vertical wells. To validate the statistical approach, the method is applied to a vertical hole where a set of field hydraulic fracturing tests has been carried out. The stress estimations using the proposed method were found to be in good agreement with the results interpreted from the hydraulic fracturing test measurements.
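The grid-searching step above is a generic exhaustive misfit minimisation; the sketch below uses a purely illustrative linear forward model in place of the breakout-width physics (model, grids, and "true" stresses are all our invention) and recovers a known (Sh, SH) pair:

```python
def grid_search(observed, forward, sh_grid, sH_grid):
    """Exhaustive (Sh, SH) search minimising a sum-of-squares misfit between
    observed values and a forward model's predictions."""
    best = None
    for sh in sh_grid:
        for sH in sH_grid:
            if sH < sh:        # SH is by definition the larger horizontal stress
                continue
            pred = forward(sh, sH)
            misfit = sum((o - p) ** 2 for o, p in zip(observed, pred))
            if best is None or misfit < best[0]:
                best = (misfit, sh, sH)
    return best

# Purely illustrative linear forward model -- not the breakout-width physics.
forward = lambda sh, sH: [0.5 * sH - 0.3 * sh + 0.01 * d for d in range(5)]
observed = forward(20, 30)     # pretend the true stresses are Sh = 20, SH = 30
best = grid_search(observed, forward, range(10, 41, 5), range(10, 61, 5))
```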
Susanto, Sandi; Tjahjana, Dominicus Danardono Dwi Prija; Santoso, Budi
2018-02-01
Cross-flow wind turbines are one of the alternative energy harvesters for low-wind-speed areas. Several factors that influence the power coefficient of a cross-flow wind turbine are the diameter ratio of the blades and the number of blades. The aim of this study is to find out the influence of the number of blades and the diameter ratio on the performance of a cross-flow wind turbine, and to find the best configuration of the two. The experimental tests were conducted under several variations, including the ratio between the outer and inner diameter of the turbine and the number of blades. The diameter ratio was varied over 0.58, 0.63, 0.68 and 0.73, while the number of blades was 16, 20 or 24. The tests were conducted at wind speeds of 3 m/s to 4 m/s. The results showed that the configuration with a diameter ratio of 0.68 and 20 blades is the best, with a power coefficient of 0.049 and a moment coefficient of 0.185.
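The reported power coefficient follows the standard definition Cp = P / (½ρAv³), the ratio of extracted power to the power available in the wind. A minimal check with hypothetical numbers chosen to land near the reported Cp of 0.049 (the rotor area and extracted power here are ours, not the paper's):

```python
def power_coefficient(power_w, rho_kg_m3, area_m2, wind_ms):
    """Cp = P / (0.5 * rho * A * v**3): extracted power over available wind power."""
    return power_w / (0.5 * rho_kg_m3 * area_m2 * wind_ms ** 3)

# Hypothetical: 0.384 W extracted by a 0.2 m^2 swept area in a 4 m/s wind,
# with standard air density 1.225 kg/m^3 -> Cp close to 0.049.
cp = power_coefficient(0.384, 1.225, 0.2, 4.0)
```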
Partial discharge testing: a progress report. Statistical evaluation of PD data
International Nuclear Information System (INIS)
Warren, V.; Allan, J.
2005-01-01
It has long been known that comparing the partial discharge results obtained from a single machine is a valuable tool enabling companies to observe the gradual deterioration of a machine stator winding and thus plan appropriate maintenance for the machine. In 1998, at the annual Iris Rotating Machines Conference (IRMC), a paper was presented that compared thousands of PD test results to establish the criteria for comparing results from different machines and the expected PD levels. At subsequent annual Iris conferences, using similar analytical procedures, papers were presented that supported the previous criteria and: in 1999, established sensor location as an additional criterion; in 2000, evaluated the effect of insulation type and age on PD activity; in 2001, evaluated the effect of manufacturer on PD activity; in 2002, evaluated the effect of operating pressure for hydrogen-cooled machines; in 2003, evaluated the effect of insulation type and setting Trac alarms; in 2004, re-evaluated the effect of manufacturer on PD activity. Before going further in database analysis procedures, it would be prudent to statistically evaluate the anecdotal evidence observed to date. The goal was to determine which variables of machine conditions greatly influenced the PD results and which didn't. Therefore, this year's paper looks at the impact of operating voltage, machine type and winding type on the test results for air-cooled machines. Because of resource constraints, only data collected through 2003 was used; however, as before, it is still standardized for frequency bandwidth and pruned to include only full-load-hot (FLH) results collected for one sensor on operating machines. All questionable data, or data from off-line testing or unusual machine conditions was excluded, leaving 6824 results. Calibration of on-line PD test results is impractical; therefore, only results obtained using the same method of data collection and noise separation techniques are compared. For
Debate on GMOs health risks after statistical findings in regulatory tests.
de Vendômois, Joël Spiroux; Cellier, Dominique; Vélot, Christian; Clair, Emilie; Mesnage, Robin; Séralini, Gilles-Eric
2010-10-05
We summarize the major points of the international debate on health risk studies for the main commercialized edible GMOs. These GMOs are soy, maize and oilseed rape designed to contain new pesticide residues, since they have been modified to be herbicide-tolerant (mostly to Roundup) or to produce mutated Bt toxins. The debated alimentary chronic risks may come from unpredictable insertional mutagenesis effects, metabolic effects, or from the new pesticide residues. The most detailed regulatory tests on the GMOs are three-month-long feeding trials of laboratory rats, which are biochemically assessed. The tests are not compulsory, and are not independently conducted. The test data and the corresponding results are kept secret by the companies. Our previous analyses of regulatory raw data at these levels, taking the representative examples of three GM maize varieties, NK 603, MON 810, and MON 863, led us to conclude that hepatorenal toxicities were possible, and that longer testing was necessary. Our study was criticized by the company developing the GMOs in question and by the regulatory bodies, mainly on the divergent biological interpretations of statistically significant biochemical and physiological effects. We present the scientific reasons for the crucially different biological interpretations and also highlight the shortcomings in the experimental protocols designed by the company. The debate implies an enormous responsibility towards public health and is essential given the nonexistent traceability and epidemiological studies in the GMO-producing countries.
Posner, A. J.
2017-12-01
The Middle Rio Grande River (MRG) traverses New Mexico from Cochiti to Elephant Butte reservoirs. Since the 1100s, cultivating and inhabiting the valley of this alluvial river has required various river training works. The mid-20th century saw a concerted effort to tame the river through channelization, jetty jacks, and dam construction. A challenge for river managers is to better understand the interactions between river training works, dam construction, and the geomorphic adjustments of a desert river driven by spring snowmelt and summer thunderstorms carrying water and large sediment inputs from upstream and ephemeral tributaries. Due to its importance to the region, a vast wealth of data exists on conditions along the MRG. The investigation presented herein builds upon previous efforts by combining hydraulic model results, digitized planforms, and stream gage records in various statistical and conceptual models in order to test our understanding of this complex system. Spatially continuous variables were clipped by a set of river cross-section data collected at decadal intervals since the early 1960s, creating a spatially homogeneous database upon which various statistical tests were implemented. Conceptual models relate forcing variables and response variables to estimate river planform changes. The developed database represents a unique opportunity to quantify and test geomorphic conceptual models given the unique characteristics of the MRG. The results of this investigation provide a spatially distributed characterization of planform variable changes, permitting managers to predict planform at a much higher resolution than previously available, and a better understanding of the relationship between flow regime and planform changes such as changes to longitudinal slope, sinuosity, and width. Lastly, data analysis and model interpretation led to the development of a new conceptual model for the impact of ephemeral tributaries in alluvial rivers.
Cosmological Non-Gaussian Signature Detection: Comparing Performance of Different Statistical Tests
Directory of Open Access Journals (Sweden)
O. Forni
2005-09-01
Currently, it appears that the best method for non-Gaussianity detection in the cosmic microwave background (CMB) consists in calculating the kurtosis of the wavelet coefficients. We know that wavelet-kurtosis outperforms other methods such as the bispectrum, the genus, ridgelet-kurtosis, and curvelet-kurtosis on an empirical basis, but relatively few studies have compared other transform-based statistics, such as extreme values, or more recent tools such as higher criticism (HC), or proposed "best possible" choices for such statistics. In this paper, we consider two models for transform-domain coefficients: (a) a power-law model, which seems suited to the wavelet coefficients of simulated cosmic strings, and (b) a sparse mixture model, which seems suitable for the curvelet coefficients of filamentary structure. For model (a), if power-law behavior holds with finite 8th moment, excess kurtosis is an asymptotically optimal detector, but if the 8th moment is not finite, a test based on extreme values is asymptotically optimal. For model (b), if the transform coefficients are very sparse, a recent test, higher criticism, is an optimal detector, but if they are dense, kurtosis is an optimal detector. Empirical wavelet coefficients of simulated cosmic strings have power-law character with infinite 8th moment, while curvelet coefficients of the simulated cosmic strings are not very sparse. In all cases, excess kurtosis seems to be an effective test in moderate-resolution imagery.
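Excess kurtosis as a non-Gaussianity detector is easy to illustrate; the sketch below contrasts Gaussian samples with a heavy-tailed surrogate (cubed Gaussians, our choice, standing in for the paper's cosmic-string transform coefficients):

```python
import random
from statistics import mean

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (zero in expectation for a Gaussian)."""
    m = mean(xs)
    m2 = mean([(x - m) ** 2 for x in xs])
    m4 = mean([(x - m) ** 4 for x in xs])
    return m4 / m2 ** 2 - 3.0

rng = random.Random(42)
gauss = [rng.gauss(0.0, 1.0) for _ in range(20000)]
heavy = [rng.gauss(0.0, 1.0) ** 3 for _ in range(20000)]   # heavy-tailed surrogate
```

A detector simply thresholds this statistic: near zero for Gaussian coefficients, strongly positive for heavy-tailed ones.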
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, with 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
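The two adjustments discussed, Bonferroni correction and algebraic sample-size adjustment, can be sketched as follows. The linear-in-n rescaling is our reading of what an algebraic sample-size adjustment does, not a quotation of the RUMM formula:

```python
def bonferroni_alpha(alpha, n_tests):
    """Per-test significance level controlling the family-wise error rate,
    e.g. alpha/25 for 25 item-fit tests."""
    return alpha / n_tests

def adjusted_chi2(chi2, n, n_target):
    """Algebraic sample-size adjustment (assumption: the fit statistic grows
    roughly linearly with n, so rescale it to a target sample size)."""
    return chi2 * n_target / n
```

Downward adjustment (n_target < n) shrinks the statistic and tames Type I errors in large samples; upward adjustment inflates it, which matches the paper's warning that upscaling small samples falsely signals misfit.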
Can persistence hunting signal male quality? A test considering digit ratio in endurance athletes.
Directory of Open Access Journals (Sweden)
Daniel Longman
Various theories have been posed to explain the fitness payoffs of hunting success among hunter-gatherers. 'Having' theories refer to the acquisition of resources, and include the direct provisioning hypothesis. In contrast, 'getting' theories concern the signalling of male resourcefulness and other desirable traits, such as athleticism and intelligence, via hunting prowess. We investigated the association between androgenisation and endurance running ability as a potential signalling mechanism, whereby running prowess, vital for persistence hunting, might be used as a reliable signal of male reproductive fitness by females. Digit ratio (2D:4D) was used as a proxy for prenatal androgenisation in 439 males and 103 females, while a half marathon race (21 km, representing a distance/duration comparable with that of persistence hunting) was used to assess running ability. Digit ratio was significantly and positively correlated with half-marathon time in males (right hand: r = 0.45, p < 0.001; left hand: r = 0.42, p < 0.001) and females (right hand: r = 0.26, p < 0.01; left hand: r = 0.23, p = 0.02). Sex-interaction analysis showed that this correlation was significantly stronger in males than females, suggesting that androgenisation may have experienced stronger selective pressure from endurance running in males. As digit ratio has previously been shown to predict reproductive success, our results are consistent with the hypothesis that endurance running ability may signal reproductive potential in males, through its association with prenatal androgen exposure. However, further work is required to establish whether and how females respond to this signalling for fitness.
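The reported associations are plain Pearson correlations between digit ratio and finish time; a self-contained sketch with made-up data (values are ours, illustrating the positive direction of the reported effect):

```python
def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Made-up data: higher (less androgenised) 2D:4D ratios paired with slower times.
digit_ratio = [0.94, 0.96, 0.98, 1.00]
time_min = [80.0, 85.0, 88.0, 95.0]
r = pearson_r(digit_ratio, time_min)
```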
Directory of Open Access Journals (Sweden)
Guray Kucukkocaoglu
2016-02-01
Full Text Available In this study, inspired by the Credit Portfolio View approach, we intend to develop an econometric credit risk model to estimate credit loss distributions of the Turkish banking system under baseline and stress macro scenarios, by substituting non-performing loan (NPL) ratios for default rates. Since customer-number-based historical default rates are not available for the whole Turkish banking system’s credit portfolio, we used NPL ratios as the dependent variable instead of default rates, a common practice in many countries where historical default rates are not available. Although there are many problems in using NPL ratios as default rates, such as underestimating portfolio losses as a result of non-homogeneous total credit portfolios and the transfer of non-performing loans from banks’ balance sheets to asset management companies, our aim is to underline and limit some ignored problems of using accounting-based NPL ratios as default rates in macroeconomic credit risk modeling. The developed models confirm the strong statistical relationship between the systematic component of credit risk and macroeconomic variables in Turkey. Stress test results are also compatible with past experience.
Mokrani, Nabil; Gillard, Philippe
2018-03-26
This paper presents a physical and statistical approach to laser-induced breakdown in n-decane/N2 + O2 mixtures as a function of incident or absorbed energy. A parametric study, with pressure, fuel purity and equivalence ratio, was conducted to determine the incident and absorbed energies involved in producing breakdown, followed or not by ignition. The experiments were performed using a Q-switched Nd:YAG laser (1064 nm) inside a cylindrical 1-L combustion chamber in the range of 1-100 mJ of incident energy. A stochastic study of breakdown and ignition probabilities showed that the mixture composition had a significant effect on ignition, with large variation of the incident or absorbed energy required to obtain 50% of breakdown. It was observed that the combustion products absorb more energy coming from the laser. The effect of pressure on the ignition probabilities of lean and near stoichiometric mixtures was also investigated. It was found that a high ignition energy E50% is required for lean mixtures at high pressures (3 bar). The present study provides new data obtained on an original experimental setup and the results, close to laboratory-produced laser ignition phenomena, will enhance the understanding of initial conditions on the breakdown or ignition probabilities for different mixtures. Copyright © 2018 Elsevier B.V. All rights reserved.
Restrictions on the Ratio of Normal to Tangential Field Components in Magnetic Rubber Testing
National Research Council Canada - National Science Library
Burke, S. K; Ibrahim, M. E
2007-01-01
Magnetic Rubber Testing (MRT) is an extremely sensitive method for detecting surface-breaking cracks in ferromagnetic materials, and is used extensively in critical inspections of D6ac steel components of the F-111 aircraft...
The ξ/ξ2nd ratio as a test for Effective Polyakov Loop Actions
Caselle, Michele; Nada, Alessandro
2018-03-01
Effective Polyakov line actions are a powerful tool to study the finite temperature behaviour of lattice gauge theories. They are much simpler to simulate than the original (3+1)-dimensional LGTs and are affected by a milder sign problem. However, it is not clear to what extent they really capture the rich spectrum of the original theories, a feature which is instead of great importance if one aims to address the sign problem. We propose here a simple way to address this issue based on the so-called second moment correlation length ξ2nd. The ratio ξ/ξ2nd between the exponential correlation length and the second moment one is equal to 1 if only a single mass is present in the spectrum, and becomes larger and larger as the complexity of the spectrum increases. Since both ξ and ξ2nd are easy to measure on the lattice, this is an economic and effective way to keep track of the spectrum of the theory. In this respect we show, using both numerical simulations and effective string calculations, that this ratio increases dramatically as the temperature decreases. This non-trivial behaviour should be reproduced by the Polyakov loop effective action.
Folenta, Dezi; Lebo, William
1988-01-01
A 450 hp high-ratio Self-Aligning Bearingless Planetary (SABP) transmission for a helicopter application was designed, manufactured, and spin tested under NASA contract NAS3-24539. The objective of the program was to conduct research and development work on a high contact ratio helical gear SABP to reduce weight and noise and to improve efficiency. The results accomplished include the design, manufacturing, and no-load spin testing of two prototype helicopter transmissions, rated at 450 hp with an input speed of 35,000 rpm and an output speed of 350 rpm. The weight-to-power ratio of these gear units is 0.33 lb/hp. The measured airborne noise at 35,000 rpm input speed and light load is 94 dB at 5 ft. The high-speed, high contact ratio SABP transmission appears to be significantly lighter and quieter than contemporary helicopter transmissions. The concept of the SABP is applicable not only to high-ratio helicopter-type transmissions but also to other rotorcraft and aircraft propulsion systems.
Noel, Jean; Prieto, Juan C.; Styner, Martin
2017-03-01
Functional Analysis of Diffusion Tensor Tract Statistics (FADTTS) is a toolbox for analysis of white matter (WM) fiber tracts. It allows associating diffusion properties along major WM bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these WM tract properties. However, to use this toolbox, a user must have an intermediate knowledge of scripting languages (MATLAB). FADTTSter was created to overcome this issue and make the statistical analysis accessible to any non-technical researcher. FADTTSter is actively being used by researchers at the University of North Carolina. FADTTSter guides non-technical users through a series of steps, including quality control of subjects and fibers, in order to set up the necessary parameters to run FADTTS. Additionally, FADTTSter implements interactive charts for FADTTS' outputs. These interactive charts enhance the researcher experience and facilitate the analysis of the results. FADTTSter's motivation is to improve usability and provide a new analysis tool to the community that complements FADTTS. Ultimately, by making FADTTS accessible to a broader audience, FADTTSter seeks to accelerate hypothesis testing in neuroimaging studies involving heterogeneous clinical data and diffusion tensor imaging. This work is submitted to the Biomedical Applications in Molecular, Structural, and Functional Imaging conference. The source code of this application is available in NITRC.
Using the Δ3 statistic to test for missed levels in mixed sequence neutron resonance data
International Nuclear Information System (INIS)
Mulhall, Declan
2009-01-01
The Δ3(L) statistic is studied as a tool to detect missing levels in neutron resonance data where two sequences are present. These systems are problematic because there is no level repulsion, and the resonances can be too close to resolve. Δ3(L) is a measure of the fluctuations in the number of levels in an interval of length L on the energy axis. The method used is tested on ensembles of mixed Gaussian orthogonal ensemble spectra, with a known fraction of levels (x%) randomly depleted, and can accurately return x. The accuracy of the method as a function of spectrum size is established. The method is used on neutron resonance data for 11 isotopes with either s-wave neutrons on odd-A isotopes, or p-wave neutrons on even-A isotopes. The method compares favorably with a maximum likelihood method applied to the level spacing distribution. Nuclear data ensembles were made from 20 isotopes in total, and their Δ3(L) statistics are discussed in the context of random matrix theory.
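The Δ3(L) statistic above is the Dyson-Mehta spectral rigidity: the least-squares deviation of the level staircase N(E) from a straight line over an interval of length L. A toy sketch, with invented spectra (a rigid picket-fence spectrum versus uncorrelated Poisson levels) and a simple grid approximation of the integral:

```python
# Toy sketch of the Dyson-Mehta Delta_3(L) spectral rigidity, approximated
# on a uniform grid. The spectra below are invented for illustration.
import bisect
import random

def delta3(levels, L, grid=400):
    """Mean squared deviation of the staircase N(E) from its best-fit
    straight line over [0, L], for a sorted list of level positions."""
    xs = [(i + 0.5) * L / grid for i in range(grid)]
    ns = [bisect.bisect_right(levels, x) for x in xs]   # staircase N(E)
    n = len(xs)
    mx, mn = sum(xs) / n, sum(ns) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (v - mn) for x, v in zip(xs, ns)) / sxx
    a = mn - b * mx                                     # least-squares line
    return sum((v - a - b * x) ** 2 for x, v in zip(xs, ns)) / n

picket = [i + 0.5 for i in range(20)]                    # rigid spectrum
rng = random.Random(0)
poisson = sorted(rng.uniform(0, 20) for _ in range(20))  # uncorrelated levels
rigid_d3, poisson_d3 = delta3(picket, 20), delta3(poisson, 20)
```

A perfectly rigid spectrum gives Δ3 near 1/12, while uncorrelated (Poisson) levels fluctuate much more, which is why depleting levels changes the statistic measurably.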
Energy Technology Data Exchange (ETDEWEB)
Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL
2016-01-01
Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
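The diagnostic-test measures reviewed above (sensitivity, specificity, accuracy, likelihood ratios) all follow from a 2x2 table of test result versus disease status. A minimal sketch with a hypothetical table:

```python
# A minimal sketch of common diagnostic-test measures, computed from a
# hypothetical 2x2 table (counts below are invented for illustration).
def diagnostic_measures(tp, fp, fn, tn):
    sens = tp / (tp + fn)        # sensitivity: P(test+ | disease)
    spec = tn / (tn + fp)        # specificity: P(test- | no disease)
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    acc = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sens, "specificity": spec,
            "LR+": lr_pos, "LR-": lr_neg, "accuracy": acc}

# hypothetical: 100 diseased, 200 healthy subjects
m = diagnostic_measures(tp=90, fp=20, fn=10, tn=180)
```

A likelihood ratio above 10 (or below 0.1) is conventionally taken as a large shift in post-test probability; the table above gives LR+ of 9.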
Tabor, Josh
2010-01-01
On the 2009 AP Statistics Exam, students were asked to create a statistic to measure skewness in a distribution. This paper explores several of the most popular student responses and evaluates which statistic performs best when sampling from various skewed populations. (Contains 8 figures, 3 tables, and 4 footnotes.)
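One common candidate for such a skewness statistic is the standardized third central moment. A short sketch, applied to a symmetric and a right-skewed sample (both invented for illustration):

```python
# Sketch of the standardized-third-moment skewness statistic, one of the
# usual candidates for measuring skewness. Samples are hypothetical.
def skewness(xs):
    """Sample skewness: third central moment over variance^(3/2)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

sym = [1, 2, 3, 4, 5]        # symmetric: skewness 0
right = [1, 1, 1, 2, 10]     # right-skewed: skewness positive
g_sym, g_right = skewness(sym), skewness(right)
```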
DWPF Sample Vial Insert Study-Statistical Analysis of DWPF Mock-Up Test Data
Energy Technology Data Exchange (ETDEWEB)
Harris, S.P. [Westinghouse Savannah River Company, AIKEN, SC (United States)
1997-09-18
This report is prepared as part of Technical/QA Task Plan WSRC-RP-97-351, which was issued in response to Technical Task Request HLW/DWPF/TTR-970132 submitted by DWPF. Presented in this report is a statistical analysis of DWPF Mock-up test data for evaluation of two new analytical methods which use insert samples from the existing HydragardTM sampler. The first is a new hydrofluoric acid based method called the Cold Chemical Method (Cold Chem) and the second is a modified fusion method. Either new DWPF analytical method could result in a two- to three-fold improvement in sample analysis time. Both new methods use the existing HydragardTM sampler to collect a smaller insert sample from the process sampling system. The insert testing methodology applies to the DWPF Slurry Mix Evaporator (SME) and the Melter Feed Tank (MFT) samples. The insert sample is named after the initial trials which placed the container inside the sample (peanut) vials. Samples in small 3 ml containers (inserts) are analyzed by either the cold chemical method or a modified fusion method. The current analytical method uses a HydragardTM sample station to obtain nearly full 15 ml peanut vials. The samples are prepared by a multi-step process for Inductively Coupled Plasma (ICP) analysis by drying, vitrification, grinding and finally dissolution by either mixed acid or fusion. In contrast, the insert sample is placed directly in the dissolution vessel, thus eliminating the drying, vitrification and grinding operations for the Cold Chem method. Although the modified fusion still requires drying and calcine conversion, the process is rapid due to the decreased sample size and because no vitrification step is required. A slurry feed simulant material was acquired from the TNX pilot facility from the test run designated as PX-7. The Mock-up test data were gathered on the basis of a statistical design presented in SRT-SCS-97004 (Rev. 0). Simulant PX-7 samples were taken in the DWPF Analytical Cell Mock
DWPF Sample Vial Insert Study-Statistical Analysis of DWPF Mock-Up Test Data
International Nuclear Information System (INIS)
Harris, S.P.
1997-01-01
This report is prepared as part of Technical/QA Task Plan WSRC-RP-97-351, which was issued in response to Technical Task Request HLW/DWPF/TTR-970132 submitted by DWPF. Presented in this report is a statistical analysis of DWPF Mock-up test data for evaluation of two new analytical methods which use insert samples from the existing HydragardTM sampler. The first is a new hydrofluoric acid based method called the Cold Chemical Method (Cold Chem) and the second is a modified fusion method. Both new methods use the existing HydragardTM sampler to collect a smaller insert sample from the process sampling system. The insert testing methodology applies to the DWPF Slurry Mix Evaporator (SME) and the Melter Feed Tank (MFT) samples. Samples in small 3 ml containers (inserts) are analyzed by either the cold chemical method or a modified fusion method. The current analytical method uses a HydragardTM sample station to obtain nearly full 15 ml peanut vials. The samples are prepared by a multi-step process for Inductively Coupled Plasma (ICP) analysis by drying, vitrification, grinding and finally dissolution by either mixed acid or fusion. In contrast, the insert sample is placed directly in the dissolution vessel, thus eliminating the drying, vitrification and grinding operations for the Cold Chem method. Although the modified fusion still requires drying and calcine conversion, the process is rapid due to the decreased sample size and because no vitrification step is required. A slurry feed simulant material was acquired from the TNX pilot facility from the test run designated as PX-7. The Mock-up test data were gathered on the basis of a statistical design presented in SRT-SCS-97004 (Rev. 0). Simulant PX-7 samples were taken in the DWPF Analytical Cell Mock-up Facility using 3 ml inserts and 15 ml peanut vials. A number of the insert samples were analyzed by Cold Chem and compared with full peanut vial samples analyzed by the current methods. The remaining inserts were analyzed by
DEFF Research Database (Denmark)
Petersen, Helga Nørgaard; Thomason, James L.; Minty, Ross
2015-01-01
Interfacial properties such as the interfacial shear stress (IFSS) of fibre-reinforced polymers are essential for further understanding of the mechanical properties of the composite. In this work a single fibre testing method is used in combination with an epoxy matrix made from Araldite 506 epoxy res...
Zheng, Yinggan; Gierl, Mark J.; Cui, Ying
2010-01-01
This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…
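The kernel smoothing step above estimates an item response function nonparametrically, typically with a Nadaraya-Watson style weighted average. An illustrative sketch with hypothetical ability scores and 0/1 item responses (the study's actual Cochran's Z comparison is omitted here):

```python
# Illustrative sketch of kernel smoothing of an item response function:
# a Nadaraya-Watson estimate of P(correct | theta) with a Gaussian kernel.
# Abilities and responses below are hypothetical.
import math

def kernel_smooth(theta_grid, thetas, responses, h=0.5):
    """Smoothed proportion correct at each grid point, bandwidth h."""
    out = []
    for t in theta_grid:
        ws = [math.exp(-0.5 * ((t - x) / h) ** 2) for x in thetas]
        out.append(sum(w * r for w, r in zip(ws, responses)) / sum(ws))
    return out

# hypothetical: higher ability theta, more likely correct
thetas = [-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2]
responses = [0, 0, 0, 1, 0, 1, 1, 1, 1]
irf = kernel_smooth([-2, 0, 2], thetas, responses)
```

Comparing two such smoothed curves (reference vs. focal group) at matched grid points is the basis for the DIF statistic described in the abstract.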
Luh, Wei-Ming; Guo, Jiin-Huarng
2005-01-01
To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…
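Two ingredients of the procedure above are easy to sketch: a 20% trimmed mean for nonnormal data and the Welch statistic for heteroscedastic groups. A minimal, hypothetical sketch (Hall's transformation and the full Alexander-Govern machinery are omitted):

```python
# Hedged sketch: a 20% trimmed mean and the two-sample Welch t statistic,
# two building blocks of the heteroscedastic tests named in the abstract.
# Data below are invented for illustration.
import math

def trimmed_mean(xs, prop=0.2):
    """Mean after trimming `prop` of observations from each tail."""
    xs = sorted(xs)
    g = int(len(xs) * prop)
    core = xs[g:len(xs) - g]
    return sum(core) / len(core)

def welch_t(x, y):
    """Welch statistic: mean difference over unpooled standard error."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

tm = trimmed_mean([1, 2, 3, 4, 100])   # outlier-resistant center
t = welch_t([5.1, 4.9, 5.0, 5.2], [6.0, 6.1, 5.9, 6.2])
```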
The Wedge Splitting Test: Influence of Aggregate Size and Water-to-Cement Ratio
DEFF Research Database (Denmark)
Pease, Bradley Justin; Skocek, Jan; Geiker, Mette Rica
2007-01-01
Since the development of the wedge splitting test (WST), techniques have been used to extract material properties that can describe the fracture behavior of the tested materials. Inverse analysis approaches are commonly used to estimate the stress-crack width relationship, which is described by the elastic modulus, tensile strength, fracture energy, and the assumed softening behavior. The stress-crack width relation can be implemented in finite element models for computing the cracking behavior of cementitious systems. While inverse analysis provides information about the material properties of various concrete mixtures, there are limitations to the current analysis techniques. To date these techniques analyze the result of one WST specimen, thereby providing an estimate of material properties from a single result. This paper utilizes a recent improvement to the inverse analysis technique, which...
Bioactivity tests of calcium phosphates with variant molar ratios of main components.
Pluta, Klaudia; Sobczak-Kupiec, Agnieszka; Półtorak, Olga; Malina, Dagmara; Tyliszczak, Bożena
2018-03-09
Calcium phosphates constitute attractive materials for biomedical applications. Among them, particular attention is devoted to bioactive hydroxyapatite (HAp) and bioresorbable tricalcium phosphate (TCP), which possess the ability to bind to living bone and can be used clinically as important bone substitutes. Notably, in vivo bone bioactivity can be predicted from apatite formation on bone immersed in SBF fluids. Thus, analyses of the behavior of calcium phosphates immersed in various biofluids are of great importance. Recently, stoichiometric HAp and TCP structures have been widely studied, whereas only a limited number of publications have been devoted to analyses of nonstoichiometric calcium phosphates. Here, we report a physicochemical analysis of natural and synthetic phosphates with variable Ca/P molar ratios. The obtained structures were subsequently incubated in either artificial saliva or Ringer's fluid. Both the pH and conductivity of these fluids were determined before and after incubation. Furthermore, the influence of the Ca/P values on these parameters was exemplified. Physicochemical analysis of the received materials was performed by XRD and FT-IR characterization techniques. Their potential antibacterial activity and behavior in the presence of infectious microorganisms such as Escherichia coli and Staphylococcus aureus were also evaluated. © 2018 Wiley Periodicals, Inc. J Biomed Mater Res Part A, 2018.
Palazón, L; Navas, A
2017-06-01
Information on sediment contribution and transport dynamics from the contributing catchments is needed to develop management plans to tackle environmental problems related to the effects of fine sediment, such as reservoir siltation. In this respect, the fingerprinting technique is an indirect technique known to be valuable and effective for sediment source identification in river catchments. Large variability in sediment delivery was found in previous studies in the Barasona catchment (1509 km2, Central Spanish Pyrenees). Simulation results with SWAT and fingerprinting approaches identified badlands and agricultural uses as the main contributors to sediment supply in the reservoir. In this study, statistical procedures for selecting the composite fingerprint were assessed, including the Kruskal-Wallis H-test, discriminant function analysis and (3) principal components analysis. Source contribution results differed between the assessed options, with the greatest differences observed for option #3, comprising the two-step process of principal components analysis and discriminant function analysis. The characteristics of the solutions from the applied mixing model and the conceptual understanding of the catchment showed that the most reliable solution was achieved using option #2, the two-step process of Kruskal-Wallis H-test and discriminant function analysis. The assessment showed the importance of the statistical procedure used to define the optimum composite fingerprint for sediment fingerprinting applications. Copyright © 2016 Elsevier Ltd. All rights reserved.
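The Kruskal-Wallis H-test used above screens candidate fingerprint properties for differences between source groups without assuming normality. A compact sketch with two invented, well-separated source groups:

```python
# Compact sketch of the Kruskal-Wallis H statistic (rank-based comparison
# of k groups), as used to screen fingerprint properties. Data invented.
def kruskal_wallis_h(groups):
    pooled = sorted(v for g in groups for v in g)
    # assign ranks, giving tied values their average rank
    ranks, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2    # average of ranks i+1..j
        i = j
    n = len(pooled)
    h = 0.0
    for g in groups:
        r = sum(ranks[v] for v in g)          # rank sum for this group
        h += r * r / len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# two completely separated source groups give the maximal H for this size
h = kruskal_wallis_h([[1.0, 2.0, 3.0], [10.0, 11.0, 12.0]])
```

Properties with large H across source groups are retained as fingerprint candidates before the discriminant-function step.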
Statistical testing of the full-range leadership theory in nursing.
Kanste, Outi; Kääriäinen, Maria; Kyngäs, Helvi
2009-12-01
The aim of this study is to test statistically the structure of the full-range leadership theory in nursing. The data were gathered by postal questionnaires from nurses and nurse leaders working in healthcare organizations in Finland. A follow-up study was performed 1 year later. The sample consisted of 601 nurses and nurse leaders, and the follow-up study had 78 respondents. The theory was tested through structural equation modelling, standard regression analysis and two-way ANOVA. Rewarding transformational leadership seems to promote, and passive laissez-faire leadership to reduce, willingness to exert extra effort, perceptions of leader effectiveness and satisfaction with the leader. Active management-by-exception seems to reduce willingness to exert extra effort and perception of leader effectiveness. Rewarding transformational leadership remained a strong explanatory factor for all outcome variables measured 1 year later. The data supported the main structure of the full-range leadership theory, lending support to the universal nature of the theory.
Directory of Open Access Journals (Sweden)
Henry Braun
2017-11-01
Full Text Available Abstract Background Economists are making increasing use of measures of student achievement obtained through large-scale survey assessments such as NAEP, TIMSS, and PISA. The construction of these measures, employing plausible value (PV methodology, is quite different from that of the more familiar test scores associated with assessments such as the SAT or ACT. These differences have important implications both for utilization and interpretation. Although much has been written about PVs, it appears that there are still misconceptions about whether and how to employ them in secondary analyses. Methods We address a range of technical issues, including those raised in a recent article that was written to inform economists using these databases. First, an extensive review of the relevant literature was conducted, with particular attention to key publications that describe the derivation and psychometric characteristics of such achievement measures. Second, a simulation study was carried out to compare the statistical properties of estimates based on the use of PVs with those based on other, commonly used methods. Results It is shown, through both theoretical analysis and simulation, that under fairly general conditions appropriate use of PV yields approximately unbiased estimates of model parameters in regression analyses of large scale survey data. The superiority of the PV methodology is particularly evident when measures of student achievement are employed as explanatory variables. Conclusions The PV methodology used to report student test performance in large scale surveys remains the state-of-the-art for secondary analyses of these databases.
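Secondary analyses with plausible values (PVs) proceed by estimating the model once per PV and then pooling with the standard multiple-imputation combining rules (Rubin's rules). A sketch with hypothetical per-PV estimates:

```python
# Sketch of pooling analysis results across plausible values with Rubin's
# combining rules. The per-PV slopes and variances below are hypothetical.
def pool_plausible_values(estimates, variances):
    """Pool per-PV point estimates and their sampling variances."""
    m = len(estimates)
    qbar = sum(estimates) / m                    # pooled point estimate
    ubar = sum(variances) / m                    # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-PV var.
    total = ubar + (1 + 1 / m) * b               # total variance
    return qbar, total

# e.g. five regression slopes, one per plausible value, with variances
est, var = pool_plausible_values([0.42, 0.45, 0.40, 0.44, 0.43],
                                 [0.010, 0.011, 0.009, 0.010, 0.010])
```

The between-PV component inflates the total variance, which is exactly the measurement uncertainty that treating a single PV as an observed score would ignore.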
Statistical methods for the analysis of a screening test for chronic beryllium disease
Energy Technology Data Exchange (ETDEWEB)
Frome, E.L.; Neubert, R.L. [Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Smith, M.H.; Littlefield, L.G.; Colyer, S.P. [Oak Ridge Inst. for Science and Education, TN (United States). Medical Sciences Div.
1994-10-01
The lymphocyte proliferation test (LPT) is a noninvasive screening procedure used to identify persons who may have chronic beryllium disease. A practical problem in the analysis of LPT well counts is the occurrence of outlying data values (approximately 7% of the time). A log-linear regression model is used to describe the expected well counts for each set of test conditions. The variance of the well counts is proportional to the square of the expected counts, and two resistant regression methods are used to estimate the parameters of interest. The first approach uses least absolute values (LAV) on the log of the well counts to estimate beryllium stimulation indices (SIs) and the coefficient of variation. The second approach uses a resistant regression version of maximum quasi-likelihood estimation. A major advantage of the resistant regression methods is that it is not necessary to identify and delete outliers. These two new methods for the statistical analysis of the LPT data and the outlier rejection method that is currently being used are applied to 173 LPT assays. The authors strongly recommend the LAV method for routine analysis of the LPT.
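The appeal of the LAV approach above is its resistance to outlying well counts without explicit outlier deletion. A minimal sketch: for simple regression, an LAV-optimal line passes through at least two data points, so a small hypothetical data set can be fit by checking all point pairs.

```python
# Hedged sketch of least absolute values (LAV) simple regression, exploiting
# the fact that an optimal LAV line interpolates two data points. The data,
# with one gross outlier, are invented for illustration.
def lav_line(xs, ys):
    """Return (intercept, slope) minimizing sum of absolute residuals."""
    best = None
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            if xs[i] == xs[j]:
                continue
            b = (ys[j] - ys[i]) / (xs[j] - xs[i])
            a = ys[i] - b * xs[i]
            loss = sum(abs(y - (a + b * x)) for x, y in zip(xs, ys))
            if best is None or loss < best[0]:
                best = (loss, a, b)
    return best[1], best[2]

# true relation is roughly y = 2x; the last response is a gross outlier,
# yet the LAV fit stays close to slope 2
xs = [1, 2, 3, 4, 5]
ys = [2.0, 4.1, 6.0, 7.9, 30.0]
a, b = lav_line(xs, ys)
```

An ordinary least-squares fit to the same data would be dragged heavily toward the outlier; the LAV fit is not, which is the resistance property the report relies on.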
Konijn, Elly A.; van de Schoot, Rens; Winter, Sonja D.; Ferguson, Christopher J.
2015-01-01
The present paper argues that an important cause of publication bias resides in traditional frequentist statistics forcing binary decisions. An alternative approach through Bayesian statistics provides various degrees of support for any hypothesis allowing balanced decisions and proper null
Grogan, Anne; Coughlan, Michael; Prizeman, Geraldine; O'Connell, Niamh; O'Mahony, Nora; Quinn, Katherine; McKee, Gabrielle
2017-12-01
To elicit the perceptions of patients, who self-tested their international normalized ratio and communicated their results via a text or phone messaging system, to determine their satisfaction with the education and support that they received and to establish their confidence to move to self-management. Self-testing of international normalized ratio has been shown to be reliable and is fast becoming common practice. As innovations are introduced to point of care testing, more research is needed to elicit patients' perceptions of the self-testing process. This three site study used a cross-sectional prospective descriptive survey. Three hundred and thirty patients who were prescribed warfarin and using international normalized ratio self-testing were invited to take part in the study. The anonymous survey examined patient profile, patients' usage, issues, perceptions, confidence and satisfaction with using the self-testing system and their preparedness for self-management of warfarin dosage. The response rate was 57% (n = 178). Patients' confidence in self-testing was high (90%). Patients expressed a high level of satisfaction with the support received, but expressed the need for more information on support groups, side effects of warfarin, dietary information and how to dispose of needles. When asked if they felt confident to adjust their own warfarin levels 73% agreed. Chi-squared tests for independence revealed that none of the patient profile factors examined influenced this confidence. The patients cited the greatest advantages of the service were reduced burden, more autonomy, convenience and ease of use. The main disadvantages cited were cost and communication issues. Patients were satisfied with self-testing. The majority felt they were ready to move to self-management. The introduction of innovations to remote point of care testing, such as warfarin self-testing, needs to have support at least equal to that provided in a hospital setting. © 2017 John
International Nuclear Information System (INIS)
Létourneau, Daniel; McNiven, Andrea; Keller, Harald; Wang, An; Amin, Md Nurul; Pearce, Jim; Norrlinger, Bernhard; Jaffray, David A.
2014-01-01
Purpose: High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. Methods: The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3–4 times/week over a period of 10–11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ±0.5 and ±1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. Results: The precision of the MLC performance monitoring QC test and the MLC itself was within ±0.22 mm for most MLC leaves
Létourneau, Daniel; Wang, An; Amin, Md Nurul; Pearce, Jim; McNiven, Andrea; Keller, Harald; Norrlinger, Bernhard; Jaffray, David A
2014-12-01
High-quality radiation therapy using highly conformal dose distributions and image-guided techniques requires optimum machine delivery performance. In this work, a monitoring system for multileaf collimator (MLC) performance, integrating semiautomated MLC quality control (QC) tests and statistical process control tools, was developed. The MLC performance monitoring system was used for almost a year on two commercially available MLC models. Control charts were used to establish MLC performance and assess test frequency required to achieve a given level of performance. MLC-related interlocks and servicing events were recorded during the monitoring period and were investigated as indicators of MLC performance variations. The QC test developed as part of the MLC performance monitoring system uses 2D megavoltage images (acquired using an electronic portal imaging device) of 23 fields to determine the location of the leaves with respect to the radiation isocenter. The precision of the MLC performance monitoring QC test and the MLC itself was assessed by detecting the MLC leaf positions on 127 megavoltage images of a static field. After initial calibration, the MLC performance monitoring QC test was performed 3-4 times/week over a period of 10-11 months to monitor positional accuracy of individual leaves for two different MLC models. Analysis of test results was performed using individuals control charts per leaf with control limits computed based on the measurements as well as two sets of specifications of ± 0.5 and ± 1 mm. Out-of-specification and out-of-control leaves were automatically flagged by the monitoring system and reviewed monthly by physicists. MLC-related interlocks reported by the linear accelerator and servicing events were recorded to help identify potential causes of nonrandom MLC leaf positioning variations. The precision of the MLC performance monitoring QC test and the MLC itself was within ± 0.22 mm for most MLC leaves and the majority of the
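The individuals control charts used in the monitoring system above can be sketched with the usual moving-range rule (limits at the mean plus or minus 2.66 times the average moving range); the paper's exact limit computation may differ. All measurements below are hypothetical:

```python
# Sketch of an individuals control chart with moving-range control limits,
# as used per MLC leaf in the monitoring system. Data are hypothetical.
def individuals_chart_limits(xs):
    """(LCL, center, UCL) via the 2.66 * average-moving-range rule."""
    center = sum(xs) / len(xs)
    mrs = [abs(b - a) for a, b in zip(xs, xs[1:])]   # moving ranges
    mr_bar = sum(mrs) / len(mrs)
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

# hypothetical daily leaf-position errors (mm) for one MLC leaf
errors = [0.05, -0.02, 0.01, 0.03, -0.04, 0.02, 0.00, -0.01]
lcl, center, ucl = individuals_chart_limits(errors)
out_of_control = [e for e in errors if e < lcl or e > ucl]
```

Points outside (LCL, UCL) would be flagged for physicist review, in parallel with the fixed specification limits of 0.5 and 1 mm described above.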
Directory of Open Access Journals (Sweden)
Elżbieta Sandurska
2016-12-01
Full Text Available Introduction: The application of statistical software typically does not require extensive statistical knowledge, allowing even complex analyses to be performed easily. Consequently, test selection criteria and important assumptions may be overlooked or given insufficient consideration. In such cases, the results may well lead to wrong conclusions. Aim: To discuss issues related to assumption violations in the case of Student's t-test and one-way ANOVA, two parametric tests frequently used in sports science, and to recommend solutions. Description of the state of knowledge: Student's t-test and ANOVA are parametric tests, so the assumptions that need to be satisfied include normal distribution of the data and homogeneity of variances across groups. If the assumptions are violated, the original design of the test is impaired and the test may give spurious results. A simple method to normalize the data and stabilize the variance is to use transformations. If such an approach fails, a good alternative is a nonparametric test, such as the Mann-Whitney, Kruskal-Wallis, or Wilcoxon signed-rank test. Summary: Thorough verification of the assumptions of parametric tests allows for correct selection of statistical tools, which is the basis of well-grounded statistical analysis. With a few simple rules, testing patterns in data characteristic of sports science studies comes down to a straightforward procedure.
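The decision flow described above, checking normality and variance homogeneity before choosing between the t-test and a nonparametric alternative, can be sketched with scipy. The alpha threshold and the automatic fallback are illustrative choices, not recommendations from the article, and the sprint times are invented.

```python
# A minimal decision sketch: check t-test assumptions first, fall back to
# Welch's t-test or Mann-Whitney when an assumption fails (alpha = 0.05 is
# an illustrative threshold).
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    equal_var = stats.levene(a, b).pvalue > alpha
    if normal and equal_var:
        return "t-test", stats.ttest_ind(a, b).pvalue
    if normal:  # normal but heteroscedastic: Welch's correction
        return "welch", stats.ttest_ind(a, b, equal_var=False).pvalue
    return "mann-whitney", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

sprint_a = [11.2, 11.5, 11.1, 11.8, 11.4, 11.6, 11.3, 11.7]  # invented 100 m times
sprint_b = [12.0, 12.3, 11.9, 12.5, 12.1, 12.4, 12.2, 11.8]
test_name, p = compare_two_groups(sprint_a, sprint_b)
print(test_name, round(p, 4))
```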
Signal to noise ratio enhancement for Eddy Current testing of steam generator tubes in PWR's
International Nuclear Information System (INIS)
Georgel, B.
1985-01-01
Noise reduction is a compulsory task when we try to recognize and characterize flaws. The signals we deal with come from eddy current testing of steam generator steel tubes. We point out the need for a spectral invariant in the digital spectral analysis of two-component signals. We lay out the pros and cons of classical passband filtering and suggest the use of a new noise cancellation method first discussed by Moriwaki and Tlusty. We generalize this technique and prove that it is a special case of the well-known Wiener filter; in that sense the M-T method is shown to be optimal. 6 refs
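As a generic, hedged illustration of the Wiener filtering framework that the abstract reduces the Moriwaki-Tlusty method to (not the eddy-current-specific algorithm itself), scipy's adaptive Wiener filter can be applied to a synthetic noisy signal:

```python
# Generic Wiener-filter illustration on a synthetic signal; the sine stands in
# for a deterministic flaw signature, the Gaussian term for probe noise.
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)           # flaw-like deterministic component
noisy = clean + rng.normal(0, 0.4, t.size)  # additive noise
filtered = wiener(noisy, mysize=15)         # local-statistics Wiener filter

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((filtered - clean) ** 2)
print(mse_before > mse_after)  # the filter reduces the reconstruction error
```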
Testing of a "smart-pebble" for measuring particle transport statistics
Kitsikoudis, Vasileios; Avgeris, Loukas; Valyrakis, Manousos
2017-04-01
This paper presents preliminary results from novel experiments aiming to assess coarse sediment transport statistics for a range of transport conditions, via the use of an innovative "smart-pebble" device. This device is a waterproof sphere with a diameter of 7 cm, equipped with a number of sensors that provide information about the velocity, acceleration and positioning of the "smart-pebble" within the flow field. A series of specifically designed experiments are carried out to monitor the entrainment of a "smart-pebble" for fully developed, uniform, turbulent flow conditions over a hydraulically rough bed. Specifically, the bed surface is configured in three sections, each consisting of well-packed glass beads of slightly increasing size in the downstream direction. The first section has a streamwise length of L1=150 cm and a bead size of D1=15 mm, the second section has a length of L2=85 cm and a bead size of D2=22 mm, and the third section has a length of L3=55 cm and a bead size of D3=25.4 mm. Two cameras monitor the area of interest to provide additional information regarding the "smart-pebble" movement. Three-dimensional flow measurements are obtained with the aid of an acoustic Doppler velocimeter along a measurement grid to assess the flow forcing field. A wide range of flow rates near and above the threshold of entrainment is tested, while four distinct densities are used for the "smart-pebble", which can affect its transport speed and total momentum. The acquired data are analyzed to derive Lagrangian transport statistics, and the implications of such an experiment for the transport of particles by rolling are discussed. The flow conditions for the initiation of motion, particle accelerations and equilibrium particle velocities (translating into transport rates), and statistics of particle impact and motion can be extracted from the acquired data and further compared to develop meaningful insights for sediment transport
DEFF Research Database (Denmark)
Løkkegaard, Thomas; Pedersen, Tina Heidi; Lind, Bent
2015-01-01
INTRODUCTION: Oral anticoagulation treatment (OACT) with warfarin is common in general practice. Increasingly, international normalised ratio (INR) point of care testing (POCT) is being used to manage patients. The aim of this study was to describe and analyse the quality of OACT with warfarin...... practices using INR POCT in the management of patients in warfarin treatment provided good quality of care. Sampling interval and diagnostic coding were significantly correlated with treatment quality....
Energy Technology Data Exchange (ETDEWEB)
Molecke, M.A.; Gregson, M.W.; Sorenson, K.B. [Sandia National Labs. (United States); Billone, M.C.; Tsai, H. [Argonne National Lab. (United States); Koch, W.; Nolte, O. [Fraunhofer Inst. fuer Toxikologie und Experimentelle Medizin (Germany); Pretzsch, G.; Lange, F. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (Germany); Autrusson, B.; Loiseau, O. [Inst. de Radioprotection et de Surete Nucleaire (France); Thompson, N.S.; Hibbs, R.S. [U.S. Dept. of Energy (United States); Young, F.I.; Mo, T. [U.S. Nuclear Regulatory Commission (United States)
2004-07-01
We provide a detailed overview of an ongoing, multinational test program that is developing aerosol data for some spent fuel sabotage scenarios on spent fuel transport and storage casks. Experiments are being performed to quantify the aerosolized materials plus volatilized fission products generated from actual spent fuel and surrogate material test rods, due to impact by a high energy density device (HEDD). The program participants in the U.S. plus Germany, France, and the U.K., part of the international Working Group for Sabotage Concerns of Transport and Storage Casks (WGSTSC), have strongly supported and coordinated this research program. Sandia National Laboratories (SNL) has the lead role for conducting this research program; test program support is provided by both the U.S. Department of Energy and the Nuclear Regulatory Commission. WGSTSC partners need this research to better understand potential radiological impacts from sabotage of nuclear material shipments and storage casks, and to support subsequent risk assessments, modeling, and preventative measures. We provide a summary of the overall, multi-phase test design and a description of all explosive containment and aerosol collection test components used. We focus on the recently initiated tests on "surrogate" spent fuel, unirradiated depleted uranium oxide, and forthcoming actual spent fuel tests. The depleted uranium oxide test rodlets were prepared by the Institut de Radioprotection et de Surete Nucleaire, in France. These surrogate test rodlets closely match the diameter of the test rodlets of actual spent fuel from the H.B. Robinson reactor (high burnup PWR fuel) and the Surry reactor (lower, medium burnup PWR fuel), generated from U.S. reactors. The characterization of the spent fuels and fabrication into short, pressurized rodlets has been performed by Argonne National Laboratory, for testing at SNL. The ratio of the aerosol and respirable particles released from HEDD-impacted spent
Directory of Open Access Journals (Sweden)
Wonkuk Kim
Full Text Available Recent studies suggest that copy number polymorphisms (CNPs may play an important role in disease susceptibility and onset. Currently, the detection of CNPs mainly depends on microarray technology. For case-control studies, conventionally, subjects are assigned to a specific CNP category based on the continuous quantitative measure produced by microarray experiments, and cases and controls are then compared using a chi-square test of independence. The purpose of this work is to specify the likelihood ratio test statistic (LRTS for case-control sampling design based on the underlying continuous quantitative measurement, and to assess its power and relative efficiency (as compared to the chi-square test of independence on CNP counts. The sample size and power formulas of both methods are given. For the latter, the CNPs are classified using the Bayesian classification rule. The LRTS is more powerful than this chi-square test for the alternatives considered, especially alternatives in which the at-risk CNP categories have low frequencies. An example of the application of the LRTS is given for a comparison of CNP distributions in individuals of Caucasian or Taiwanese ethnicity, where the LRTS appears to be more powerful than the chi-square test, possibly due to misclassification of the most common CNP category into a less common category.
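The paper's LRTS operates on the underlying continuous measurement; as a simpler, hedged illustration of the count-based comparison it is benchmarked against, scipy can compute both the Pearson chi-square and the likelihood-ratio (G) statistic on classified CNP counts. The counts below are invented for demonstration and are not from the study.

```python
# Pearson chi-square vs. the likelihood-ratio (G) statistic on classified
# CNP counts; note the rare third category enriched among cases.
from scipy.stats import chi2_contingency

# rows: cases / controls; columns: CNP copy-number categories (invented counts)
table = [[30, 60, 10],
         [40, 58, 2]]

pearson_chi2, pearson_p, dof, _ = chi2_contingency(table)
g_stat, g_p, _, _ = chi2_contingency(table, lambda_="log-likelihood")

print(f"Pearson X2 = {pearson_chi2:.2f}, p = {pearson_p:.4f}")
print(f"LR (G) statistic = {g_stat:.2f}, p = {g_p:.4f}")
```

Both statistics are referred to a chi-square distribution with (rows-1)(cols-1) degrees of freedom; with a low-frequency at-risk category the two can diverge noticeably, which echoes the paper's point about power.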
Awédikian , Roy; Yannou , Bernard
2012-01-01
International audience; With the growing complexity of industrial software applications, industrials are looking for efficient and practical methods to validate the software. This paper develops a model-based statistical testing approach that automatically generates online and offline test cases for embedded software. It discusses an integrated framework that combines solutions for three major software testing research questions: (i) how to select test inputs; (ii) how to predict the expected...
2015-01-01
Abstract Titanium dioxide nanoparticles are photoactive and produce reactive oxygen species under natural sunlight. Reactive oxygen species can be detrimental to many organisms, causing oxidative damage, cell injury, and death. Most studies investigating TiO2 nanoparticle toxicity did not consider photoactivation and performed tests either in dark conditions or under artificial lighting that did not simulate natural irradiation. The present study summarizes the literature and derives a phototoxicity ratio between the results of nano-titanium dioxide (nano-TiO2) experiments conducted in the absence of sunlight and those conducted under solar or simulated solar radiation (SSR) for aquatic species. The phototoxicity ratio can therefore be used to correct endpoints of toxicity tests with nano-TiO2 that were performed in the absence of sunlight. Such corrections also may be important for regulators and risk assessors when reviewing previously published data. A significant difference was observed between the phototoxicity ratios of two distinct groups: aquatic species belonging to the order Cladocera, and all other aquatic species. The order Cladocera appeared very sensitive and prone to nano-TiO2 phototoxicity. After illumination, nano-TiO2 was on average 20 times more toxic to non-Cladocera species and 1867 times more toxic to Cladocera (median values 3.3 and 24.7, respectively). Both the median value and the 75% quartile of the phototoxicity ratio are proposed as the most practical values for correcting endpoints of nano-TiO2 toxicity tests that were performed in dark conditions or in the absence of sunlight. Environ Toxicol Chem 2015;34:1070-1077. © 2015 The Author. Published by SETAC. PMID:25640001
A method to identify dependencies between organizational factors using statistical independence test
International Nuclear Information System (INIS)
Kim, Y.; Chung, C.H.; Kim, C.; Jae, M.; Jung, J.H.
2004-01-01
A considerable number of studies of organizational factors in nuclear power plants have been made, especially in recent years, most of which have assumed organizational factors to be independent. However, organizational factors characterize the organization in terms of safety, efficiency, etc., and some factors are closely related to one another. Therefore, if we want to identify the characteristics of an organization accurately, these dependence relationships should be taken into account. In this study the organization of a reference nuclear power plant in Korea was analyzed for the trip cases of that plant using the 20 organizational factors that Jacobs and Haber had suggested: 1) coordination of work, 2) formalization, 3) organizational knowledge, 4) roles and responsibilities, 5) external communication, 6) inter-departmental communications, 7) intra-departmental communications, 8) organizational culture, 9) ownership, 10) safety culture, 11) time urgency, 12) centralization, 13) goal prioritization, 14) organizational learning, 15) problem identification, 16) resource allocation, 17) performance evaluation, 18) personnel selection, 19) technical knowledge, and 20) training. By utilizing the results of the analysis, a method to identify the dependence relationships between organizational factors is presented. A statistical independence test applied to the trip-case analysis results is adopted to reveal dependencies. This method is geared to the need to utilize the many kinds of data that accumulate as the operating years of nuclear power plants increase, and more reliable dependence relations may be obtained from these abundant data
Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F
2013-08-01
To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTOs). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTOs. This review examines existing practices in GM plant field testing, such as randomization, replication, and pseudoreplication. Emphasis is placed on the importance of the design features used for the field trials in which effects on NTOs are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed in deciding on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches, for example analysis of variance (ANOVA), are appropriate, for discontinuous data (counts) only generalized linear models (GLMs) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role, GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and in assessing whether such analyses were correctly applied. We offer generic advice to risk assessors and applicants that will
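The point about count data can be made concrete with a one-factor Poisson likelihood-ratio (deviance) test, the simplest GLM-style comparison of NTO counts between GM and control plots. The counts below are invented for illustration.

```python
# Poisson deviance test for NTO counts: full model fits separate means per
# treatment group, null model fits one common mean; twice the log-likelihood
# difference is approximately chi-square with 1 df under the null.
import math

def poisson_loglik(counts, mean):
    return sum(c * math.log(mean) - mean - math.lgamma(c + 1) for c in counts)

control = [12, 15, 9, 14, 11, 13]   # NTO counts per control plot (invented)
gm_plot = [7, 6, 9, 5, 8, 7]        # NTO counts per GM plot (invented)

ll_full = (poisson_loglik(control, sum(control) / len(control))
           + poisson_loglik(gm_plot, sum(gm_plot) / len(gm_plot)))
ll_null = poisson_loglik(control + gm_plot, sum(control + gm_plot) / 12)

deviance = 2 * (ll_full - ll_null)  # compare to 3.84, the 5% chi-square(1) cutoff
print(round(deviance, 2))
```

An ANOVA on such small counts would rely on a normal approximation that the Poisson model does not need, which is the reason the review prefers GLMs for discontinuous data.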
Eum, Seenae; Bergsbaken, Robert L; Harvey, Craig L; Warren, J Bryan; Rotschafer, John C
2016-09-27
This study demonstrated a statistically significant difference in vancomycin minimum inhibitory concentration (MIC) for Staphylococcus aureus between a common automated system (Vitek 2) and the E-test method in patients with S. aureus bloodstream infections. At an area under the serum concentration-time curve (AUC) threshold of 400 mg∙h/L, we would have reached the current Infectious Diseases Society of America (IDSA)/American Society of Health-System Pharmacists (ASHP)/Society of Infectious Diseases Pharmacists (SIDP) guideline-suggested AUC/MIC target in almost 100% of patients while using the Vitek 2 MIC data; however, we could only generate 40% target attainment while using E-test MIC data. An AUC of 450 mg∙h/L or greater was required to achieve 100% target attainment using either Vitek 2 or E-test MIC results.
Kuretzki, Carlos Henrique; Campos, Antônio Carlos Ligocki; Malafaia, Osvaldo; Soares, Sandramara Scandelari Kusano de Paula; Tenório, Sérgio Bernardo; Timi, Jorge Rufino Ribas
2016-03-01
The use of information technology is often applied in healthcare. With regard to scientific research, the SINPE(c) - Integrated Electronic Protocols - was created as a tool to support researchers, offering clinical data standardization. Until then, SINPE(c) lacked statistical tests obtained by automatic analysis. The objective was to add to SINPE(c) features for the automatic execution of the main statistical methods used in medicine. The study was divided into four topics: checking users' interest in the implementation of the tests; surveying the frequency of their use in healthcare; carrying out the implementation; and validating the results with researchers and their protocols. It was applied to a group of users of this software working on their stricto sensu master's and doctoral theses in a postgraduate program in surgery. To assess the reliability of the statistics, the data obtained automatically by SINPE(c) were compared with those obtained manually by a statistician with experience in this type of study. There was interest in the use of automatic statistical tests, with good acceptance. The chi-square, Mann-Whitney, Fisher and Student's t tests were considered the tests most frequently used by participants in medical studies. These methods were implemented and thereafter approved as expected. The automatic statistical analysis incorporated into SINPE(c) was shown to be reliable and equal to that done manually, validating its use as a tool for medical research.
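The four tests named above are all available in scipy, which is one way to cross-check an automatic pipeline against a manual analysis. The data below are toy values, not from the SINPE(c) study.

```python
# The four tests most used by the study's participants, run on toy data:
# Student's t and Mann-Whitney on two continuous samples, chi-square and
# Fisher's exact test on a 2x2 table.
from scipy import stats

group_a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3]
group_b = [6.0, 5.9, 6.4, 5.8, 6.1, 6.2]
table_2x2 = [[12, 5], [4, 14]]

results = {
    "t-Student": stats.ttest_ind(group_a, group_b).pvalue,
    "Mann-Whitney": stats.mannwhitneyu(group_a, group_b, alternative="two-sided").pvalue,
    "chi-square": stats.chi2_contingency(table_2x2)[1],
    "Fisher": stats.fisher_exact(table_2x2)[1],
}
for name, p in results.items():
    print(f"{name}: p = {p:.4f}")
```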
Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza
2014-01-01
This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…
The Effects of Pre-Lecture Quizzes on Test Anxiety and Performance in a Statistics Course
Brown, Michael J.; Tallon, Jennifer
2015-01-01
The purpose of our study was to examine the effects of pre-lecture quizzes in a statistics course. Students (N = 70) from 2 sections of an introductory statistics course served as participants in this study. One section completed pre-lecture quizzes whereas the other section did not. Completing pre-lecture quizzes was associated with improved exam…
DEFF Research Database (Denmark)
Denwood, M.J.; McKendrick, I.J.; Matthews, L.
Introduction. There is an urgent need for a method of analysing FECRT data that is computationally simple and statistically robust. A method for evaluating the statistical power of a proposed FECRT study would also greatly enhance the current guidelines. Methods. A novel statistical framework has...... been developed that evaluates observed FECRT data against two null hypotheses: (1) the observed efficacy is consistent with the expected efficacy, and (2) the observed efficacy is inferior to the expected efficacy. The method requires only four simple summary statistics of the observed data. Power...... that the notional type 1 error rate of the new statistical test is accurate. Power calculations demonstrate a power of only 65% with a sample size of 20 treatment and control animals, which increases to 69% with 40 control animals or 79% with 40 treatment animals. Discussion. The method proposed is simple...
Directory of Open Access Journals (Sweden)
Azim Honarmand
2014-01-01
Full Text Available Background: Failed intubation is an important source of anesthesia-related mortality. The aim of the present study was to compare the ability to predict difficult visualization of the larynx of the following pre-operative airway predictive indices, in isolation and combination: the modified Mallampati test (MMT), the ratio of height to thyromental distance (RHTMD), the hyomental distance ratio (HMDR), and the upper-lip-bite test (ULBT). Materials and Methods: We collected data on 525 consecutive patients scheduled for elective surgery under general anesthesia requiring endotracheal intubation and then evaluated all four factors before surgery. A skilled anesthesiologist, blinded to the pre-operative airway assessment, performed the laryngoscopy and grading (as per Cormack and Lehane's classification). Sensitivity, specificity, and positive predictive value for each airway predictor in isolation and in combination were established. Results: The most sensitive of the single tests was the ULBT, with a sensitivity of 90.2%. The hyomental distance at the extreme of head extension was the least sensitive of the single tests, with a sensitivity of 56.9%. The HMDR had a sensitivity of 86.3%. The ULBT had the highest negative predictive value and the largest area under the receiver operating characteristic (ROC) curve among the single predictors. The AUC of the ROC curve for the ULBT, HMDR and RHTMD was significantly greater than for the MMT (P < 0.05). Conclusion: The HMDR is comparable with the RHTMD and ULBT for prediction of difficult laryngoscopy in the general population, and all three performed significantly better than the MMT.
Statistical Modeling for Quality Assurance of Human Papillomavirus DNA Batch Testing.
Beylerian, Emily N; Slavkovsky, Rose C; Holme, Francesca M; Jeronimo, Jose A
2018-03-22
Our objective was to simulate the distribution of human papillomavirus (HPV) DNA test results from a 96-well microplate assay to identify results that may be consistent with well-to-well contamination, enabling programs to apply specific quality assurance parameters. For this modeling study, we designed an algorithm that generated the analysis population of 900,000 to simulate the results of 10,000 microplate assays, assuming discrete HPV prevalences of 12%, 13%, 14%, 15%, and 16%. Using binomial draws, the algorithm created a vector of results for each prevalence and reassembled them into 96-well matrices for results distribution analysis of the number of positive cells and number and size of cell clusters (≥2 positive cells horizontally or vertically adjacent) per matrix. For simulation conditions of 12% and 16% HPV prevalence, 95% of the matrices displayed the following characteristics: 5 to 17 and 8 to 22 total positive cells, 0 to 4 and 0 to 5 positive cell clusters, and largest cluster sizes of up to 5 and up to 6 positive cells, respectively. Our results suggest that screening programs in regions with an oncogenic HPV prevalence of 12% to 16% can expect 5 to 22 positive results per microplate in approximately 95% of assays and 0 to 5 positive result clusters with no cluster larger than 6 positive results. Results consistently outside of these ranges deviate from what is statistically expected and could be the result of well-to-well contamination. Our results provide guidance that laboratories can use to identify microplates suspicious for well-to-well contamination, enabling improved quality assurance.
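A condensed sketch of the simulation described above: fill an 8x12 plate with binomial draws at a given prevalence, then count positive wells and clusters of two or more adjacent positives (horizontal or vertical adjacency). The seed and prevalence are illustrative; the paper ran 10,000 plates per prevalence.

```python
# Simulate one 96-well plate of HPV results and detect positive-well clusters.
import random

def simulate_plate(prevalence, rng, rows=8, cols=12):
    return [[1 if rng.random() < prevalence else 0 for _ in range(cols)]
            for _ in range(rows)]

def cluster_sizes(plate):
    """Sizes of connected groups (4-adjacency) of positive wells, size >= 2."""
    rows, cols = len(plate), len(plate[0])
    seen, sizes = set(), []
    for r in range(rows):
        for c in range(cols):
            if plate[r][c] and (r, c) not in seen:
                stack, size = [(r, c)], 0
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and plate[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                if size >= 2:
                    sizes.append(size)
    return sizes

rng = random.Random(1)
plate = simulate_plate(0.14, rng)
positives = sum(map(sum, plate))
print(positives, cluster_sizes(plate))
```

Repeating this over many plates yields the empirical 95% ranges that the quality assurance thresholds above are based on.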
Directory of Open Access Journals (Sweden)
Claire Ramus
2016-03-01
Full Text Available This data article describes a controlled, spiked proteomic dataset for which the “ground truth” of variant proteins is known. It is based on the LC-MS analysis of samples composed of a fixed background of yeast lysate and different spiked amounts of the UPS1 mixture of 48 recombinant proteins. It can be used to objectively evaluate bioinformatic pipelines for label-free quantitative analysis, and their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. More specifically, it can be useful for tuning software tool parameters, but also for testing new algorithms for label-free quantitative analysis, or for evaluating downstream statistical methods. The raw MS files can be downloaded from ProteomeXchange with identifier http://www.ebi.ac.uk/pride/archive/projects/PXD001819. Starting from some raw files of this dataset, we also provide here some processed data obtained through various bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in different workflows, to exemplify the use of such data in the context of software benchmarking, as discussed in detail in the accompanying manuscript [1]. The experimental design used here for data processing takes advantage of the different spike levels introduced in the samples composing the dataset, and processed data are merged in a single file to facilitate the evaluation and illustration of software tool results for the detection of variant proteins with different absolute expression levels and fold change values.
Wang, Hao; Wang, Qunwei; He, Ming
2018-05-01
In order to investigate and improve the level of detection technology for water content in liquid chemical reagents in domestic laboratories, proficiency testing provider PT0031 (CNAS) organized a proficiency testing program for water content in toluene; 48 laboratories from 18 provinces/cities/municipalities took part in the PT. This paper introduces the implementation process of the proficiency testing for determination of water content in toluene, including sample preparation, homogeneity and stability testing, and the statistical analysis of results using an iterative robust statistical technique. It also summarizes and analyzes the results obtained under the different test standards that are widely used in the laboratories, and puts forward technical suggestions for improving the quality of water content testing. Satisfactory results were obtained by 43 laboratories, amounting to 89.6% of the participating laboratories.
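The iterative robust statistics used in proficiency testing (e.g., ISO 13528's Algorithm A) can be approximated, for illustration, by median/MAD-based robust z-scores. The data and the |z| <= 2 "satisfactory" criterion below are a hedged sketch, not the program's actual computation.

```python
# Robust z-scores for a proficiency round: assigned value = median, robust SD
# = 1.4826 * MAD (consistent with the SD for normal data). Results are invented.
import statistics

def robust_z_scores(results):
    assigned = statistics.median(results)
    mad = statistics.median(abs(x - assigned) for x in results)
    robust_sd = 1.4826 * mad
    return [(x - assigned) / robust_sd for x in results]

water_ppm = [305, 310, 298, 312, 307, 303, 309, 290, 311, 360]  # one outlier lab
z = robust_z_scores(water_ppm)
satisfactory = sum(abs(v) <= 2 for v in z)
print(satisfactory, "of", len(z), "labs satisfactory")
```

Because the median and MAD are insensitive to the outlying 360 ppm result, that laboratory is flagged without distorting the assigned value for everyone else.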
Stabe, Roy G.; Schwab, John R.
1991-01-01
A 0.767-scale model of a turbine stator designed for the core of a high-bypass-ratio aircraft engine was tested with uniform inlet conditions and with an inlet radial temperature profile simulating engine conditions. The principal measurements were radial and circumferential surveys of stator-exit total temperature, total pressure, and flow angle. The stator-exit flow field was also computed by using a three-dimensional Navier-Stokes solver. Other than temperature, there were no apparent differences in performance due to the inlet conditions. The computed results compared quite well with the experimental results.
DEFF Research Database (Denmark)
Løkkegaard, Thomas; Pedersen, Tina Heidi; Lind, Bent
2015-01-01
collected retrospectively for a period of six months. For each patient, time in therapeutic range (TTR) was calculated and correlated with practice and patient characteristics using multilevel linear regression models. RESULTS: We identified 447 patients in warfarin treatment in the 20 practices using POCT......INTRODUCTION: Oral anticoagulation treatment (OACT) with warfarin is common in general practice. Increasingly, international normalised ratio (INR) point of care testing (POCT) is being used to manage patients. The aim of this study was to describe and analyse the quality of OACT with warfarin...
DEFF Research Database (Denmark)
Løkkegaard, Thomas; Pedersen, Tina Heidi; Lind, Bent
2015-01-01
INTRODUCTION: Oral anticoagulation treatment (OACT) with warfarin is common in general practice. Increasingly, international normalised ratio (INR) point of care testing (POCT) is being used to manage patients. The aim of this study was to describe and analyse the quality of OACT with warfarin...... in the management of patients in warfarin treatment provided good quality of care. Sampling interval and diagnostic coding were significantly correlated with treatment quality. FUNDING: The study received financial support from the Sarah Krabbe Foundation, the General Practitioners’ Education and Development Foundation...
Fang, Yongxiang; Wit, Ernst
2008-01-01
Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is obvious that Fisher’s statistic is more sensitive to small p-values than to large ones, so a single small p-value may overrule the other p-values.
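Fisher's statistic is X = -2 Σ ln p_i, chi-square distributed with 2k degrees of freedom under the null. The example below shows the sensitivity noted above: one very small p-value drives the combined result even though the remaining p-values are unremarkable.

```python
# Fisher's combined probability test: X = -2 * sum(ln p_i) ~ chi-square(2k).
import math
from scipy.stats import chi2

def fisher_combined(pvalues):
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    df = 2 * len(pvalues)
    return stat, df, chi2.sf(stat, df)

# one tiny p-value among three unremarkable ones dominates the statistic
stat, df, p_comb = fisher_combined([0.0001, 0.6, 0.7, 0.8])
print(f"X = {stat:.2f}, df = {df}, combined p = {p_comb:.4f}")
```

scipy also ships this as `scipy.stats.combine_pvalues` (method `'fisher'`), which should agree with the hand-rolled version.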
Pestman, Wiebe R
2009-01-01
This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.
Rényi statistics for testing composite hypotheses in general exponential models
Czech Academy of Sciences Publication Activity Database
Morales, D.; Pardo, L.; Pardo, M. C.; Vajda, Igor
2004-01-01
Roč. 38, č. 2 (2004), s. 133-147 ISSN 0233-1888 R&D Projects: GA ČR GA201/02/1391 Grant - others:BMF(ES) 2003-00892; BMF(ES) 2003-04820 Institutional research plan: CEZ:AV0Z1075907 Keywords : natural exponential models * Levy processes * generalized Wald statistics Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.323, year: 2004
Comments on statistical issues in numerical modeling for underground nuclear test monitoring
International Nuclear Information System (INIS)
Nicholson, W.L.; Anderson, K.K.
1993-01-01
The Symposium concluded with prepared summaries by four experts in the involved disciplines. These experts made no mention of statistics and/or the statistical content of issues. The first author contributed an extemporaneous statement at the Symposium because there are important issues associated with conducting and evaluating numerical modeling that are familiar to statisticians and often treated successfully by them. This note expands upon these extemporaneous remarks
Weerasinghe, Dash; Orsak, Timothy; Mendro, Robert
In an age of student accountability, public school systems must find procedures for identifying effective schools, classrooms, and teachers that help students continue to learn academically. As a result, researchers have been modeling schools and classrooms to calculate productivity indicators that will withstand not only statistical review but…
Ou, Lu; Chow, Sy-Miin; Ji, Linying; Molenaar, Peter C M
2017-01-01
The autoregressive latent trajectory (ALT) model synthesizes the autoregressive model and the latent growth curve model. The ALT model is flexible enough to produce a variety of discrepant model-implied change trajectories. While some researchers consider this a virtue, others have cautioned that it may confound interpretations of the model's parameters. In this article, we show that some, but not all, of these interpretational difficulties may be clarified mathematically and tested explicitly via likelihood ratio tests (LRTs) imposed on the initial conditions of the model. We show analytically the nested relations among three variants of the ALT model and the constraints needed to establish equivalences. A Monte Carlo simulation study indicated that LRTs, particularly when used in combination with information criterion measures, can allow researchers to test targeted hypotheses about the functional forms of the change process under study. We further demonstrate when and how such tests may justifiably be used to facilitate our understanding of the underlying process of change using a subsample (N = 3,995) of longitudinal family income data from the National Longitudinal Survey of Youth.
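The LRTs referred to above have a generic form for nested models: twice the log-likelihood difference is approximately chi-square with degrees of freedom equal to the number of constraints imposed. The log-likelihood values below are illustrative, not from the NLSY data.

```python
# Generic likelihood ratio test for nested models: the restricted model fixes
# some parameters (here, initial conditions) that the full model frees.
from scipy.stats import chi2

def likelihood_ratio_test(ll_restricted, ll_full, df):
    stat = 2.0 * (ll_full - ll_restricted)
    return stat, chi2.sf(stat, df)

# e.g. ALT variant with freed vs. constrained initial conditions (2 constraints);
# log-likelihoods are made-up numbers for illustration
stat, p = likelihood_ratio_test(ll_restricted=-1050.3, ll_full=-1043.1, df=2)
print(round(stat, 2), round(p, 4))
```

A small p-value here would favor freeing the initial conditions; as the article notes, such LRTs are best read alongside information criteria.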
International Nuclear Information System (INIS)
Gross, D.H.E.
2006-01-01
Heat can flow from cold to hot at any phase separation, even in macroscopic systems. Therefore, Lynden-Bell's famous gravo-thermal catastrophe must also be reconsidered. In contrast to traditional canonical Boltzmann-Gibbs statistics, this is correctly described only by microcanonical statistics. Systems studied in chemical thermodynamics (ChTh) using canonical statistics consist of several homogeneous macroscopic phases. Evidently, macroscopic statistics as used in chemistry cannot and should not be applied to non-extensive or inhomogeneous systems like nuclei or galaxies. Nuclei are small and inhomogeneous. Multifragmented nuclei are even more inhomogeneous, and the fragments even smaller. Phase transitions of first order, and especially phase separations, therefore cannot be described by a (homogeneous) canonical ensemble. Taking this seriously, fascinating perspectives open up for statistical nuclear fragmentation as a test ground for the basic principles of statistical mechanics, especially of phase transitions, without the use of the thermodynamic limit. Moreover, there is also a lot of similarity between the accessible phase space of fragmenting nuclei and inhomogeneous multistellar systems. This underlines the fundamental significance for statistical physics in general. (orig.)
Costache, Romulus; Zaharia, Liliana
2017-06-01
Given the significant worldwide human and economic losses caused by floods annually, reducing the negative consequences of these hazards is a major concern in development strategies at different spatial scales. A basic step in flood risk management is identifying areas susceptible to flood occurrences. This paper proposes a methodology for identifying areas with a high potential for accelerated surface run-off and, consequently, for flash-flood occurrences. The methodology involves assessment and mapping, in a GIS environment, of the flash flood potential index (FFPI), by integrating two statistical methods: frequency ratio and weights-of-evidence. The methodology was applied to the Bâsca Chiojdului River catchment (340 km2), located in the Carpathians Curvature region (Romania). Firstly, the areas with torrential phenomena were identified and the main factors controlling the surface run-off were selected (in this study nine geographical factors were considered). Based on the features of the considered factors, several classes were set for each of them. In the next step, the weights of each class/category of the considered factors were determined by identifying their spatial relationships with the presence or absence of torrential phenomena. Finally, the weights for each class/category of geographical factors were summed in GIS, resulting in the FFPI values for each of the two statistical methods. These values were divided into five classes of intensity and were mapped. The final results were used to estimate the flash-flood potential and to identify the areas most susceptible to this phenomenon. Thus, high and very high values of FFPI characterize more than one-third of the study catchment. The result validation was performed by (i) quantifying the rate of the number of pixels corresponding to the torrential phenomena considered for the study (training area) and for the results' testing (validating area) and (ii) plotting the ROC (receiver operating
Statistical testing of input factors in the carbonation of brine impacted fly ash.
Grace, Muriithi N; Wilson, Gitari M; Leslie, Petrik F
2012-01-01
A D-optimal design was applied in the study of the input factors (temperature, pressure, solid/liquid (S/L) ratio and particle size) and their influence on the carbonation of brine-impacted fly ash (FA) was determined. Both temperature and pressure were at two levels (30°C and 90°C; 1 MPa and 4 MPa), the S/L ratio was at three levels (0.1, 0.5 and 1), while particle size was at four levels (bulk ash, <20 μm, 20-150 μm and >150 μm). Pressure was observed to have only a slight influence on the % CaCO3 yield, while higher temperatures led to a higher percentage CaCO3 yield. The particle size range of 20 μm-150 μm enhanced the degree of carbonation of the fly ash/brine slurries. This was closely followed by the bulk ash, while the >150 μm particle fraction had the least influence on the % CaCO3. The effect of the S/L ratio was temperature dependent. At low temperature, an S/L ratio of 1 resulted in the highest % CaCO3 formation, while at high temperature, a ratio of 0.5 resulted in the highest percentage CaCO3 formation. Overall, the two most important factors in the carbonation of FA and brine were found to be particle size and temperature.
Buttino, Isabella; Vitiello, Valentina; Macchia, Simona; Scuderi, Alice; Pellegrini, David
2018-03-01
The copepod Acartia tonsa was used as a model species to assess marine sediment quality. Acute and chronic bioassays, such as the larval development ratio (LDR), with different end-points were evaluated. As a pelagic species, A. tonsa is mainly exposed to water-soluble toxicants, and bioassays are commonly performed in seawater. However, an interaction between A. tonsa eggs or first larval stages and marine sediments may occur in shallow-water environments. Here we tested two different LDR protocols by incubating A. tonsa eggs in elutriates and sediments from two areas located in the Tuscany Region (Central Italy): Livorno harbour and the Viareggio coast. The end-points analyzed were larval mortality (LM) and development inhibition (DI), expressed as the percentage of copepods that completed the metamorphosis from nauplius to copepodite. The aims of this study were: i) to verify the suitability of the copepod A. tonsa for a bioassay with sediment, and ii) to compare the sensitivity of A. tonsa exposed to different matrices, such as water and sediment. A preliminary acute test was also performed. Acute tests showed higher toxicity for the Livorno samples (two out of three) compared to the Viareggio samples, for which no effect was observed. On the contrary, LDR tests with sediments and elutriates also revealed some toxic effects for the Viareggio samples. Results were discussed with regard to the chemical characterization of the samples. Our results indicated that different end-points were affected in A. tonsa, depending on the matrices to which the copepods were exposed and on the test used. Bioassays with elutriates and sediments are suggested, and the LDR test could help decision-makers to identify more appropriate management of dredging materials. Copyright © 2017 Elsevier Inc. All rights reserved.
McArtor, Daniel B; Lubke, Gitta H; Bergeman, C S
2017-12-01
Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains.
McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad
2011-07-01
The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
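As a hedged illustration of the limits-of-agreement procedure recommended above, the following minimal Python sketch computes the bias and 95% limits of agreement for paired readings from two instruments. The instrument readings are made-up values for the demo, not data from any study indexed here.

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired readings."""
    d = np.asarray(a, float) - np.asarray(b, float)   # paired differences
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)                 # 1.96 * SD of the differences
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired readings from two instruments (illustrative values only)
inst_a = [1.02, 0.98, 1.10, 1.05, 0.95, 1.00, 1.08, 0.97]
inst_b = [1.00, 0.99, 1.06, 1.03, 0.97, 1.01, 1.05, 0.99]
bias, (lo, hi) = limits_of_agreement(inst_a, inst_b)
print(f"bias={bias:.4f}, 95% limits of agreement=({lo:.4f}, {hi:.4f})")
```

In practice the differences are also plotted against the pairwise means to check that the bias is constant across the measurement range, which is the step that a correlation coefficient cannot provide.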
Assessment of noise in a digital image using the join-count statistic and the Moran test
International Nuclear Information System (INIS)
Kehshih Chuang; Huang, H.K.
1992-01-01
It is assumed that data bits of a pixel in digital images can be divided into signal and noise bits. The signal bits occupy the most significant part of the pixel. The signal parts of each pixel are correlated while the noise parts are uncorrelated. Two statistical methods, the Moran test and the join-count statistic, are used to examine the noise parts. Images from computerized tomography, magnetic resonance and computed radiography are used for the evaluation of the noise bits. A residual image is formed by subtracting the original image from its smoothed version. The noise level in the residual image is then identical to that in the original image. Both statistical tests are then performed on the bit planes of the residual image. Results show that most digital images contain only 8-9 bits of correlated information. Both methods are easy to implement and fast to perform. (author)
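The bit-plane idea above can be illustrated with a minimal Moran's-I sketch. This is a simplified stand-in for the paper's two statistics, applied to a synthetic 8-bit image rather than CT/MR/CR data; the image construction, the rook adjacency, and the noisy least-significant bit are all assumptions made for the demo.

```python
import numpy as np

def morans_i(plane):
    """Moran's I for a 2-D binary bit plane using 4-neighbour (rook) adjacency.
    Values near 0 suggest spatially uncorrelated (noise) bits; values well
    above 0 suggest spatially correlated (signal) bits."""
    z = plane.astype(float) - plane.mean()
    # sum of products over horizontal and vertical neighbour pairs
    num = (z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum()
    n_pairs = z[:, :-1].size + z[:-1, :].size
    return (num / n_pairs) / (z ** 2).mean()

# Synthetic 8-bit image: a smooth low-frequency "signal" whose least
# significant bit is overwritten with pure random noise.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
img = np.uint8(127.5 + 100.0 * np.sin(x / 10.0) * np.sin(y / 10.0))
img = (img & 0xFE) | rng.integers(0, 2, size=img.shape, dtype=np.uint8)

bit7 = (img >> 7) & 1   # most significant bit: strongly correlated
bit0 = img & 1          # least significant bit: essentially random
print(morans_i(bit7), morans_i(bit0))
```

Running the sketch gives a Moran's I close to 1 for the most significant bit plane and close to 0 for the noisy least significant bit plane, mirroring the paper's criterion for separating signal bits from noise bits.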
Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models
Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning
2012-01-01
The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…
Basic Mathematics Test Predicts Statistics Achievement and Overall First Year Academic Success
Fonteyne, Lot; De Fruyt, Filip; Dewulf, Nele; Duyck, Wouter; Erauw, Kris; Goeminne, Katy; Lammertyn, Jan; Marchant, Thierry; Moerkerke, Beatrijs; Oosterlinck, Tom; Rosseel, Yves
2015-01-01
In the psychology and educational science programs at Ghent University, only 36.1% of the new incoming students in 2011 and 2012 passed all exams. Despite availability of information, many students underestimate the scientific character of social science programs. Statistics courses are a major obstacle in this matter. Not all enrolling students…
Statistical inference a short course
Panik, Michael J
2012-01-01
A concise, easily accessible introduction to descriptive and inferential techniques. Statistical Inference: A Short Course offers a concise presentation of the essentials of basic statistics for readers seeking to acquire a working knowledge of statistical concepts, measures, and procedures. The author conducts tests on the assumptions of randomness and normality, and provides nonparametric methods when parametric approaches might not work. The book also explores how to determine a confidence interval for a population median while also providing coverage of ratio estimation, randomness, and causal
International Nuclear Information System (INIS)
Samec, R. G.; Labadorf, C. M.; Hawkins, N. C.; Faulkner, D. R.; Van Hamme, W.
2011-01-01
We present precision CCD light curves, a period study, photometrically derived standard magnitudes, and a five-color simultaneous Wilson code solution of the totally eclipsing, yet shallow-amplitude (A_V ∼ 0.4 mag), binary V1853 Orionis. It is determined to be an extreme mass ratio, q = 0.20, W-type W UMa overcontact binary. From our standard star observations, we find that the variable is a late F spectral-type dwarf, with a secondary component of about 0.24 solar masses (stellar type M5V). Its long eclipse duration (41 minutes) compared to its period, 0.383 days, attests to the small relative size of the secondary. Furthermore, it has reached a Roche lobe fill-out of ∼50% of its outer critical lobe as it approaches the final stage of binary star evolution, that of a fast-spinning single star. Finally, a summary of about 25 extreme mass ratio solar-type binaries is given.
Directory of Open Access Journals (Sweden)
Maczka Paulina
2014-06-01
Dissolution tests of amlodipine and perindopril from their fixed-dose formulations were performed in 900 mL of phosphate buffer of pH 5.5 at 37°C using the paddle apparatus. Then, two simple and rapid derivative spectrophotometric methods were used for the quantitative measurement of amlodipine and perindopril. The first method was zero-crossing first-derivative spectrophotometry, in which amplitudes were measured at 253 nm for amlodipine and at 229 nm for perindopril. The second method was ratio derivative spectrophotometry, in which spectra of amlodipine over the linearity range were divided by one selected standard spectrum of perindopril and amplitudes at 242 nm were then measured. Similarly, spectra of perindopril were divided by one selected standard spectrum of amlodipine and amplitudes at 298 nm were then measured. Both methods were validated to meet official requirements and were demonstrated to be selective, precise and accurate. Since there is no official monograph for these drugs in binary formulations, the dissolution tests and quantification procedure presented here can be used as a quality control test for amlodipine and perindopril in the respective dosage forms.
Directory of Open Access Journals (Sweden)
Ying Bi
2017-02-01
An active control technique utilizing piezoelectric actuators to alleviate gust-response loads of a large-aspect-ratio flexible wing is investigated. Piezoelectric materials have been extensively used for active vibration control of engineering structures. In this paper, piezoelectric materials are further employed to suppress the vibration of the aeroelastic wing caused by gusts. The equation of motion of the flexible wing with piezoelectric patches is obtained by Hamilton's principle with the modal approach, and numerical gust responses are then analyzed, based on which a gust load alleviation (GLA) control system is proposed. The gust load alleviation system employs classic proportional-integral-derivative (PID) controllers which treat the piezoelectric patches as control actuators and acceleration as the feedback signal. By a numerical method, the mechanism by which piezoelectric actuators can alleviate gust-response loads is also analyzed qualitatively. Furthermore, through low-speed wind tunnel tests, the effectiveness of the gust load alleviation active control technology is validated. The test results agree well with the numerical results, and show that, over a certain frequency range, the control scheme can effectively alleviate the z and x wingtip accelerations and the root bending moment of the wing to a certain extent. The control system achieves satisfactory gust load alleviation efficacy, with the reduction rate generally being over 20%.
International Nuclear Information System (INIS)
Shibata, H.; Ito, A.; Tanaka, K.; Niino, T.; Gotoh, N.
1981-01-01
Generally, the damping phenomena of structures and equipment are caused by very complex energy dissipation. In particular, as piping systems are composed of many components, it is very difficult to evaluate the damping characteristics of such systems theoretically. On the other hand, the damping value used in the aseismic design of nuclear power plants is a very important design factor in determining the seismic response loads of structures, equipment and piping systems. A very extensive study, titled SDREP (Seismic Damping Ratio Evaluation Program), was performed to establish proper damping values for the seismic design of piping as a joint effort among a university, electric companies and plant makers. In SDREP, various systematic vibration tests were conducted to investigate factors which may contribute to the damping characteristics of piping systems and to supplement the data of the pre-operating tests. This study is related to the component damping characteristics tests of that program. The objectives of this study are to clarify the damping characteristics and mechanism of hanger supports used in piping systems, and to establish a technique for evaluating the energy dissipated at hanger support points and its effect on the total damping ability of a piping system. (orig./WL)
Hartanto, R.; Jantra, M. A. C.; Santosa, S. A. B.; Purnomoadi, A.
2018-01-01
The purpose of this research was to find an appropriate relationship model between the feed energy and protein ratio and the amount of production and quality of milk proteins. This research was conducted at Getasan Sub-district, Semarang Regency, Central Java Province, Indonesia, using 40 samples (Holstein Friesian cattle, lactation period II-III and lactation month 3-4). Data were analyzed using linear and quadratic regressions to predict the production and quality of milk protein from the feed energy and protein ratio that describes the diet. The significance of each model was tested using analysis of variance. The coefficient of determination (R2), residual variance (RV) and root mean square prediction error (RMSPE) were reported for the developed equations as indicators of the goodness of model fit. The results showed no relationship in milk protein (kg), milk casein (%), milk casein (kg) or milk urea N (mg/dl) as a function of CP/TDN. A significant relationship was observed in milk production (L or kg) and milk protein (%) as a function of CP/TDN, in both the linear and quadratic models. In addition, a quadratic change in milk production (L) (P = 0.003), milk production (kg) (P = 0.003) and milk protein concentration (%) (P = 0.026) was observed with increasing CP/TDN. It can be concluded that the quadratic equation was the better-fitting model for this research, because it had a larger R2, smaller RV and smaller RMSPE than the linear equation.
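The linear-versus-quadratic comparison by R2 and RMSPE described above can be sketched as follows. The data here are synthetic and purely illustrative (the coefficients and the `ratio`/`milk` names are assumptions, not values from the study); note that because the linear model is nested in the quadratic one, the quadratic in-sample fit is always at least as good, so out-of-sample checks or an F-test are needed before preferring it.

```python
import numpy as np

def fit_and_score(x, y, degree):
    """Fit a polynomial of the given degree; return (R^2, RMSPE)."""
    coef = np.polyfit(x, y, degree)
    pred = np.polyval(coef, x)
    ss_res = ((y - pred) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    rmspe = np.sqrt(((y - pred) ** 2).mean())   # root mean square prediction error
    return r2, rmspe

# Illustrative curvilinear response of milk yield to a diet ratio (made up)
rng = np.random.default_rng(1)
ratio = np.linspace(0.10, 0.25, 40)
milk = 5 + 120 * ratio - 250 * ratio ** 2 + rng.normal(0, 0.2, ratio.size)

r2_lin, rmspe_lin = fit_and_score(ratio, milk, 1)
r2_quad, rmspe_quad = fit_and_score(ratio, milk, 2)
print(f"linear:    R2={r2_lin:.3f}  RMSPE={rmspe_lin:.3f}")
print(f"quadratic: R2={r2_quad:.3f}  RMSPE={rmspe_quad:.3f}")
```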
Stabe, R. G.; Whitney, W. J.; Moffitt, T. P.
1984-01-01
Experimental results are presented for a 0.767 scale model of the first stage of a two-stage turbine designed for a high by-pass ratio engine. The turbine was tested with both uniform inlet conditions and with an inlet radial temperature profile simulating engine conditions. The inlet temperature profile was essentially mixed-out in the rotor. There was also substantial underturning of the exit flow at the mean diameter. Both of these effects were attributed to strong secondary flows in the rotor blading. There were no significant differences in the stage performance with either inlet condition when differences in tip clearance were considered. Performance was very close to design intent in both cases. Previously announced in STAR as N84-24589
Black swans or dragon-kings? A simple test for deviations from the power law
Janczura, J.; Weron, R.
2012-05-01
We develop a simple test for deviations from power-law tails, or indeed from the tails of any distribution. We use this test, which is based on the asymptotic properties of the empirical distribution function, to answer the question of whether great natural disasters, financial crashes or electricity price spikes should be classified as dragon-kings or `only' as black swans.
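A rough, hedged stand-in for such an EDF-based tail check can be sketched in Python: fit the tail exponent by maximum likelihood (the Hill estimator) and test the probability-integral transform of the tail sample for uniformity. This is not the authors' exact statistic, and the KS p-value is conservative because the exponent is estimated from the same data.

```python
import numpy as np
from scipy import stats

def power_law_tail_test(x, xmin):
    """Simple power-law tail check above xmin (illustrative stand-in).

    Under H0 the tail is Pareto: F(x) = 1 - (x/xmin)^(-alpha). The fitted
    CDF values of the tail sample should then look uniform on [0, 1]."""
    tail = np.sort(x[x >= xmin])
    alpha = tail.size / np.log(tail / xmin).sum()   # Hill / ML estimate
    u = 1.0 - (tail / xmin) ** (-alpha)             # fitted CDF values
    return alpha, stats.kstest(u, "uniform")

# Simulated data that genuinely follow a power law (no dragon-kings)
rng = np.random.default_rng(2)
sample = stats.pareto.rvs(b=2.5, size=2000, random_state=rng)
alpha_hat, ks = power_law_tail_test(sample, xmin=1.0)
print(alpha_hat, ks.pvalue)
```

On power-law data the p-value is large; a dragon-king-like cluster of outliers in the extreme tail would concentrate the transformed values near 1 and drive it down.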
DEFF Research Database (Denmark)
Lund, Mikkel N.; Chaplin, William J.; Kjeldsen, Hans
2012-01-01
hence a candidate detection). We apply the method to solar photometry data, whose quality was systematically degraded to test the performance of the MWPS at low signal-to-noise ratios. We also compare the performance of the MWPS against the frequently applied power-spectrum-of-power-spectrum (PSx...
Festing, Michael F W
2014-01-01
The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretations of the results, as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data pose problems due to the large number of statistical tests involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. Authors' conclusions appear to be reached somewhat subjectively from the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than 0.05. However, by using standardised effect sizes (SESs), a range of graphical methods and an overall assessment of the mean absolute response can be made. The approach is an extension, not a replacement, of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare the findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.
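The SES idea above can be sketched in a few lines. The "biomarker" data are synthetic, a permutation resampling scheme stands in for the paper's bootstrap test, and all function and variable names are assumptions made for the demo.

```python
import numpy as np

def standardized_effect_sizes(control, treated):
    """Per-biomarker SES: (treated mean - control mean) / pooled within-group SD.
    Rows are animals, columns are biomarkers."""
    n1, n2 = len(control), len(treated)
    pooled_sd = np.sqrt(((n1 - 1) * control.var(axis=0, ddof=1)
                         + (n2 - 1) * treated.var(axis=0, ddof=1)) / (n1 + n2 - 2))
    return (treated.mean(axis=0) - control.mean(axis=0)) / pooled_sd

def resampling_test_mean_abs_ses(control, treated, n_resamples=2000, seed=0):
    """p-value for the mean absolute SES across biomarkers, obtained by
    reshuffling animals between groups under the null of no treatment effect."""
    rng = np.random.default_rng(seed)
    obs = np.abs(standardized_effect_sizes(control, treated)).mean()
    pooled_rows = np.vstack([control, treated])
    n1 = len(control)
    count = 0
    for _ in range(n_resamples):
        idx = rng.permutation(len(pooled_rows))
        c, t = pooled_rows[idx[:n1]], pooled_rows[idx[n1:]]
        count += np.abs(standardized_effect_sizes(c, t)).mean() >= obs
    return obs, (count + 1) / (n_resamples + 1)

# Synthetic study: 10 animals per group, 6 biomarkers, clear treatment shift
rng = np.random.default_rng(3)
control = rng.normal(0.0, 1.0, size=(10, 6))
treated = rng.normal(0.8, 1.0, size=(10, 6))
obs, p = resampling_test_mean_abs_ses(control, treated)
print(obs, p)
```

Putting all biomarkers on the SES scale is what makes the line/box/bar plots and the overall mean-absolute-response comparison possible.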
Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David
2015-01-01
New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc
Fang, Yongxiang; Wit, Ernst
2008-01-01
Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, Fisher’s statistic is clearly more sensitive to smaller p-values than to larger ones, and a single small p-value may overrule the other p-values and decide the test result. This is, in some cases, viewed as a flaw. In order to overcome this flaw and improve the power of the test, the joint tail probability of a set of p-values is proposed as a ...
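The sensitivity to a single small p-value that the abstract describes is easy to see from Fisher's statistic itself, X2 = -2 Σ ln p_i, which is chi-square with 2k degrees of freedom under the global null. A short illustration (the p-value lists are made-up examples):

```python
import numpy as np
from scipy import stats

def fisher_combined(pvalues):
    """Fisher's method: X2 = -2 * sum(ln p_i) ~ chi-square(2k) under the
    global null of k independent tests."""
    p = np.asarray(pvalues, float)
    x2 = -2.0 * np.log(p).sum()
    return x2, stats.chi2.sf(x2, df=2 * p.size)

# One very small p-value can dominate four unremarkable ones:
p_one_small = [1e-6, 0.9, 0.9, 0.9, 0.9]
p_all_moderate = [0.08, 0.08, 0.08, 0.08, 0.08]
print(fisher_combined(p_one_small))
print(fisher_combined(p_all_moderate))
```

Both sets come out globally significant, even though in the first set four of the five tests show nothing at all. The same combination is available as `scipy.stats.combine_pvalues(p, method="fisher")`.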
Comparison of statistical tests for association between rare variants and binary traits.
Bacanu, SA; Nelson, MR; Whittaker, JC
2012-01-01
Genome-wide association studies have found thousands of common genetic variants associated with a wide variety of diseases and other complex traits. However, a large portion of the predicted genetic contribution to many traits remains unknown. One plausible explanation is that some of the missing variation is due to the effects of rare variants. Nonetheless, the statistical analysis of rare variants is challenging. A commonly used method is to contrast, within the same region (gene), the fr...
A powerful score-based test statistic for detecting gene-gene co-association.
Xu, Jing; Yuan, Zhongshang; Ji, Jiadong; Zhang, Xiaoshuai; Li, Hongkai; Wu, Xuesen; Xue, Fuzhong; Liu, Yanxun
2016-01-29
The genetic variants identified by genome-wide association studies (GWAS) can only account for a small proportion of the total heritability for complex disease. The existence of gene-gene joint effects, which contain the main effects and their co-association, is one of the possible explanations for the "missing heritability" problem. Gene-gene co-association refers to the extent to which the joint effects of two genes differ from the main effects, not only due to the traditional interaction under nearly independent conditions but also to the correlation between genes. Generally, genes tend to work collaboratively within a specific pathway or network contributing to the disease, and the specific disease-associated loci will often be highly correlated (e.g. single nucleotide polymorphisms (SNPs) in linkage disequilibrium). Therefore, we proposed a novel score-based statistic (SBS) as a gene-based method for detecting gene-gene co-association. Various simulations illustrate that, under different sample sizes, marginal effects of causal SNPs and co-association levels, the proposed SBS performs better than existing methods, including single-SNP-based and principal component analysis (PCA)-based logistic regression models, the statistics based on canonical correlations (CCU), kernel canonical correlation analysis (KCCU), partial least squares path modeling (PLSPM) and the delta-square (δ²) statistic. A real data analysis of rheumatoid arthritis (RA) further confirmed its advantages in practice. SBS is a powerful and efficient gene-based method for detecting gene-gene co-association.
Kruschke, John K; Liddell, Torrin M
2018-02-01
In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
[Do we always correctly interpret the results of statistical nonparametric tests].
Moczko, Jerzy A
2014-01-01
The Mann-Whitney, Wilcoxon, Kruskal-Wallis and Friedman tests form a group of tests commonly used to analyze the results of clinical and laboratory data. These tests are considered to be extremely flexible, and their asymptotic relative efficiency exceeds 95 percent. Compared with the corresponding parametric tests, they do not require checking the fulfillment of conditions such as normality of the data distribution, homogeneity of variance, the lack of correlation between means and standard deviations, etc. They can be used with both interval and ordinal scales. Using the Mann-Whitney test as an example, the article shows that treating the choice among these four nonparametric tests as a kind of gold standard does not in every case lead to correct inference.
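A minimal Mann-Whitney example of the kind the article discusses (the two samples are hypothetical lab results invented for the demo):

```python
from scipy import stats

# Two hypothetical lab-result samples on an ordinal/interval scale;
# no normality assumption is needed for the Mann-Whitney U test.
group_a = [12, 15, 11, 18, 14, 16, 13, 17]
group_b = [21, 19, 24, 22, 20, 25, 23, 18]

u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U={u}, p={p:.4f}")
```

A small p-value here says the two distributions differ stochastically; interpreting it as a difference in medians, as is often done, is only justified when the two distribution shapes are comparable, which is exactly the kind of interpretational pitfall the article warns about.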
Petocz, Peter; Sowey, Eric
2008-01-01
In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…
Directory of Open Access Journals (Sweden)
Jinqi Zhao
2017-12-01
In recent years, multi-temporal imagery from spaceborne sensors has provided a fast and practical means for surveying and assessing changes in terrain surfaces. Owing to its all-weather imaging capability, polarimetric synthetic aperture radar (PolSAR) has become a key tool for change detection. Change detection methods include both unsupervised and supervised methods. Supervised change detection, which needs some human intervention, is generally inefficient and impractical. Due to this limitation, unsupervised methods are widely used in change detection. The traditional unsupervised methods use only a part of the polarization information, and the required thresholding algorithms are independent of the multi-temporal data, which results in the change detection map being ineffective and inaccurate. To solve these problems, a novel change detection method using a test statistic based on the likelihood ratio test and an improved Kittler and Illingworth (K&I) minimum-error thresholding algorithm is introduced in this paper. The test statistic is used to generate the comparison image (CI) of the multi-temporal PolSAR images, and the improved K&I algorithm, using a generalized Gaussian model, simulates the distribution of the CI. As a result of these advantages, we can obtain the change detection map using an optimum threshold. The efficiency of the proposed method is demonstrated using multi-temporal PolSAR images acquired by RADARSAT-2 over Wuhan, China. The experimental results show that the proposed method is effective and highly accurate.
Woodruff, David; Wu, Yi-Fang
2012-01-01
The purpose of this paper is to illustrate alpha's robustness and usefulness, using actual and simulated educational test data. The sampling properties of alpha are compared with the sampling properties of several other reliability coefficients: Guttman's lambda-2, lambda-4, and lambda-6; test-retest reliability;…
DEFF Research Database (Denmark)
Nielsen, Morten Ørregaard
This paper presents a family of simple nonparametric unit root tests indexed by one parameter, d, and containing Breitung's (2002) test as the special case d = 1. It is shown that (i) each member of the family with d > 0 is consistent, (ii) the asymptotic distribution depends on d, and thus refle...
Statistically based reevaluation of PISC-II round robin test data
International Nuclear Information System (INIS)
Heasler, P.G.; Taylor, T.T.; Doctor, S.R.
1993-05-01
This report presents a re-analysis of international PISC-II (Programme for Inspection of Steel Components, Phase 2) round-robin inspection results using formal statistical techniques to account for experimental error. The analysis examines US team performance versus that of the other participants, flaw sizing performance and the errors associated with flaw sizing, factors influencing flaw detection probability, and the performance of all participants with respect to the recently adopted ASME Section XI flaw detection performance demonstration requirements, and it develops conclusions concerning ultrasonic inspection capability. Inspection data were gathered on four heavy-section steel components, which included two plates and two nozzle configurations.
Berlin, Sofia; Smith, Nick G C
2005-11-10
Adaptive evolution appears to be a common feature of reproductive proteins across a very wide range of organisms. A promising way of addressing the evolutionary forces responsible for this general phenomenon is to test for adaptive evolution in the same gene but among groups of species, which differ in their reproductive biology. One can then test evolutionary hypotheses by asking whether the variation in adaptive evolution is consistent with the variation in reproductive biology. We have attempted to apply this approach to the study of a female reproductive protein, zona pellucida C (ZPC), which has been previously shown by the use of likelihood ratio tests (LRTs) to be under positive selection in mammals. We tested for evidence of adaptive evolution of ZPC in 15 mammalian species, in 11 avian species and in six fish species using three different LRTs (M1a-M2a, M7-M8, and M8a-M8). The only significant findings of adaptive evolution came from the M7-M8 test in mammals and fishes. Since LRTs of adaptive evolution may yield false positives in some situations, we examined the properties of the LRTs by several different simulation methods. When we simulated data to test the robustness of the LRTs, we found that the pattern of evolution in ZPC generates an excess of false positives for the M7-M8 LRT but not for the M1a-M2a or M8a-M8 LRTs. This bias is strong enough to have generated the significant M7-M8 results for mammals and fishes. We conclude that there is no strong evidence for adaptive evolution of ZPC in any of the vertebrate groups we studied, and that the M7-M8 LRT can be biased towards false inference of adaptive evolution by certain patterns of non-adaptive evolution.
Besser, Rachel E J; Shields, Beverley M; Hammersley, Suzanne E; Colclough, Kevin; McDonald, Timothy J; Gray, Zoe; Heywood, James J N; Barrett, Timothy G; Hattersley, Andrew T
2013-05-01
Making the correct diabetes diagnosis in children is crucial for lifelong management. Type 2 diabetes and maturity onset diabetes of the young (MODY) are seen in the pediatric setting, and can be difficult to discriminate from type 1 diabetes. Postprandial urinary C-peptide creatinine ratio (UCPCR) is a non-invasive measure of endogenous insulin secretion that has not been tested as a diagnostic tool in children or in patients with diabetes duration MODY and type 2 in pediatric diabetes. Two-hour postprandial UCPCR was measured in 264 patients aged MODY, n = 63). Receiver operating characteristic curves were used to identify the optimal UCPCR cutoff for discriminating diabetes subtypes. UCPCR was lower in type 1 diabetes [0.05 (MODY [3.51 (2.37-5.32) nmol/mmol, p MODY (p = 0.25), so patients were combined for subsequent analyses. After 2-yr duration, UCPCR ≥ 0.7 nmol/mmol has 100% sensitivity [95% confidence interval (CI): 92-100] and 97% specificity (95% CI: 91-99) for identifying non-type 1 (MODY + type 2 diabetes) from type 1 diabetes [area under the curve (AUC) 0.997]. UCPCR was poor at discriminating MODY from type 2 diabetes (AUC 0.57). UCPCR testing can be used in diabetes duration greater than 2 yr to identify pediatric patients with non-type 1 diabetes. UCPCR testing is a practical non-invasive method for use in the pediatric outpatient setting. © 2013 John Wiley & Sons A/S.
Directory of Open Access Journals (Sweden)
Ertan Şahin
2018-04-01
Full Text Available Aim: The neutrophil/lymphocyte ratio (NLR) and platelet/lymphocyte ratio (PLR) are used as inflammatory markers in several diseases. However, there are few data regarding the diagnostic ability of NLR and PLR in Helicobacter pylori infection. We aimed to assess the association between the 14C urea breath test (14C-UBT) results and NLR and PLR in H. pylori diagnosis. Methods: Results of 89 patients were retrospectively analysed in this study. According to the 14C-UBT results, patients were divided into two groups: H. pylori (+) and H. pylori (-) (control group). Haematological parameters, including haemoglobin, white blood cell (WBC) count, neutrophil count, lymphocyte count, NLR, platelet count, and PLR, were compared between the two groups. Results: The mean total WBC count, neutrophil count, NLR and PLR in H. pylori (+) patients were significantly higher than in the control group (p<0.001 for all these parameters). In the receiver operating characteristic curve analysis, the cut-off value for NLR and PLR for the presence of H. pylori was calculated as ≥2.39 [sensitivity: 67.3%, specificity: 79.4%, area under the curve (AUC): 0.747 (0.637-0.856), p<0.0001] and ≥133.3 [sensitivity: 61.8%, specificity: 55.9%, AUC: 0.572 (0.447-0.697), p<0.05], respectively. Conclusion: The present study shows that NLR and PLR are associated with H. pylori positivity based on 14C-UBT, and they can be used as additional biomarkers for supporting the 14C-UBT results.
Statistical homogeneity tests applied to large data sets from high energy physics experiments
Trusina, J.; Franc, J.; Kůs, V.
2017-12-01
Homogeneity tests are used in high energy physics for the verification of simulated Monte Carlo samples, i.e., to check whether they have the same distribution as the measured data from the particle detector. The Kolmogorov-Smirnov, χ², and Anderson-Darling tests are the techniques most commonly used to assess the samples' homogeneity. Since MC generators produce plenty of entries from different models, each entry has to be re-weighted to obtain the same sample size as the measured data. One way of testing homogeneity is through binning. If we do not want to lose any information, we can apply generalized tests based on weighted empirical distribution functions. In this paper, we propose such generalized weighted homogeneity tests and introduce some of their asymptotic properties. We present results based on numerical analysis focusing on estimation of the type-I error and power of the test. Finally, we present an application of our homogeneity tests to data from the DØ experiment at Fermilab.
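A weighted empirical distribution function comparison of the kind described above can be sketched as a Kolmogorov-Smirnov-style supremum statistic. This is a minimal illustration of the idea, not the authors' exact generalized statistic or its asymptotic calibration.

```python
def weighted_ks(x1, w1, x2, w2):
    """Sup-distance between two weighted empirical CDFs.

    Each sample is a list of values with per-entry weights (e.g. MC event
    weights); the weighted ECDF at p is the normalized weight of entries <= p.
    """
    t1, t2 = sum(w1), sum(w2)
    d = 0.0
    for p in sorted(set(x1) | set(x2)):
        f1 = sum(w for x, w in zip(x1, w1) if x <= p) / t1
        f2 = sum(w for x, w in zip(x2, w2) if x <= p) / t2
        d = max(d, abs(f1 - f2))
    return d

same = weighted_ks([1, 2, 3], [1, 1, 1], [1, 2, 3], [1, 1, 1])
apart = weighted_ks([1, 2, 3], [1, 1, 1], [4, 5, 6], [1, 1, 1])
```

Identical samples give a statistic of 0, while fully separated samples give 1; re-weighting lets a generator sample with unequal event weights be compared directly against unit-weight detector data without binning.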
Jokhio, Gul A.; Syed Mohsin, Sharifah M.; Gul, Yasmeen
2018-04-01
It has been established that Adobe provides, in addition to being sustainable and economic, better indoor air quality without spending extensive amounts of energy, as opposed to modern synthetic materials. The material, however, suffers from weak structural behaviour when subjected to adverse loading conditions. A wide range of mechanical properties has been reported in the literature owing to lack of research and standardization. The present paper presents the statistical analysis of results obtained through compressive and flexural tests on Adobe samples. Adobe specimens with and without wire mesh reinforcement were tested and the results reported. The statistical analysis of these results presents an interesting read. It has been found that the compressive strength of Adobe increases by about 43% after adding a single layer of wire mesh reinforcement. This increase is statistically significant. The flexural response of Adobe has also shown improvement with the addition of wire mesh reinforcement; however, the statistical significance of this improvement cannot be established.
Linford, G. A.; Lemen, J. R.; Strong, K. T.
1988-01-01
Since the repair of the Solar Maximum Mission (SMM) spacecraft, the Flat Crystal Spectrometer (FCS) has recorded many high temperature spectra of helium-like ions under a wide variety of coronal conditions including active regions, long duration events, compact events, and double flares. The plasma density and temperature are derived from the ratios R and G, where R = f/i, G = (f + i)/r, and r, f, and i denote the resonance, forbidden, and intercombination line fluxes. A new method for obtaining the density and temperature for events observed with the FCS aboard SMM is presented. The results for these events are presented and compared to earlier results, and the method is evaluated based on these comparisons.
Statistical reliability assessment of UT round-robin test data for piping welds
International Nuclear Information System (INIS)
Kim, H.M.; Park, I.K.; Park, U.S.; Park, Y.W.; Kang, S.C.; Lee, J.H.
2004-01-01
Ultrasonic NDE is one of the important technologies in the life-time maintenance of nuclear power plants. An ultrasonic inspection system consists of the operator, equipment and procedure, and its reliability is determined by the capability of each of these elements. A performance demonstration round robin was conducted to quantify the capability of in-service ultrasonic inspection. Several teams, employing procedures that met or exceeded ASME Sec. XI code requirements, inspected nuclear power plant piping containing various cracks to evaluate the capability of detection and sizing. In this paper, the statistical reliability assessment of ultrasonic nondestructive inspection data using probability of detection (POD) is presented. The POD results obtained using a logistic model proved useful for the reliability assessment of NDE hit/miss data. (orig.)
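A logistic POD model for hit/miss data of the kind described above can be fitted by maximizing the Bernoulli log-likelihood. This is an illustrative sketch using plain gradient ascent on toy data; the study's actual fitting procedure and data are not reproduced.

```python
import math

def fit_pod_logistic(sizes, hits, iters=2000, lr=0.05):
    """Fit POD(x) = 1 / (1 + exp(-(a + b*x))) to hit/miss data.

    sizes: flaw sizes, hits: 1 if detected else 0. Gradient ascent on the
    mean Bernoulli log-likelihood (illustrative; a real analysis would use
    a proper optimizer and report confidence bounds such as a90/95).
    """
    a, b = 0.0, 0.0
    n = len(sizes)
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(sizes, hits):
            p = 1 / (1 + math.exp(-(a + b * x)))
            ga += (y - p)          # d(loglik)/da
            gb += (y - p) * x      # d(loglik)/db
        a += lr * ga / n
        b += lr * gb / n
    return a, b

# toy round-robin record: larger flaws are detected more often
sizes = [1, 2, 3, 4, 5, 6, 7, 8]
hits  = [0, 0, 1, 0, 1, 1, 1, 1]
a, b = fit_pod_logistic(sizes, hits)
pod = lambda x: 1 / (1 + math.exp(-(a + b * x)))
```

The fitted slope b is positive when detection improves with flaw size, so the POD curve rises monotonically from small to large flaws.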
A Statistical Test of Correlations and Periodicities in the Geological Records
Yabushita, S.
1997-09-01
Matsumoto & Kubotani argued that there is a positive and statistically significant correlation between cratering and mass extinction. This argument is critically examined by adopting the method of Ertel used by Matsumoto & Kubotani, but applying it more directly to the extinction and cratering records. It is shown that, on the null hypothesis of randomly distributed crater ages, the observed correlation has a probability of occurrence of 13%. However, when large craters whose ages agree with the times of peaks of the extinction rate of marine fauna are excluded, one obtains a negative correlation. This result strongly indicates that mass extinctions are not due to an accumulation of impacts but to isolated gigantic impacts.
Weibull statistics effective area and volume in the ball-on-ring testing method
DEFF Research Database (Denmark)
Frandsen, Henrik Lund
2014-01-01
The ball-on-ring method, together with other biaxial bending methods, is often used for measuring the strength of plates of brittle materials, because machining defects are remote from the high stresses causing the failure of the specimens. In order to scale the measured Weibull strength...... to geometries relevant for the application of the material, the effective area or volume of the test specimen must be evaluated. In this work analytical expressions for the effective area and volume of the ball-on-ring test specimen are derived. In the derivation the multiaxial stress field has been accounted...
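Once an effective area is known, it enters the standard Weibull size-effect relation for scaling characteristic strength between geometries. The sketch below applies that relation with illustrative numbers; it does not reproduce the paper's analytical expressions for the ball-on-ring effective area itself.

```python
def scale_weibull_strength(sigma_1, a_eff_1, a_eff_2, m):
    """Weibull size effect between effective areas (surface-flaw failure):

        sigma_2 = sigma_1 * (A_eff,1 / A_eff,2) ** (1/m)

    sigma_1: characteristic strength measured at effective area a_eff_1,
    m: Weibull modulus. The same form holds with effective volumes for
    volume-distributed flaws.
    """
    return sigma_1 * (a_eff_1 / a_eff_2) ** (1.0 / m)

# strength measured on a small ball-on-ring effective area, rescaled to a
# component with 10x the effective area (all numbers illustrative)
sigma_component = scale_weibull_strength(300.0, 5.0, 50.0, 10.0)
```

Because a larger stressed area samples more flaws, the predicted strength drops as the effective area grows, and the drop is steeper for lower Weibull moduli.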
Energy Technology Data Exchange (ETDEWEB)
Maharaj, H.P., E-mail: H_P_Maharaj@hc-sc.gc.ca [Health Canada, Dept. of Health, Consumer and Clinical Radiation Protection Bureau, Ottawa, Ontario (Canada)
2016-03-15
This paper aims to provide an overview of an optimized benefit/risk ratio for a radiation emitting device. The device, which is portable, hand-held, and open-beam x-ray tube based, is utilized by a wide variety of industries for purposes of determining elemental or chemical analyses of materials in-situ based on fluorescent x-rays. These analyses do not cause damage or permanent alteration of the test materials and are considered a non-destructive test (NDT). Briefly, the key characteristics, principles of use and radiation hazards associated with the Hay device are presented and discussed. In view of the potential radiation risks, a long term strategy that incorporates risk factors and guiding principles intended to mitigate the radiation risks to the end user was considered and applied. Consequently, an operator certification program was developed on the basis of an International Organization for Standardization (ISO) standard (ISO 20807:2004) and in collaboration with various stakeholders and was implemented by a federal national NDT certification body several years ago. It comprises a written radiation safety examination and hands-on training with the x-ray device. The operator certification program was recently revised and the changes appear beneficial. There is a fivefold increase in operator certification (Levels 1 and 2) to date compared with earlier years. Results are favorable and promising. An operational guidance document is available to help mitigate radiation risks. Operator certification in conjunction with the use of the operational guidance document is prudent, and is recommended for end users of the x-ray device. Manufacturers and owners of the x-ray devices will also benefit from the operational guidance document. (author)
Establishing statistical models of manufacturing parameters
International Nuclear Information System (INIS)
Senevat, J.; Pape, J.L.; Deshayes, J.F.
1991-01-01
This paper reports on the effect of pilgering and cold-work parameters on contractile strain ratio and mechanical properties that were investigated using a large population of Zircaloy tubes. Statistical models were established between: contractile strain ratio and tooling parameters, mechanical properties (tensile test, creep test) and cold-work parameters, and mechanical properties and stress-relieving temperature
Sarrigiannis, Ptolemaios G; Zhao, Yifan; Wei, Hua-Liang; Billings, Stephen A; Fotheringham, Jayne; Hadjivassiliou, Marios
2014-01-01
To introduce a new method of quantitative EEG analysis in the time domain, the error reduction ratio (ERR)-causality test. To compare performance against cross-correlation and coherence with phase measures. A simulation example was used as a gold standard to assess the performance of ERR-causality, against cross-correlation and coherence. The methods were then applied to real EEG data. Analysis of both simulated and real EEG data demonstrates that ERR-causality successfully detects dynamically evolving changes between two signals, with very high time resolution, dependent on the sampling rate of the data. Our method can properly detect both linear and non-linear effects, encountered during analysis of focal and generalised seizures. We introduce a new quantitative EEG method of analysis. It detects real time levels of synchronisation in the linear and non-linear domains. It computes directionality of information flow with corresponding time lags. This novel dynamic real time EEG signal analysis unveils hidden neural network interactions with a very high time resolution. These interactions cannot be adequately resolved by the traditional methods of coherence and cross-correlation, which provide limited results in the presence of non-linear effects and lack fidelity for changes appearing over small periods of time. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Lambert, G.H.; Schoeller, D.A.; Kotake, A.N.; Lietz, H. (Univ. of Chicago, IL (USA)); Humphrey, H.E.B.; Budd, M. (Michigan Dept. of Public Health, Lansing (USA)); Campbell, M.; Kalow, W.; Spielberg, P. (Univ. of Toronto, Ontario (Canada))
1990-11-01
A field biochemical epidemiology study was conducted using the Michigan cohort consisting of 51 rural residents exposed to polybrominated biphenyls (PBB). The study had three major objectives: (a) to determine the serum half-life of the major PBB congener, hexabromobiphenyl (HBB), in the human, (b) to determine if the PBB-exposed subjects had elevated cytochrome P-450I function as determined by the caffeine breath test (CBT) and the caffeine urinary metabolite ratio (CMR), and (c) to determine the applicability of the CBT and CMR in field studies. PBB serum levels were detected in 36 of the 51 PBB-exposed subjects. The serum half-life of HBB was determined by comparing the current serum HBB values to the subject's previous serum values obtained 5 to 8 years earlier. The median HBB half-life was 12 years (range 4-97 years). The CBT and CMR were elevated in the subjects exposed to PBBs as compared to the values obtained from urban nonsmokers and were similar to those found in adults who smoke. A gender effect was seen in the PBB-exposed subjects. There was a correlation between the CBT and the HBB serum values but not between CMR and HBB serum values. The CBT and CMR were easily conducted in the field and appear to be useful metabolic probes of cytochrome P-450I activity in human environmental toxicology.
International Nuclear Information System (INIS)
Na, Man Gyun; Oh, Seungrohk
2002-01-01
A neuro-fuzzy inference system combined with wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor a relevant sensor using the information of other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. By reducing the dimension of the input space without losing a significant amount of information, the PCA was used to reduce the time necessary to train the neuro-fuzzy system, to simplify its structure, and to ease the selection of its input signals. Using the residual signals between the estimated and measured signals, the SPRT is applied to detect whether the sensors are degraded or not. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, pressurizer pressure, and hot-leg temperature sensors in pressurized water reactors.
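The SPRT step on the residual signals can be sketched with Wald's classic test for a mean shift in Gaussian residuals. This is an illustrative stand-in for the paper's monitoring logic; the means, variances, and error rates below are assumed values.

```python
import math
import random

def sprt(residuals, sigma=1.0, m0=0.0, m1=1.2, alpha=0.01, beta=0.01):
    """Wald SPRT on a residual stream: H0 mean m0 (healthy) vs H1 mean m1.

    Accumulates the log-likelihood ratio sample by sample and stops at the
    first boundary crossing; returns (decision, samples_used).
    """
    upper = math.log((1 - beta) / alpha)   # cross -> accept H1 (degraded)
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0 (healthy)
    llr = 0.0
    for k, r in enumerate(residuals, 1):
        llr += (m1 - m0) * (r - (m0 + m1) / 2) / sigma ** 2
        if llr >= upper:
            return "degraded", k
        if llr <= lower:
            return "healthy", k
    return "undecided", len(residuals)

random.seed(1)
healthy = [random.gauss(0.0, 1.0) for _ in range(200)]   # sensor tracking well
faulty  = [random.gauss(1.2, 1.0) for _ in range(200)]   # drifted sensor
verdict_h, _ = sprt(healthy)
verdict_f, _ = sprt(faulty)
```

Unlike a fixed-sample test, the SPRT decides as soon as the accumulated evidence crosses either boundary, which is what makes it attractive for on-line sensor monitoring.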
A statistical characterization of the finger tapping test: modeling, estimation, and applications.
Austin, Daniel; McNames, James; Klein, Krystal; Jimison, Holly; Pavel, Misha
2015-03-01
Sensory-motor performance is indicative of both cognitive and physical function. The Halstead-Reitan finger tapping test is a measure of sensory-motor speed commonly used to assess function as part of a neuropsychological evaluation. Despite the widespread use of this test, the underlying motor and cognitive processes driving tapping behavior during the test are not well characterized or understood. This lack of understanding may make clinical inferences from test results about health or disease state less accurate because important aspects of the task such as variability or fatigue are unmeasured. To overcome these limitations, we enhanced the tapper with a sensor that enables us to more fully characterize all the aspects of tapping. This modification enabled us to decompose the tapping performance into six component phases and represent each phase with a set of parameters having clear functional interpretation. This results in a set of 29 total parameters for each trial, including change in tapping over time, and trial-to-trial and tap-to-tap variability. These parameters can be used to more precisely link different aspects of cognition or motor function to tapping behavior. We demonstrate the benefits of this new instrument with a simple hypothesis-driven trial comparing single and dual-task tapping.
Godleski, Stephanie A.; Ostrov, Jamie M.
2010-01-01
The present study used both categorical and dimensional approaches to test the association between relational and physical aggression and hostile intent attributions for both relational and instrumental provocation situations using the National Institute of Child Health and Human Development longitudinal Study of Early Child Care and Youth…
Statistical Indexes for Monitoring Item Behavior under Computer Adaptive Testing Environment.
Zhu, Renbang; Yu, Feng; Liu, Su
A computerized adaptive test (CAT) administration usually requires a large supply of items with accurately estimated psychometric properties, such as item response theory (IRT) parameter estimates, to ensure the precision of examinee ability estimation. However, an estimated IRT model of a given item in any given pool does not always correctly…
DEFF Research Database (Denmark)
Paisley, Larry
2002-01-01
The evolution of monitoring and surveillance for bovine spongiform encephalopathy (BSE) from the phase of passive surveillance that began in the United Kingdom in 1988 until the present is described. Currently, surveillance for BSE in Europe consists of mass testing of cattle slaughtered for human...
Directory of Open Access Journals (Sweden)
Sachin Chittawar
2013-01-01
Full Text Available Background: Demonstration of a central:peripheral adrenocorticotropic hormone (ACTH) gradient is important for the diagnosis of Cushing's disease. Aim: The aim was to assess the utility of the internal jugular vein (IJV):peripheral vein ACTH ratio for the diagnosis of Cushing's disease. Materials and Methods: Patients with ACTH-dependent Cushing's syndrome (CS) were the subjects for this study. One blood sample each was collected from the right and left IJV following intravenous hCRH at 3 and 5 min, respectively. A simultaneous peripheral vein sample was also collected with each IJV sample for calculation of the IJV:peripheral vein ACTH ratio. IJV sample collection was done under ultrasound guidance. ACTH was assayed using electrochemiluminescence immunoassay (ECLIA). Results: Thirty-two patients participated in this study. The IJV:peripheral vein ACTH ratio ranged from 1.07 to 6.99 (n = 32). It was more than 1.6 in 23 patients. Cushing's disease could be confirmed in 20 of the 23 cases with an IJV:peripheral vein ratio more than 1.6. Four patients with Cushing's disease and 2 patients with ectopic ACTH syndrome had an IJV:peripheral vein ACTH ratio less than 1.6. Six cases with an unknown ACTH source were excluded from calculation of the sensitivity and specificity of the test. Conclusion: The IJV:peripheral vein ACTH ratio calculated from a single sample from each IJV obtained after hCRH had 83% sensitivity and 100% specificity for the diagnosis of CD.
International Nuclear Information System (INIS)
Baghaee Moghaddam, Taher; Soltani, Mehrtash; Karim, Mohamed Rehan
2015-01-01
Highlights: • Effect of PET modification on stiffness property of asphalt mixture was examined. • Different temperatures and loading amounts were designated. • Statistical analysis was used to find interactions between selected variables. • A good agreement between experimental results and predicted values was obtained. • Optimal amount of PET was calculated to achieve the highest mixture performance. - Abstract: Stiffness of asphalt mixture is a fundamental design parameter of flexible pavement. According to literature, stiffness value is very susceptible to environmental and loading conditions. In this paper, effects of applied stress and temperature on the stiffness modulus of unmodified and Polyethylene Terephthalate (PET) modified asphalt mixtures were evaluated using Response Surface Methodology (RSM). A quadratic model was successfully fitted to the experimental data. Based on the results achieved in this study, the temperature variation had the highest impact on the mixture’s stiffness. Besides, PET content and amount of stress showed to have almost the same effect on the stiffness of mixtures. The optimal amount of PET was found to be 0.41% by weight of aggregate particles to reach the highest stiffness value
Callegaro, Giulia; Malkoc, Kasja; Corvi, Raffaella; Urani, Chiara; Stefanini, Federico M
2017-12-01
The identification of the carcinogenic risk of chemicals is currently mainly based on animal studies. The in vitro Cell Transformation Assays (CTAs) are a promising alternative to be considered in an integrated approach. CTAs measure the induction of foci of transformed cells. CTAs model key stages of the in vivo neoplastic process and are able to detect both genotoxic and some non-genotoxic compounds, being the only in vitro method able to deal with the latter. Despite their favorable features, CTAs can be further improved, especially reducing the possible subjectivity arising from the last phase of the protocol, namely visual scoring of foci using coded morphological features. By taking advantage of digital image analysis, the aim of our work is to translate morphological features into statistical descriptors of foci images, and to use them to mimic the classification performances of the visual scorer to discriminate between transformed and non-transformed foci. Here we present a classifier based on five descriptors trained on a dataset of 1364 foci, obtained with different compounds and concentrations. Our classifier showed accuracy, sensitivity and specificity equal to 0.77 and an area under the curve (AUC) of 0.84. The presented classifier outperforms a previously published model. Copyright © 2017 Elsevier Ltd. All rights reserved.
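The AUC reported for the foci classifier above has a simple rank interpretation: it is the probability that a randomly chosen transformed focus scores higher than a randomly chosen non-transformed one. The sketch below computes AUC that way on toy scores; it is not the authors' classifier.

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney AUC: P(random positive outranks random negative).

    scores_pos: classifier scores for true-positive items (e.g. transformed
    foci), scores_neg: scores for negatives. Ties count half.
    """
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

perfect = auc([2.0, 3.0], [0.0, 1.0])   # positives always rank higher
chance = auc([1.0, 2.0], [1.0, 2.0])    # indistinguishable score sets
```

An AUC of 0.84, as reported for the five-descriptor classifier, therefore means a transformed focus outscores a non-transformed one about 84% of the time.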
Relationship between the COI test and other sensory profiles by statistical procedures
Directory of Open Access Journals (Sweden)
Calvente, J. J.
1994-04-01
Full Text Available Relationships between 139 sensory attributes evaluated on 32 samples of virgin olive oil have been analysed by a statistical sensory wheel that guarantees the objectivity and predictive power of its conclusions concerning the best clusters of attributes: green, bitter-pungent, ripe fruit, fruity, sweet fruit, undesirable attributes and two miscellanies. The procedure allows the sensory notes of this edible oil, as evaluated by potential consumers, to be understood from the point of view of its habitual consumers, with special reference to European Communities Regulation No. 2568/91. Five different panels, Spanish, Greek, Italian, Dutch and British, were used to evaluate the samples. Analysis of the relationships between stimuli perceived by aroma, flavour, smell, mouthfeel and taste, together with Linear Sensory Profiles based on Fuzzy Logic, is provided. A 3-dimensional plot indicates the usefulness of the proposed procedure in the authentication of different varieties of virgin olive oil. An analysis of the volatile compounds responsible for most of the attributes gives weight to the conclusions. Directions which promise to improve the E.C. Regulation on the sensory quality of olive oil are also given.
Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore
2014-04-01
Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions possibly with excess zeros. In addition the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern in time possibly with some form of autocorrelation. The model also allows a set of reference varieties to be added to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
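Count data with excess zeros, as generated by the framework described above, are often simulated with a zero-inflated Poisson: with some probability an observation is a structural zero, otherwise it is an ordinary Poisson draw. This is a minimal stdlib sketch of that idea, not the paper's Supplementary Material model.

```python
import math
import random

def zip_sample(n, lam, pi_zero, rng):
    """Draw n zero-inflated Poisson(lam) counts.

    With probability pi_zero an observation is a structural zero; otherwise
    a Poisson count is drawn via Knuth's multiplication method.
    """
    out = []
    for _ in range(n):
        if rng.random() < pi_zero:
            out.append(0)
            continue
        # Knuth Poisson sampler: multiply uniforms until below exp(-lam)
        limit = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        out.append(k)
    return out

rng = random.Random(42)
data = zip_sample(5000, lam=3.0, pi_zero=0.3, rng=rng)
mean = sum(data) / len(data)          # expected (1 - pi_zero) * lam = 2.1
frac_zero = data.count(0) / len(data) # structural + Poisson zeros
```

The zero fraction exceeds what a plain Poisson with the same mean would give, which is exactly the excess-zero behaviour the framework is designed to reproduce for non-target organism counts.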
DEFF Research Database (Denmark)
Paulsen, Rasmus Reinhold; Larsen, Rasmus; Ersbøll, Bjarne Kjær
2002-01-01
surface models are built by using the anatomical landmarks to warp a template mesh onto all shapes in the training set. Testing the gender related differences is done by initially reducing the dimensionality using principal component analysis of the vertices of the warped meshes. The number of components...... to retain is chosen using Horn's parallel analysis. Finally a multivariate analysis of variance is performed on these components....
Austin, Peter C; Mamdani, Muhammad M; Juurlink, David N; Hux, Janet E
2006-09-01
To illustrate how multiple hypothesis testing can produce associations with no clinical plausibility. We conducted a study of all 10,674,945 residents of Ontario aged between 18 and 100 years in 2000. Residents were randomly assigned to equally sized derivation and validation cohorts and classified according to their astrological sign. Using the derivation cohort, we searched through 223 of the most common diagnoses for hospitalization until we identified, for each astrological sign, two diagnoses for which subjects born under that sign had a significantly higher probability of hospitalization than subjects born under the remaining signs combined (P<0.05). We then tested these 24 sign-diagnosis associations in the independent validation cohort. Residents born under Leo had a higher probability of gastrointestinal hemorrhage (P=0.0447), while Sagittarians had a higher probability of humerus fracture (P=0.0123) compared to all other signs combined. After adjusting the significance level to account for multiple comparisons, none of the identified associations remained significant in either the derivation or validation cohort. Our analyses illustrate how the testing of multiple, non-prespecified hypotheses increases the likelihood of detecting implausible associations. Our findings have important implications for the analysis and interpretation of clinical studies.
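The multiple-testing effect the authors exploit can be reproduced in a few lines. Under the null hypothesis p-values are Uniform(0,1), so with 24 uncorrected tests (matching their 24 sign-diagnosis associations) the chance of at least one spurious "finding" is about 1 − 0.95²⁴ ≈ 0.71; all other details are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n_tests, n_sim, alpha = 24, 5000, 0.05

# The null is true for every test: any "significant" association is spurious.
pvals = rng.random((n_sim, n_tests))           # p-values ~ Uniform(0,1) under H0
any_hit = (pvals < alpha).any(axis=1).mean()   # family-wise error, uncorrected
bonf_hit = (pvals < alpha / n_tests).any(axis=1).mean()  # Bonferroni-corrected

print(f"P(at least one spurious finding), 24 uncorrected tests: {any_hit:.3f}")
print(f"same with Bonferroni correction: {bonf_hit:.3f}")
```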
Directory of Open Access Journals (Sweden)
García-Casco, J. M.
2013-04-01
Full Text Available In the present work we have analyzed a total of 734 subcutaneous fat samples from Iberian pigs with different feeding systems for fattening (“Bellota”, “Recebo”, “Campo” and “Cebo”) over three consecutive years, 2009-2011. Lipids were extracted from the subcutaneous fat on the rump, and after esterification, they were analyzed by Gas Chromatography (GC-FID) and Gas Chromatography-Combustion-Isotope Ratio Mass Spectrometry (GC-C-IRMS). Mean fatty acid and isotope ratio values show that there are differences according to the year and feeding system, two factors that should be taken into account when classifying the animals. The application of different prediction models based on discriminant analysis has allowed us to establish a method for the classification of animals according to the feeding system type, with a correct classification rate of 85% using three or four classification categories (Bellota, Recebo, Campo and/or Cebo) and 91% using only two categories, Cebo and Bellota. This model could provide the basis for appropriate classification of Iberian pigs according to their feeding regime.
Kipiński, Lech; König, Reinhard; Sielużycki, Cezary; Kordecki, Wojciech
2011-10-01
Stationarity is a crucial yet rarely questioned assumption in the analysis of time series of magneto- (MEG) or electroencephalography (EEG). One key drawback of the commonly used tests for stationarity of encephalographic time series is the fact that conclusions on stationarity are only indirectly inferred, either from the Gaussianity of the time series (e.g. the Shapiro-Wilk test or Kolmogorov-Smirnov test) or from its randomness and the absence of trend using very simple time-series models (e.g. the sign and trend tests by Bendat and Piersol). We present a novel approach to the analysis of the stationarity of MEG and EEG time series by applying modern statistical methods which were specifically developed in econometrics to verify the hypothesis that a time series is stationary. We report our findings from the application of three different tests of stationarity, namely the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test for trend or mean stationarity, the Phillips-Perron (PP) test for the presence of a unit root, and the White test for homoscedasticity, to an illustrative set of MEG data. For five stimulation sessions, we found, already for short epochs of 250 and 500 ms duration, that although the majority of the studied epochs of single MEG trials were usually mean-stationary (KPSS test and PP test), they were classified as nonstationary due to their heteroscedasticity (White test). We also observed that the presence of external auditory stimulation did not significantly affect the findings regarding the stationarity of the data. We conclude that the combination of these tests allows a refined analysis of the stationarity of MEG and EEG time series.
DEFF Research Database (Denmark)
Gardner, Ian A.; Greiner, Matthias
2006-01-01
Receiver-operating characteristic (ROC) curves provide a cutoff-independent method for the evaluation of continuous or ordinal tests used in clinical pathology laboratories. The area under the curve is a useful overall measure of test accuracy and can be used to compare different tests (or different equipment) used by the same tester, as well as the accuracy of different diagnosticians that use the same test material. To date, ROC analysis has not been widely used in veterinary clinical pathology studies, although it should be considered a useful complement to estimates of sensitivity and specificity in test evaluation studies. In addition, calculation of likelihood ratios can potentially improve the clinical utility of such studies because likelihood ratios provide an indication of how the post-test probability changes as a function of the magnitude of the test results. For ordinal test...
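Both quantities discussed here, the AUC and likelihood ratios at a cutoff, can be computed directly from test scores; the score distributions below are invented purely for illustration.

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney identity:
    P(random diseased score > random healthy score), ties counted half."""
    pos = np.asarray(scores_pos, float)[:, None]
    neg = np.asarray(scores_neg, float)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

def likelihood_ratios(scores_pos, scores_neg, cutoff):
    """LR+ and LR- at a single cutoff: how a positive or negative result
    shifts the pre-test odds of disease."""
    sens = np.mean(np.asarray(scores_pos) >= cutoff)
    spec = np.mean(np.asarray(scores_neg) < cutoff)
    return sens / (1 - spec), (1 - sens) / spec

rng = np.random.default_rng(3)
diseased = rng.normal(2.0, 1.0, 200)   # hypothetical analyte values, diseased
healthy = rng.normal(0.0, 1.0, 200)    # hypothetical analyte values, healthy

auc = roc_auc(diseased, healthy)
lr_pos, lr_neg = likelihood_ratios(diseased, healthy, cutoff=1.0)
print(f"AUC = {auc:.2f}, LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")
```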
Directory of Open Access Journals (Sweden)
Rafdzah Zaki
2013-06-01
Full Text Available Objective(s): Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify statistical methods used to measure the reliability of equipment measuring continuous variables. This study also aims to highlight inappropriate statistical methods used in reliability analysis and their implications for medical practice. Materials and Methods: In 2010, five electronic databases were searched for reliability studies published between 2007 and 2009. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 fitted the inclusion criteria. Results: The Intra-class Correlation Coefficient (ICC) is the most popular method, used in 25 (60%) studies, followed by the comparison of means (8, or 19%). Of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and the type of ICC used. Most studies (71%) also tested the agreement of instruments. Conclusion: This study finds that the Intra-class Correlation Coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue, and to be able to correctly perform analysis in reliability studies.
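A one-way single-measurement ICC, one common form of the statistic reported in the reviewed studies, can be computed from ANOVA mean squares. The repeated readings below are synthetic; as the review stresses, a real report should also state which ICC form was used and give a confidence interval.

```python
import numpy as np

def icc_oneway(ratings):
    """ICC(1,1): one-way random-effects, single-measurement intraclass
    correlation. `ratings` is an (n_subjects, k_measurements) array."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    ms_between = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_within = np.sum((Y - Y.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(5)
true_value = rng.normal(50, 10, size=(30, 1))            # 30 subjects
measurements = true_value + rng.normal(0, 2, (30, 2))    # 2 readings, small error
icc = icc_oneway(measurements)
print(f"ICC(1,1) = {icc:.2f}")
```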
Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh K
2016-08-01
Age-associated changes in the surface electromyogram (sEMG) of the Tibialis Anterior (TA) muscle can be attributed to neuromuscular alterations that precede strength loss. We used our sEMG model of the Tibialis Anterior to interpret the age-related changes and compared it with experimental sEMG. Eighteen young (20-30 years) and 18 older (60-85 years) adults performed isometric dorsiflexion at 6 different percentage levels of maximum voluntary contraction (MVC), and their sEMG from the TA muscle was recorded. Six different age-related changes in the neuromuscular system were simulated using the sEMG model at the same MVCs as the experiment. The maximal power of the spectrum and the Gaussianity and Linearity test statistics were computed from the simulated and experimental sEMG. A correlation analysis at α=0.05 was performed between the simulated and experimental age-related changes in the sEMG features. The results show that the loss of motor units was distinguished by the Gaussianity and Linearity test statistics, while the maximal power of the PSD distinguished between the muscular factors. The simulated condition of a 40% loss of motor units with the number of fast fibers halved best correlated with the age-related change observed in the higher-order statistical features of the experimental sEMG. The simulated aging condition found by this study corresponds with the moderate motor unit remodelling and negligible strength loss reported in the literature for cohorts aged 60-70 years.
Rana, Santosh; Dhanotia, Jitendra; Bhatia, Vimal; Prakash, Shashi
2018-04-01
In this paper, we propose a simple, fast, and accurate technique for detection of the collimation position of an optical beam using the self-imaging phenomenon and correlation analysis. Herrera-Fernandez et al. [J. Opt. 18, 075608 (2016)] proposed an experimental arrangement for collimation testing by comparing the period of two different self-images produced by a single diffraction grating. Following their approach, we propose a testing procedure based on the correlation coefficient (CC) for efficient detection of variation in the size and fringe width of the Talbot self-images and thereby the collimation position. When the beam is collimated, the physical properties of the self-images of the grating, such as size and fringe width, do not vary from one Talbot plane to the other and are identical; the CC is maximum in this situation. For a de-collimated position, the size and fringe width of the self-images vary, and correspondingly the CC decreases. Hence, the magnitude of the CC is a measure of the degree of collimation. Using the method, we could set the collimation position to a resolution of 1 μm, which corresponds to ±0.25 μrad in terms of collimation angle (for testing a collimating lens of diameter 46 mm and focal length 300 mm). In contrast to most collimation techniques reported to date, the proposed technique does not require translation/rotation of the grating, use of complicated phase evaluation algorithms, or an intricate method for determination of the period of the grating or its self-images. The technique is fully automated and provides high resolution and precision.
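The core idea, using the correlation coefficient as a collimation metric, can be illustrated with one-dimensional synthetic fringe patterns (the periods and pattern model below are arbitrary choices, not the paper's experimental values).

```python
import numpy as np

def fringe_pattern(period_px, size=512):
    """Synthetic grating self-image: a sinusoidal fringe pattern."""
    x = np.arange(size)
    return 0.5 + 0.5 * np.cos(2 * np.pi * x / period_px)

def cc(a, b):
    """Pearson correlation coefficient between two intensity profiles."""
    return np.corrcoef(a, b)[0, 1]

ref = fringe_pattern(32.0)            # self-image at the first Talbot plane
collimated = fringe_pattern(32.0)     # identical period -> CC is maximal
decollimated = fringe_pattern(33.5)   # magnified fringes -> CC drops

cc_coll = cc(ref, collimated)
cc_dec = cc(ref, decollimated)
print(f"CC collimated   = {cc_coll:.3f}")
print(f"CC decollimated = {cc_dec:.3f}")
```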
On the Integrity of Online Testing for Introductory Statistics Courses: A Latent Variable Approach
Directory of Open Access Journals (Sweden)
Alan Fask
2015-04-01
Full Text Available There has been a remarkable growth in distance learning courses in higher education. Despite indications that distance learning courses are more vulnerable to cheating behavior than traditional courses, there has been little research studying whether online exams facilitate a relatively greater level of cheating. This article examines this issue by developing an approach using a latent variable to measure student cheating. This latent variable is linked to both known student mastery related variables and variables unrelated to student mastery. Grade scores from a proctored final exam and an unproctored final exam are used to test for increased cheating behavior in the unproctored exam.
Davis-Sharts, J
1986-10-01
Maslow's hierarchy of basic human needs provides a major theoretical framework in nursing science. The purpose of this study was to empirically test Maslow's need theory, specifically at the levels of physiological and security needs, using a hologeistic comparative method. Thirty cultures taken from the 60 cultural units in the Health Relations Area Files (HRAF) Probability Sample were found to have data available for examining hypotheses about thermoregulatory (physiological) and protective (security) behaviors practiced prior to sleep onset. The findings demonstrate there is initial worldwide empirical evidence to support Maslow's need hierarchy.
Ko, Vincent; Nanji, Shabin; Tambouret, Rosemary H; Wilbur, David C
2007-04-25
Inappropriate use of the category of atypical squamous cells of undetermined significance (ASCUS) can result in overtreatment or undertreatment of patients, which may decrease the cost effectiveness of screening. Quality assurance tools, such as the ASCUS to squamous intraepithelial lesion ratio (ASCUS:SIL) and case review, are imperfect. High-risk HPV (hrHPV) testing is an objective test for a known viral carcinogen, and hrHPV may be more useful in monitoring the quality of ASCUS interpretations. hrHPV rates for cytologic diagnoses and patient age groups were calculated for a 2-year period. All hrHPV results for ASCUS and SIL over a 17-month period were analyzed by patient age group, over time, and by individual cytopathologist to compare hrHPV rates with the corresponding ASCUS:SIL. The hrHPV positive rate for SIL was >90%, and it was 32.6% for ASCUS. Stratification by patient age showed that approximately 50% of patients younger than 30 years and older than 70 years of age were hrHPV positive, whereas other patients had a lower rate ranging from 14% to 34%. The overall ASCUS:SIL was 1.42, and the overall hrHPV positive rate was 39.9%. Over time and by individual cytopathologist, the hrHPV rate performed similarly to the ASCUS:SIL. The analysis by patient age showed a high statistical correlation (R² = 0.9772) between the 2 methods. Despite differences between these techniques, the hrHPV rate closely recapitulates the ASCUS:SIL. When used together, the 2 methods can complement each other. The desirable hrHPV-positive range appears to be 40% to 50%; however, this may vary based on the patient population. The hrHPV rate is as quick and cost effective as determining the ASCUS:SIL. © 2007 American Cancer Society.
Rodriguez, Jesse M.
2013-01-01
Studies that map disease genes rely on accurate annotations that indicate whether individuals in the studied cohorts are related to each other or not. For example, in genome-wide association studies, the cohort members are assumed to be unrelated to one another. Investigators can correct for individuals in a cohort with previously-unknown shared familial descent by detecting genomic segments that are shared between them, which are considered to be identical by descent (IBD). Alternatively, elevated frequencies of IBD segments near a particular locus among affected individuals can be indicative of a disease-associated gene. As genotyping studies grow to use increasingly large sample sizes and meta-analyses begin to include many data sets, accurate and efficient detection of hidden relatedness becomes a challenge. To enable disease-mapping studies of increasingly large cohorts, a fast and accurate method to detect IBD segments is required. We present PARENTE, a novel method for detecting related pairs of individuals and shared haplotypic segments within these pairs. PARENTE is a computationally-efficient method based on an embedded likelihood ratio test. As demonstrated by the results of our simulations, our method exhibits better accuracy than the current state of the art, and can be used for the analysis of large genotyped cohorts. PARENTE's higher accuracy becomes even more significant in more challenging scenarios, such as detecting shorter IBD segments or when an extremely low false-positive rate is required. PARENTE is publicly and freely available at http://parente.stanford.edu/. © 2013 Springer-Verlag.
International Nuclear Information System (INIS)
Coleman, S.Y.; Nicholls, J.R.
2006-01-01
Cyclic oxidation testing at elevated temperatures requires careful experimental design and the adoption of standard procedures to ensure reliable data. This is a major aim of the 'COTEST' research programme. Further, as such tests are both time consuming and costly, in terms of human effort, to take measurements over a large number of cycles, it is important to gain maximum information from a minimum number of tests (trials). This search for standardisation of cyclic oxidation conditions leads to a series of tests to determine the relative effects of cyclic parameters on the oxidation process. Following a review of the available literature, databases and the experience of partners to the COTEST project, the most influential parameters, upper dwell temperature (oxidation temperature) and time (hot time), lower dwell time (cold time) and environment, were investigated in partners' laboratories. It was decided to test upper dwell temperature at 3 levels, at and equidistant from a reference temperature; to test upper dwell time at a reference, a higher and a lower time; to test lower dwell time at a reference and a higher time; and to test wet and dry environments. Thus an experiment, consisting of nine trials, was designed according to statistical criteria. The results of the trial were analysed statistically, to test the main linear and quadratic effects of upper dwell temperature and hot time and the main effects of lower dwell time (cold time) and environment. The nine trials are a quarter fraction of the 36 possible combinations of parameter levels that could have been studied. The results have been analysed by half Normal plots, as there are only 2 degrees of freedom for the experimental error variance, which is rather low for a standard analysis of variance. Half Normal plots give a visual indication of which factors are statistically significant. In this experiment each trial has 3 replications, and the data are analysed in terms of mean mass change, oxidation kinetics
Cohn, T.A.; England, J.F.; Berenbrock, C.E.; Mason, R.R.; Stedinger, J.R.; Lamontagne, J.R.
2013-01-01
The Grubbs-Beck test is recommended by the federal guidelines for detection of low outliers in flood flow frequency computation in the United States. This paper presents a generalization of the Grubbs-Beck test for normal data (similar to the Rosner (1983) test; see also Spencer and McCuen (1996)) that can provide a consistent standard for identifying multiple potentially influential low flows. In cases where low outliers have been identified, they can be represented as “less-than” values, and a frequency distribution can be developed using censored-data statistical techniques, such as the Expected Moments Algorithm. This approach can improve the fit of the right-hand tail of a frequency distribution and provide protection from lack-of-fit due to unimportant but potentially influential low flows (PILFs) in a flood series, thus making the flood frequency analysis procedure more robust.
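The classic single-value Grubbs-Beck screen that this paper generalizes can be sketched as follows. The critical-value formula is the approximation commonly attributed to Bulletin 17B for the 10% significance level; treat both that formula and the data as illustrative.

```python
import numpy as np

def grubbs_beck_low(flows):
    """Single Grubbs-Beck low-outlier screen on log10 flows. Returns the
    low-outlier threshold; flows below it would be recoded as "less-than"
    values for censored-data fitting (e.g. the Expected Moments Algorithm)."""
    x = np.log10(np.asarray(flows, float))
    n = len(x)
    # approximate one-sided 10%-level critical value (Bulletin 17B style)
    k_n = -0.9043 + 3.345 * np.sqrt(np.log10(n)) - 0.4046 * np.log10(n)
    return 10 ** (x.mean() - k_n * x.std(ddof=1))

rng = np.random.default_rng(11)
peaks = 10 ** rng.normal(3.0, 0.25, 50)   # hypothetical lognormal annual peaks
peaks[0] = 5.0                            # one anomalously low flood
threshold = grubbs_beck_low(peaks)
low = peaks[peaks < threshold]
print(f"threshold = {threshold:.0f}, flagged {low.size} low outlier(s)")
```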
Directory of Open Access Journals (Sweden)
Yuli Soesetio
2008-02-01
Full Text Available The Dividend Payout Ratio expresses the share of earnings that stockholders receive as cash dividends, usually stated as a percentage. This research was conducted to identify several factors that affect changes in the Dividend Payout Ratio and to determine the significance level and the correlation between the dependent and independent variables. The analysis instrument used was parametric statistics. Based on the results of the statistical tests, the change in Return on Asset (X1) and the change in Debt to Equity Ratio (X2) were able to explain the dependent variable, the change in Dividend Payout Ratio, whereas the change in Cash Ratio could not.
Statistical inference based on divergence measures
Pardo, Leandro
2005-01-01
The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. However, in spite of the fact that divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach. Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data, with emphasis on problems in contingency tables and loglinear models, using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, prese...
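SciPy exposes the power-divergence (Cressie-Read) family that unifies the classical statistics discussed here: λ = 1 gives Pearson's X², λ = 0 the likelihood ratio G², and λ = 2/3 the Cressie-Read statistic. The counts below are invented for illustration.

```python
import numpy as np
from scipy.stats import power_divergence

observed = np.array([43, 52, 54, 40, 46, 65])   # e.g. 300 die rolls
expected = np.full(6, observed.sum() / 6)       # uniform null hypothesis

stats = {}
for lam, name in [(1, "Pearson"), (0, "log-likelihood"), (2 / 3, "Cressie-Read")]:
    stat, p = power_divergence(observed, expected, lambda_=lam)
    stats[name] = stat
    print(f"{name:14s} stat = {stat:.3f}, p = {p:.3f}")
```

For data compatible with the null, the members of the family give very similar values, which is the asymptotic equivalence the book builds on.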
Nielsen, Allan A.; Conradsen, Knut; Skriver, Henning
2016-10-01
Test statistics for comparison of real (as opposed to complex) variance-covariance matrices exist in the statistics literature [1]. In earlier publications we have described a test statistic for the equality of two variance-covariance matrices following the complex Wishart distribution, with an associated p-value [2]. We showed their application to bitemporal change detection and to edge detection [3] in multilook, polarimetric synthetic aperture radar (SAR) data in the covariance matrix representation [4]. The test statistic and the associated p-value are described in [5] also. In [6] we focussed on the block-diagonal case, we elaborated on some computer implementation issues, and we gave examples of the application to change detection in both full and dual polarization bitemporal, bifrequency, multilook SAR data. In [7] we described an omnibus test statistic Q for the equality of k variance-covariance matrices following the complex Wishart distribution. We also described a factorization Q = R2 R3 … Rk, where Q and the Rj determine if and when a difference occurs. Additionally, we gave p-values for Q and the Rj. Finally, we demonstrated the use of Q, the Rj and the p-values for change detection in truly multitemporal, full polarization SAR data. Here we illustrate the methods by means of airborne L-band SAR data (EMISAR) [8,9]. The methods may be applied to other polarimetric SAR data also, such as data from Sentinel-1, COSMO-SkyMed, TerraSAR-X, ALOS, and RadarSat-2, and also to single-pol data. The account given here closely follows that given in our recent IEEE TGRS paper [7]. Selected References [1] Anderson, T. W., An Introduction to Multivariate Statistical Analysis, John Wiley, New York, third ed. (2003). [2] Conradsen, K., Nielsen, A. A., Schou, J., and Skriver, H., "A test statistic in the complex Wishart distribution and its application to change detection in polarimetric SAR data," IEEE Transactions on Geoscience and Remote Sensing 41(1): 4-19, 2003. [3] Schou, J
Directory of Open Access Journals (Sweden)
Yun-shil Cha
2013-01-01
Full Text Available Birnbaum (2011, 2012 questioned the iid (independent and identically distributed sampling assumptions used by state-of-the-art statistical tests in Regenwetter, Dana and Davis-Stober's (2010, 2011 analysis of the "linear order model". Birnbaum (2012 cited, but did not use, a test of iid by Smith and Batchelder (2008 with analytically known properties. Instead, he created two new test statistics with unknown sampling distributions. Our rebuttal has five components: (1) We demonstrate that the Regenwetter et al. data pass Smith and Batchelder's test of iid with flying colors. (2) We provide evidence from Monte Carlo simulations that Birnbaum's (2012 proposed tests have unknown Type-I error rates, which depend on the actual choice probabilities and on how data are coded, as well as on the null hypothesis of iid sampling. (3) Birnbaum analyzed only a third of Regenwetter et al.'s data. We show that his two new tests fail to replicate on the other two-thirds of the data, within participants. (4) Birnbaum selectively picked data of one respondent to suggest that choice probabilities may have changed partway into the experiment. Such nonstationarity could potentially cause a seemingly good fit to be a Type-II error. We show that the linear order model fits equally well if we allow for warm-up effects. (5) Using hypothetical data, Birnbaum (2012 claimed to show that "true-and-error" models for binary pattern probabilities overcome the alleged shortcomings of Regenwetter et al.'s approach. We disprove this claim on the same data.
Statistical analysis on the fluence factor of surveillance test data of Korean nuclear power plants
Energy Technology Data Exchange (ETDEWEB)
Lee, Gyeong Geun; Kim, Min Chul; Yoon, Ji Hyun; Lee, Bong Sang; Lim, Sang Yeob; Kwon, Jun Hyun [Nuclear Materials Safety Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2017-06-15
The transition temperature shift (TTS) of the reactor pressure vessel materials is an important factor that determines the lifetime of a nuclear power plant. The prediction of the TTS at the end of a plant’s lifespan is calculated based on the equation of Regulatory Guide 1.99 revision 2 (RG1.99/2) from the US. The fluence factor in the equation was expressed as a power function, and the exponent value was determined from the early surveillance data in the US. Recently, advanced approaches to estimating the TTS have been proposed in various countries, and Korea is considering the development of a new TTS model. In this study, the TTS trend of the Korean surveillance test results was analyzed using a nonlinear regression model and a mixed-effect model based on the power function. The nonlinear regression model yielded a fluence exponent similar to that of RG1.99/2. The mixed-effect model had a higher value of the exponent and showed superior goodness of fit compared with the nonlinear regression model. Compared with RG1.99/2 and RG1.99/3, the mixed-effect model provided a more accurate prediction of the TTS.
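Fitting a fluence exponent by nonlinear regression, as described above, can be sketched with a simplified pure power law, TTS = CF · f^n (the actual RG1.99/2 fluence factor is more elaborate, and all numbers below are made up for illustration).

```python
import numpy as np
from scipy.optimize import curve_fit

def tts(fluence, cf, n):
    """Simplified RG1.99/2-style trend: TTS = CF * fluence**n,
    with CF a chemistry factor and n the fluence exponent."""
    return cf * fluence ** n

rng = np.random.default_rng(2)
fluence = np.linspace(0.1, 6.0, 40)        # hypothetical, in 1e19 n/cm^2
cf_true, n_true = 35.0, 0.28
shift = tts(fluence, cf_true, n_true) * (1 + 0.05 * rng.standard_normal(40))

(cf_hat, n_hat), _ = curve_fit(tts, fluence, shift, p0=[30.0, 0.3])
print(f"fitted CF = {cf_hat:.1f}, fluence exponent = {n_hat:.3f}")
```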
Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert
2016-01-01
The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation into account in a mathematically well-controlled manner. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008–2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. PMID:27892471
International Nuclear Information System (INIS)
Ohm, H.
1982-01-01
Using the example of the delayed neutron spectrum of 137I (half-life 24 s), the statistical model is tested with regard to its applicability. A computer code was developed which simulates delayed neutron spectra by the Monte Carlo method under the assumption that the transition probabilities of the β- and neutron decays obey a Porter-Thomas distribution, while the spacings of the neutron-emitting levels follow a Wigner distribution. Gamow-Teller β-transitions and first-forbidden β-transitions from the precursor nucleus to the emitting nucleus were considered.
Stockburger, D W
1999-05-01
Active server pages permit a software developer to customize the Web experience for users by inserting server-side script and database access into Web pages. This paper describes applications of these techniques and provides a primer on the use of these methods. Applications include a system that generates and grades individualized homework assignments and tests for statistics students. The student accesses the system as a Web page, prints out the assignment, does the assignment, and enters the answers on the Web page. The server, running on NT Server 4.0, grades the assignment, updates the grade book (on a database), and returns the answer key to the student.
International Nuclear Information System (INIS)
Yurkov, M.V.
2002-01-01
This paper presents an experimental study of the statistical properties of the radiation from a SASE FEL. The experiments were performed at the TESLA Test Facility VUV SASE FEL at DESY, operating in the high-gain linear regime with a gain of about 10^6. It is shown that the fluctuations of the output radiation energy follow a gamma distribution. We also measured, for the first time, the probability distribution of the SASE radiation energy after a narrow-band monochromator. The experimental results are in good agreement with theoretical predictions; the energy fluctuations after the monochromator follow a negative exponential distribution.
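The two distributional claims are easy to check by simulation: gamma-distributed energies with shape parameter M (the number of modes) have relative variance 1/M, and behind a narrow-band monochromator a single mode survives, so M → 1 and the gamma distribution reduces to a negative exponential. The value of M below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)

# Full SASE beam: gamma-distributed energy with shape M, relative variance 1/M.
M = 6.0
energy = rng.gamma(shape=M, scale=1.0 / M, size=200_000)   # normalized <E> = 1
rel_var = energy.var() / energy.mean() ** 2

# After a narrow-band monochromator: single mode, M = 1, i.e. a
# negative exponential distribution with relative variance 1.
mono = rng.gamma(shape=1.0, scale=1.0, size=200_000)
rel_var_mono = mono.var() / mono.mean() ** 2

print(f"relative variance: full beam {rel_var:.3f} (expect 1/M), "
      f"after monochromator {rel_var_mono:.3f} (expect 1)")
```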
Sandurska, Elżbieta; Szulc, Aleksandra
2016-01-01
Sandurska Elżbieta, Szulc Aleksandra. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated. Journal of Education Health and Sport. 2016;6(13):275-287. eISSN 2391-8306. DOI http://dx.doi.org/10.5281/zenodo.293762 http://ojs.ukw.edu.pl/index.php/johs/article/view/4278
Kilborn, Joshua P; Jones, David L; Peebles, Ernst B; Naar, David F
2017-04-01
Clustering data continues to be a highly active area of data analysis, and resemblance profiles are being incorporated into ecological methodologies as a hypothesis testing-based approach to clustering multivariate data. However, these new clustering techniques have not been rigorously tested to determine the performance variability based on the algorithm's assumptions or any underlying data structures. Here, we use simulation studies to estimate the statistical error rates for the hypothesis test for multivariate structure based on dissimilarity profiles (DISPROF). We concurrently tested a widely used algorithm that employs the unweighted pair group method with arithmetic mean (UPGMA) to estimate the proficiency of clustering with DISPROF as a decision criterion. We simulated unstructured multivariate data from different probability distributions with increasing numbers of objects and descriptors, and grouped data with increasing overlap, overdispersion for ecological data, and correlation among descriptors within groups. Using simulated data, we measured the resolution and correspondence of clustering solutions achieved by DISPROF with UPGMA against the reference grouping partitions used to simulate the structured test datasets. Our results highlight the dynamic interactions between dataset dimensionality, group overlap, and the properties of the descriptors within a group (i.e., overdispersion or correlation structure) that are relevant to resemblance profiles as a clustering criterion for multivariate data. These methods are particularly useful for multivariate ecological datasets that benefit from distance-based statistical analyses. We propose guidelines for using DISPROF as a clustering decision tool that will help future users avoid potential pitfalls during the application of methods and the interpretation of results.
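The UPGMA half of the tested procedure is available in SciPy (DISPROF itself is not; a plain maxclust cut stands in for the dissimilarity-profile decision criterion). The grouped data below are simulated, loosely mirroring the structured test datasets described above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)

# three simulated "site" groups in a 5-descriptor space, with some overlap
centers = np.array([[0, 0, 0, 0, 0],
                    [4, 4, 0, 0, 0],
                    [0, 0, 4, 4, 0]], dtype=float)
X = np.vstack([c + rng.standard_normal((30, 5)) for c in centers])

# UPGMA = average-linkage agglomerative clustering on a dissimilarity matrix
d = pdist(X, metric="euclidean")
tree = linkage(d, method="average")
labels = fcluster(tree, t=3, criterion="maxclust")
sizes = np.bincount(labels)[1:]
print("cluster sizes:", sizes)
```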
Bjøntegaard, Øyvind; Krauss, Matias; Budelmann, Harald
2015-01-01
This report presents the Round-Robin (RR) program and test results including a statistical evaluation of the RILEM TC195-DTD committee named “Recommendation for test methods for autogenous deformation (AD) and thermal dilation (TD) of early age concrete”. The task of the committee was to investigate the linear test set-up for AD and TD measurements (Dilation Rigs) in the period from setting to the end of the hardening phase some weeks after. These are the stress-inducing deformations in a hardening concrete structure subjected to restraint conditions. The main task was to carry out an RR program on testing of AD of one concrete at 20 °C isothermal conditions in Dilation Rigs. The concrete part materials were distributed to 10 laboratories (Canada, Denmark, France, Germany, Japan, The Netherlands, Norway, Sweden and USA), and in total 30 tests on AD were carried out. Some supporting tests were also performed, as well as a smaller RR on cement paste. The committee has worked out a test procedure recommenda...
Abramov, Dimitri M; Pontes, Monique; Pontes, Adailton T; Mourao-Junior, Carlos A; Vieira, Juliana; Quero Cunha, Carla; Tamborino, Tiago; Galhanone, Paulo R; deAzevedo, Leonardo C; Lazarev, Vladimir V
2017-04-24
In ERP studies of cognitive processes during attentional tasks, cue signals containing information about the target can increase the amplitude of the parietal cue P3 relative to a 'neutral' temporal cue, and reduce the subsequent target P3 when this information is valid, i.e. corresponds to the target's attributes. The present study compared the cue-to-target P3 ratios in neutral and visuospatial cueing, in order to estimate the contribution of valid visuospatial information from the cue to target stages of the task performance, in terms of cognitive load. The P3 characteristics were also correlated with the results of individuals' performance on the visuospatial tasks, in order to estimate the relationship of the observed ERP with spatial reasoning. In 20 typically developing boys, aged 10-13 years (11.3±0.86), the intelligence quotient (I.Q.) was estimated by the Block Design and Vocabulary subtests from the WISC-III. The subjects performed the Attentional Network Test (ANT) accompanied by EEG recording. The cued two-choice task had three equiprobable cue conditions: No cue, with no information about the target; Neutral (temporal) cue, with an asterisk in the center of the visual field, predicting the target onset; and Spatial cues, with an asterisk in the upper or lower hemifield, predicting the onset and corresponding location of the target. The ERPs were estimated for the mid-frontal (Fz) and mid-parietal (Pz) scalp derivations. In the Pz, the Neutral cue P3 had a lower amplitude than the Spatial cue P3, whereas for the target ERPs, the P3 of the Neutral cue condition was larger than that of the Spatial cue condition. However, the sums of the magnitudes of the cue and target P3 were equal in the spatial and neutral cueing, probably indicating that in both cases an equivalent information-processing load is included in either the cue or the target reaction, respectively. Meanwhile, in the Fz, the analog ERP components for both the cue and target...
Raharja, Danang S.; Hadiwardoyo, Sigit P.; Rahayu, Wiwik; Zain, Nasuhi
2017-06-01
Geopolymer is a binder material consisting of a solid material and an activator solution. Geopolymer materials have successfully replaced cement in concrete manufacture through an aluminosilicate bonding system. Geopolymer concrete has properties similar to cement concrete: high compressive strength, low shrinkage, relatively low creep, and acid resistance. Based on these properties, the addition of geopolymer to peat soils is expected to improve their bearing capacity. The influence of geopolymer addition in peat soils was studied by comparing the peat soil before and after mixing with geopolymer, using the CBR (California Bearing Ratio) test in unsoaked and soaked conditions. A mixture content of 10% of the peat dry weight was used, with curing times of 4 hours, 5 days, and 10 days. Two mixing methods were applied: in the first, peat was mixed with fly ash and the geopolymer activator solution (waterglass, NaOH, water); in the second, peat was mixed with a pre-mixed fly ash geopolymer (waterglass, NaOH, water, fly ash). Changes were observed in specific gravity, dry density, acidity (pH), and microscopic structure with a Scanning Electron Microscope (SEM). Curing time did not significantly affect the CBR value; it even tended to decline with longer curing times. The first mixture type gave CBR values of 5.4% for 4 hours of curing, 4.6% for 5 days, and 3.6% for 10 days; the second type gave 6.1% for 4 hours, 5.2% for 5 days, and 5.2% for 10 days. Furthermore, the specific gravity, dry density, near-neutral pH, and swelling percentage increased. Of the two variants, the second mixture type shows better results than the first. The SEM results show that the structure of the peat became denser, with fly ash particles filling the peat micropores. Also, the reaction of fly ash with geopolymer is indicated by the solid...
International Nuclear Information System (INIS)
Foray, G.; Descamps-Mandine, A.; R’Mili, M.; Lamon, J.
2012-01-01
The present paper investigates glass fibre flaw size distributions. Two commercial fibre grades (HP and HD) mainly used in cement-based composite reinforcement were studied. Glass fibre fractography is a difficult and time consuming exercise, and thus is seldom carried out. An approach based on tensile tests on multifilament bundles and examination of the fibre surface by atomic force microscopy (AFM) was used. Bundles of more than 500 single filaments each were tested. Thus a statistically significant database of failure data was built up for the HP and HD glass fibres. Gaussian flaw distributions were derived from the filament tensile strength data or extracted from the AFM images. The two distributions were compared. Defect sizes computed from raw AFM images agreed reasonably well with those derived from tensile strength data. Finally, the pertinence of a Gaussian distribution was discussed. The alternative Pareto distribution provided a fair approximation when dealing with AFM flaw size.
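The conversion from filament tensile strength to flaw size, and the Gaussian-versus-Pareto comparison discussed above, can be sketched as follows. This is a minimal illustration assuming a Griffith-type relation a = (K_Ic / (Y·σ))², with NumPy and SciPy available; the fracture toughness, geometry factor, and strength distribution below are illustrative assumptions, not the values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
K_IC = 0.75e6   # fracture toughness, Pa*sqrt(m) (assumed, illustrative)
Y = 1.12        # geometry factor for a shallow surface flaw (assumed)

# Simulated single-filament tensile strengths, Pa (illustrative)
sigma = rng.normal(2.0e9, 0.3e9, size=500)

# Griffith relation: critical flaw depth in metres, converted to nm
a = (K_IC / (Y * sigma)) ** 2 * 1e9

# Fit the two candidate flaw-size distributions discussed in the paper
mu, sd = stats.norm.fit(a)                 # Gaussian
b, loc, scale = stats.pareto.fit(a, floc=0)  # Pareto (location fixed at 0)

print(f"Gaussian fit: mean = {mu:.1f} nm, sd = {sd:.1f} nm")
print(f"Pareto fit:   shape b = {b:.2f}")
```

A goodness-of-fit comparison (e.g. a Kolmogorov-Smirnov statistic for each fitted distribution) would then quantify which model better describes the measured flaw population.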
Energy Technology Data Exchange (ETDEWEB)
Kim, B.S.; Lee, Y.S.; Park, C.K. [Cheonnam University, Kwangju (Korea); Masahiro, S. [Kyoto University, Kyoto (Japan)
1999-05-28
One of the unsolved problems of the natural gas dual-fuel engine is excessive exhaust of Total Hydrocarbon (THC) at a low equivalent mixture ratio. To address this, natural gas mixed with hydrogen was applied in engine tests. The results showed that the higher the mixture ratio of hydrogen to natural gas, the higher the combustion efficiency. And when the intake air amount reached 90% of WOT, combustion efficiency also improved. However, as with advancing the injection timing, the equivalent mixture ratio at the knocking limit decreases and NOx production increases. 5 refs., 9 figs., 1 tab.
International Nuclear Information System (INIS)
Brown, L.; Schramm, D.N.
1988-02-01
It is shown that observations of the Lithium isotope ratio in high surface temperature Population II stars may be critical to cosmological nucleosynthesis models. In particular, decaying particle scenarios as derived in some supersymmetric models may stand or fall with such observations. 15 refs., 3 figs., 2 tabs
International Nuclear Information System (INIS)
Zhang Shuangnan; Xie Yi
2012-01-01
We test models for the evolution of neutron star (NS) magnetic fields (B). Our model for the evolution of the NS spin is taken from an analysis of pulsar timing noise presented by Hobbs et al. We first test the standard model of a pulsar's magnetosphere, in which B does not change with time and magnetic dipole radiation is assumed to dominate the pulsar's spin-down. We find that this model fails to predict both the magnitudes and signs of the second derivatives of the spin frequencies (ν-double dot). We then construct a phenomenological model of the evolution of B, which contains a long-term decay (LTD) modulated by short-term oscillations; a pulsar's spin is thus modified by its B-evolution. We find that an exponential LTD is not favored by the observed statistical properties of ν-double dot for young pulsars and fails to explain the fact that ν-double dot is negative for roughly half of the old pulsars. A simple power-law LTD can explain all the observed statistical properties of ν-double dot. Finally, we discuss some physical implications of our results for models of the B-decay of NSs and suggest that reliable determination of the true ages of many young NSs is needed in order to further constrain the physical mechanisms of their B-decay. Our model can be further tested with the measured evolutions of ν-dot and ν-double dot for an individual pulsar; the decay index, oscillation amplitude, and period can also be determined this way for the pulsar.
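The qualitative behavior described above can be reproduced with a toy integration: dipole spin-down dν/dt = -K·B(t)²·ν³ driven by a field with a power-law long-term decay modulated by a short-term sinusoidal oscillation. All constants below (K, B0, decay index, oscillation amplitude and period) are illustrative assumptions chosen only to make the sign alternation of ν-double-dot visible, not fitted values from the paper.

```python
import numpy as np

def B(t, B0=1e12, t0=1e3, alpha=0.05, k=1e-3, period=50.0):
    """Toy field model: power-law long-term decay times a small
    sinusoidal short-term oscillation (t in years, illustrative units)."""
    return B0 * (t / t0) ** (-alpha) * (1.0 + k * np.sin(2 * np.pi * t / period))

def spin_down(nu0=10.0, t_start=1e3, t_span=200.0, n=20001, K=1e-33):
    """Euler-integrate dnu/dt = -K * B(t)^2 * nu^3 (illustrative K)."""
    t = np.linspace(t_start, t_start + t_span, n)
    dt = t[1] - t[0]
    nu = np.empty(n)
    nu[0] = nu0
    for i in range(1, n):
        nu[i] = nu[i - 1] - K * B(t[i - 1]) ** 2 * nu[i - 1] ** 3 * dt
    return t, nu

t, nu = spin_down()
nudot = np.gradient(nu, t)
nuddot = np.gradient(nudot, t)

# The oscillating component of B can flip the sign of nu-double-dot
# over each oscillation cycle, even though nu-dot stays negative
print("fraction of positive nu-double-dot:", (nuddot > 0).mean())
```

Under this toy model, ν-dot remains negative throughout while ν-double-dot alternates sign with the field oscillation, which is the mechanism the paper invokes to explain negative ν-double-dot in roughly half of the old pulsars.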
International Nuclear Information System (INIS)
Kraemer, H.A.
1982-01-01
Using the data of 339 patients, the following parameters of thyroid function were statistically evaluated: the in vitro parameters ET3U, TT4(D), FT4-index, and PB127I, and the radioiodine test with determination of PB131I before i.v. injection of 400 μg protirelin (DHP) and 120 minutes after the injection. There was no correlation between the percentage change of the PB131I level 120 min after protirelin (DHP) administration and the percentage change of the TSH level 30 min after protirelin (DTP1) administration. The accuracies of the in vitro parameters ET3U, TT4(D), and FT4-index on the one hand and the extended protirelin test on the other hand were compared. (orig./MG) [de
Energy Technology Data Exchange (ETDEWEB)
Price-Whelan, Adrian M.; Agüeros, Marcel A. [Department of Astronomy, Columbia University, 550 W 120th Street, New York, NY 10027 (United States); Fournier, Amanda P. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106 (United States); Street, Rachel [Las Cumbres Observatory Global Telescope Network, Inc., 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Ofek, Eran O. [Benoziyo Center for Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Covey, Kevin R. [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Levitan, David; Sesar, Branimir [Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Laher, Russ R.; Surace, Jason, E-mail: adrn@astro.columbia.edu [Spitzer Science Center, California Institute of Technology, Mail Stop 314-6, Pasadena, CA 91125 (United States)
2014-01-20
Many photometric time-domain surveys are driven by specific goals, such as searches for supernovae or transiting exoplanets, which set the cadence with which fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several sub-surveys are conducted in parallel, leading to non-uniform sampling over its ∼20,000 deg² footprint. While the median 7.26 deg² PTF field has been imaged ∼40 times in the R band, ∼2300 deg² have been observed >100 times. We use PTF data to study the trade-off between searching for microlensing events in a survey whose footprint is much larger than that of typical microlensing searches, but with far-from-optimal time sampling. To examine the probability that microlensing events can be recovered in these data, we test statistics used on uniformly sampled data to identify variables and transients. We find that the von Neumann ratio performs best for identifying simulated microlensing events in our data. We develop a selection method using this statistic and apply it to data from fields with >10 R-band observations, 1.1 × 10⁹ light curves, uncovering three candidate microlensing events. We lack simultaneous, multi-color photometry to confirm these as microlensing events. However, their number is consistent with predictions for the event rate in the PTF footprint over the survey's three years of operations, as estimated from near-field microlensing models. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large data sets, which will be useful to future time-domain surveys, such as that planned with the Large Synoptic Survey Telescope.
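The von Neumann ratio mentioned above is simple to compute: it is the mean squared successive difference of a light curve divided by its variance. For uncorrelated noise the ratio is close to 2, while a smooth, correlated brightening such as a microlensing bump pulls it well below 2. The sketch below, assuming NumPy is available, uses a toy Gaussian-shaped bump (not a proper Paczynski microlensing curve) and invented magnitudes purely for illustration.

```python
import numpy as np

def von_neumann_ratio(mag):
    """Mean squared successive difference divided by the sample variance."""
    mag = np.asarray(mag, dtype=float)
    msd = np.sum(np.diff(mag) ** 2) / (len(mag) - 1)
    return msd / np.var(mag, ddof=1)

rng = np.random.default_rng(42)
n = 200
t = np.linspace(-3, 3, n)
noise = rng.normal(0.0, 0.05, n)

flat = 15.0 + noise                              # constant-brightness star
bump = 15.0 - 1.5 * np.exp(-t**2 / 0.5) + noise  # smooth transient brightening

print(von_neumann_ratio(flat))   # ≈ 2 for pure noise
print(von_neumann_ratio(bump))   # well below 2 for a smooth event
```

A selection cut on this statistic, as in the PTF search, flags light curves whose ratio falls far below the value expected for uncorrelated scatter.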