WorldWideScience

Sample records for demonstrated statistically significant

  1. Statistics Related Self-Efficacy: A Confirmatory Factor Analysis Demonstrating a Significant Link to Prior Mathematics Experiences for Graduate Level Students

    Directory of Open Access Journals (Sweden)

    Karen Larwin

    2014-02-01

    Full Text Available The present study examined students' statistics-related self-efficacy, as measured with the Current Statistics Self-Efficacy (CSSE) inventory developed by Finney and Schraw (2003). Structural equation modeling was used to conduct a confirmatory factor analysis of the one-dimensional factor structure of the CSSE. Once confirmed, this factor was used to test whether a significant link to prior mathematics experiences exists. Additionally, a new post-structural equation modeling (SEM) application was employed to compute error-free latent variable scores for CSSE, in an effort to examine the ancillary effects of gender, age, ethnicity, department, degree level, hours completed, expected course grade, number of college-level math classes, and current GPA on students' CSSE scores. Results support the one-dimensional construct and, as expected, the model demonstrated a significant link between prior mathematics experiences and CSSE scores. Additionally, the students' department, expected grade, and number of prior math classes were found to have a significant effect on students' CSSE scores.

  2. Statistically significant relational data mining

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor such models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  3. Statistical significance versus clinical relevance.

    Science.gov (United States)

    van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G

    2017-04-01

    In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs.
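
    A minimal simulation (my illustration, not from the paper) makes this interpretation concrete: when the null hypothesis is true, p-values are uniformly distributed, so p < 0.05 still occurs in roughly 5% of repeated studies.

    ```python
    # Illustrative sketch: under a true null, a "significant" result at the
    # 0.05 level still occurs in about 5% of repeated studies.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_studies, n_per_group = 10_000, 30
    p_values = np.empty(n_studies)

    for i in range(n_studies):
        # Both groups drawn from the SAME distribution: the null is true.
        a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
        p_values[i] = stats.ttest_ind(a, b).pvalue

    # Under the null, p-values are uniform on [0, 1], so ~5% fall below 0.05.
    print(f"Fraction with p < 0.05: {np.mean(p_values < 0.05):.3f}")
    ```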

  4. Common pitfalls in statistical analysis: Clinical versus statistical significance

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In clinical research, study results that are statistically significant are often interpreted as being clinically important. While statistical significance indicates the reliability of the study results, clinical significance reflects their impact on clinical practice. The third article in this series exploring pitfalls in statistical analysis clarifies the importance of differentiating between statistical significance and clinical significance. PMID:26229754

  5. Statistical significance of cis-regulatory modules

    Directory of Open Access Journals (Sweden)

    Smith Andrew D

    2007-01-01

    Full Text Available Abstract Background It is becoming increasingly important for researchers to be able to scan through large genomic regions for transcription factor binding sites or clusters of binding sites forming cis-regulatory modules. Correspondingly, there has been a push to develop algorithms for the rapid detection and assessment of cis-regulatory modules. While various algorithms for this purpose have been introduced, most are not well suited for rapid, genome scale scanning. Results We introduce methods designed for the detection and statistical evaluation of cis-regulatory modules, modeled as either clusters of individual binding sites or as combinations of sites with constrained organization. In order to determine the statistical significance of module sites, we first need a method to determine the statistical significance of single transcription factor binding site matches. We introduce a straightforward method of estimating the statistical significance of single site matches using a database of known promoters to produce data structures that can be used to estimate p-values for binding site matches. We next introduce a technique to calculate the statistical significance of the arrangement of binding sites within a module using a max-gap model. If the module scanned for has defined organizational parameters, the probability of the module is corrected to account for organizational constraints. The statistical significance of single site matches and the architecture of sites within the module can be combined to provide an overall estimation of statistical significance of cis-regulatory module sites. Conclusion The methods introduced in this paper allow for the detection and statistical evaluation of single transcription factor binding sites and cis-regulatory modules. The features described are implemented in the Search Tool for Occurrences of Regulatory Motifs (STORM and MODSTORM software.

  6. The thresholds for statistical and clinical significance

    DEFF Research Database (Denmark)

    Jakobsen, Janus Christian; Gluud, Christian; Winkel, Per

    2014-01-01

    threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to number of outcome comparisons; and (5) assess clinical significance of the trial results. CONCLUSIONS: If the proposed five-step procedure...... not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore......, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. METHODS: Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity...

  7. Social significance of community structure: statistical view.

    Science.gov (United States)

    Li, Hui-Jia; Daniels, Jasmine J

    2015-01-01

    Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure of statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc.

  8. Assessing statistical significance in causal graphs

    Directory of Open Access Journals (Sweden)

    Chindelevitch Leonid

    2012-02-01

    Full Text Available Abstract Background Causal graphs are an increasingly popular tool for the analysis of biological datasets. In particular, signed causal graphs--directed graphs whose edges additionally have a sign denoting upregulation or downregulation--can be used to model regulatory networks within a cell. Such models allow prediction of downstream effects of regulation of biological entities; conversely, they also enable inference of causative agents behind observed expression changes. However, due to their complex nature, signed causal graph models present special challenges with respect to assessing statistical significance. In this paper we frame and solve two fundamental computational problems that arise in practice when computing appropriate null distributions for hypothesis testing. Results First, we show how to compute a p-value for agreement between observed and model-predicted classifications of gene transcripts as upregulated, downregulated, or neither. Specifically, how likely are the classifications to agree to the same extent under the null distribution of the observed classification being randomized? This problem, which we call "Ternary Dot Product Distribution" owing to its mathematical form, can be viewed as a generalization of Fisher's exact test to ternary variables. We present two computationally efficient algorithms for computing the Ternary Dot Product Distribution and investigate its combinatorial structure analytically and numerically to establish computational complexity bounds. Second, we develop an algorithm for efficiently performing random sampling of causal graphs. This enables p-value computation under a different, equally important null distribution obtained by randomizing the graph topology but keeping fixed its basic structure: connectedness and the positive and negative in- and out-degrees of each vertex. We provide an algorithm for sampling a graph from this distribution uniformly at random. We also highlight theoretical
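
    The paper derives this null distribution exactly; the hedged sketch below only approximates the same quantity by Monte Carlo permutation, assuming classifications coded as +1 (upregulated), -1 (downregulated) and 0 (neither), with agreement scored by the dot product.

    ```python
    # Hedged sketch: a Monte Carlo stand-in for the paper's exact "Ternary Dot
    # Product Distribution". Observed and predicted classifications are coded
    # as +1 (up), -1 (down), 0 (neither); the dot product scores agreement.
    import numpy as np

    def ternary_agreement_pvalue(observed, predicted, n_perm=100_000, seed=0):
        """P-value for the agreement (dot product) of two ternary vectors,
        under the null that the observed classification is randomly ordered."""
        rng = np.random.default_rng(seed)
        observed = np.asarray(observed)
        predicted = np.asarray(predicted)
        stat = int(observed @ predicted)          # matches = +1, clashes = -1
        null = np.array([rng.permutation(observed) @ predicted
                         for _ in range(n_perm)])
        # One-sided: how often does random labeling agree at least this well?
        return (1 + np.sum(null >= stat)) / (1 + n_perm)

    obs  = np.array([1, 1, -1, 0, 1, -1, 0, 1, -1, 1])   # made-up example
    pred = np.array([1, 1, -1, 0, 1, -1, 1, 1, -1, 0])
    print(ternary_agreement_pvalue(obs, pred))
    ```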

  9. Significant Statistics: Viewed with a Contextual Lens

    Science.gov (United States)

    Tait-McCutcheon, Sandi

    2010-01-01

    This paper examines the pedagogical and organisational changes three lead teachers made to their statistics teaching and learning programs. The lead teachers posed the research question: What would the effect of contextually integrating statistical investigations and literacies into other curriculum areas be on student achievement? By finding the…

  10. Social significance of community structure: Statistical view

    CERN Document Server

    Li, Hui-Jia

    2015-01-01

    Community structure analysis is a powerful tool for social networks, which can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a novel framework for analyzing the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of nodes and their corresponding leaders. Then, using log-likelihood sco...

  11. Use of demonstrations and experiments in teaching business statistics

    Directory of Open Access Journals (Sweden)

    D. G. Johnson

    2003-01-01

    Full Text Available The aim of a business statistics course should be to help students think statistically and to interpret and understand data, rather than to focus on mathematical detail and computation. To achieve this, students must be thoroughly involved in the learning process, and encouraged to discover for themselves the meaning, importance and relevance of statistical concepts. In this paper we advocate the use of experiments and demonstrations as aids to achieving these goals. A number of demonstrations are given which can be used to illustrate and explain some key statistical ideas.

  12. Demonstrating the Gambler's Fallacy in an Introductory Statistics Class.

    Science.gov (United States)

    Riniolo, Todd C.; Schmidt, Louis A.

    1999-01-01

    Describes a classroom demonstration called the Gambler's Fallacy where students in an introductory psychology statistics class participate in simulated gambling using weekly results from professional football game outcomes over a 10 week period. Explains that the demonstration illustrates that random processes do not self-correct and statistical…

  13. Demonstrational Optics Part 2: Coherent and Statistical Optics

    CERN Document Server

    Marchenko, Oleg; Windholz, Laurentius

    2007-01-01

    Demonstrational Optics presents a new didactical approach to the study of optics, emphasizing the importance of elaborate new experimental demonstrations, pictorial illustrations, computer simulations and models of optical phenomena in order to ensure a deeper understanding of wave and geometric optics. It includes problems focused on the pragmatic needs of students, secondary school teachers, university professors and optical engineers. Part 2, Coherent and Statistical Optics, contains chapters on interference, diffraction, Fourier optics, light quanta, thermal radiation (Shot noise and Gaussian light), Correlation of light fields and Correlation of light intensities. A substantial part of this volume is devoted to thermal radiation and its properties, especially with partial coherence. A detailed treatment of the photo-effect with respect to statistical properties leads to the basics of statistical optics. To illustrate the phenomena covered by this volume, a large number of demonstration experiments are de...

  14. Caveats for using statistical significance tests in research assessments

    OpenAIRE

    2011-01-01

    This paper raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators. Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests.

  15. Caveats for using statistical significance tests in research assessments

    CERN Document Server

    Schneider, Jesper W

    2011-01-01

    This paper raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators. Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice of such tests, their dichotomous application in decision making, the difference between statistical and substantive significance, the implausibility of most null hypotheses, the crucial assumption of randomness, as well as the utility of standard errors and confidence intervals for inferential purposes. We argue that applying statistical significance tests and mechanically adhering to their results is highly problematic and detrimental to critical thinki...

  16. Significance analysis and statistical mechanics: an application to clustering.

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael; Berg, Johannes

    2010-11-26

    This Letter addresses the statistical significance of structures in random data: given a set of vectors and a measure of mutual similarity, how likely is it that a subset of these vectors forms a cluster with enhanced similarity among its elements? The computation of this cluster p value for randomly distributed vectors is mapped onto a well-defined problem of statistical mechanics. We solve this problem analytically, establishing a connection between the physics of quenched disorder and multiple-testing statistics in clustering and related problems. In an application to gene expression data, we find a remarkable link between the statistical significance of a cluster and the functional relationships between its genes.

  17. Mass spectrometry based protein identification with accurate statistical significance assignment

    OpenAIRE

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

    Motivation: Assigning statistical significance accurately has become increasingly important as meta data of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of meta data at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry based proteomics, even though accurate statistics for peptide identification can now be ach...

  18. Significance and importance: some common misapprehensions about statistics

    OpenAIRE

    Currey, John; Baxter, Paul D.; Pitchford, Jonathan W.

    2009-01-01

    Abstract This paper attempts to discuss, in a readily understandable way, some very common misapprehensions that occur in laboratory-based scientists' thinking about statistics. We deal mainly with three issues: 1) P-values are best thought of as merely guides to action: are your experimental data consistent with your null hypothesis, or not? 2) When confronted with statistically non-significant results, you should also think about the power of the statistical test...

  19. The Use of Meta-Analytic Statistical Significance Testing

    Science.gov (United States)

    Polanin, Joshua R.; Pigott, Terri D.

    2015-01-01

    Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…

  1. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

    This article raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly...... controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice...... of such tests, their dichotomous application in decision making, the difference between statistical and substantive significance, the implausibility of most null hypotheses, the crucial assumption of randomness, as well as the utility of standard errors and confidence intervals for inferential purposes. We...

  2. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

    This article raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly...... controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice...... are important or not. On the contrary their use may be harmful. Like many other critics, we generally believe that statistical significance tests are over- and misused in the empirical sciences including scientometrics and we encourage a reform on these matters....

  3. A tutorial on hunting statistical significance by chasing N

    Directory of Open Access Journals (Sweden)

    Denes Szucs

    2016-09-01

    Full Text Available There is increasing concern about the replicability of studies in psychology and cognitive neuroscience. Hidden data dredging (also called p-hacking) is a major contributor to this crisis because it substantially increases Type I error, resulting in a much larger proportion of false positive findings than the usually expected 5%. In order to build better intuition to avoid, detect and criticise some typical problems, here I systematically illustrate the large impact of some easy-to-implement and therefore perhaps frequent data dredging techniques on boosting false positive findings. I illustrate several forms of two special cases of data dredging. First, researchers may violate the data collection stopping rules of null hypothesis significance testing by repeatedly checking for statistical significance with various numbers of participants. Second, researchers may group participants post hoc along potential but unplanned independent grouping variables. The first approach 'hacks' the number of participants in studies; the second approach 'hacks' the number of variables in the analysis. I demonstrate the high number of false positive findings generated by these techniques with data from true null distributions. I also illustrate that it is extremely easy to introduce strong bias into data by very mild selection and re-testing. Similar, usually undocumented data dredging steps can easily lead to 20-50% or more false positives.
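
    A small simulation (illustrative, not the tutorial's own code) of the first technique, optional stopping: peeking at a t-test at several sample sizes and stopping at the first p < 0.05 inflates the false positive rate well beyond the nominal 5%.

    ```python
    # Sketch of optional stopping under a true null hypothesis.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_experiments, n_start, n_max, step = 2_000, 10, 100, 10
    false_positives = 0

    for _ in range(n_experiments):
        # True null: both groups come from the same distribution.
        a = rng.normal(size=n_max)
        b = rng.normal(size=n_max)
        # Peek at the p-value every `step` participants; stop at p < 0.05.
        for n in range(n_start, n_max + 1, step):
            if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
                false_positives += 1
                break

    # A single fixed-N test would give ~5%; with peeking it is much higher.
    print(f"False positive rate: {false_positives / n_experiments:.2%}")
    ```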

  4. The questioned p value: clinical, practical and statistical significance.

    Science.gov (United States)

    Jiménez-Paneque, Rosa

    2016-09-09

    The use of the p-value and statistical significance has been questioned from the early 1980s to the present day. Much has been discussed about it in the field of statistics and its applications, especially in epidemiology and public health. As a matter of fact, the p-value and its equivalent, statistical significance, are difficult concepts to grasp for the many health professionals who are in some way involved in research applied to their work areas. However, their meaning should be clear in intuitive terms even though they are based on theoretical concepts from the field of statistics. This paper attempts to present the p-value as a concept that applies to everyday life and is therefore intuitively simple, but whose proper use cannot be separated from theoretical and methodological elements of inherent complexity. The reasons behind the criticism received by the p-value and its isolated use are intuitively explained, mainly the need to demarcate statistical significance from clinical significance, and some of the recommended remedies for these problems are approached as well. It finally refers to the current trend to vindicate the p-value, appealing to the convenience of its use in certain situations, and the recent statement of the American Statistical Association in this regard.

  5. Statistical significance test for transition matrices of atmospheric Markov chains

    Science.gov (United States)

    Vautard, Robert; Mo, Kingtse C.; Ghil, Michael

    1990-01-01

    Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
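
    The abstract does not detail the authors' procedure, so the sketch below shows one plausible Monte Carlo test of this kind: shuffle the observed regime sequence to build a null distribution for each transition count.

    ```python
    # Hedged sketch of a Monte Carlo significance test in this spirit (not the
    # authors' code): random reordering of regimes destroys temporal structure
    # while preserving each regime's frequency.
    import numpy as np

    def transition_counts(seq, k):
        counts = np.zeros((k, k), dtype=int)
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
        return counts

    def transition_pvalues(seq, k, n_sim=5_000, seed=0):
        """Two matrices of one-sided p-values: unusually frequent and
        unusually rare transitions, under random ordering of regimes."""
        rng = np.random.default_rng(seed)
        obs = transition_counts(seq, k)
        ge = np.zeros((k, k))
        le = np.zeros((k, k))
        for _ in range(n_sim):
            null = transition_counts(rng.permutation(seq), k)
            ge += null >= obs
            le += null <= obs
        return (1 + ge) / (1 + n_sim), (1 + le) / (1 + n_sim)

    rng = np.random.default_rng(7)
    seq = rng.integers(0, 3, size=500)      # toy sequence of 3 "regimes"
    p_freq, p_rare = transition_pvalues(seq, 3)
    print(np.round(p_freq, 3))              # small values: likely transitions
    ```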

  6. On detection and assessment of statistical significance of Genomic Islands

    Directory of Open Access Journals (Sweden)

    Chaudhuri Probal

    2008-04-01

    Full Text Available Abstract Background Many of the available methods for detecting Genomic Islands (GIs in prokaryotic genomes use markers such as transposons, proximal tRNAs, flanking repeats etc., or they use other supervised techniques requiring training datasets. Most of these methods are primarily based on the biases in GC content or codon and amino acid usage of the islands. However, these methods either do not use any formal statistical test of significance or use statistical tests for which the critical values and the P-values are not adequately justified. We propose a method, which is unsupervised in nature and uses Monte-Carlo statistical tests based on randomly selected segments of a chromosome. Such tests are supported by precise statistical distribution theory, and consequently, the resulting P-values are quite reliable for making the decision. Results Our algorithm (named Design-Island, an acronym for Detection of Statistically Significant Genomic Island) runs in two phases. Some 'putative GIs' are identified in the first phase, and those are refined into smaller segments containing horizontally acquired genes in the refinement phase. This method is applied to the Salmonella typhi CT18 genome leading to the discovery of several new pathogenicity, antibiotic resistance and metabolic islands that were missed by earlier methods. Many of these islands contain mobile genetic elements like phage-mediated genes, transposons, integrase and IS elements confirming their horizontal acquirement. Conclusion The proposed method is based on statistical tests supported by precise distribution theory and reliable P-values along with a technique for visualizing statistically significant islands. The performance of our method is better than many other well known methods in terms of their sensitivity and accuracy, and in terms of specificity, it is comparable to other methods.
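
    Design-Island itself is considerably more elaborate; the toy sketch below illustrates only the core Monte Carlo idea of scoring a candidate window's GC content against randomly located segments of the same (here synthetic) chromosome.

    ```python
    # Toy illustration of a Monte Carlo P-value for a GC-rich window.
    import random

    def gc_content(s):
        return (s.count("G") + s.count("C")) / len(s)

    def gc_window_pvalue(genome, start, length, n_samples=10_000, seed=0):
        """One-sided Monte Carlo P-value that the window's GC content is
        unusually high relative to randomly located windows."""
        random.seed(seed)
        observed = gc_content(genome[start:start + length])
        hits = 0
        for _ in range(n_samples):
            pos = random.randrange(len(genome) - length)
            if gc_content(genome[pos:pos + length]) >= observed:
                hits += 1
        return (1 + hits) / (1 + n_samples)

    # Toy chromosome: a GC-rich island spliced into an AT-rich background.
    random.seed(1)
    background = "".join(random.choices("ACGT", weights=[3, 1, 1, 3], k=50_000))
    island = "".join(random.choices("ACGT", weights=[1, 3, 3, 1], k=2_000))
    genome = background[:25_000] + island + background[25_000:]
    print(gc_window_pvalue(genome, 25_000, 2_000))  # small => putative island
    ```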

  7. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

  8. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper. PMID:25878958

  9. Systematic reviews of anesthesiologic interventions reported as statistically significant

    DEFF Research Database (Denmark)

    Imberger, Georgina; Gluud, Christian; Boylan, John

    2015-01-01

    statistically significant meta-analyses of anesthesiologic interventions, we used TSA to estimate power and imprecision in the context of sparse data and repeated updates. METHODS: We conducted a search to identify all systematic reviews with meta-analyses that investigated an intervention that may......: From 11,870 titles, we found 682 systematic reviews that investigated anesthesiologic interventions. In the 50 sampled meta-analyses, the median number of trials included was 8 (interquartile range [IQR], 5-14), the median number of participants was 964 (IQR, 523-1736), and the median number...

  10. Your Chi-Square Test Is Statistically Significant: Now What?

    Directory of Open Access Journals (Sweden)

    Donald Sharpe

    2015-04-01

    Full Text Available Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data from two recent journal articles were used to illustrate these approaches. A call is made for greater consideration of foundational techniques such as the chi-square tests.
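
    As an illustration of the first follow-up approach, calculating residuals, the sketch below computes adjusted standardized residuals for a made-up contingency table; cells with |residual| > 1.96 are the likely sources of a significant chi-square.

    ```python
    # Hedged sketch (made-up data): adjusted residuals locate the cells
    # driving a significant chi-square result.
    import numpy as np
    from scipy import stats

    table = np.array([[30, 10, 20],
                      [15, 25, 20]])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

    n = table.sum()
    row = table.sum(axis=1, keepdims=True) / n
    col = table.sum(axis=0, keepdims=True) / n
    # Adjusted residuals are approximately standard normal under independence,
    # so |residual| > 1.96 flags a cell as a likely source of the result.
    adj_resid = (table - expected) / np.sqrt(expected * (1 - row) * (1 - col))
    print(np.round(adj_resid, 2))
    ```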

  11. [Significance of the demonstration of Actinomyces in cervical cytological smears].

    Science.gov (United States)

    Dybdahl, H; Baandrup, U

    1988-10-17

    In recent years there has been well documented evidence of a connection between adnexitis and the use of IUDs. It has also been reported that Actinomyces-caused adnexitis is often a serious precursor of tubo-ovarian abscesses which require surgical attention. The investigation included a total of 17,734 routine Pap smears taken in the pathology department over a 4-month period. The smears were screened for the presence of Actinomyces, and information on type of IUD and gynecological symptoms was gathered from women testing positive for Actinomyces. Comparable information was gathered from 2 age-matched control groups: 1 group consisted of women with an IUD but without Actinomyces; the other group consisted of women without an IUD and without Actinomyces. Of the 180 patients with Actinomyces, 175 were IUD users and only 5 were nonusers. Among the patients with Actinomyces, the only gynecological symptom with increased frequency was cervical discharge. The Nova-T IUD was found to be significantly less frequently associated with Actinomyces than the other IUDs.

  12. Lexical Co-occurrence, Statistical Significance, and Word Association

    CERN Document Server

    Chaudhari, Dipak; Laxman, Srivatsan

    2010-01-01

    Lexical co-occurrence is an important cue for detecting word associations. We present a theoretical framework for discovering statistically significant lexical co-occurrences from a given corpus. In contrast with the prevalent practice of giving weightage to unigram frequencies, we focus only on the documents containing both the terms (of a candidate bigram). We detect biases in span distributions of associated words, while being agnostic to variations in global unigram frequencies. Our framework has the fidelity to distinguish different classes of lexical co-occurrences, based on strengths of the document- and corpus-level cues of co-occurrence in the data. We perform extensive experiments on benchmark data sets to study the performance of various co-occurrence measures that are currently known in literature. We find that a relatively obscure measure called Ochiai, and a newly introduced measure CSA capture the notion of lexical co-occurrence best, followed next by LLR, Dice, and TTest, while another popular m...

  13. Fostering Students' Statistical Literacy through Significant Learning Experience

    Science.gov (United States)

    Krishnan, Saras

    2015-01-01

    A major objective of statistics education is to develop students' statistical literacy that enables them to be educated users of data in context. Teaching statistics in today's educational settings is not an easy feat because teachers have a huge task in keeping up with the demands of the new generation of learners. The present day students have…

  14. Statistical significance of seasonal warming/cooling trends

    Science.gov (United States)

    Ludescher, Josef; Bunde, Armin; Schellnhuber, Hans Joachim

    2017-04-01

    The question whether a seasonal climate trend (e.g., the increase of summer temperatures in Antarctica in the last decades) is of anthropogenic or natural origin is of great importance for mitigation and adaption measures alike. The conventional significance analysis assumes that (i) the seasonal climate trends can be quantified by linear regression, (ii) the different seasonal records can be treated as independent records, and (iii) the persistence in each of these seasonal records can be characterized by short-term memory described by an autoregressive process of first order. Here we show that assumption ii is not valid, due to strong intraannual correlations by which different seasons are correlated. We also show that, even in the absence of correlations, for Gaussian white noise, the conventional analysis leads to a strong overestimation of the significance of the seasonal trends, because multiple testing has not been taken into account. In addition, when the data exhibit long-term memory (which is the case in most climate records), assumption iii leads to a further overestimation of the trend significance. Combining Monte Carlo simulations with the Holm-Bonferroni method, we demonstrate how to obtain reliable estimates of the significance of the seasonal climate trends in long-term correlated records. For an illustration, we apply our method to representative temperature records from West Antarctica, which is one of the fastest-warming places on Earth and belongs to the crucial tipping elements in the Earth system.
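
    For reference, a minimal implementation of the Holm-Bonferroni step-down correction mentioned above (illustrative code, not the authors' implementation):

    ```python
    # Holm-Bonferroni step-down procedure for multiple testing.
    import numpy as np

    def holm_bonferroni(p_values, alpha=0.05):
        """Return a boolean array: which hypotheses are rejected."""
        p = np.asarray(p_values)
        m = len(p)
        order = np.argsort(p)
        reject = np.zeros(m, dtype=bool)
        for rank, idx in enumerate(order):
            # Compare the (rank+1)-th smallest p-value with alpha / (m - rank).
            if p[idx] <= alpha / (m - rank):
                reject[idx] = True
            else:
                break  # step-down: once one test fails, all larger p's fail
        return reject

    # Hypothetical example: p-values for trends in four seasons of one record.
    p_seasons = [0.031, 0.004, 0.20, 0.012]
    print(holm_bonferroni(p_seasons))  # only sufficiently small p's survive
    ```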

  15. Statistical significance of spectral lag transition in GRB 160625B

    Science.gov (United States)

    Ganguly, Shalini; Desai, Shantanu

    2017-09-01

    Recently, Wei et al. [1] found evidence for a transition from positive time lags to negative time lags in the spectral lag data of GRB 160625B. They fit these observed lags to a sum of two components: an assumed functional form for the intrinsic time lag due to astrophysical mechanisms, and an energy-dependent speed of light due to quadratic and linear Lorentz invariance violation (LIV) models. Here, we examine the statistical significance of the evidence for a transition to negative time lags. Such a transition, even if present in GRB 160625B, cannot be due to an energy-dependent speed of light, as this would contradict previous limits by some 3-4 orders of magnitude, and must therefore be of intrinsic astrophysical origin. We use three different model comparison techniques: a frequentist test and two information-based criteria (AIC and BIC). From the frequentist model comparison test, we find that the evidence for a transition in the spectral lag data is favored at 3.05σ and 3.74σ for the linear and quadratic models, respectively. We find that ΔAIC and ΔBIC have values ≳ 10 for the spectral lag transition motivated by the quadratic Lorentz invariance violating model, pointing to "decisive evidence". We note, however, that none of the three models (including the model of intrinsic astrophysical emission) provides a good fit to the data.
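
    A toy illustration of the information criteria used above, on synthetic data rather than the GRB lag data: a ΔAIC or ΔBIC of roughly 10 or more between two least-squares fits is conventionally read as decisive evidence for the lower-scoring model.

    ```python
    # Hedged sketch: AIC/BIC comparison of two least-squares fits (toy data).
    import numpy as np

    def aic_bic(y, y_fit, k):
        """Gaussian-likelihood AIC and BIC for a least-squares fit with k
        free parameters (noise variance concentrated out via RSS/n)."""
        n = len(y)
        rss = np.sum((y - y_fit) ** 2)
        log_like = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
        return 2 * k - 2 * log_like, k * np.log(n) - 2 * log_like

    rng = np.random.default_rng(3)
    x = np.linspace(0.0, 1.0, 60)
    y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.1, size=x.size)

    scores = {}
    for deg in (1, 2):                    # linear vs. quadratic model
        y_fit = np.polyval(np.polyfit(x, y, deg), x)
        scores[deg] = aic_bic(y, y_fit, k=deg + 1)

    d_aic = scores[1][0] - scores[2][0]
    d_bic = scores[1][1] - scores[2][1]
    print(f"dAIC = {d_aic:.1f}, dBIC = {d_bic:.1f}")  # large => quadratic wins
    ```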

  16. Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.

    Science.gov (United States)

    Deegear, James

    This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…

  17. Determining sexual dimorphism in frog measurement data: integration of statistical significance, measurement error, effect size and biological significance

    Directory of Open Access Journals (Sweden)

    Hayek Lee-Ann C.

    2005-01-01

    Full Text Available Several analytic techniques have been used to determine sexual dimorphism in vertebrate morphological measurement data with no emergent consensus on which technique is superior. A further confounding problem for frog data is the existence of considerable measurement error. To determine dimorphism, we examine a single hypothesis (H0: equal means) for two groups (females and males). We demonstrate that frog measurement data meet assumptions for clearly defined statistical hypothesis testing with statistical linear models rather than those of exploratory multivariate techniques such as principal components, correlation or correspondence analysis. In order to distinguish biological from statistical significance of hypotheses, we propose a new protocol that incorporates measurement error and effect size. Measurement error is evaluated with a novel measurement error index. Effect size, widely used in the behavioral sciences and in meta-analysis studies in biology, proves to be the most useful single metric to evaluate whether statistically significant results are biologically meaningful. Definitions for a range of small, medium, and large effect sizes specifically for frog measurement data are provided. Examples with measurement data for species of the frog genus Leptodactylus are presented. The new protocol is recommended not only to evaluate sexual dimorphism for frog data but for any animal measurement data for which the measurement error index and observed or a priori effect sizes can be calculated.
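
    The frog-specific effect-size benchmarks are the paper's own; the sketch below shows only the underlying metric, Cohen's d with a pooled standard deviation, on made-up measurements.

    ```python
    # Illustrative sketch: Cohen's d on simulated measurements.
    import numpy as np

    def cohens_d(x, y):
        nx, ny = len(x), len(y)
        pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                      (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
        return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

    rng = np.random.default_rng(5)
    females = rng.normal(loc=42.0, scale=3.0, size=25)  # e.g. body length, mm
    males = rng.normal(loc=39.5, scale=3.0, size=25)

    # Cohen's general conventions: ~0.2 small, ~0.5 medium, ~0.8 large; the
    # paper argues for calibrating such cut-offs to the data at hand.
    print(f"d = {cohens_d(females, males):.2f}")
    ```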

  18. Tipping points in the arctic: eyeballing or statistical significance?

    Science.gov (United States)

    Carstensen, Jacob; Weydmann, Agata

    2012-02-01

    Arctic ecosystems have experienced and are projected to experience continued large increases in temperature and declines in sea ice cover. It has been hypothesized that small changes in ecosystem drivers can fundamentally alter ecosystem functioning, and that this might be particularly pronounced for Arctic ecosystems. We present a suite of simple statistical analyses to identify changes in the statistical properties of data, emphasizing that changes in the standard error should be considered in addition to changes in mean properties. The methods are exemplified using sea ice extent, and suggest that the loss rate of sea ice accelerated by a factor of ~5 in 1996, as reported in other studies, but that increases in random fluctuations, as an early warning signal, were observed already in 1990. We recommend employing the proposed methods more systematically for analyzing tipping points to document effects of climate change in the Arctic.

  19. Statistical downscaling rainfall using artificial neural network: significantly wetter Bangkok?

    Science.gov (United States)

    Vu, Minh Tue; Aribarg, Thannob; Supratid, Siriporn; Raghavan, Srivatsan V.; Liong, Shie-Yui

    2016-11-01

    Artificial neural network (ANN) is an established technique with a flexible mathematical structure that is capable of identifying complex nonlinear relationships between input and output data. The present study utilizes ANN as a method of statistically downscaling global climate models (GCMs) during the rainy season at meteorological site locations in Bangkok, Thailand. The study illustrates the applications of the feed forward back propagation using large-scale predictor variables derived from both the ERA-Interim reanalyses data and present day/future GCM data. The predictors are first selected over different grid boxes surrounding Bangkok region and then screened by using principal component analysis (PCA) to filter the best correlated predictors for ANN training. The reanalyses downscaled results of the present day climate show good agreement against station precipitation with a correlation coefficient of 0.8 and a Nash-Sutcliffe efficiency of 0.65. The final downscaled results for four GCMs show an increasing trend of precipitation for rainy season over Bangkok by the end of the twenty-first century. The extreme values of precipitation determined using statistical indices show strong increases of wetness. These findings will be useful for policy makers in pondering adaptation measures due to flooding such as whether the current drainage network system is sufficient to meet the changing climate and to plan for a range of related adaptation/mitigation measures.
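
    A heavily simplified sketch of such a downscaling pipeline, with synthetic data standing in for the GCM predictors and station rainfall; the study's actual predictor selection and network design are more involved.

    ```python
    # Hedged sketch: PCA-screened predictors feed a feed-forward network,
    # evaluated with correlation and the Nash-Sutcliffe efficiency.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(11)
    X = rng.normal(size=(1200, 40))              # stand-in large-scale predictors
    y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.8, size=1200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    pca = PCA(n_components=8).fit(X_tr)          # screen/compress predictors
    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                       random_state=0).fit(pca.transform(X_tr), y_tr)

    pred = ann.predict(pca.transform(X_te))
    r = np.corrcoef(y_te, pred)[0, 1]
    nse = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - np.mean(y_te)) ** 2)
    print(f"correlation = {r:.2f}, Nash-Sutcliffe = {nse:.2f}")
    ```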

  20. Statistically significant data base of rock properties for geothermal use

    Science.gov (United States)

    Koch, A.; Jorand, R.; Clauser, C.

    2009-04-01

    The high risk of failure due to the unknown properties of the target rocks at depth is a major obstacle for the exploration of geothermal energy. In general, the ranges of thermal and hydraulic properties given in compilations of rock properties are too large to be useful to constrain properties at a specific site. To overcome this problem, we study the thermal and hydraulic rock properties of the main rock types in Germany in a statistical approach. An important aspect is the use of data from exploration wells that are largely untapped for the purpose of geothermal exploration. In the current project stage, we have been analyzing mostly Devonian and Carboniferous drill cores from 20 deep boreholes in the region of the Lower Rhine Embayment and the Ruhr area (western North Rhine Westphalia). In total, we selected 230 core samples with a length of up to 30 cm from the core archive of the State Geological Survey. The use of core scanning technology allowed the rapid measurement of thermal conductivity, sonic velocity, and gamma density under dry and water saturated conditions with high resolution for a large number of samples. In addition, we measured porosity, bulk density, and matrix density based on Archimedes' principle and pycnometer analysis. As first results we present arithmetic means, medians and standard deviations characterizing the petrophysical properties and their variability for specific lithostratigraphic units. Bi- and multimodal frequency distributions correspond to the occurrence of different lithologies such as shale, limestone, dolomite, sandstone, siltstone, marlstone, and quartz-schist. In a next step, the data set will be combined with logging data and complementary mineralogical analyses to derive the variation of thermal conductivity with depth. As a final result, this may be used to infer thermal conductivity for boreholes without appropriate core data which were drilled in similar geological settings.

  1. Distinguishing between statistical significance and practical/clinical meaningfulness using statistical inference.

    Science.gov (United States)

    Wilkinson, Michael

    2014-03-01

    Decisions about support for predictions of theories in light of data are made using statistical inference. The dominant approach in sport and exercise science is the Neyman-Pearson (N-P) significance-testing approach. When applied correctly it provides a reliable procedure for making dichotomous decisions for accepting or rejecting zero-effect null hypotheses with known and controlled long-run error rates. Type I and type II error rates must be specified in advance and the latter controlled by conducting an a priori sample size calculation. The N-P approach does not provide the probability of hypotheses or indicate the strength of support for hypotheses in light of data, yet many scientists believe it does. Outcomes of analyses allow conclusions only about the existence of non-zero effects, and provide no information about the likely size of true effects or their practical/clinical value. Bayesian inference can show how much support data provide for different hypotheses, and how personal convictions should be altered in light of data, but the approach is complicated by formulating probability distributions about prior subjective estimates of population effects. A pragmatic solution is magnitude-based inference, which allows scientists to estimate the true magnitude of population effects and how likely they are to exceed an effect magnitude of practical/clinical importance, thereby integrating elements of subjective Bayesian-style thinking. While this approach is gaining acceptance, progress might be hastened if scientists appreciate the shortcomings of traditional N-P null hypothesis significance testing.

  2. Changing Statistical Significance with the Amount of Information: The Adaptive α Significance Level

    Science.gov (United States)

    Pérez, María-Eglée; Pericchi, Luis Raúl

    2014-01-01

    We put forward an adaptive alpha which changes with the amount of sample information. This calibration may be interpreted as a Bayes/non-Bayes compromise, and leads to statistical consistency. The calibration can also be used to produce confidence intervals whose size takes into consideration the amount of observed information. PMID:24511173

  3. Lies, damned lies and statistics: Clinical importance versus statistical significance in research.

    Science.gov (United States)

    Mellis, Craig

    2017-02-28

    Correctly performed and interpreted statistics play a crucial role for both those who 'produce' clinical research and for those who 'consume' this research. Unfortunately, however, there are many misunderstandings and misinterpretations of statistics by both groups. In particular, there is a widespread lack of appreciation for the severe limitations of p values. This is a particular problem with small sample sizes and low event rates - common features of many published clinical trials. These issues have resulted in increasing numbers of false positive clinical trials (false 'discoveries'), and the well-publicised inability to replicate many of the findings. While chance clearly plays a role in these errors, many more are due to either poorly performed or badly misinterpreted statistics. Consequently, it is essential that, whenever p values appear, they be accompanied by both 95% confidence limits and effect sizes. These will enable readers to immediately assess the plausible range of results, and whether or not the effect is clinically meaningful.

  4. Statistical significance of trends in monthly heavy precipitation over the US

    KAUST Repository

    Mahajan, Salil

    2011-05-11

    Trends in monthly heavy precipitation, defined by a return period of one year, are assessed for statistical significance in observations and Global Climate Model (GCM) simulations over the contiguous United States using Monte Carlo non-parametric and parametric bootstrapping techniques. The results from the two Monte Carlo approaches are found to be similar to each other, and also to the traditional non-parametric Kendall's τ test, implying the robustness of the approach. Two different observational data-sets are employed to test for trends in monthly heavy precipitation and are found to exhibit consistent results. Both data-sets demonstrate upward trends, one of which is found to be statistically significant at the 95% confidence level. Upward trends similar to observations are observed in some climate model simulations of the twentieth century, but their statistical significance is marginal. For projections of the twenty-first century, a statistically significant upward trend is observed in most of the climate models analyzed. The change in the simulated precipitation variance appears to be more important in the twenty-first-century projections than changes in the mean precipitation. Stochastic fluctuations of the climate system are found to dominate monthly heavy precipitation, as some GCM simulations show a downward trend even in the twenty-first-century projections when the greenhouse gas forcings are strong.
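
    The abstract does not spell out the bootstrap details, so the sketch below shows one simple non-parametric Monte Carlo variant: compare the observed Kendall's tau of a series against its distribution under random shuffling of the observations.

    ```python
    # Hedged sketch: permutation-based significance for a monotonic trend.
    import numpy as np
    from scipy import stats

    def trend_pvalue(series, n_perm=5_000, seed=0):
        rng = np.random.default_rng(seed)
        t = np.arange(len(series))
        obs_tau, _ = stats.kendalltau(t, series)
        null = np.array([stats.kendalltau(t, rng.permutation(series))[0]
                         for _ in range(n_perm)])
        # One-sided Monte Carlo test for an upward trend.
        return (1 + np.sum(null >= obs_tau)) / (1 + n_perm)

    rng = np.random.default_rng(2)
    years = 60
    # Synthetic "heavy precipitation" series with a weak upward trend.
    series = 50 + 0.15 * np.arange(years) + rng.normal(scale=5, size=years)
    print(f"p = {trend_pvalue(series):.4f}")
    ```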

  5. Thresholds for statistical and clinical significance in systematic reviews with meta-analytic methods

    DEFF Research Database (Denmark)

    Jakobsen, Janus Christian; Wetterslev, Jørn; Winkel, Per;

    2014-01-01

    BACKGROUND: Thresholds for statistical significance when assessing meta-analysis results are being insufficiently demonstrated by traditional 95% confidence intervals and P-values. Assessment of intervention effects in systematic reviews with meta-analysis deserves greater rigour. METHODS......: Methodologies for assessing statistical and clinical significance of intervention effects in systematic reviews were considered. Balancing simplicity and comprehensiveness, an operational procedure was developed, based mainly on The Cochrane Collaboration methodology and the Grading of Recommendations...... Assessment, Development, and Evaluation (GRADE) guidelines. RESULTS: We propose an eight-step procedure for better validation of meta-analytic results in systematic reviews (1) Obtain the 95% confidence intervals and the P-values from both fixed-effect and random-effects meta-analyses and report the most...

  6. Statistical significance of variables driving systematic variation in high-dimensional data

    Science.gov (United States)

    Chung, Neo Christopher; Storey, John D.

    2015-01-01

    Motivation: There are a number of well-established methods such as principal component analysis (PCA) for automatically capturing systematic variation due to latent variables in large-scale genomic data. PCA and related methods may directly provide a quantitative characterization of a complex biological variable that is otherwise difficult to precisely define or model. An unsolved problem in this context is how to systematically identify the genomic variables that are drivers of systematic variation captured by PCA. Principal components (PCs) (and other estimates of systematic variation) are directly constructed from the genomic variables themselves, making measures of statistical significance artificially inflated when using conventional methods due to over-fitting. Results: We introduce a new approach called the jackstraw that allows one to accurately identify genomic variables that are statistically significantly associated with any subset or linear combination of PCs. The proposed method can greatly simplify complex significance testing problems encountered in genomics and can be used to identify the genomic variables significantly associated with latent variables. Using simulation, we demonstrate that our method attains accurate measures of statistical significance over a range of relevant scenarios. We consider yeast cell-cycle gene expression data, and show that the proposed method can be used to straightforwardly identify genes that are cell-cycle regulated with an accurate measure of statistical significance. We also analyze gene expression data from post-trauma patients, allowing the gene expression data to provide a molecularly driven phenotype. Using our method, we find a greater enrichment for inflammatory-related gene sets compared to the original analysis that uses a clinically defined, although likely imprecise, phenotype. The proposed method provides a useful bridge between large-scale quantifications of systematic variation and gene
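
    A heavily hedged sketch of the jackstraw idea as described above (not the authors' implementation): permute a few rows, re-estimate the principal components, and use the permuted rows' association statistics as an empirical null.

    ```python
    # Illustrative jackstraw-style resampling on synthetic data.
    import numpy as np

    def top_pc(data):
        """First principal component (loadings across columns/samples)."""
        centered = data - data.mean(axis=1, keepdims=True)
        return np.linalg.svd(centered, full_matrices=False)[2][0]

    def row_assoc(data, pc):
        """Squared cosine similarity of each centered row with the PC."""
        centered = data - data.mean(axis=1, keepdims=True)
        sim = centered @ pc / (np.linalg.norm(centered, axis=1) * np.linalg.norm(pc))
        return sim ** 2

    def jackstraw_pvalues(data, n_iter=200, s=10, seed=0):
        rng = np.random.default_rng(seed)
        obs = row_assoc(data, top_pc(data))
        null = []
        for _ in range(n_iter):
            jack = data.copy()
            rows = rng.choice(data.shape[0], size=s, replace=False)
            for r in rows:              # break each chosen row's link to the PCs
                jack[r] = rng.permutation(jack[r])
            null.extend(row_assoc(jack, top_pc(jack))[rows])
        null = np.asarray(null)
        return np.array([(1 + np.sum(null >= o)) / (1 + null.size) for o in obs])

    rng = np.random.default_rng(4)
    signal = rng.normal(size=(1, 50))   # one latent pattern over 50 samples
    data = np.vstack([signal * rng.normal(2.0, 0.5, size=(30, 1))
                      + rng.normal(scale=0.5, size=(30, 50)),  # 30 driven rows
                      rng.normal(size=(170, 50))])             # 170 noise rows
    p = jackstraw_pvalues(data)
    print((p[:30] < 0.05).mean(), (p[30:] < 0.05).mean())      # ~1.0 vs ~0.05
    ```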

  7. Efficient statistical significance approximation for local similarity analysis of high-throughput time series data.

    Science.gov (United States)

    Xia, Li C; Ai, Dongmei; Cram, Jacob; Fuhrman, Jed A; Sun, Fengzhu

    2013-01-15

    Local similarity analysis of biological time series data helps elucidate the varying dynamics of biological systems. However, its applications to large scale high-throughput data are limited by slow permutation procedures for statistical significance evaluation. We developed a theoretical approach to approximate the statistical significance of local similarity analysis based on the approximate tail distribution of the maximum partial sum of independent identically distributed (i.i.d.) random variables. Simulations show that the derived formula approximates the tail distribution reasonably well (starting at time points > 10 with no delay and > 20 with delay) and provides P-values comparable with those from permutations. The new approach enables efficient calculation of statistical significance for pairwise local similarity analysis, making possible all-to-all local association studies otherwise prohibitive. As a demonstration, local similarity analysis of human microbiome time series shows that core operational taxonomic units (OTUs) are highly synergetic and some of the associations are body-site specific across samples. The new approach is implemented in our eLSA package, which now provides pipelines for faster local similarity analysis of time series data. The tool is freely available from eLSA's website: http://meta.usc.edu/softs/lsa. Supplementary data are available at Bioinformatics online. fsun@usc.edu.
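
For contrast with the paper's analytic approximation, the slow permutation baseline it replaces can be sketched as follows (a simplified, no-delay local similarity score computed with a Kadane-style scan; all data are synthetic):

```python
import numpy as np
rng = np.random.default_rng(1)

def ls_score(x, y):
    # Largest-magnitude partial sum of x_t * y_t over contiguous windows,
    # normalized by series length (simplified: no time delay allowed).
    best = pos = neg = 0.0
    for v in x * y:
        pos = max(pos + v, 0.0)
        neg = min(neg + v, 0.0)
        best = max(best, pos, -neg)
    return best / len(x)

# Two standardized toy abundance series sharing one local episode.
n = 60
x, y = rng.normal(size=n), rng.normal(size=n)
x[20:30] += 1.5
y[20:30] += 1.5
x = (x - x.mean()) / x.std()
y = (y - y.mean()) / y.std()

obs = ls_score(x, y)
# The permutation procedure whose cost motivates the paper's approximation:
null = np.array([ls_score(rng.permutation(x), y) for _ in range(2000)])
print("permutation p =", np.mean(null >= obs))
```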

  9. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    Science.gov (United States)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  10. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Science.gov (United States)

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  11. How to get statistically significant effects in any ERP experiment (and why you shouldn't).

    Science.gov (United States)

    Luck, Steven J; Gaspelin, Nicholas

    2017-01-01

    ERP experiments generate massive datasets, often containing thousands of values for each participant, even after averaging. The richness of these datasets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant but bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand-averaged data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multifactor statistical analyses. Reanalyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant but bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions.
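
The window-selection problem described above is easy to reproduce in simulation. The sketch below (noise-only synthetic "ERPs" with invented dimensions) chooses the measurement window from the grand-average difference and then tests that same window, inflating the false-positive rate far beyond the nominal 5%:

```python
import numpy as np
from scipy import stats
rng = np.random.default_rng(2)

n_exp, n_sub, n_time, win = 1000, 20, 100, 10
false_pos = 0
for _ in range(n_exp):
    # Two within-subject conditions containing nothing but noise.
    a = rng.normal(size=(n_sub, n_time))
    b = rng.normal(size=(n_sub, n_time))
    diff = (a - b).mean(axis=0)  # grand-average difference wave
    # Biased practice: pick the window where the grand average differs most,
    # then run the statistical test on the data in that same window.
    w = np.array([abs(diff[i:i + win].mean()) for i in range(n_time - win)]).argmax()
    _, p = stats.ttest_rel(a[:, w:w + win].mean(axis=1), b[:, w:w + win].mean(axis=1))
    false_pos += p < 0.05
print("false-positive rate:", false_pos / n_exp)  # well above 0.05
```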

  12. animation : An R Package for Creating Animations and Demonstrating Statistical Methods

    Directory of Open Access Journals (Sweden)

    Yihui Xie

    2013-04-01

    Full Text Available Animated graphs that demonstrate statistical ideas and methods can both attract interest and assist understanding. In this paper we first discuss how animations can be related to some statistical topics such as iterative algorithms, random simulations, (re)sampling methods and dynamic trends, then we describe the approaches that may be used to create animations, and give an overview of the R package animation, including its design, usage and the statistical topics in the package. With the animation package, we can export the animations produced by R into a variety of formats, such as a web page, a GIF animation, a Flash movie, a PDF document, or an MP4/AVI video, so that users can publish the animations fairly easily. The design of this package is flexible enough to be readily incorporated into web applications, e.g., we can generate animations online with Rweb, which means we do not even need R to be installed locally to create animations. We will show examples of the use of animations in teaching statistics and in the presentation of statistical reports using Sweave or knitr. In fact, this paper itself was written with the knitr and animation package, and the animations are embedded in the PDF document, so that readers can watch the animations in real time when they read the paper (the Adobe Reader is required). Animations can add insight and interest to traditional static approaches to teaching statistics and reporting, making statistics a more interesting and appealing subject.

  13. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper reviews two motivations for conducting "what if" analyses using Excel and "R" to understand statistical significance tests in the context of sample size. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  14. No difference found in time to publication by statistical significance of trial results: a methodological review

    Science.gov (United States)

    Jefferson, L; Cooper, E; Hewitt, C; Torgerson, T; Cook, L; Tharmanathan, P; Cockayne, S; Torgerson, D

    2016-01-01

    Objective Time-lag from study completion to publication is a potential source of publication bias in randomised controlled trials. This study sought to update the evidence base by identifying the effect of the statistical significance of research findings on time to publication of trial results. Design Literature searches were carried out in four general medical journals from June 2013 to June 2014 inclusive (BMJ, JAMA, the Lancet and the New England Journal of Medicine). Setting Methodological review of four general medical journals. Participants Original research articles presenting the primary analyses from phase 2, 3 and 4 parallel-group randomised controlled trials were included. Main outcome measures Time from trial completion to publication. Results The median time from trial completion to publication was 431 days (n = 208, interquartile range 278–618). A multivariable adjusted Cox model found no statistically significant difference in time to publication for trials reporting positive or negative results (hazard ratio: 0.86, 95% CI 0.64 to 1.16, p = 0.32). Conclusion In contrast to previous studies, this review did not demonstrate the presence of time-lag bias in time to publication. This may be a result of these articles being published in four high-impact general medical journals that may be more inclined to publish rapidly, whatever the findings. Further research is needed to explore the presence of time-lag bias in lower quality studies and lower impact journals. PMID:27757242

  15. STATISTICAL EVALUATION OF SMALL SCALE MIXING DEMONSTRATION SAMPLING AND BATCH TRANSFER PERFORMANCE - 12093

    Energy Technology Data Exchange (ETDEWEB)

    GREER DA; THIEN MG

    2012-01-12

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), has previously presented the results of mixing performance in two different sizes of small scale DSTs to support scale-up estimates of full-scale DST mixing performance. Currently, sufficient sampling of DSTs is one of the largest programmatic risks that could prevent timely delivery of high level waste to the WTP. WRPS has performed small scale mixing and sampling demonstrations to study the ability to sufficiently sample the tanks. The statistical evaluation of the demonstration results, which leads to the conclusion that the two scales of small DST are behaving similarly and that full-scale performance is predictable, will be presented. This work is essential to reduce the risk of requiring a new dedicated feed sampling facility and will guide future optimization work to ensure the waste feed delivery mission will be accomplished successfully. This paper will focus on the analytical data collected from mixing, sampling, and batch transfer testing from the small scale mixing demonstration tanks and how those data are being interpreted to begin to understand the relationship between samples taken prior to transfer and samples from the subsequent batches transferred. An overview of the types of data collected and examples of typical raw data will be provided. The paper will then discuss the processing and manipulation of the data which is necessary to begin evaluating sampling and batch transfer performance. This discussion will also include the evaluation of the analytical measurement capability with regard to the simulant material used in the demonstration tests. The

  16. On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.

    Science.gov (United States)

    Yang, Harry; Novick, Steven; Burdick, Richard K

    Analytical similarity is the foundation for demonstration of biosimilarity between a proposed product and a reference product. For this assessment, currently the U.S. Food and Drug Administration (FDA) recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk, and approaches of varying statistical rigor are subsequently used for the three-tier quality attributes. Key to the analyses of Tiers 1 and 2 quality attributes is the establishment of the equivalence acceptance criterion and quality range. For particular licensure applications, the FDA has provided advice on statistical methods for demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5 σR, where σR is the reference product variability estimated by the sample standard deviation SR from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄R ± K × σR, where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of test product must fall in the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation SR underestimates the true reference product variability σR. As a result, substituting SR for σR in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two one-sided tests (TOST) approach. Finally, strategies to mitigate the risk of correlation among the reference product lots are discussed. A biosimilar is
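
As a rough illustration of the two tiers described above, here is a minimal sketch (pooled-variance TOST, an invented constant K, made-up lot data), not the FDA's exact procedure:

```python
import numpy as np
from scipy import stats
rng = np.random.default_rng(3)

ref = rng.normal(100.0, 2.0, size=10)   # reference lots (synthetic)
test = rng.normal(100.5, 2.0, size=8)   # proposed-product lots (synthetic)
s_r = ref.std(ddof=1)                   # stands in for sigma_R

# Tier 1: two one-sided tests (TOST) against the margin +/- 1.5 * s_R.
margin = 1.5 * s_r
diff = test.mean() - ref.mean()
se = np.sqrt(test.var(ddof=1) / test.size + ref.var(ddof=1) / ref.size)
df = test.size + ref.size - 2           # simplification; Welch df also common
p_lower = stats.t.sf((diff + margin) / se, df)  # H0: diff <= -margin
p_upper = stats.t.sf((margin - diff) / se, df)  # H0: diff >= +margin
print("Tier 1 equivalence:", max(p_lower, p_upper) < 0.05)

# Tier 2: quality range mean_R +/- K * s_R must contain most test lots.
K = 3.0                                  # K must be justified case by case
lo, hi = ref.mean() - K * s_r, ref.mean() + K * s_r
print("Tier 2 pass:", np.mean((test >= lo) & (test <= hi)) >= 0.9)
```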

  17. Statistical vs. Economic Significance in Economics and Econometrics: Further comments on McCloskey & Ziliak

    DEFF Research Database (Denmark)

    Engsted, Tom

    reliable estimates, and I argue that significance tests are useful tools in those cases where a statistical model serves as input in the quantification of an economic model. Finally, I provide a specific example from economics - asset return predictability - where the distinction between statistical......I comment on the controversy between McCloskey & Ziliak and Hoover & Siegler on statistical versus economic significance, in the March 2008 issue of the Journal of Economic Methodology. I argue that while McCloskey & Ziliak are right in emphasizing 'real error', i.e. non-sampling error that cannot...... be eliminated through specification testing, they fail to acknowledge those areas in economics, e.g. rational expectations macroeconomics and asset pricing, where researchers clearly distinguish between statistical and economic significance and where statistical testing plays a relatively minor role in model...

  18. EasyGene – a prokaryotic gene finder that ranks ORFs by statistical significance

    DEFF Research Database (Denmark)

    Larsen, Thomas Schou; Krogh, Anders Stærmose

    2003-01-01

    in Swiss-Prot, a high quality training set of genes is automatically extracted from the genome and used to estimate the HMM. Putative genes are then scored with the HMM, and based on score and length of an ORF, the statistical significance is calculated. The measure of statistical significance for an ORF...... is the expected number of ORFs in one megabase of random sequence at the same significance level or better, where the random sequence has the same statistics as the genome in the sense of a third order Markov chain.Conclusions: The result is a flexible gene finder whose overall performance matches or exceeds...

  19. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    Science.gov (United States)

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  20. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
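
A minimal re-enactment of the proposed test, using synthetic "cloud objects", the Euclidean distance only, and invented ensemble sizes:

```python
import numpy as np
rng = np.random.default_rng(4)

def summary_hist(objs, bins):
    # Sum per-object histograms over the ensemble, then normalize.
    h = sum(np.histogram(o, bins=bins)[0] for o in objs).astype(float)
    return h / h.sum()

def euclid(h1, h2):
    return np.sqrt(np.sum((h1 - h2) ** 2))

# Each "cloud object" is one array of footprint values.
a = [rng.gamma(2.0, 1.0, size=rng.integers(50, 200)) for _ in range(60)]
b = [rng.gamma(2.3, 1.0, size=rng.integers(50, 200)) for _ in range(60)]
bins = np.linspace(0.0, 15.0, 31)
obs = euclid(summary_hist(a, bins), summary_hist(b, bins))

# Bootstrap null: draw both ensembles with replacement from the pooled
# objects, so any distance between them reflects sampling noise alone.
pool = a + b
null = []
for _ in range(500):
    g1 = [pool[i] for i in rng.integers(0, len(pool), len(a))]
    g2 = [pool[i] for i in rng.integers(0, len(pool), len(b))]
    null.append(euclid(summary_hist(g1, bins), summary_hist(g2, bins)))
print("significance level p =", np.mean(np.asarray(null) >= obs))
```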

  1. Surprise responses in the human brain demonstrate statistical learning under high concurrent cognitive demand

    Science.gov (United States)

    Garrido, Marta Isabel; Teng, Chee Leong James; Taylor, Jeremy Alexander; Rowe, Elise Genevieve; Mattingley, Jason Brett

    2016-06-01

    The ability to learn about regularities in the environment and to make predictions about future events is fundamental for adaptive behaviour. We have previously shown that people can implicitly encode statistical regularities and detect violations therein, as reflected in neuronal responses to unpredictable events that carry a unique prediction error signature. In the real world, however, learning about regularities will often occur in the context of competing cognitive demands. Here we asked whether learning of statistical regularities is modulated by concurrent cognitive load. We compared electroencephalographic metrics associated with responses to pure-tone sounds with frequencies sampled from narrow or wide Gaussian distributions. We showed that outliers evoked a larger response than those in the centre of the stimulus distribution (i.e., an effect of surprise) and that this difference was greater for physically identical outliers in the narrow than in the broad distribution. These results demonstrate an early neurophysiological marker of the brain's ability to implicitly encode complex statistical structure in the environment. Moreover, we manipulated concurrent cognitive load by having participants perform a visual working memory task while listening to these streams of sounds. We again observed greater prediction error responses in the narrower distribution under both low and high cognitive load. Furthermore, there was no reliable reduction in prediction error magnitude under high relative to low cognitive load. Our findings suggest that statistical learning is not a capacity-limited process, and that it proceeds automatically even when cognitive resources are taxed by concurrent demands.

  2. The statistical significance of the N-S asymmetry of solar activity revisited

    CERN Document Server

    Carbonell, M; Oliver, R; Ballester, J L

    2007-01-01

    The main aim of this study is to point out the difficulties found when trying to assess the statistical significance of the North-South asymmetry (hereafter SSNSA) of the most usually considered time series of solar activity. First of all, we distinguish between solar activity time series composed of integer or non-integer dimensionless data, or composed of non-integer dimensional data. For each of these cases, we discuss the most suitable statistical tests which can be applied and highlight the difficulties in obtaining valid information about the statistical significance of solar activity time series. Our results suggest that, apart from the need to apply the suitable statistical tests, other effects such as the data binning, the considered units and the need, in some tests, to consider groups of data, affect substantially the determination of the statistical significance of the asymmetry. Our main conclusion is that the assessment of the statistical significance of the N-S asymmetry of solar activity ...

  3. Codon Deviation Coefficient: A novel measure for estimating codon usage bias and its statistical significance

    KAUST Repository

    Zhang, Zhang

    2012-03-22

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. © 2012 Zhang et al.; licensee BioMed Central Ltd.

  4. Codon Deviation Coefficient: a novel measure for estimating codon usage bias and its statistical significance

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2012-03-01

    Full Text Available Abstract Background Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.
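
The real CDC tailors the expected codon frequencies to codon positions; the toy sketch below keeps only the two essential ingredients, a composition-based expectation and a resampling null, with an invented chi-square-style deviation statistic standing in for CDC:

```python
import numpy as np
from collections import Counter
rng = np.random.default_rng(5)

def codon_deviation(seq):
    # Toy statistic: deviation of observed codon counts from counts expected
    # under the sequence's own (position-independent) base composition.
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    obs = Counter(codons)
    base_freq = Counter(seq)
    n = len(seq)
    dev = 0.0
    for codon, k in obs.items():
        expected = np.prod([base_freq[b] / n for b in codon]) * len(codons)
        dev += (k - expected) ** 2 / max(expected, 1e-9)
    return dev

seq = "".join(rng.choice(list("ACGT"), size=300))
obs_dev = codon_deviation(seq)

# Resampling null: shuffling preserves base composition exactly, so the
# null deviations show what composition alone produces.
null = np.array([codon_deviation("".join(rng.permutation(list(seq))))
                 for _ in range(500)])
print("p =", np.mean(null >= obs_dev))
```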

  5. Confidence intervals permit, but do not guarantee, better inference than statistical significance testing.

    Science.gov (United States)

    Coulson, Melissa; Healey, Michelle; Fidler, Fiona; Cumming, Geoff

    2010-01-01

    A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST), or confidence intervals (CIs). Authors of articles published in psychology, behavioral neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant, and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs respondents who mentioned NHST were 60% likely to conclude, unjustifiably, the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.

  6. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

    Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST), or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant, and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs respondents who mentioned NHST were 60% likely to conclude, unjustifiably, the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.

  7. Does Statistical Significance Help to Evaluate Predictive Performance of Competing Models?

    Directory of Open Access Journals (Sweden)

    Levent Bulut

    2016-04-01

    Full Text Available In a Monte Carlo experiment with simulated data, we show that as a point forecast criterion, Clark and West's (2006) unconditional test of mean squared prediction errors does not reflect the relative performance of a superior model over a relatively weaker one. The simulation results show that even though the mean squared prediction errors of a constructed superior model are far below those of a weaker alternative, the Clark-West test does not reflect this in its test statistic. Therefore, studies that use this statistic in testing the predictive accuracy of alternative exchange rate models, stock return predictability, inflation forecasting, and unemployment forecasting should not put too much weight on the magnitude of statistically significant Clark-West test statistics.
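
For reference, the Clark-West MSPE-adjusted statistic itself is straightforward to compute. The sketch below uses invented forecasts from a nested pair of models and the usual one-sided asymptotic normal p-value:

```python
import numpy as np
from scipy import stats
rng = np.random.default_rng(6)

T = 200
y = rng.normal(size=T)                        # realized values (synthetic)
f1 = np.zeros(T)                              # restricted model: zero-mean forecast
f2 = 0.3 * y + rng.normal(scale=0.9, size=T)  # stand-in for the larger model

e1, e2 = y - f1, y - f2
# Clark-West adjusted loss differential for nested comparisons: the
# (f1 - f2)^2 term removes the noise the larger model adds under the null.
f_adj = e1**2 - (e2**2 - (f1 - f2)**2)
t_cw = np.sqrt(T) * f_adj.mean() / f_adj.std(ddof=1)
print(f"CW t = {t_cw:.2f}, one-sided p = {stats.norm.sf(t_cw):.3f}")
```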

  8. Environmental Assessment and Finding of No Significant Impact: Kalina Geothermal Demonstration Project Steamboat Springs, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    N/A

    1999-02-22

    The Department of Energy (DOE) has prepared an Environmental Assessment (EA) to provide the DOE and other public agency decision makers with the environmental documentation required to take informed discretionary action on the proposed Kalina Geothermal Demonstration project. The EA assesses the potential environmental impacts and cumulative impacts, possible ways to minimize effects associated with partial funding of the proposed project, and discusses alternatives to DOE actions. The DOE will use this EA as a basis for their decision to provide financial assistance to Exergy, Inc. (Exergy), the project applicant. Based on the analysis in the EA, DOE has determined that the proposed action is not a major Federal action significantly affecting the quality of the human or physical environment, within the meaning of the National Environmental Policy Act (NEPA) of 1969. Therefore, the preparation of an environmental impact statement is not required and DOE is issuing this Finding of No Significant Impact (FONSI).

  9. EasyGene – a prokaryotic gene finder that ranks ORFs by statistical significance

    Directory of Open Access Journals (Sweden)

    Larsen Thomas

    2003-06-01

    Full Text Available Abstract Background Contrary to other areas of sequence analysis, a measure of statistical significance of a putative gene has not been devised to help in discriminating real genes from the masses of random Open Reading Frames (ORFs) in prokaryotic genomes. Therefore, many genomes have too many short ORFs annotated as genes. Results In this paper, we present a new automated gene-finding method, EasyGene, which estimates the statistical significance of a predicted gene. The gene finder is based on a hidden Markov model (HMM) that is automatically estimated for a new genome. Using extensions of similarities in Swiss-Prot, a high quality training set of genes is automatically extracted from the genome and used to estimate the HMM. Putative genes are then scored with the HMM, and based on score and length of an ORF, the statistical significance is calculated. The measure of statistical significance for an ORF is the expected number of ORFs in one megabase of random sequence at the same significance level or better, where the random sequence has the same statistics as the genome in the sense of a third order Markov chain. Conclusions The result is a flexible gene finder whose overall performance matches or exceeds other methods. The entire pipeline of computer processing from the raw input of a genome or set of contigs to a list of putative genes with significance is automated, making it easy to apply EasyGene to newly sequenced organisms. EasyGene with pre-trained models can be accessed at http://www.cbs.dtu.dk/services/EasyGene.
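
EasyGene derives this expectation analytically; the sketch below gets at the same quantity by brute force. It trains a third-order Markov chain on a genome (synthetic here), generates random sequence with the same statistics, and counts forward-strand ORFs, assuming every 3-mer context reached during generation was seen in training (true for the random toy genome used):

```python
import numpy as np
from collections import defaultdict, Counter
rng = np.random.default_rng(7)

def train_markov3(genome):
    # Third-order Markov model: counts of the next base after each 3-mer.
    counts = defaultdict(Counter)
    for i in range(len(genome) - 3):
        counts[genome[i:i + 3]][genome[i + 3]] += 1
    return counts

def sample_sequence(counts, length):
    # Generate random sequence with the genome's third-order statistics.
    seq = list(str(rng.choice(list(counts.keys()))))
    while len(seq) < length:
        nxt = counts["".join(seq[-3:])]
        bases, w = list(nxt.keys()), np.array(list(nxt.values()), float)
        seq.append(str(rng.choice(bases, p=w / w.sum())))
    return "".join(seq)

def count_orfs(seq, min_nt):
    # Forward-strand ORFs only (a simplification): in-frame ATG ... stop.
    stops = {"TAA", "TAG", "TGA"}
    n = 0
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i
            elif start is not None and codon in stops:
                if i + 3 - start >= min_nt:
                    n += 1
                start = None
    return n

toy_genome = "".join(rng.choice(list("ACGT"), p=[0.3, 0.2, 0.2, 0.3], size=50_000))
model = train_markov3(toy_genome)
random_seq = sample_sequence(model, 100_000)
# Scale the count in 100 kb up to the per-megabase expectation.
print("expected ORFs >= 300 nt per Mb:", 10 * count_orfs(random_seq, 300))
```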

  10. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…

  11. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power-analysis in estimating...

  12. [Tests of statistical significance in three biomedical journals: a critical review].

    Science.gov (United States)

    Sarria Castro, Madelaine; Silva Ayçaguer, Luis Carlos

    2004-05-01

    To describe the use of conventional tests of statistical significance and the current trends shown by their use in three biomedical journals read in Spanish-speaking countries. All descriptive or explanatory original articles published in the five-year period of 1996 through 2000 were reviewed in three journals: Revista Cubana de Medicina General Integral [Cuban Journal of Comprehensive General Medicine], Revista Panamericana de Salud Pública/Pan American Journal of Public Health, and Medicina Clínica [Clinical Medicine] (which is published in Spain). In the three journals that were reviewed various shortcomings were found in their use of hypothesis tests based on P values and in the limited use of new tools that have been suggested for use in their place: confidence intervals (CIs) and Bayesian inference. The basic findings of our research were: minimal use of CIs, as either a complement to significance tests or as the only statistical tool; mentions of a small sample size as a possible explanation for the lack of statistical significance; a predominant use of rigid alpha values; a lack of uniformity in the presentation of results; and improper reference in the research conclusions to the results of hypothesis tests. Our results indicate the lack of compliance by authors and editors with accepted standards for the use of tests of statistical significance. The findings also highlight that the stagnant use of these tests continues to be a common practice in the scientific literature.

  13. Statistical physics inspired methods to assign statistical significance in bioinformatics and proteomics: From sequence comparison to mass spectrometry based peptide sequencing

    Science.gov (United States)

    Alves, Gelio

    After the sequencing of many complete genomes, we are in a post-genomic era in which the most important task has changed from gathering genetic information to organizing the mass of data as well as understanding how components interact with each other. The former is usually undertaken with bioinformatics methods, while the latter task is generally termed proteomics. Success in both parts demands correct statistical significance assignments for the results found. In my dissertation, I study two concrete examples: global sequence alignment statistics and peptide sequencing/identification using mass spectrometry. High-performance liquid chromatography coupled to a mass spectrometer (HPLC/MS/MS), enabling peptide identifications and thus protein identifications, has become the tool of choice in large-scale proteomics experiments. Peptide identification is usually done by database search methods. The lack of robust statistical significance assignment among current methods motivated the development of a novel de novo algorithm, RAId, whose score statistics then provide statistical significance for high-scoring peptides found in our custom, enzyme-digested peptide library. The ease of incorporating post-translational modifications is another important feature of RAId. To organize the massive protein/DNA data accumulated, biologists often cluster proteins according to their similarity via tools such as sequence alignment. Homologous proteins share similar domains. To assess the similarity of two domains usually requires alignment from head to toe, i.e., a global alignment. Good alignment score statistics with an appropriate null model enable us to distinguish biologically meaningful similarity from chance similarity. There has been much progress in local alignment statistics, which characterize score statistics when alignments tend to appear as a short segment of the whole sequence. For global alignment, which is useful in domain alignment, there is still much room for

  14. Homeopathy: statistical significance versus the sample size in experiments with Toxoplasma gondii

    Directory of Open Access Journals (Sweden)

    Ana Lúcia Falavigna Guilherme

    2011-09-01

    , examined in its full length. This study was approved by the Ethics Committee for animal experimentation of the UEM - Protocol 036/2009. The data were compared using the Mann-Whitney and bootstrap tests [7] with the statistical software BioStat 5.0. Results and discussion: There was no significant difference when analyzed with the Mann-Whitney test, even multiplying the "n" ten times (p=0.0618). The number of cysts observed in the BIOT 200DH group was 4.5 ± 3.3 and 12.8 ± 9.7 in the CONTROL group. Table 1 shows the results obtained using the bootstrap analysis for each data set enlarged from 2n up to 2n+5, and the respective p-values. With the inclusion of more elements in the different groups, tested one by one, randomly, gradually increasing the samples, we observed the sample size needed to statistically confirm the results seen experimentally. Using 17 mice in the BIOT 200DH group and 19 in the CONTROL group, we already observed statistical significance. This result suggests that experiments involving highly diluted substances and infection of mice with T. gondii should work with experimental groups of at least 17 animals. Despite the current and relevant ethical discussions about the number of animals used for experimental procedures, the number of animals involved in each experiment must meet the characteristics of each item to be studied. In the case of experiments involving highly diluted substances, experimental animal models are still rudimentary and the biological effects observed appear to be individualized, as described in the literature for homeopathy [8]. The fact that statistical significance was achieved by increasing the sample observed in this trial tells us of a rare event, with strong individual behavior, difficult to demonstrate in a result set treated simply with a comparison of means or medians. Conclusion: Bootstrap seems to be an interesting methodology for the analysis of data obtained from experiments with highly diluted
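
A conceptual re-enactment of that bootstrap expansion, with invented cyst counts matching the reported means and standard deviations. Growing a sample by resampling until p < 0.05 is shown only to mirror the paper's sample-size exploration; it is not, in general, sound inference on its own:

```python
import numpy as np
from scipy import stats
rng = np.random.default_rng(8)

# Synthetic cyst counts resembling the reported 4.5 +/- 3.3 vs 12.8 +/- 9.7.
treated = np.maximum(rng.normal(4.5, 3.3, size=5), 0)
control = np.maximum(rng.normal(12.8, 9.7, size=5), 0)

# Enlarge both groups by resampling with replacement and note when the
# Mann-Whitney test first reaches p < 0.05.
for n in range(5, 40):
    t_big = rng.choice(treated, size=n, replace=True)
    c_big = rng.choice(control, size=n, replace=True)
    p = stats.mannwhitneyu(t_big, c_big).pvalue
    if p < 0.05:
        print(f"significance first reached at n = {n} per group (p = {p:.4f})")
        break
```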

  15. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power-analysis in estimating...... the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement to NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis...

  16. FES Training in Aging: interim results show statistically significant improvements in mobility and muscle fiber size

    Directory of Open Access Journals (Sweden)

    Helmut Kern

    2012-03-01

    Full Text Available Aging is a multifactorial process that is characterized by decline in muscle mass and performance. Several factors, including reduced exercise, poor nutrition and modified hormonal metabolism, are responsible for changes in the rates of protein synthesis and degradation that drive skeletal muscle mass reduction, with a consequent decline of force generation and mobility functional performances. Seniors with normal life style were enrolled: two groups in Vienna (n=32) and two groups in Bratislava (n=19). All subjects were healthy and declared not to have any specific physical/disease problems. The two Vienna groups of seniors exercised for 10 weeks with two different types of training (leg press at the hospital or home-based functional electrical stimulation, h-b FES). Demographic data (age, height and weight) were recorded before and after the training period, and before and after the training period the patients were submitted to mobility functional analyses and muscle biopsies. The mobility functional analyses were: 1. gait speed (10 m test at fastest speed, in m/s); 2. time needed to rise from a chair five times (5x Chair-Rise, in s); 3. Timed-Up-and-Go test, in s; 4. Stair-Test, in s; 5. isometric measurement of quadriceps force (torque/kg, in Nm/kg); and 6. dynamic balance, in mm. Preliminary analyses of muscle biopsies from quadriceps in some of the Vienna and Bratislava patients present morphometric results consistent with their functional behaviors. The statistically significant improvements in functional testing reported here demonstrate the effectiveness of h-b FES and strongly support h-b FES as a safe home-based method to improve contractility and performance of ageing muscles.

  17. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    Full Text Available For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles demonstrated that this method was problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect and probabilistic undetermination). The demonstration includes a complete example.
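
The three-value logic is simple to operationalize. A minimal sketch, assuming a positive-direction effect and a user-chosen smallest substantial effect (names and numbers invented):

```python
from scipy import stats

def interpret(mean_diff, se, n, min_effect, alpha=0.05):
    # Compare the CI with a range hypothesis defined by the smallest
    # effect considered substantial (positive direction only, for brevity).
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)
    lo, hi = mean_diff - t * se, mean_diff + t * se
    if lo > min_effect:
        return "probable presence of a substantial effect"
    if hi < min_effect:
        return "probable absence of a substantial effect"
    return "probabilistic undetermination"

print(interpret(mean_diff=0.8, se=0.2, n=40, min_effect=0.3))
```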

  18. Cognitive Constructivism and the Epistemic Significance of Sharp Statistical Hypotheses in Natural Sciences

    CERN Document Server

    Stern, J M

    2010-01-01

    This book presents our case in defense of a constructivist epistemological framework and the use of compatible statistical theory and inference tools. The basic metaphor of decision theory is the maximization of a gambler's expected fortune, according to his own subjective utility, prior beliefs and learned experiences. This metaphor has proven to be very useful, leading to the development of Bayesian statistics since its twentieth-century revival, rooted on the work of de Finetti, Savage and others. The basic metaphor presented in this text, as a foundation for cognitive constructivism, is that of an eigen-solution, and the verification of its objective epistemic status. The FBST - Full Bayesian Significance Test - is the cornerstone of a set of statistical tools conceived to assess the epistemic value of such eigen-solutions, according to their four essential attributes, namely, sharpness, stability, separability and composability. We believe that this alternative perspective, complementary to the one offered by dec...

  19. Identifying potentially induced seismicity and assessing statistical significance in Oklahoma and California

    CERN Document Server

    McClure, Mark; Chiu, Kitkwan; Ranganath, Rajesh

    2016-01-01

    In this study, we develop a statistical method for identifying induced seismicity from large datasets and apply the method to decades of wastewater disposal and seismicity data in California and Oklahoma. The method is robust against a variety of potential pitfalls. The study regions are divided into gridblocks. We use a longitudinal study design, seeking associations between seismicity and wastewater injection along time-series within each gridblock. The longitudinal design helps control for non-random application of wastewater injection. We define a statistical model that is flexible enough to describe the seismicity observations, which have temporal correlation and high kurtosis. In each gridblock, we find the maximum likelihood estimate for a model parameter that relates induced seismicity hazard to total volume of wastewater injected each year. To assess significance, we compute likelihood ratio test statistics in each gridblock and each state, California and Oklahoma. Resampling is used to empirically d...

  20. Behaviorally inhibited individuals demonstrate significantly enhanced conditioned response acquisition under non-optimal learning conditions.

    Science.gov (United States)

    Holloway, J L; Allen, M T; Myers, C E; Servatius, R J

    2014-03-15

    Behavioral inhibition (BI) is an anxiety vulnerability factor associated with hypervigilance to novel stimuli, threat, and ambiguous cues. The progression from anxiety risk to a clinical disorder is unknown, although the acquisition of defensive learning and avoidance may be a critical feature. As the expression of avoidance is also central to anxiety development, the present study examined avoidance acquisition as a function of inhibited temperament using classical eyeblink conditioning. Individuals were classified as behaviorally inhibited (BI) or non-inhibited (NI) based on combined scores from the Adult and Retrospective Measures of Behavioural Inhibition (AMBI and RMBI, respectively). Acquisition was assessed using delay, omission, or yoked conditioning schedules of reinforcement. Omission training was identical to delay, except that the emission of an eyeblink conditioned response (CR) resulted in omission of the unconditioned airpuff stimulus (US) on that trial. Each subject in the yoked group was matched on total BI score to a subject in the omission group, and received the same schedule of CS and US delivery, resulting in a partial reinforcement training schedule. Delay conditioning elicited significantly more CRs compared to the omission and yoked contingencies, the latter two of which did not differ from each other. Thus, acquisition of an avoidance response was not apparent. BI individuals demonstrated enhanced acquisition overall, while partial reinforcement training significantly distinguished between BI and NI groups. Enhanced learning in BI may be a function of an increased defensive learning capacity, or sensitivity to uncertainty. Further work examining the influence of BI on learning acquisition is important for understanding individual differences in disorder etiology in anxiety vulnerable cohorts.

  1. The orthopaedic trauma literature: an evaluation of statistically significant findings in orthopaedic trauma randomized trials

    Directory of Open Access Journals (Sweden)

    Tornetta Paul

    2008-01-01

    Full Text Available Abstract Background Evidence-based medicine posits that health care research is founded upon clinically important differences in patient centered outcomes. Statistically significant differences between two treatments may not necessarily reflect a clinically important difference. We aimed to quantify the sample sizes and magnitude of treatment effects in a review of orthopaedic randomized trials with statistically significant findings. Methods We conducted a comprehensive search (PubMed, Cochrane) for all randomized controlled trials between 1/1/95 and 12/31/04. Eligible studies include those that focused upon orthopaedic trauma. Baseline characteristics and treatment effects were abstracted by two reviewers. Briefly, for continuous outcome measures (i.e., functional scores), we calculated effect sizes (mean difference/standard deviation). Dichotomous variables (i.e., infection, nonunion) were summarized as absolute risk differences and relative risk reductions (RRR). Effect sizes >0.80 and RRRs >50% were defined as large effects. Using regression analysis we examined the association between the total number of outcome events and treatment effect (dichotomous outcomes). Results Our search yielded 433 randomized controlled trials (RCTs), of which 76 RCTs with statistically significant findings on 184 outcomes (122 continuous/62 dichotomous) met study eligibility criteria. The mean effect size across studies with continuous outcome variables was 1.7 (95% confidence interval: 1.43–1.97). For dichotomous outcomes, the mean risk difference was 30% (95% confidence interval: 24%–36%) and the mean relative risk reduction was 61% (95% confidence interval: 55%–66%; range: 0%–97%). Fewer total outcome events in a study were strongly correlated with increasing magnitude of the treatment effect (Pearson's R = -0.70, p Conclusion Our review suggests that statistically significant results in orthopaedic trials have the following implications: (1) On average
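
Both summary measures used in the review reduce to one-line computations; the numbers below are made up for illustration:

```python
def effect_size(mean1, mean2, pooled_sd):
    # Standardized mean difference for continuous outcomes.
    return (mean1 - mean2) / pooled_sd

def risk_measures(events_ctrl, n_ctrl, events_trt, n_trt):
    r_c, r_t = events_ctrl / n_ctrl, events_trt / n_trt
    ard = r_c - r_t        # absolute risk difference
    rrr = ard / r_c        # relative risk reduction
    return ard, rrr

print(effect_size(72.0, 60.0, 15.0))    # 0.8: "large" by the review's cutoff
print(risk_measures(20, 100, 8, 100))   # ARD 0.12, RRR 0.6 (i.e., 60%)
```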

  2. A Multi-Core Parallelization Strategy for Statistical Significance Testing in Learning Classifier Systems.

    Science.gov (United States)

    Rudd, James; Moore, Jason H; Urbanowicz, Ryan J

    2013-11-01

    Permutation-based statistics for evaluating the significance of class prediction, predictive attributes, and patterns of association have only appeared within the learning classifier system (LCS) literature since 2012. While still not widely utilized by the LCS research community, formal evaluations of test statistic confidence are imperative to large and complex real world applications such as genetic epidemiology where it is standard practice to quantify the likelihood that a seemingly meaningful statistic could have been obtained purely by chance. LCS algorithms are relatively computationally expensive on their own. The compounding requirements for generating permutation-based statistics may be a limiting factor for some researchers interested in applying LCS algorithms to real world problems. Technology has made LCS parallelization strategies more accessible and thus more popular in recent years. In the present study we examine the benefits of externally parallelizing a series of independent LCS runs such that permutation testing with cross validation becomes more feasible to complete on a single multi-core workstation. We test our python implementation of this strategy in the context of a simulated complex genetic epidemiological data mining problem. Our evaluations indicate that as long as the number of concurrent processes does not exceed the number of CPU cores, the speedup achieved is approximately linear.
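
A minimal sketch of the external-parallelization idea using Python's multiprocessing; a cheap mean-difference statistic stands in for a full LCS run, and the process count is an assumption to be matched to the available CPU cores:

```python
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(9)
SCORES = np.concatenate([rng.normal(0.2, 1, 100), rng.normal(0.0, 1, 100)])
LABELS = np.array([1] * 100 + [0] * 100)

def one_permutation(seed):
    # Stand-in for one independent run on label-permuted data.
    r = np.random.default_rng(seed)
    lab = r.permutation(LABELS)
    return SCORES[lab == 1].mean() - SCORES[lab == 0].mean()

if __name__ == "__main__":
    observed = SCORES[LABELS == 1].mean() - SCORES[LABELS == 0].mean()
    with Pool(processes=4) as pool:   # keep processes <= CPU cores
        null = pool.map(one_permutation, range(1000))
    print("permutation p =", np.mean(np.abs(null) >= abs(observed)))
```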

  3. Robust statistical methods for significance evaluation and applications in cancer driver detection and biomarker discovery

    DEFF Research Database (Denmark)

    Madsen, Tobias

    2017-01-01

    are used to scale the aforementioned driver detection methods to a dataset consisting of more than 2,000 cancer genomes. The sizes and dimensionalities of genomic data sets, be it a large number of genes or multiple heterogeneous data sources, pose both great statistical opportunities and challenges....... This distribution can be learned across the entire set of genes and then be used to improve inference on the level of the individual gene. A practical way to implement this insight is using empirical Bayes. This idea is one of the main statistical underpinnings of the present work. The thesis consists of three main...... manuscripts as well as two supplementary manuscripts. In the first manuscript we explore efficient significance evaluation for models defined with factor graphs. Factor graphs are a class of graphical models encompassing both Bayesian networks and Markov models. We specifically develop a saddle

  4. Statistically Non-significant Papers in Environmental Health Studies included more Outcome Variables

    Institute of Scientific and Technical Information of China (English)

    Pentti Nieminen; Khaled Abass; Kirsi Vähäkangas; Arja Rautio

    2015-01-01

    Objective The number of analyzed outcome variables is important in the statistical analysis and interpretation of research findings. This study investigated published papers in the field of environmental health studies. We aimed to examine whether differences in the number of reported outcome variables exist between papers with non-significant findings compared to those with significant findings. Articles on the maternal exposure to mercury and child development were used as examples. Methods Articles published between 1995 and 2013 focusing on the relationships between maternal exposure to mercury and child development were collected from Medline and Scopus. Results Of 87 extracted papers, 73 used statistical significance testing and 38 (43.7%) of these reported ‘non-significant’ (P>0.05) findings. The median number of child development outcome variables in papers reporting ‘significant’ (n=35) and ‘non-significant’ (n=38) results was 4 versus 7, respectively (Mann-Whitney test P-value=0.014). An elevated number of outcome variables was especially found in papers reporting non-significant associations between maternal mercury and outcomes when mercury was the only analyzed exposure variable. Conclusion Authors often report analyzed health outcome variables based on their P-values rather than on stated primary research questions. Such a practice probably skews the research evidence.

  5. Statistical significance estimation of a signal within the GooFit framework on GPUs

    Science.gov (United States)

    Cristella, Leonardo; Di Florio, Adriano; Pompili, Alexis

    2017-03-01

    In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented both in ROOT/RooFit and GooFit frameworks with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open data-analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on nVidia GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-ups with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service and a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which the Wilks Theorem may or may not apply because its regularity conditions are not satisfied.
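
The likelihood-ratio behaviour mentioned at the end is easy to probe with a small toy Monte Carlo of one's own (plain SciPy here, not GooFit or CUDA): fit background-only pseudo-experiments with and without a non-negative signal yield and histogram the test statistic q. With the yield bounded at zero, Wilks' regularity conditions fail and q follows roughly a 50:50 mixture of zero and χ²(1):

```python
import numpy as np
from scipy import stats
rng = np.random.default_rng(10)

nbins = 20
bkg = np.full(nbins, 50.0)                       # expected background per bin
sig = stats.norm.pdf(np.linspace(-3, 3, nbins))  # signal shape
sig /= sig.sum()

def q_statistic(counts):
    # q = 2*(lnL of best-fit signal+background - lnL of background only),
    # profiling a single non-negative signal yield s on a coarse grid.
    def nll(s):
        return -stats.poisson.logpmf(counts, bkg + s * sig).sum()
    grid = np.linspace(0.0, 50.0, 101)
    return 2.0 * (nll(0.0) - min(nll(s) for s in grid))

qs = np.array([q_statistic(rng.poisson(bkg)) for _ in range(300)])
# Under the 0.5*delta(0) + 0.5*chi2(1) mixture, about 5% of background-only
# toys should exceed the chi2(1) 90% point, 2.71.
print("fraction with q > 2.71:", (qs > 2.71).mean())
```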

  6. Deriving statistical significance maps for SVM based image classification and group comparisons.

    Science.gov (United States)

    Gaonkar, Bilwaj; Davatzikos, Christos

    2012-01-01

    Population based pattern analysis and classification for quantifying structural and functional differences between diverse groups has been shown to be a powerful tool for the study of a number of diseases, and is quite commonly used especially in neuroimaging. The alternative to these pattern analysis methods, namely mass univariate methods such as voxel-based analysis and all related methods, cannot detect multivariate patterns associated with group differences, and are not particularly suitable for developing individual-based diagnostic and prognostic biomarkers. A commonly used pattern analysis tool is the support vector machine (SVM). Unlike univariate statistical frameworks for morphometry, analytical tools for statistical inference are unavailable for the SVM. In this paper, we show that null distributions ordinarily obtained by permutation tests using SVMs can be analytically approximated from the data. The analytical computation takes a small fraction of the time it takes to do an actual permutation test, thereby rendering it possible to quickly create statistical significance maps derived from SVMs. Such maps are critical for understanding imaging patterns of group differences and interpreting which anatomical regions are important in determining the classifier's decision.
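
For comparison, the brute-force permutation procedure that the paper approximates analytically looks like the sketch below (synthetic data, invented dimensions, scikit-learn's LinearSVC):

```python
import numpy as np
from sklearn.svm import LinearSVC
rng = np.random.default_rng(13)

# Toy "images": 200 subjects x 50 voxels, group signal in voxels 0-4.
X = rng.normal(size=(200, 50))
y = np.repeat([0, 1], 100)
X[y == 1, :5] += 0.6

w_obs = LinearSVC(C=1.0, max_iter=5000).fit(X, y).coef_.ravel()

# Permutation null for each voxel's weight: refit with shuffled labels.
n_perm = 200
w_null = np.empty((n_perm, X.shape[1]))
for i in range(n_perm):
    w_null[i] = LinearSVC(C=1.0, max_iter=5000).fit(X, rng.permutation(y)).coef_
p = (np.abs(w_null) >= np.abs(w_obs)).mean(axis=0)   # per-voxel p-values
print("voxels with p < 0.05:", np.where(p < 0.05)[0])
```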

  7. RT-PSM, a real-time program for peptide-spectrum matching with statistical significance.

    Science.gov (United States)

    Wu, Fang-Xiang; Gagné, Pierre; Droit, Arnaud; Poirier, Guy G

    2006-01-01

    The analysis of complex biological peptide mixtures by tandem mass spectrometry (MS/MS) produces a huge body of collision-induced dissociation (CID) MS/MS spectra. Several methods have been developed for identifying peptide-spectrum matches (PSMs) by assigning MS/MS spectra to peptides in a database. However, most of these methods either do not give the statistical significance of PSMs (e.g., SEQUEST) or employ time-consuming computational methods to estimate the statistical significance (e.g., PeptideProphet). In this paper, we describe a new algorithm, RT-PSM, which can be used to identify PSMs and estimate their statistical significance in real time. RT-PSM first computes PSM scores between an MS/MS spectrum and a set of candidate peptides whose masses are within a preset tolerance of the MS/MS precursor ion mass. The computed PSM scores of all candidate peptides are then used to fit the expectation value distribution of the scores to a second-degree polynomial function of the PSM score. The statistical significance of the best PSM is estimated by extrapolating the fitted polynomial function to the best PSM score. RT-PSM was tested on two pairs of MS/MS spectrum datasets and protein databases to investigate its performance. The MS/MS spectra were acquired using an ion trap mass spectrometer equipped with a nano-electrospray ionization source. The results show that RT-PSM has good sensitivity and specificity. Using a 55,577-entry protein database and running on a standard Pentium-4, 2.8-GHz CPU personal computer, RT-PSM can process peptide spectra on a sequential, one-by-one basis in 0.047 s on average, compared to more than 7 s per spectrum on average for SEQUEST and X!Tandem in their current batch-mode processing implementations. RT-PSM is clearly shown to be fast enough for real-time PSM assignment of MS/MS spectra generated every 3 s or so by a 3D ion trap or by a QqTOF instrument.
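
    The score-distribution step can be sketched as follows: tabulate survival counts of the candidate scores, fit their logarithm to a quadratic, and extrapolate to the top score. Everything below (score model, fitting range) is invented for illustration, not the published implementation:

    ```python
    # RT-PSM-style expectation-value estimate: quadratic fit to the log of the
    # score survival function, extrapolated to the best score (synthetic scores).
    import numpy as np

    rng = np.random.default_rng(1)
    scores = rng.gumbel(loc=20, scale=5, size=500)   # hypothetical candidate PSM scores
    best = scores.max()

    # Empirical survival counts over the bulk of the distribution.
    s_grid = np.linspace(np.percentile(scores, 20), np.percentile(scores, 95), 30)
    surv = np.array([(scores >= s).sum() for s in s_grid])

    coef = np.polyfit(s_grid, np.log10(surv), deg=2)  # second-degree polynomial fit
    log10_evalue = np.polyval(coef, best)             # extrapolate to the best score
    print(f"estimated E-value of best PSM: {10**log10_evalue:.3g}")
    ```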

  8. Statistically Significant Strings are Related to Regulatory Elements in the Promoter Regions of Saccharomyces cerevisiae

    CERN Document Server

    Hu, R; Hu, Rui; Wang, Bin

    2000-01-01

    Identifying statistically significant words in DNA and protein sequences forms the basis for many genetic studies. By applying the maximal entropy principle, we give a systematic way to study the nonrandom occurrence of words in DNA or protein sequences. Through comparison with experimental results, it was shown that patterns of regulatory binding sites in Saccharomyces cerevisiae (yeast) genomes tend to occur significantly in the promoter regions. We studied two correlated gene families of yeast. The method successfully extracts the binding sites verified by experiments in each family. Many putative regulatory sites in the upstream regions are proposed. The study also suggested that some regulatory sites are active in both directions, while others show directional preference.

  9. The demonstration of significant ferroelectricity in epitaxial Y-doped HfO2 film

    OpenAIRE

    Takao Shimizu; Kiliha Katayama; Takanori Kiguchi; Akihiro Akama; Konno, Toyohiko J.; Osami Sakata; Hiroshi Funakubo

    2016-01-01

    Ferroelectricity and Curie temperature are demonstrated for an epitaxial Y-doped HfO2 film grown on a (110) yttrium oxide-stabilized zirconium oxide (YSZ) single crystal using Sn-doped In2O3 (ITO) as the bottom electrode. The XRD measurements for the epitaxial film enabled us to investigate its detailed crystal structure including the orientations of the film. The ferroelectricity was confirmed by electric displacement field – electric field hysteresis measurement, which revealed a saturated polarization of 16...

  10. Significance of coronary artery calcification demonstrated by computed tomography in detecting coronary artery stenosis

    Energy Technology Data Exchange (ETDEWEB)

    Shiraki, Teruo; Akiyama, Yoko; Kita, Masahide [Iwakuni National Hospital, Yamaguchi (Japan)] [and others]

    2002-02-01

    Twenty-seven consecutive patients with angina attacks were enrolled in this trial. Plain computed tomography (CT) of the chest and coronary angiography were performed concurrently. Calcification of the main branches of the coronary arteries (left main trunk, left anterior descending artery, left circumflex artery, right coronary artery) was judged visually. Stenosis greater than 50% was defined as significant by quantitative coronary angiography. The correlation between calcified lesions detected by CT and angiographic stenoses showed high specificity, and the negative predictive value was also high (sensitivity=58%, specificity=80%, positive predictive value=27%, negative predictive value=94%, p<0.05). There was no significant correlation between patient-level coronary artery calcification and angiographic stenosis. The present study showed a low probability of significant stenosis in the absence of calcification and a high probability with multiple calcified lesions. (author)

  11. Statistically significant faunal differences among Middle Ordovician age, Chickamauga Group bryozoan bioherms, central Alabama

    Energy Technology Data Exchange (ETDEWEB)

    Crow, C.J.

    1985-01-01

    Middle Ordovician age Chickamauga Group carbonates crop out along the Birmingham and Murphrees Valley anticlines in central Alabama. The macrofossil contents on exposed surfaces of seven bioherms have been counted to determine their various paleontologic characteristics. Twelve groups of organisms are present in these bioherms. Dominant organisms include bryozoans, algae, brachiopods, sponges, pelmatozoans, stromatoporoids and corals. Minor accessory fauna include predators, scavengers and grazers such as gastropods, ostracods, trilobites, cephalopods and pelecypods. Vertical and horizontal niche zonation has been detected for some of the bioherm-dwelling fauna. No one bioherm of those studied exhibits all 12 groups of organisms; rather, individual bioherms display various subsets of the total diversity. Statistical treatment (G-test) of the diversity data indicates a lack of statistical homogeneity of the bioherms, both within and between localities. Between-locality population heterogeneity can be ascribed to differences in biologic responses to such gross environmental factors as water depth and clarity, and energy levels. At any one locality, gross aspects of the paleoenvironments are assumed to have been more uniform. Significant differences among bioherms at any one locality may have resulted from patchy distribution of species populations, differential preservation and other factors.
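
    The G-test mentioned above can be run directly on a faunal contingency table; a minimal sketch with invented counts (SciPy computes the G statistic when the log-likelihood-ratio option is selected):

    ```python
    # G-test (log-likelihood ratio test) of homogeneity between two bioherms;
    # the counts are illustrative, not the paper's census data.
    from scipy.stats import chi2_contingency

    # rows: two bioherms; columns: counts of bryozoans, algae, brachiopods, pelmatozoans
    counts = [[120, 45, 30, 15],
              [ 80, 60, 10, 40]]

    g_stat, p_value, dof, expected = chi2_contingency(counts, lambda_="log-likelihood")
    print(f"G = {g_stat:.2f}, dof = {dof}, P = {p_value:.4f}")
    ```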

  12. Mining Statistically Significant Substrings Based on the Chi-Square Measure

    CERN Document Server

    Dutta, Sourav; Bhattacharya, Arnab

    2010-01-01

    Given the vast reservoirs of data stored worldwide, efficient mining of data from a large information store has emerged as a great challenge. Many databases, such as those of intrusion detection systems, web-click records, player statistics, texts, and proteins, store strings or sequences. Searching for an unusual pattern within such long strings of data has emerged as a requirement for diverse applications. Given a string, the problem then is to identify the substrings that differ the most from the expected or normal behavior, i.e., the substrings that are statistically significant. In other words, these substrings are less likely to occur due to chance alone and may point to some interesting information or phenomenon that warrants further exploration. To this end, we use the chi-square measure. We propose two heuristics for retrieving the top-k substrings with the largest chi-square measure. We show that the algorithms outperform other competing algorithms in the runtime, while maintaining a high approximation...
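
    A bare-bones version of the chi-square measure for substrings (a naive scan over all windows, standing in for the authors' heuristics, which avoid scoring every window):

    ```python
    # Chi-square measure of a substring: compare its symbol counts with those
    # expected from the whole string's composition (illustrative toy string).
    from collections import Counter

    def chi_square_measure(substring: str, background_freq: dict) -> float:
        n = len(substring)
        obs = Counter(substring)
        chi2 = 0.0
        for symbol, p in background_freq.items():
            expected = n * p
            chi2 += (obs.get(symbol, 0) - expected) ** 2 / expected
        return chi2

    text = "AABABABBBBAAABBBBBBBBABAB"
    freq = {s: c / len(text) for s, c in Counter(text).items()}

    # Score every substring of length 5 and report the most unusual one.
    windows = [text[i:i + 5] for i in range(len(text) - 4)]
    best = max(windows, key=lambda w: chi_square_measure(w, freq))
    print(best, chi_square_measure(best, freq))
    ```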

  13. Demonstrating the Effectiveness of an Integrated and Intensive Research Methods and Statistics Course Sequence

    Science.gov (United States)

    Pliske, Rebecca M.; Caldwell, Tracy L.; Calin-Jageman, Robert J.; Taylor-Ritzler, Tina

    2015-01-01

    We developed a two-semester series of intensive (six-contact hours per week) behavioral research methods courses with an integrated statistics curriculum. Our approach includes the use of team-based learning, authentic projects, and Excel and SPSS. We assessed the effectiveness of our approach by examining our students' content area scores on the…

  15. Scalable detection of statistically significant communities and hierarchies: message-passing for modularity

    CERN Document Server

    Zhang, Pan

    2014-01-01

    Modularity is a popular measure of community structure. However, maximizing the modularity can lead to many competing partitions with almost the same modularity that are poorly correlated to each other; it can also overfit, producing illusory "communities" in random graphs where none exist. We address this problem by using the modularity as a Hamiltonian, and computing the marginals of the resulting Gibbs distribution. If we assign each node to its most-likely community under these marginals, we claim that, unlike the ground state, the resulting partition is a good measure of statistically-significant community structure. We propose an efficient Belief Propagation (BP) algorithm to compute these marginals. In random networks with no true communities, the system has two phases as we vary the temperature: a paramagnetic phase where all marginals are equal, and a spin glass phase where BP fails to converge. In networks with real community structure, there is an additional retrieval phase where BP converges, and ...

  16. Statistical Significance of Non-Reproducibility of Cross Sections in Dissipative Reactions

    Institute of Scientific and Technical Information of China (English)

    王琦; 董玉川; 李松林; 田文栋; 李志常; 路秀琴; 赵葵; 符长波; 刘建成; 姜华; 胡桂青

    2003-01-01

    Two independent excitation function measurements have been performed in the reaction system of 19F+93Nb using two target foils of the same nominal thickness. We measured the dissipative reaction products at incident energies of 102 through 108 MeV in steps of 250 keV. The variance of the energy autocorrelation functions of the reaction products was found to be three times that originating from the randomized counting rates. By analysing the probability distributions of the deviations in the measured cross sections, we found that about 20% of all the deviations exceed three standard deviations. This indicates that the non-reproducibility of the cross sections in the two independent measurements is statistically significant and does not originate from random fluctuations of the counting rates.

  17. Henry Eyring: Statistical Mechanics, Significant Structure Theory, and the Inductive-Deductive Method

    CERN Document Server

    Henderson, Douglas

    2010-01-01

    Henry Eyring was, and still is, a towering figure in science. Some aspects of his life and science, beginning in Mexico and continuing in Arizona, California, Wisconsin, Germany, Princeton, and finally Utah, are reviewed here. Eyring moved gradually from quantum theory toward statistical mechanics and the theory of liquids, motivated in part by his desire to understand reactions in condensed matter. Significant structure theory, while not as successful as Eyring thought, is better than his critics realize. Eyring won many awards. However, most chemists are surprised, if not shocked, that he was never awarded a Nobel Prize. He joined Lise Meitner, Rosalind Franklin, John Slater, and others, in an even more select group, those who should have received a Nobel Prize but did not.

  18. A network-based method to assess the statistical significance of mild co-regulation effects.

    Directory of Open Access Journals (Sweden)

    Emőke-Ágnes Horvát

    Full Text Available Recent development of high-throughput, multiplexing technology has initiated projects that systematically investigate interactions between two types of components in biological networks, for instance transcription factors and promoter sequences, or microRNAs (miRNAs) and mRNAs. In terms of network biology, such screening approaches primarily attempt to elucidate relations between biological components of two distinct types, which can be represented as edges between nodes in a bipartite graph. However, it is often desirable not only to determine regulatory relationships between nodes of different types, but also to understand the connection patterns of nodes of the same type. Especially interesting is the co-occurrence of two nodes of the same type, i.e., the number of their common neighbours, which current high-throughput screening analysis fails to address. The co-occurrence gives the number of circumstances under which both of the biological components are influenced in the same way. Here we present SICORE, a novel network-based method to detect pairs of nodes with a statistically significant co-occurrence. We first show the stability of the proposed method on artificial data sets: when randomly adding and deleting observations we obtain reliable results even with noise exceeding the expected level in large-scale experiments. Subsequently, we illustrate the viability of the method based on the analysis of a proteomic screening data set to reveal regulatory patterns of human microRNAs targeting proteins in the EGFR-driven cell cycle signalling system. Since statistically significant co-occurrence may indicate functional synergy and the mechanisms underlying canalization, and thus hold promise in drug target identification and therapeutic development, we provide a platform-independent implementation of SICORE with a graphical user interface as a novel tool in the arsenal of high-throughput screening analysis.

  19. A Network-Based Method to Assess the Statistical Significance of Mild Co-Regulation Effects

    Science.gov (United States)

    Horvát, Emőke-Ágnes; Zhang, Jitao David; Uhlmann, Stefan; Sahin, Özgür; Zweig, Katharina Anna

    2013-01-01

    Recent development of high-throughput, multiplexing technology has initiated projects that systematically investigate interactions between two types of components in biological networks, for instance transcription factors and promoter sequences, or microRNAs (miRNAs) and mRNAs. In terms of network biology, such screening approaches primarily attempt to elucidate relations between biological components of two distinct types, which can be represented as edges between nodes in a bipartite graph. However, it is often desirable not only to determine regulatory relationships between nodes of different types, but also to understand the connection patterns of nodes of the same type. Especially interesting is the co-occurrence of two nodes of the same type, i.e., the number of their common neighbours, which current high-throughput screening analysis fails to address. The co-occurrence gives the number of circumstances under which both of the biological components are influenced in the same way. Here we present SICORE, a novel network-based method to detect pairs of nodes with a statistically significant co-occurrence. We first show the stability of the proposed method on artificial data sets: when randomly adding and deleting observations we obtain reliable results even with noise exceeding the expected level in large-scale experiments. Subsequently, we illustrate the viability of the method based on the analysis of a proteomic screening data set to reveal regulatory patterns of human microRNAs targeting proteins in the EGFR-driven cell cycle signalling system. Since statistically significant co-occurrence may indicate functional synergy and the mechanisms underlying canalization, and thus hold promise in drug target identification and therapeutic development, we provide a platform-independent implementation of SICORE with a graphical user interface as a novel tool in the arsenal of high-throughput screening analysis. PMID:24039936
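
    The co-occurrence idea common to both records can be sketched with a small randomization test on a synthetic bipartite network (this is not the SICORE implementation, just the underlying counting):

    ```python
    # Co-occurrence of same-type nodes in a bipartite graph: count common
    # neighbours and compare with column-preserving randomizations.
    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.integers(0, 2, size=(6, 40))   # 6 miRNAs x 40 proteins (synthetic)

    cooc_obs = B @ B.T                      # [i, j] = common targets of miRNAs i and j

    n_rand = 1000
    exceed = np.zeros_like(cooc_obs)
    for _ in range(n_rand):
        # Shuffle each protein's regulators independently: keeps column sums fixed.
        R = np.apply_along_axis(rng.permutation, 0, B)
        exceed += (R @ R.T) >= cooc_obs

    p_values = (exceed + 1) / (n_rand + 1)  # empirical one-sided p-values
    print(p_values[0, 1])                   # significance of the pair (0, 1)
    ```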

  20. A Palatable Introduction to and Demonstration of Statistical Main Effects and Interactions

    Science.gov (United States)

    Christopher, Andrew N.; Marek, Pam

    2009-01-01

    Because concrete explanations in a familiar context facilitate understanding, we illustrate the concept of an interaction via a baking analogy to provide students with food for thought. The demonstration initially introduces the concepts of independent and dependent variables using a chocolate chip cookie recipe. The demonstration provides an…

  1. The demonstration of significant ferroelectricity in epitaxial Y-doped HfO2 film

    Science.gov (United States)

    Shimizu, Takao; Katayama, Kiliha; Kiguchi, Takanori; Akama, Akihiro; Konno, Toyohiko J.; Sakata, Osami; Funakubo, Hiroshi

    2016-09-01

    Ferroelectricity and Curie temperature are demonstrated for an epitaxial Y-doped HfO2 film grown on a (110) yttrium oxide-stabilized zirconium oxide (YSZ) single crystal using Sn-doped In2O3 (ITO) as the bottom electrode. The XRD measurements for the epitaxial film enabled us to investigate its detailed crystal structure including the orientations of the film. The ferroelectricity was confirmed by electric displacement field – electric field hysteresis measurement, which revealed a saturated polarization of 16 μC/cm2. The spontaneous polarization estimated from the obtained saturation polarization and the crystal structure analysis was 45 μC/cm2. This value is the first experimental estimation of the spontaneous polarization and is in good agreement with the theoretical value from first-principles calculations. The Curie temperature was also estimated to be about 450 °C. This study strongly suggests that HfO2-based materials are promising for various ferroelectric applications, combining ferroelectric properties (polarization and Curie temperature) comparable to those of conventional ferroelectric materials with the reported excellent scalability in thickness and compatibility with practical manufacturing processes.

  2. SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit

    Directory of Open Access Journals (Sweden)

    Annie Chu

    2009-04-01

    Full Text Available The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been utilized in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). It has been proven that these resources can successfully improve students' learning (Dinov et al. 2008b). First published online in 2005, SOCR Analyses is a relatively new component that concentrates on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. As we have already implemented SOCR Distributions and Experiments, SOCR Analyses and Charts fulfill the rest of a standard statistics curriculum. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, and one-way and two-way ANOVA. Tests for sample comparisons include the t-test in the parametric category. Examples of SOCR Analyses' non-parametric tests are the Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, Kolmogorov-Smirnov test and Fligner-Killeen test. Hypothesis testing models include the contingency table, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for the normal distribution. In this article, we present the design framework, computational implementation and utilization of SOCR Analyses.

  3. Statistical significant changes in ground thermal conditions of alpine Austria during the last decade

    Science.gov (United States)

    Kellerer-Pirklbauer, Andreas

    2016-04-01

    Longer data series (e.g. >10 a) of ground temperatures in alpine regions are helpful to improve the understanding of the effects of present climate change on the distribution and thermal characteristics of seasonal frost- and permafrost-affected areas. Beginning in 2004 - and more intensively since 2006 - a permafrost and seasonal frost monitoring network was established in Central and Eastern Austria by the University of Graz. This network consists of c. 60 ground temperature (surface and near-surface) monitoring sites located at 1922-3002 m a.s.l., at latitude 46°55'-47°22'N and at longitude 12°44'-14°41'E. These data allow conclusions about general ground thermal conditions, potential permafrost occurrence, trends during the observation period, and regional patterns of change. Calculations and analyses of several different temperature-related parameters were accomplished. At an annual scale, a region-wide statistically significant warming during the observation period was revealed by e.g. an increase in mean annual temperature values (mean, maximum) or the significant lowering of the surface frost number (F+). At a seasonal scale, in most cases no significant trend of any temperature-related parameter was revealed for spring (MAM) and autumn (SON). Winter (DJF) shows only a weak warming. In contrast, the summer (JJA) season in general reveals a significant warming, as confirmed by several different temperature-related parameters such as mean seasonal temperature, number of thawing degree days, number of freezing degree days, or days without night frost. On a monthly basis, August shows the statistically most robust and strongest warming of all months, although regional differences occur. Although the general ground temperature warming during the last decade is confirmed by the field data in the study region, complications in trend analyses arise from temperature anomalies (e.g. the warm winter 2006/07) or substantial variations in the winter

  4. Novel stable isotope analyses demonstrate significant rates of glucose cycling in mouse pancreatic islets.

    Science.gov (United States)

    Wall, Martha L; Pound, Lynley D; Trenary, Irina; O'Brien, Richard M; Young, Jamey D

    2015-06-01

    A polymorphism located in the G6PC2 gene, which encodes an islet-specific glucose-6-phosphatase catalytic subunit, is the most important common determinant of variations in fasting blood glucose (FBG) levels in humans. Studies of G6pc2 knockout (KO) mice suggest that G6pc2 represents a negative regulator of basal glucose-stimulated insulin secretion (GSIS) that acts by hydrolyzing glucose-6-phosphate (G6P), thereby reducing glycolytic flux. However, this conclusion conflicts with the very low estimates for the rate of glucose cycling in pancreatic islets, as assessed using radioisotopes. We have reassessed the rate of glucose cycling in pancreatic islets using a novel stable isotope method. The data show much higher levels of glucose cycling than previously reported. In 5 mmol/L glucose, islets from C57BL/6J chow-fed mice cycled ∼16% of net glucose uptake. The cycling rate was further increased at 11 mmol/L glucose. Similar cycling rates were observed using islets from high fat-fed mice. Importantly, glucose cycling was abolished in G6pc2 KO mouse islets, confirming that G6pc2 opposes the action of the glucose sensor glucokinase by hydrolyzing G6P. The demonstration of high rates of glucose cycling in pancreatic islets explains why G6pc2 deletion enhances GSIS and why variants in G6PC2 affect FBG in humans. © 2015 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.

  5. Application of universal kriging for estimation of earthquake ground motion: Statistical significance of results

    Energy Technology Data Exchange (ETDEWEB)

    Carr, J.R.; Roberts, K.P.

    1989-02-01

    Universal kriging is compared with ordinary kriging for estimation of earthquake ground motion. Ordinary kriging is based on a stationary random function model; universal kriging is based on a nonstationary random function model representing first-order drift. Accuracy of universal kriging is compared with that of ordinary kriging; cross-validation is used as the basis for comparison. Hypothesis testing on these results shows that the accuracy obtained using universal kriging is not significantly different from that obtained using ordinary kriging. Tests based on normal distribution assumptions are applied to errors measured in the cross-validation procedure; t and F tests reveal no evidence to suggest universal and ordinary kriging differ for estimation of earthquake ground motion. Nonparametric hypothesis tests applied to these errors and jackknife statistics yield the same conclusion: universal and ordinary kriging are not significantly different for this application as determined by a cross-validation procedure. These results are based on application to four independent data sets (four different seismic events).
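
    The cross-validation comparison described above reduces to paired tests on the two estimators' errors at the same sites; a minimal sketch with invented error vectors (not the seismic data):

    ```python
    # Paired parametric and nonparametric tests on cross-validation errors
    # from two estimators at the same sites (illustrative data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    err_ordinary = np.abs(rng.normal(0.0, 1.0, size=50))   # hypothetical |errors|, ordinary kriging
    err_universal = err_ordinary + rng.normal(0.0, 0.2, size=50)

    t_stat, p_t = stats.ttest_rel(err_ordinary, err_universal)   # paired t-test
    w_stat, p_w = stats.wilcoxon(err_ordinary, err_universal)    # nonparametric analogue
    print(f"paired t: P = {p_t:.3f}; Wilcoxon: P = {p_w:.3f}")
    ```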

  6. Determining coding CpG islands by identifying regions significant for pattern statistics on Markov chains.

    Science.gov (United States)

    Singer, Meromit; Engström, Alexander; Schönhuth, Alexander; Pachter, Lior

    2011-09-23

    Recent experimental and computational work confirms that CpGs can be unmethylated inside coding exons, thereby showing that codons may be subjected to both genomic and epigenomic constraint. It is therefore of interest to identify coding CpG islands (CCGIs) that are regions inside exons enriched for CpGs. The difficulty in identifying such islands is that coding exons exhibit sequence biases determined by codon usage and constraints that must be taken into account. We present a method for finding CCGIs that showcases a novel approach we have developed for identifying regions of interest that are significant (with respect to a Markov chain) for the counts of any pattern. Our method begins with the exact computation of tail probabilities for the number of CpGs in all regions contained in coding exons, and then applies a greedy algorithm for selecting islands from among the regions. We show that the greedy algorithm provably optimizes a biologically motivated criterion for selecting islands while controlling the false discovery rate. We applied this approach to the human genome (hg18) and annotated CpG islands in coding exons. The statistical criterion we apply to evaluating islands reduces the number of false positives in existing annotations, while our approach to defining islands reveals significant numbers of undiscovered CCGIs in coding exons. Many of these appear to be examples of functional epigenetic specialization in coding exons.

  7. A visitor's guide to effect sizes: statistical significance versus practical (clinical) importance of research findings.

    Science.gov (United States)

    Hojat, Mohammadreza; Xu, Gang

    2004-01-01

    Effect Sizes (ES) are an increasingly important index used to quantify the degree of practical significance of study results. This paper gives an introduction to the computation and interpretation of effect sizes from the perspective of the consumer of the research literature. The key points made are: 1. ES is a useful indicator of the practical (clinical) importance of research results that can be operationally defined from being "negligible" to "moderate", to "important". 2. The ES has two advantages over statistical significance testing: (a) it is independent of the size of the sample; (b) it is a scale-free index. Therefore, ES can be uniformly interpreted in different studies regardless of the sample size and the original scales of the variables. 3. Calculations of the ES are illustrated by using examples of comparisons between two means, correlation coefficients, chi-square tests and two proportions, along with appropriate formulas. 4. Operational definitions for the ESs are given, along with numerical examples for the purpose of illustration.
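
    A minimal sketch of computing one standard effect size (Cohen's d) next to the corresponding significance test, with invented data:

    ```python
    # Effect size (Cohen's d) alongside a two-sample t-test (illustrative data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group_a = rng.normal(10.0, 2.0, size=40)
    group_b = rng.normal(11.0, 2.0, size=40)

    t_stat, p_value = stats.ttest_ind(group_a, group_b)

    # Cohen's d: mean difference divided by the pooled standard deviation.
    n1, n2 = len(group_a), len(group_b)
    pooled_sd = np.sqrt(((n1 - 1) * group_a.var(ddof=1) +
                         (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2))
    d = (group_b.mean() - group_a.mean()) / pooled_sd
    print(f"P = {p_value:.3f}, Cohen's d = {d:.2f}")  # ~0.2 negligible, ~0.5 moderate, ~0.8 important
    ```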

  8. A common misapplication of statistical inference: Nuisance control with null-hypothesis significance tests.

    Science.gov (United States)

    Sassenhagen, Jona; Alday, Phillip M

    2016-11-01

    Experimental research on behavior and cognition frequently rests on stimulus or subject selection where not all characteristics can be fully controlled, even when attempting strict matching. For example, when contrasting patients to controls, variables such as intelligence or socioeconomic status are often correlated with patient status. Similarly, when presenting word stimuli, variables such as word frequency are often correlated with primary variables of interest. One procedure very commonly employed to control for such nuisance effects is conducting inferential tests on confounding stimulus or subject characteristics. For example, if word length is not significantly different for two stimulus sets, they are considered as matched for word length. Such a test has high error rates and is conceptually misguided. It reflects a common misunderstanding of statistical tests: interpreting significance not to refer to inference about a particular population parameter, but about 1. the sample in question, 2. the practical relevance of a sample difference (so that a nonsignificant test is taken to indicate evidence for the absence of relevant differences). We show inferential testing for assessing nuisance effects to be inappropriate both pragmatically and philosophically, present a survey showing its high prevalence, and briefly discuss an alternative in the form of regression including nuisance variables.

  9. Post hoc pattern matching: assigning significance to statistically defined expression patterns in single channel microarray data

    Directory of Open Access Journals (Sweden)

    Blalock Eric M

    2007-07-01

    Full Text Available Abstract Background Researchers using RNA expression microarrays in experimental designs with more than two treatment groups often identify statistically significant genes with ANOVA approaches. However, the ANOVA test does not discriminate which of the multiple treatment groups differ from one another. Thus, post hoc tests, such as linear contrasts, template correlations, and pairwise comparisons are used. Linear contrasts and template correlations work extremely well, especially when the researcher has a priori information pointing to a particular pattern/template among the different treatment groups. Further, all pairwise comparisons can be used to identify particular, treatment group-dependent patterns of gene expression. However, these approaches are biased by the researcher's assumptions, and some treatment-based patterns may fail to be detected using these approaches. Finally, different patterns may have different probabilities of occurring by chance, importantly influencing researchers' conclusions about a pattern and its constituent genes. Results We developed a four step, post hoc pattern matching (PPM) algorithm to automate single channel gene expression pattern identification/significance. First, 1-Way Analysis of Variance (ANOVA), coupled with post hoc 'all pairwise' comparisons, is calculated for all genes. Second, for each ANOVA-significant gene, all pairwise contrast results are encoded to create unique pattern ID numbers. The number of genes found in each pattern in the data is identified as that pattern's 'actual' frequency. Third, using Monte Carlo simulations, those patterns' frequencies are estimated in random data (the 'random' gene pattern frequency). Fourth, a Z-score for overrepresentation of the pattern is calculated ('actual' against 'random' gene pattern frequencies). We wrote a Visual Basic program (StatiGen) that automates the PPM procedure, constructs an Excel workbook with standardized graphs of overrepresented patterns, and lists of
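
    The pattern-encoding and Monte Carlo steps can be sketched as follows; a toy threshold stands in for the post hoc contrasts, and none of this is the StatiGen code:

    ```python
    # Encode per-gene pairwise-comparison outcomes as pattern IDs, then compare
    # actual pattern frequencies with Monte Carlo frequencies (synthetic data).
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    n_genes, groups = 200, ["ctrl", "low", "high"]
    data = {g: rng.normal(size=(n_genes, 5)) for g in groups}  # 5 replicates per group

    def pattern_ids(data):
        """Encode, per gene, which group pairs differ (|mean diff| > 1 as a toy criterion)."""
        ids = np.zeros(n_genes, dtype=int)
        for bit, (a, b) in enumerate(combinations(groups, 2)):
            diff = np.abs(data[a].mean(axis=1) - data[b].mean(axis=1)) > 1.0
            ids |= diff.astype(int) << bit
        return ids

    actual = np.bincount(pattern_ids(data), minlength=8)

    # Monte Carlo: shuffle replicate values across groups to estimate 'random' frequencies.
    pooled = np.concatenate([data[g] for g in groups], axis=1)
    sims = []
    for _ in range(200):
        perm = rng.permuted(pooled, axis=1)
        shuffled = {g: perm[:, i * 5:(i + 1) * 5] for i, g in enumerate(groups)}
        sims.append(np.bincount(pattern_ids(shuffled), minlength=8))
    sims = np.array(sims)

    z = (actual - sims.mean(axis=0)) / (sims.std(axis=0) + 1e-9)
    print("pattern Z-scores:", np.round(z, 2))
    ```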

  10. Statistical Analysis of Tract-Tracing Experiments Demonstrates a Dense, Complex Cortical Network in the Mouse.

    Science.gov (United States)

    Ypma, Rolf J F; Bullmore, Edward T

    2016-09-01

    Anatomical tract tracing methods are the gold standard for estimating the weight of axonal connectivity between a pair of pre-defined brain regions. Large studies, comprising hundreds of experiments, have become feasible by automated methods. However, this comes at the cost of positive-mean noise making it difficult to detect weak connections, which are of particular interest as recent high resolution tract-tracing studies of the macaque have identified many more weak connections, adding up to greater connection density of cortical networks, than previously recognized. We propose a statistical framework that estimates connectivity weights and credibility intervals from multiple tract-tracing experiments. We model the observed signal as a log-normal distribution generated by a combination of tracer fluorescence and positive-mean noise, also accounting for injections into multiple regions. Using anterograde viral tract-tracing data provided by the Allen Institute for Brain Sciences, we estimate the connection density of the mouse intra-hemispheric cortical network to be 73% (95% credibility interval (CI): 71%, 75%); higher than previous estimates (40%). Inter-hemispheric density was estimated to be 59% (95% CI: 54%, 62%). The weakest estimable connections (about 6 orders of magnitude weaker than the strongest connections) are likely to represent only one or a few axons. These extremely weak connections are topologically more random and longer distance than the strongest connections, which are topologically more clustered and shorter distance (spatially clustered). Weak links do not substantially contribute to the global topology of a weighted brain graph, but incrementally increased topological integration of a binary graph. The topology of weak anatomical connections in the mouse brain, rigorously estimable down to the biological limit of a single axon between cortical areas in these data, suggests that they might confer functional advantages for integrative

  11. Evaluation of significantly modified water bodies in Vojvodina by using multivariate statistical techniques

    Directory of Open Access Journals (Sweden)

    Vujović Svetlana R.

    2013-01-01

    Full Text Available This paper illustrates the utility of multivariate statistical techniques for analysis and interpretation of water quality data sets and identification of pollution sources/factors, with a view to getting better information about water quality and the design of a monitoring network for effective management of water resources. Multivariate statistical techniques, such as factor analysis (FA)/principal component analysis (PCA) and cluster analysis (CA), were applied for the evaluation of variations and for the interpretation of a water quality data set of the natural water bodies obtained during the 2010 monitoring year, covering 13 parameters at 33 different sites. FA/PCA attempts to explain the correlations between the observations in terms of underlying factors, which are not directly observable. Factor analysis is applied to the physico-chemical parameters of natural water bodies with the aim of classification and data summation, as well as segmentation of heterogeneous data sets into smaller homogeneous subsets. Factor loadings were categorized as strong and moderate, corresponding to absolute loading values of >0.75 and 0.75-0.50, respectively. Four principal factors were obtained with eigenvalues >1, together explaining more than 78% of the total variance in the water data sets, which is adequate to give good prior information regarding data structure. Each factor that is significantly related to specific variables represents a different dimension of water quality. The first factor, F1, accounts for 28% of the total variance and represents the hydrochemical dimension of water quality. The second factor, F2, accounts for 18% of the total variance and may be taken as a factor of water eutrophication. The third factor, F3, accounts for 17% of the total variance and represents the influence of point sources of pollution on water quality. The fourth factor, F4, accounts for 13% of the total variance and may be taken as an ecological dimension of water quality. Cluster analysis (CA) is an
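
    A minimal PCA sketch in the spirit of the analysis above, retaining factors by the eigenvalue >1 rule and flagging strong/moderate loadings (random stand-in data, not the Vojvodina measurements):

    ```python
    # PCA on standardized water-quality parameters with loading categorization.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(33, 13))              # 33 sites x 13 parameters (synthetic)

    Z = StandardScaler().fit_transform(X)
    pca = PCA().fit(Z)

    keep = pca.explained_variance_ > 1         # Kaiser criterion: eigenvalues > 1
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

    for j in np.where(keep)[0]:
        strong = np.where(np.abs(loadings[:, j]) > 0.75)[0]
        moderate = np.where((np.abs(loadings[:, j]) > 0.50) &
                            (np.abs(loadings[:, j]) <= 0.75))[0]
        print(f"F{j+1}: {pca.explained_variance_ratio_[j]:.0%} of variance, "
              f"strong on parameters {strong.tolist()}, moderate on {moderate.tolist()}")
    ```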

  12. Accelerator driven reactors, - the significance of the energy distribution of spallation neutrons on the neutron statistics

    Energy Technology Data Exchange (ETDEWEB)

    Fhager, V

    2000-01-01

    In order to make correct predictions of the second moment of statistical nuclear variables, such as the number of fissions and the number of thermalized neutrons, the dependence of the energy distribution of the source particles on their number should be considered. It has been pointed out recently that neglecting this number dependence in accelerator driven systems might result in bad estimates of the second moment, and this paper contains qualitative and quantitative estimates of the size of these effects. We work towards the requested results in two steps. First, models of the number-dependent energy distributions of the neutrons that are ejected in the spallation reactions are constructed, both by simple assumptions and by extracting energy distributions of spallation neutrons from a high-energy particle transport code. Then, the second moment of nuclear variables in a sub-critical reactor, into which spallation neutrons are injected, is calculated. The results from second moment calculations using number-dependent energy distributions for the source neutrons are compared to those where only the average energy distribution is used. Two physical models are employed to simulate the neutron transport in the reactor. One is analytical, treating only slowing down of neutrons by elastic scattering in the core material. For this model, equations are written down and solved for the second moment of thermalized neutrons that include the distribution of energy of the spallation neutrons. The other model utilizes Monte Carlo methods for tracking the source neutrons as they travel inside the reactor material. Fast and thermal fission reactions are considered, as well as neutron capture and elastic scattering, and the second moment of the number of fissions, the number of neutrons that leaked out of the system, etc. are calculated. Both models use a cylindrical core with a homogeneous mixture of core material. Our results indicate that the number dependence of the energy

  13. Testing statistical significance scores of sequence comparison methods with structure similarity

    NARCIS (Netherlands)

    Hulsen, T.; Vlieg, J. de; Leunissen, J.A.M.; Groenen, P.M.

    2006-01-01

    BACKGROUND: In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical s

  14. Testing statistical significance scores of sequence comparison methods with structure similarity

    NARCIS (Netherlands)

    Hulsen, T.; Vlieg, de J.; Leunissen, J.A.M.; Groenen, P.

    2006-01-01

    Background - In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical

  15. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  16. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  17. The Hall current system revealed as a statistical significant pattern during fast flows

    Directory of Open Access Journals (Sweden)

    K. Snekvik

    2008-11-01

    Full Text Available We have examined the dawn-dusk component of the magnetic field, BY, in the night-side current sheet during fast flows in the neutral sheet. 237 h of Cluster data from the plasma sheet between 2 August 2002 and 2 October 2002 have been analysed. The spatial pattern of BY as a function of distance from the centre of the current sheet has been estimated using a Harris current sheet model. We have used the average slopes of these patterns to estimate earthward and tailward currents. For earthward fast flows there is, on average, a tailward current in the inner central plasma sheet and an earthward current in the outer central plasma sheet. For tailward fast flows the currents are oppositely directed. These observations are interpreted as signatures of Hall currents in the reconnection region, or of field-aligned currents connected with them. Although fast flows are often associated with a dawn-dusk current wedge, we believe that we have managed to filter out such currents from our statistical patterns.
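
    The Harris model step can be sketched as a nonlinear least-squares fit of B(z) = B0 tanh((z - z0)/L) to field samples versus distance from the sheet centre (synthetic samples below, not Cluster data):

    ```python
    # Fit a Harris current sheet profile to magnetic field samples.
    import numpy as np
    from scipy.optimize import curve_fit

    def harris(z, b0, z0, L):
        return b0 * np.tanh((z - z0) / L)

    rng = np.random.default_rng(0)
    z = np.linspace(-3.0, 3.0, 80)                 # distance from sheet centre (arbitrary units)
    b = harris(z, 20.0, 0.1, 0.8) + rng.normal(0, 1, z.size)   # synthetic noisy samples

    (b0, z0, L), _ = curve_fit(harris, z, b, p0=(10.0, 0.0, 1.0))
    print(f"B0 = {b0:.1f} nT, centre z0 = {z0:.2f}, half-thickness L = {L:.2f}")
    ```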

  18. Statistical significance of hair analysis of clenbuterol to discriminate therapeutic use from contamination.

    Science.gov (United States)

    Krumbholz, Aniko; Anielski, Patricia; Gfrerer, Lena; Graw, Matthias; Geyer, Hans; Schänzer, Wilhelm; Dvorak, Jiri; Thieme, Detlef

    2014-01-01

    Clenbuterol is a well-established β2-agonist, which is prohibited in sports and strictly regulated for use in the livestock industry. During the last few years, clenbuterol-positive results in doping controls and in samples from residents of or travellers from a high-risk country were suspected to be related to the illegal use of clenbuterol for fattening. A sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed to detect low clenbuterol residues in hair with a detection limit of 0.02 pg/mg. A sub-therapeutic application study and a field study with volunteers who have a high risk of contamination were performed. For the application study, a total dosage of 30 µg clenbuterol was applied to 20 healthy volunteers on 5 subsequent days. One month after the beginning of the application, clenbuterol was detected in the proximal hair segment (0-1 cm) in concentrations between 0.43 and 4.76 pg/mg. For the second part, samples from 66 Mexican soccer players were analyzed. In 89% of these volunteers, clenbuterol was detectable in their hair at concentrations between 0.02 and 1.90 pg/mg. A comparison of both parts showed no statistical difference between sub-therapeutic application and contamination. In contrast, discrimination from a typical abuse of clenbuterol is apparently possible. Based on these findings, results of real doping control samples can be evaluated. Copyright © 2014 John Wiley & Sons, Ltd.

  19. Statistical Significance and Reliability Analyses in Recent "Journal of Counseling & Development" Research Articles.

    Science.gov (United States)

    Thompson, Bruce; Snyder, Patricia A.

    1998-01-01

    Investigates two aspects of research analyses in quantitative research studies reported in the 1996 issues of "Journal of Counseling & Development" (JCD). Acceptable methodological practice regarding significance testing and evaluation of score reliability has evolved considerably. Contemporary thinking on these issues is described; practice as…

  20. A Visitor's Guide to Effect Sizes--Statistical Significance versus Practical (Clinical) Importance of Research Findings

    Science.gov (United States)

    Hojat, Mohammadreza; Xu, Gang

    2004-01-01

    Effect Sizes (ES) are an increasingly important index used to quantify the degree of practical significance of study results. This paper gives an introduction to the computation and interpretation of effect sizes from the perspective of the consumer of the research literature. The key points made are: (1) "ES" is a useful indicator of the…

  1. Deep Space Ka-band Link Management and the MRO Demonstration: Long-term Weather Statistics Versus Forecasting

    Science.gov (United States)

    Davarian, Faramaz; Shambayati, Shervin; Slobin, Stephen

    2004-01-01

    During the last 40 years, deep space radio communication systems have experienced a move toward shorter wavelengths. In the 1960s a transition from L- to S-band occurred, which was followed by a transition from S- to X-band in the 1970s. Both these transitions provided deep space links with wider bandwidths and improved radio metrics capability. Now, in the 2000s, a new change is taking place, namely a move to the Ka-band region of the radio frequency spectrum. Ka-band will soon replace X-band as the frequency of choice for deep space communications, providing ample spectrum for the high data rate requirements of future missions. The low-noise receivers of deep space networks have a great need for link management techniques that can mitigate weather effects. In this paper, three approaches for managing Ka-band Earth-space links are investigated. The first approach uses aggregate annual statistics, the second one uses monthly statistics, and the third is based on short-term forecasting of the local weather. An example of weather forecasting for Ka-band link performance prediction is presented. Furthermore, spacecraft commanding schemes suitable for Ka-band link management are investigated. These schemes will be demonstrated using NASA's Mars Reconnaissance Orbiter (MRO) spacecraft in the 2007 to 2008 time period, and the demonstration findings will be reported in a future publication.

  2. The statistical significance of error probability as determined from decoding simulations for long codes

    Science.gov (United States)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
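
    The point about very few observed errors can be made concrete with an exact (Clopper-Pearson) binomial interval; the error counts below are illustrative:

    ```python
    # Exact binomial confidence interval for an error probability estimated
    # from very few observed decoding errors (illustrative numbers).
    from scipy.stats import beta

    k, n = 2, 10_000_000        # 2 errors observed in 1e7 decoding trials
    conf = 0.95

    lower = beta.ppf((1 - conf) / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - (1 - conf) / 2, k + 1, n - k)
    print(f"point estimate {k/n:.1e}, {conf:.0%} CI [{lower:.1e}, {upper:.1e}]")
    ```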

  3. WISCOD: A Statistical Web-Enabled Tool for the Identification of Significant Protein Coding Regions

    Directory of Open Access Journals (Sweden)

    Mireia Vilardell

    2014-01-01

    Full Text Available Classically, gene prediction programs are based on detecting signals such as boundary sites (splice sites, starts, and stops) and coding regions in the DNA sequence in order to build potential exons and join them into a gene structure. Although nowadays it is possible to improve their performance with additional information from related species or/and cDNA databases, further improvement at any step could help to obtain better predictions. Here, we present WISCOD, a web-enabled tool for the identification of significant protein coding regions that tackles the exon prediction problem in eukaryotic genomes. WISCOD has the capacity to detect real exons from large lists of potential exons, and it provides an easy-to-use global P value, called the expected probability of being a false exon (EPFE), that is useful for ranking potential exons in a probabilistic framework, without additional computational costs. The advantage of our approach is that it significantly increases specificity and sensitivity (both between 80% and 90%) in comparison to other ab initio methods (where they are in the range of 70–75%). WISCOD is written in JAVA and R and is available to download and to run in a local mode on Linux and Windows platforms.

  4. A New Method for Assessing the Statistical Significance in the Differential Functioning of Items and Tests (DFIT) Framework

    Science.gov (United States)

    Oshima, T. C.; Raju, Nambury S.; Nanda, Alice O.

    2006-01-01

    A new item parameter replication method is proposed for assessing the statistical significance of the noncompensatory differential item functioning (NCDIF) index associated with the differential functioning of items and tests framework. In this new method, a cutoff score for each item is determined by obtaining a (1-alpha ) percentile rank score…

  5. Statistical Significance of the Maximum Hardness Principle Applied to Some Selected Chemical Reactions.

    Science.gov (United States)

    Saha, Ranajit; Pan, Sudip; Chattaraj, Pratim K

    2016-11-05

    The validity of the maximum hardness principle (MHP) is tested in the cases of 50 chemical reactions, most of which are organic in nature and exhibit anomeric effect. To explore the effect of the level of theory on the validity of MHP in an exothermic reaction, B3LYP/6-311++G(2df,3pd) and LC-BLYP/6-311++G(2df,3pd) (def2-QZVP for iodine and mercury) levels are employed. Different approximations like the geometric mean of hardness and combined hardness are considered in case there are multiple reactants and/or products. It is observed that, based on the geometric mean of hardness, while 82% of the studied reactions obey the MHP at the B3LYP level, 84% of the reactions follow this rule at the LC-BLYP level. Most of the reactions possess the hardest species on the product side. A 50% null hypothesis is rejected at a 1% level of significance.

  6. Statistical Significance of the Maximum Hardness Principle Applied to Some Selected Chemical Reactions

    Directory of Open Access Journals (Sweden)

    Ranajit Saha

    2016-11-01

    Full Text Available The validity of the maximum hardness principle (MHP) is tested in the cases of 50 chemical reactions, most of which are organic in nature and exhibit anomeric effect. To explore the effect of the level of theory on the validity of MHP in an exothermic reaction, B3LYP/6-311++G(2df,3pd) and LC-BLYP/6-311++G(2df,3pd) (def2-QZVP for iodine and mercury) levels are employed. Different approximations like the geometric mean of hardness and combined hardness are considered in case there are multiple reactants and/or products. It is observed that, based on the geometric mean of hardness, while 82% of the studied reactions obey the MHP at the B3LYP level, 84% of the reactions follow this rule at the LC-BLYP level. Most of the reactions possess the hardest species on the product side. A 50% null hypothesis is rejected at a 1% level of significance.
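
    The geometric-mean bookkeeping used in both records above can be sketched in a few lines, with eta = (IP - EA)/2 as the working definition of hardness; the IP/EA values below are placeholders, not computed quantities:

    ```python
    # Geometric mean of hardness over reactants vs. products for an MHP check.
    from math import prod

    def hardness(ip_ev: float, ea_ev: float) -> float:
        """Chemical hardness eta = (IP - EA) / 2, in eV."""
        return (ip_ev - ea_ev) / 2.0

    def geometric_mean(values):
        return prod(values) ** (1.0 / len(values))

    reactants = [hardness(10.5, 0.8), hardness(9.7, 1.1)]  # hypothetical IP/EA pairs
    products = [hardness(11.2, 0.5)]

    print("MHP satisfied:", geometric_mean(products) > geometric_mean(reactants))
    ```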

  7. Significant Association of Urinary Toxic Metals and Autism-Related Symptoms—A Nonlinear Statistical Analysis with Cross Validation

    Science.gov (United States)

    Adams, James; Kruger, Uwe; Geis, Elizabeth; Gehn, Eva; Fimbres, Valeria; Pollard, Elena; Mitchell, Jessica; Ingram, Julie; Hellmers, Robert; Quig, David; Hahn, Juergen

    2017-01-01

    Introduction A number of previous studies examined a possible association of toxic metals and autism, and over half of those studies suggest that toxic metal levels are different in individuals with Autism Spectrum Disorders (ASD). Additionally, several studies found that those levels correlate with the severity of ASD. Methods In order to further investigate these points, this paper performs the most detailed statistical analysis to date of a data set in this field. First morning urine samples were collected from 67 children and adults with ASD and 50 neurotypical controls of similar age and gender. The samples were analyzed to determine the levels of 10 urinary toxic metals (UTM). Autism-related symptoms were assessed with eleven behavioral measures. Statistical analysis was used to distinguish participants on the ASD spectrum and neurotypical participants based upon the UTM data alone. The analysis also included examining the association of autism severity with toxic metal excretion data using linear and nonlinear analysis. “Leave-one-out” cross-validation was used to ensure statistical independence of results. Results and Discussion Average excretion levels of several toxic metals (lead, tin, thallium, antimony) were significantly higher in the ASD group. However, ASD classification using univariate statistics proved difficult due to large variability, but nonlinear multivariate statistical analysis significantly improved ASD classification with Type I/II errors of 15% and 18%, respectively. These results clearly indicate that the urinary toxic metal excretion profiles of participants in the ASD group were significantly different from those of the neurotypical participants. Similarly, nonlinear methods determined a significantly stronger association between the behavioral measures and toxic metal excretion. The association was strongest for the Aberrant Behavior Checklist (including subscales on Irritability, Stereotypy, Hyperactivity, and Inappropriate

  8. Significant Association of Urinary Toxic Metals and Autism-Related Symptoms-A Nonlinear Statistical Analysis with Cross Validation.

    Science.gov (United States)

    Adams, James; Howsmon, Daniel P; Kruger, Uwe; Geis, Elizabeth; Gehn, Eva; Fimbres, Valeria; Pollard, Elena; Mitchell, Jessica; Ingram, Julie; Hellmers, Robert; Quig, David; Hahn, Juergen

    2017-01-01

    A number of previous studies examined a possible association of toxic metals and autism, and over half of those studies suggest that toxic metal levels are different in individuals with Autism Spectrum Disorders (ASD). Additionally, several studies found that those levels correlate with the severity of ASD. In order to further investigate these points, this paper performs the most detailed statistical analysis to date of a data set in this field. First morning urine samples were collected from 67 children and adults with ASD and 50 neurotypical controls of similar age and gender. The samples were analyzed to determine the levels of 10 urinary toxic metals (UTM). Autism-related symptoms were assessed with eleven behavioral measures. Statistical analysis was used to distinguish participants on the ASD spectrum and neurotypical participants based upon the UTM data alone. The analysis also included examining the association of autism severity with toxic metal excretion data using linear and nonlinear analysis. "Leave-one-out" cross-validation was used to ensure statistical independence of results. Average excretion levels of several toxic metals (lead, tin, thallium, antimony) were significantly higher in the ASD group. However, ASD classification using univariate statistics proved difficult due to large variability, but nonlinear multivariate statistical analysis significantly improved ASD classification with Type I/II errors of 15% and 18%, respectively. These results clearly indicate that the urinary toxic metal excretion profiles of participants in the ASD group were significantly different from those of the neurotypical participants. Similarly, nonlinear methods determined a significantly stronger association between the behavioral measures and toxic metal excretion. The association was strongest for the Aberrant Behavior Checklist (including subscales on Irritability, Stereotypy, Hyperactivity, and Inappropriate Speech), but significant associations were found
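
    The "leave-one-out" cross-validation step can be sketched with standard tooling; the data below are a synthetic stand-in, and logistic regression stands in for the paper's classifier:

    ```python
    # Leave-one-out cross-validation for a two-group classification problem.
    import numpy as np
    from sklearn.model_selection import LeaveOneOut
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(117, 10))   # 117 participants x 10 urinary metals (synthetic)
    y = np.r_[np.ones(67, dtype=int), np.zeros(50, dtype=int)]   # 67 ASD, 50 controls

    correct = 0
    for train, test in LeaveOneOut().split(X):
        model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        correct += int(model.predict(X[test])[0] == y[test][0])

    print(f"leave-one-out accuracy: {correct / len(y):.2%}")
    ```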

  9. Methods for Determining the Statistical Significance of Enrichment or Depletion of Gene Ontology Classifications under Weighted Membership

    Directory of Open Access Journals (Sweden)

    Ernesto eIacucci

    2012-02-01

    Full Text Available High-throughput molecular biology studies, such as microarray assays of gene expression, two-hybrid experiments for detecting protein interactions, or ChIP-Seq experiments for transcription factor binding, often result in an interesting set of genes—say, genes that are co-expressed or bound by the same factor. One way of understanding the biological meaning of such a set is to consider what processes or functions, as defined in an ontology, are over-represented (enriched) or under-represented (depleted) among genes in the set. Usually, the significance of enrichment or depletion scores is based on simple statistical models and on the membership of genes in different classifications. We consider the more general problem of computing p-values for arbitrary integer additive statistics, or weighted membership functions. Such membership functions can be used to represent, for example, prior knowledge on the role of certain genes or classifications, differential importance of different classifications or genes to the experimenter, hierarchical relationships between classifications, or different degrees of interestingness or evidence for specific genes. We describe a generic dynamic programming algorithm that can compute exact p-values for arbitrary integer additive statistics. We also describe several optimizations for important special cases, which can provide orders-of-magnitude speed up in the computations. We apply our methods to datasets describing oxidative phosphorylation and parturition and compare p-values based on computations of several different statistics for measuring enrichment. We find major differences between p-values resulting from these statistics, and that some statistics recover gold standard annotations of the data better than others. Our work establishes a theoretical and algorithmic basis for far richer notions of enrichment or depletion of gene sets with respect to gene ontologies than has previously been available.
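
    The abstract does not spell out its dynamic program, but the standard construction for exact p-values of integer additive statistics is short enough to sketch. The code below assumes a null model in which the observed gene set is a uniform random subset of fixed size and weights are small non-negative integers; with 0/1 weights the statistic reduces to plain membership counting, so the hypergeometric tail serves as a sanity check.

        # Sketch: exact p-value for an integer additive statistic over uniform
        # random k-subsets, via 0/1 knapsack-style counting (an assumption about
        # the null model, not necessarily the paper's most general setting).
        import math

        def exact_pvalue(weights, k, observed):
            """P(S >= observed), where S is the weight sum of a random k-subset."""
            n, wmax = len(weights), sum(weights)
            # dp[j][s] = number of j-subsets of the genes seen so far with sum s
            dp = [[0] * (wmax + 1) for _ in range(k + 1)]
            dp[0][0] = 1
            for w in weights:
                for j in range(k, 0, -1):        # descend so each gene is used once
                    for s in range(wmax, w - 1, -1):
                        dp[j][s] += dp[j - 1][s - w]
            favourable = sum(dp[k][observed:])
            return favourable / math.comb(n, k)

        # Sanity check: 0/1 weights reduce to the hypergeometric tail
        # (40 annotated genes out of 200, a set of 20, at least 10 annotated).
        print(exact_pvalue([1] * 40 + [0] * 160, k=20, observed=10))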

  10. Thermodynamic stability and statistical significance of potential stem-loop structures situated at the frameshift sites of retroviruses.

    OpenAIRE

    Le, S.Y.; Chen, J H; Maizel, J. V.

    1989-01-01

    RNA stem-loop structures situated just 3' to the frameshift sites of the retroviral gag-pol or gag-pro and pro-pol regions may make important contributions to frame-shifting in retroviruses. In this study, the thermodynamic stability and statistical significance of such secondary structural features relative to others in the sequence have been assessed using a newly developed method that combines calculations of the lowest free energy of formation of RNA secondary structures and the Monte Car...

  11. Assessing Statistically Significant Heavy-Metal Concentrations in Abandoned Mine Areas via Hot Spot Analysis of Portable XRF Data

    Directory of Open Access Journals (Sweden)

    Sung-Min Kim

    2017-06-01

    Full Text Available To develop appropriate measures to prevent soil contamination in abandoned mining areas, an understanding of the spatial variation of the potentially toxic trace elements (PTEs) in the soil is necessary. For the purpose of effective soil sampling, this study uses hot spot analysis, which calculates a z-score based on the Getis-Ord Gi* statistic to identify a statistically significant hot spot sample. To constitute a statistically significant hot spot, a feature with a high value should also be surrounded by other features with high values. Using relatively cost- and time-effective portable X-ray fluorescence (PXRF) analysis, sufficient input data are acquired from the Busan abandoned mine and used for hot spot analysis. To calibrate the PXRF data, which have a relatively low accuracy, the PXRF analysis data are transformed using the inductively coupled plasma atomic emission spectrometry (ICP-AES) data. The transformed PXRF data of the Busan abandoned mine are classified into four groups according to their normalized content and z-scores: high content with a high z-score (HH), high content with a low z-score (HL), low content with a high z-score (LH), and low content with a low z-score (LL). The HL and LH cases may be due to measurement errors. Additional or complementary surveys are required for the areas surrounding these suspect samples or for significant hot spot areas. The soil sampling is conducted according to a four-phase procedure in which the hot spot analysis and proposed group classification method are employed to support the development of a sampling plan for the following phase. Overall, 30, 50, 80, and 100 samples are investigated and analyzed in phases 1–4, respectively. The method implemented in this case study may be utilized in the field for the assessment of statistically significant soil contamination and the identification of areas for which an additional survey is required.
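
    The Getis-Ord Gi* statistic driving this kind of hot spot analysis has a compact textbook form, sketched below; the coordinates, the simulated lead readings, and the 100 m distance band are illustrative assumptions rather than values from the study.

        # Sketch: Getis-Ord Gi* z-scores with binary distance-band weights,
        # following the standard Getis & Ord formulation. Synthetic data.
        import numpy as np

        def getis_ord_gi_star(coords, values, band):
            coords, x = np.asarray(coords, float), np.asarray(values, float)
            n = len(x)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            w = (d <= band).astype(float)        # Gi*: each site neighbours itself
            xbar, s = x.mean(), x.std()          # global mean and std (site i included)
            wi = w.sum(axis=1)                   # sum of weights per site
            num = w @ x - xbar * wi
            den = s * np.sqrt((n * (w ** 2).sum(axis=1) - wi ** 2) / (n - 1))
            return num / den                     # z-scores; |z| > 1.96 flags hot/cold spots

        rng = np.random.default_rng(1)
        pts = rng.uniform(0, 500, size=(30, 2))  # 30 sampling locations (m)
        pb = rng.lognormal(3, 1, size=30)        # e.g., PXRF lead readings (ppm)
        print(np.round(getis_ord_gi_star(pts, pb, band=100.0), 2))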

  12. Weighted Feature Significance: A Simple, Interpretable Model of Compound Toxicity Based on the Statistical Enrichment of Structural Features

    OpenAIRE

    Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.

    2009-01-01

    In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high–throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) dat...

  13. Adaptive management of the Great Barrier Reef: a globally significant demonstration of the benefits of networks of marine reserves.

    Science.gov (United States)

    McCook, Laurence J; Ayling, Tony; Cappo, Mike; Choat, J Howard; Evans, Richard D; De Freitas, Debora M; Heupel, Michelle; Hughes, Terry P; Jones, Geoffrey P; Mapstone, Bruce; Marsh, Helene; Mills, Morena; Molloy, Fergus J; Pitcher, C Roland; Pressey, Robert L; Russ, Garry R; Sutton, Stephen; Sweatman, Hugh; Tobin, Renae; Wachenfeld, David R; Williamson, David H

    2010-10-26

    The Great Barrier Reef (GBR) provides a globally significant demonstration of the effectiveness of large-scale networks of marine reserves in contributing to integrated, adaptive management. Comprehensive review of available evidence shows major, rapid benefits of no-take areas for targeted fish and sharks, in both reef and nonreef habitats, with potential benefits for fisheries as well as biodiversity conservation. Large, mobile species like sharks benefit less than smaller, site-attached fish. Critically, reserves also appear to benefit overall ecosystem health and resilience: outbreaks of coral-eating, crown-of-thorns starfish appear less frequent on no-take reefs, which consequently have higher abundance of coral, the very foundation of reef ecosystems. Effective marine reserves require regular review of compliance: fish abundances in no-entry zones suggest that even no-take zones may be significantly depleted due to poaching. Spatial analyses comparing zoning with seabed biodiversity or dugong distributions illustrate significant benefits from application of best-practice conservation principles in data-poor situations. Increases in the marine reserve network in 2004 affected fishers, but preliminary economic analysis suggests considerable net benefits, in terms of protecting environmental and tourism values. Relative to the revenue generated by reef tourism, current expenditure on protection is minor. Recent implementation of an Outlook Report provides regular, formal review of environmental condition and management and links to policy responses, key aspects of adaptive management. Given the major threat posed by climate change, the expanded network of marine reserves provides a critical and cost-effective contribution to enhancing the resilience of the Great Barrier Reef.

  14. The distribution of P-values in medical research articles suggested selective reporting associated with statistical significance.

    Science.gov (United States)

    Perneger, Thomas V; Combescure, Christophe

    2017-07-01

    Published P-values provide a window into the global enterprise of medical research. The aim of this study was to use the distribution of published P-values to estimate the relative frequencies of null and alternative hypotheses and to seek irregularities suggestive of publication bias. This cross-sectional study included P-values published in 120 medical research articles in 2016 (30 each from the BMJ, JAMA, Lancet, and New England Journal of Medicine). The observed distribution of P-values was compared with expected distributions under the null hypothesis (i.e., uniform between 0 and 1) and the alternative hypothesis (strictly decreasing from 0 to 1). P-values were categorized according to conventional levels of statistical significance and in one-percent intervals. Among 4,158 recorded P-values, 26.1% were highly significant (P < 0.001); the remaining values fell across the conventional bands P ≥ 0.001 to < 0.01, P ≥ 0.01 to < 0.05, and P ≥ 0.05. We noted three irregularities: (1) a high proportion of very small P-values, (2) an excess of P-values equal to 1, and (3) about twice as many P-values less than 0.05 compared with those more than 0.05. The latter finding was seen in both randomized trials and observational studies, and in most types of analyses, excepting heterogeneity tests and interaction tests. Under plausible assumptions, we estimate that about half of the tested hypotheses were null and the other half were alternative. This analysis suggests that statistical tests published in medical journals are not a random sample of null and alternative hypotheses but that selective reporting is prevalent. In particular, significant results are about twice as likely to be reported as nonsignificant results. Copyright © 2017 Elsevier Inc. All rights reserved.
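
    The comparison the authors describe, observed P-values against a uniform null and a decreasing alternative, is easy to mimic in simulation. The 50/50 null/alternative mixture, effect size, and sample size below are arbitrary assumptions chosen only to show how P-values accumulate in the conventional bands.

        # Sketch: P-value distribution under a 50/50 mixture of true nulls and
        # Gaussian-shift alternatives (two-sided z-tests; all settings arbitrary).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        m, n, delta = 10_000, 100, 0.3
        null = rng.standard_normal((m // 2, n))
        alt = rng.standard_normal((m // 2, n)) + delta
        z = np.r_[null.mean(axis=1), alt.mean(axis=1)] * np.sqrt(n)
        p = 2 * stats.norm.sf(np.abs(z))               # two-sided P-values

        bands = [0.0, 0.001, 0.01, 0.05, 1.0]          # conventional cut-points
        counts, _ = np.histogram(p, bins=bands)        # last band includes P = 1
        for lo, hi, f in zip(bands, bands[1:], counts / m):
            print(f"{lo:g} <= P < {hi:g}: {f:.1%}")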

  15. Statistically significant dependence of the Xaa-Pro peptide bond conformation on secondary structure and amino acid sequence

    Directory of Open Access Journals (Sweden)

    Leitner Dietmar

    2005-04-01

    Full Text Available Background A reliable prediction of the Xaa-Pro peptide bond conformation would be a useful tool for many protein structure calculation methods. We have analyzed the Protein Data Bank and show that the combined use of sequential and structural information has a predictive value for the assessment of the cis versus trans peptide bond conformation of Xaa-Pro within proteins. For the analysis of the data sets, different statistical methods such as the calculation of the Chou-Fasman parameters and occurrence matrices were used. Furthermore, we analyzed the relationship between the relative solvent accessibility and the relative occurrence of prolines in the cis and in the trans conformation. Results One of the main results of the statistical investigations is the ranking of the secondary structure and sequence information with respect to the prediction of the Xaa-Pro peptide bond conformation. We observed a significant impact of secondary structure information on the occurrence of the Xaa-Pro peptide bond conformation, while the sequence information of amino acids neighboring proline is of little predictive value for the conformation of this bond. Conclusion In this work, we present an extensive analysis of the occurrence of the cis and trans proline conformation in proteins. Based on the data set, we derived patterns and rules for a possible prediction of the proline conformation. Upon adoption of the Chou-Fasman parameters, we are able to derive statistically relevant correlations between the secondary structure of amino acid fragments and the Xaa-Pro peptide bond conformation.

  16. Search for semileptonic decays of photoproduced charmed mesons. [100 to 300 GeV, no statistically significant evidence]

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, R. N.

    1977-01-01

    In the broad band neutral beam at Fermilab, a search for photoproduction of charmed D mesons was done using photons of 100 to 300 GeV. The reaction considered was γ + Be → DD̄ + X, with D → leptons + ..., K⁰_S nπ±. No statistically significant evidence for D production is observed based on the K⁰_S nπ± mass spectrum. The sensitivity of the search is commensurate with theoretical estimates of σ(γp → DD̄ + X) ≈ 500 nb; however, this is dependent on branching ratios and photoproduction models. Data are given on a similar search for semileptonic decays of charmed baryons. 48 references.

  17. The Statistical Significance Test of Regional Climate Change Caused by Land Use and Land Cover Variation in West China

    Institute of Scientific and Technical Information of China (English)

    WANG Hanjie; SHI Weilai; CHEN Xiaohong

    2006-01-01

    The West Development Policy being implemented in China is causing significant land use and land cover (LULC) changes in West China. With the up-to-date satellite database of the Global Land Cover Characteristics Database (GLCCD) that characterizes the lower boundary conditions, the regional climate model RIEMS-TEA is used to simulate possible impacts of the significant LULC variation. The model was run for five continuous three-month periods from 1 June to 1 September of 1993, 1994, 1995, 1996, and 1997, and the results of the five groups are examined by means of a Student's t-test to identify the statistical significance of regional climate variation. The main results are: (1) The regional climate is affected by the LULC variation because the equilibrium of water and heat transfer in the air-vegetation interface is changed. (2) The integrated impact of the LULC variation on regional climate is not limited to West China where the LULC varies, but extends to some areas in the model domain where the LULC does not vary at all. (3) The East Asian monsoon system and its vertical structure are adjusted by the large scale LULC variation in western China, where the consequences are the enhancement of the westward water vapor transfer from the east coast and the corresponding increase of wet-hydrostatic energy in the middle-upper atmospheric layers. (4) The ecological engineering in West China significantly affects the regional climate in Northwest China, North China and the middle-lower reaches of the Yangtze River; there are obvious effects in South, Northeast, and Southwest China, but minor effects in Tibet.

  18. Orienting response elicitation by personally significant information under subliminal stimulus presentation: demonstration using the concealed information test.

    Science.gov (United States)

    Maoz, Keren; Breska, Assaf; Ben-Shakhar, Gershon

    2012-12-01

    Considerable evidence suggests that subliminal information can trigger cognitive and neural processes. Here, we examined whether elicitation of an orienting response by personally significant (PS) verbal information requires conscious awareness of the input. Subjects were exposed to the Concealed Information Test (CIT), in which autonomic responses for autobiographical items are typically larger than for control items. These items were presented subliminally using two different masking protocols: single or multiple presentation of the masked item. An objective test was used to verify unawareness of the stimuli. As predicted, PS items elicited significantly stronger skin conductance responses than the control items in both exposure conditions. The results extend previous findings showing that autonomic responses can be elicited following subliminal exposure to aversive information, and may also have implications for the applied use of the CIT.

  19. A clip-based protocol for breast boost radiotherapy provides clear target visualisation and demonstrates significant volume reduction over time

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Lorraine [Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales (Australia); Cox, Jennifer [Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales (Australia); Faculty of Health Sciences, University of Sydney, Sydney, New South Wales (Australia); Morgia, Marita [Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales (Australia); Atyeo, John [Faculty of Health Sciences, University of Sydney, Sydney, New South Wales (Australia); Lamoury, Gillian [Department of Radiation Oncology, Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales (Australia)

    2015-09-15

    The clinical target volume (CTV) for early stage breast cancer is difficult to clearly identify on planning computed tomography (CT) scans. Surgical clips inserted around the tumour bed should help to identify the CTV, particularly if the seroma has been reabsorbed, and enable tracking of CTV changes over time. A surgical clip-based CTV delineation protocol was introduced. CTV visibility and its post-operative shrinkage pattern were assessed. The subjects were 27 early stage breast cancer patients receiving post-operative radiotherapy alone and 15 receiving post-operative chemotherapy followed by radiotherapy. The radiotherapy alone (RT/alone) group received a CT scan at median 25 days post-operatively (CT1rt) and another at 40 Gy, median 68 days (CT2rt). The chemotherapy/RT group (chemo/RT) received a CT scan at median 18 days post-operatively (CT1ch), a planning CT scan at median 126 days (CT2ch), and another at 40 Gy (CT3ch). There was no significant difference (P = 0.08) between the initial mean CTV for each cohort. The RT/alone cohort showed significant CTV volume reduction of 38.4% (P = 0.01) at 40 Gy. The chemo/RT cohort had significantly reduced volumes between CT1ch: median 54 cm³ (4–118) and CT2ch: median 16 cm³ (2–99) (P = 0.01), but no significant volume reduction thereafter. Surgical clips enable localisation of the post-surgical seroma for radiotherapy targeting. Most seroma shrinkage occurs early, enabling CT treatment planning to take place at 7 weeks, which is within the 9 weeks recommended to limit disease recurrence.

  20. Statistical significance of rising and oscillatory trends in global ocean and land temperature in the past 160 years

    CERN Document Server

    Østvand, Lene; Rypdal, Martin

    2013-01-01

    Various interpretations of the notion of a trend in the context of global warming are discussed, contrasting the difference between viewing a trend as the deterministic response to an external forcing and viewing it as a slow variation which can be separated from the background spectral continuum of long-range persistent climate noise. The emphasis in this paper is on the latter notion, and a general scheme is presented for testing a multi-parameter trend model against a null hypothesis which models the observed climate record as an autocorrelated noise. The scheme is applied to the instrumental global sea-surface temperature record and the global land-temperature record. A trend model comprising a linear plus an oscillatory trend with period of approximately 60 yr, and the statistical significance of the trends, are tested against three different null models: first-order autoregressive process, fractional Gaussian noise, and fractional Brownian motion. The linear trend is significant in all cases, but the o...
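
    A rough sketch of such a test is given below for the simplest of the three null models named, a first-order autoregressive process: fit the linear-plus-oscillatory trend, estimate AR(1) parameters from the residuals, and compare the observed explained variance against surrogate records. The synthetic series, noise level, and surrogate count are assumptions for illustration.

        # Sketch: Monte Carlo significance of a linear + ~60-yr oscillatory trend
        # against an AR(1) null. Synthetic annual series; not the paper's data.
        import numpy as np

        def fit_trend(t, y, period=60.0):
            """Least squares for intercept + linear term + 60-yr oscillation."""
            A = np.c_[np.ones_like(t), t,
                      np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)]
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
            return 1.0 - resid.var() / y.var(), resid   # explained variance, residuals

        rng = np.random.default_rng(2)
        t = np.arange(160.0)                             # ~160-yr record, annual steps
        y = 0.005 * t + 0.1 * np.sin(2 * np.pi * t / 60) + 0.15 * rng.standard_normal(160)

        r2_obs, resid = fit_trend(t, y)
        phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # AR(1) persistence estimate
        sigma = resid.std() * np.sqrt(1.0 - phi ** 2)

        def ar1_surrogate(size):
            x = np.zeros(size)
            for i in range(1, size):                     # trend-free AR(1) realization
                x[i] = phi * x[i - 1] + rng.normal(0.0, sigma)
            return x

        r2_null = np.array([fit_trend(t, ar1_surrogate(160))[0] for _ in range(999)])
        print("Monte Carlo p-value:", (1 + np.sum(r2_null >= r2_obs)) / 1000.0)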

  1. Macro-indicators of citation impacts of six prolific countries: InCites data and the statistical significance of trends.

    Directory of Open Access Journals (Sweden)

    Lutz Bornmann

    Full Text Available Using the InCites tool of Thomson Reuters, this study compares normalized citation impact values calculated for China, Japan, France, Germany, United States, and the UK throughout the time period from 1981 to 2010. InCites offers a unique opportunity to study the normalized citation impacts of countries using (i) a long publication window (1981 to 2010), (ii) a differentiation in (broad or more narrow) subject areas, and (iii) allowing for the use of statistical procedures in order to obtain an insightful investigation of national citation trends across the years. Using four broad categories, our results show significantly increasing trends in citation impact values for France, the UK, and especially Germany across the last thirty years in all areas. The citation impact of papers from China is still at a relatively low level (mostly below the world average), but the country follows an increasing trend line. The USA exhibits a stable pattern of high citation impact values across the years. With small impact differences between the publication years, the US trend is increasing in engineering and technology but decreasing in medical and health sciences as well as in agricultural sciences. Similar to the USA, Japan follows increasing as well as decreasing trends in different subject areas, but the variability across the years is small. In most of the years, papers from Japan perform below or approximately at the world average in each subject area.

  2. Is Quality/Effectiveness An Empirically Demonstrable School Attribute? Statistical Aids for Determining Appropriate Levels of Analysis.

    Science.gov (United States)

    Griffith, James

    2002-01-01

    Describes and demonstrates analytical techniques used in organizational psychology and contemporary multilevel analysis. Using these analytic techniques, examines the relationship between educational outcomes and the school environment. Finds that at least some indicators might be represented as school-level phenomena. Results imply that the…

  3. Statistical Demonstration of the Three Experimental Gas Laws

    Institute of Scientific and Technical Information of China (English)

    邓发明

    2011-01-01

    From the perspective of the kinetic theory of gases and the ideal gas molecular model, this paper statistically proves the three experimental laws of the college physics course, namely Boyle's Law, Charles's Law, and Gay-Lussac's Law, using the Maxwell velocity distribution.
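
    The statistical route is standard kinetic theory. As a sketch in generic notation (not necessarily the paper's own), the Maxwell velocity distribution fixes the mean square of one velocity component, the wall-collision argument converts that into pressure, and the three experimental laws follow as special cases of the resulting ideal gas law:

        % Mean square of one velocity component under the Maxwell distribution,
        % followed by the kinetic-theory pressure (number density n = N/V):
        \begin{align}
          \langle v_x^2 \rangle
            &= \int_{-\infty}^{\infty} v_x^2\,
               \sqrt{\frac{m}{2\pi kT}}\, e^{-m v_x^2/2kT}\,\mathrm{d}v_x
             = \frac{kT}{m}, \\
          p &= n m \langle v_x^2 \rangle = nkT
            \quad\Longrightarrow\quad pV = NkT.
        \end{align}
        % Boyle (T fixed): pV = const;  Charles (p fixed): V \propto T;
        % Gay-Lussac (V fixed): p \propto T.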

  4. A randomized trial in a massive online open course shows people don’t know what a statistically significant relationship looks like, but they can learn

    Directory of Open Access Journals (Sweden)

    Aaron Fisher

    2014-10-01

    Full Text Available Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%–49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%–76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.

  5. A randomized trial in a massive online open course shows people don't know what a statistically significant relationship looks like, but they can learn.

    Science.gov (United States)

    Fisher, Aaron; Anderson, G Brooke; Peng, Roger; Leek, Jeff

    2014-01-01

    Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%-49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%-76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.

  6. A randomized trial in a massive online open course shows people don’t know what a statistically significant relationship looks like, but they can learn

    Science.gov (United States)

    Fisher, Aaron; Anderson, G. Brooke; Peng, Roger

    2014-01-01

    Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%–49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%–76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/. PMID:25337457
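
    The task put to the MOOC students is easy to recreate: generate a scatterplot-sized sample with a small true effect and check whether the fitted relationship clears P < 0.05. The sample size and slope below are arbitrary assumptions.

        # Sketch: one simulated "scatterplot" and its significance at the 0.05 level.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n, slope = 50, 0.25                      # arbitrary sample size and effect
        x = rng.standard_normal(n)
        y = slope * x + rng.standard_normal(n)

        r, p = stats.pearsonr(x, y)
        verdict = "significant" if p < 0.05 else "not significant"
        print(f"r = {r:.2f}, P = {p:.3f} ({verdict} at the 0.05 level)")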

  7. Prognostic Significance of Perineural Invasion in Patients with Rectal Cancer using R Environment for Statistical Computing and Graphics

    Directory of Open Access Journals (Sweden)

    Ioan Catalin VLAD

    2012-11-01

    Full Text Available Purpose: In recent studies perineural invasion (PNI) is associated with poor survival rates in rectal cancer, but the impact of PNI is still controversial. We assessed PNI as a potential prognostic factor in rectal cancer. Patients and Methods: We analyzed 317 patients with rectal cancer resected at The Oncology Institute “Prof. Dr. Ion Chiricuţă” Cluj-Napoca, between January 2000 and December 2008. Tumors were reviewed for PNI by a pathologist. Patient data were reviewed and entered into a comprehensive database. The statistical analysis in our study was carried out in the R environment for statistical computing and graphics, version 1.15.1. Overall and disease-free survivals were determined using the Kaplan-Meier method, and multivariate analysis using the Cox multiple hazards model. Results were compared using the log-rank test. Results: In our study PNI was identified in 19% of tumors. The 5-year disease-free survival rate was higher for patients with PNI-negative tumors versus those with PNI-positive tumors (57.31% vs. 36.99%, p=0.009). The 5-year overall survival rate was 59.15% for PNI-negative tumors versus 39.19% for PNI-positive tumors (p=0.014). On multivariate analysis, PNI was an independent prognostic factor for overall survival (Hazard Ratio = 0.6; 95% CI = 0.41 to 0.87; p = 0.0082). Conclusions: PNI can be considered an independent prognostic factor of outcomes in patients with rectal cancer. PNI should be taken into account when selecting patients for adjuvant treatment. The R environment for statistical computing and graphics is complex yet easy-to-use software that has proven efficient in our clinical study.

  8. Statistical significance of non-reproducibility of cross sections measured in dissipative reactions 19F+93Nb

    Institute of Scientific and Technical Information of China (English)

    DONG Yu-Chuan; JIANG Hua; HU Gui-Qing; WANG Qi; LI Song-Lin; TIAN Wen-Dong; LI Zhi-Chang; LU Xiu-Qin; ZHAO Kui; FU Chang-Bo; LIU Jian-Cheng

    2004-01-01

    Two independent measurements of cross sections for the 19F+93Nb dissipative heavy-ion collision (DHIC) have been performed at incident energies from 100 to 108 MeV in steps of 250 keV. Two independently prepared targets were used respectively with all other experimental conditions being identical in both experiments. The data indicate non-reproducibility of the non-self-averaging oscillation yields in the two measurements. The statistical analysis of this non-reproducibility supports recent theoretical predictions of spontaneous coherence, slow phase randomization and extreme sensitivity in highly excited quantum many-body systems.

  9. Ecophysiological significance of scale-dependent patterns in prokaryotic genomes unveiled by a combination of statistic and genometric analyses.

    Science.gov (United States)

    Garcia, Juan A L; Bartumeus, Frederic; Roche, David; Giraldo, Jesús; Stanley, H Eugene; Casamayor, Emilio O

    2008-06-01

    We combined genometric (DNA walks) and statistical (detrended fluctuation analysis) methods on 456 prokaryotic chromosomes from 309 different bacterial and archaeal species to look for specific patterns and long-range correlations along the genome and relate them to ecological lifestyles. The position of each nucleotide along the complete genome sequence was plotted on an orthogonal plane (DNA landscape), and fluctuation analysis applied to the DNA walk series showed a long-range correlation in contrast to the lack of correlation for artificially generated genomes. Different features in the DNA landscapes among genomes from different ecological and metabolic groups of prokaryotes appeared with the combined analysis. Transition from hyperthermophilic to psychrophilic environments could have been related to more complex structural adaptations in microbial genomes, whereas for other environmental factors such as pH and salinity this effect would have been smaller. Prokaryotes with domain-specific metabolisms, such as photoautotrophy in Bacteria and methanogenesis in Archaea, showed consistent differences in genome correlation structure. Overall, we show that, beyond the relative proportion of nucleotides, correlation properties derived from their sequential position within the genome hide relevant phylogenetic and ecological information. This can be studied by combining genometric and statistical physics methods, leading to a reduction of genome complexity to a few useful descriptors.
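
    The two ingredients named here, a DNA walk and detrended fluctuation analysis (DFA), fit in a short sketch. The random sequence and the purine/pyrimidine ±1 mapping are illustrative assumptions; an uncorrelated sequence should give a DFA exponent near 0.5, while larger values signal long-range correlation.

        # Sketch: purine/pyrimidine DNA walk and its DFA scaling exponent.
        import numpy as np

        def dfa_exponent(profile, windows):
            """RMS fluctuation after linear detrending, fit as F(w) ~ w**alpha."""
            F = []
            for w in windows:
                n = len(profile) // w
                segs = profile[: n * w].reshape(n, w)
                t = np.arange(w)
                ms = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
                      for s in segs]             # detrended variance per window
                F.append(np.sqrt(np.mean(ms)))
            alpha, _ = np.polyfit(np.log(windows), np.log(F), 1)
            return alpha

        rng = np.random.default_rng(4)
        seq = rng.choice(list("ACGT"), size=20_000)        # random "genome"
        steps = np.where(np.isin(seq, list("AG")), 1, -1)  # purine +1, pyrimidine -1
        walk = np.cumsum(steps)                            # the DNA-walk profile
        print("DFA exponent:", round(dfa_exponent(walk, [16, 32, 64, 128, 256, 512]), 2))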

  10. Statistical versus Musical Significance: Commentary on Leigh VanHandel's 'National Metrical Types in Nineteenth Century Art Song'

    Directory of Open Access Journals (Sweden)

    Justin London

    2010-01-01

    Full Text Available In “National Metrical Types in Nineteenth Century Art Song” Leigh Van Handel gives a sympathetic critique of William Rothstein’s claim that in western classical music of the late 18th and 19th centuries there are discernable differences in the phrasing and metrical practice of German versus French and Italian composers. This commentary (a) examines just what Rothstein means in terms of his proposed metrical typology, (b) questions Van Handel on how she has applied it to a purely melodic framework, (c) amplifies Van Handel’s critique of Rothstein, and then (d) concludes with a rumination on the reach of quantitative (i.e., statistically driven) versus qualitative claims regarding such things as “national metrical types.”

  11. Statistic characteristics and weather significance of infrared TBB during May–August in Beijing and its vicinity

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In order to meet the demand of nowcasting convective storms in Beijing, the climatological characteristics of convective storms in Beijing and its vicinity were analyzed based on the infrared (IR) temperature of black body (TBB) data during May–August of 1997–2004. The climatological probabilities, the diurnal cycle and the spatial distribution of convective storms are given respectively in this paper. The results show that the climatological characteristics of convective storms denoted by TBB≤-52℃ are consistent with those of statistical studies based on surface and lightning observations. Furthermore, the climatological characteristics of May and June are very different from those of July and August, showing that there are two types of convective storms in this region. One occurs in the transient polar air mass on the midlatitude continent during the late spring and early summer; this type of convection arises with thunder, strong wind gusts and hail over the mountainous area in the northern part of this region from afternoon to nightfall. The other occurs with heavy rainfall in the warm and moist air mass over the North China Plain and the vicinity of the Bohai Sea. This study also shows that the long-term IR TBB data observed by geostationary satellites can compensate for the temporal and spatial limitations of weather radar and surface observations.

  12. Statistical and molecular analyses of evolutionary significance of red-green color vision and color blindness in vertebrates.

    Science.gov (United States)

    Yokoyama, Shozo; Takenaka, Naomi

    2005-04-01

    Red-green color vision is strongly suspected to enhance the survival of its possessors. Despite being red-green color blind, however, many species have successfully competed in nature, which brings into question the evolutionary advantage of achieving red-green color vision. Here, we propose a new method of identifying positive selection at individual amino acid sites with the premise that if positive Darwinian selection has driven the evolution of the protein under consideration, then it should be found mostly at the branches in the phylogenetic tree where its function had changed. The statistical and molecular methods have been applied to 29 visual pigments with the wavelengths of maximal absorption at approximately 510-540 nm (green- or middle wavelength-sensitive [MWS] pigments) and at approximately 560 nm (red- or long wavelength-sensitive [LWS] pigments), which are sampled from a diverse range of vertebrate species. The results show that the MWS pigments are positively selected through amino acid replacements S180A, Y277F, and T285A and that the LWS pigments have been subjected to strong evolutionary conservation. The fact that these positively selected M/LWS pigments are found not only in animals with red-green color vision but also in those with red-green color blindness strongly suggests that both red-green color vision and color blindness have undergone adaptive evolution independently in different species.

  13. Analytic estimation of statistical significance maps for support vector machine based multi-variate image analysis and classification

    OpenAIRE

    Gaonkar, Bilwaj; Davatzikos, Christos

    2013-01-01

    Multivariate pattern analysis (MVPA) methods such as support vector machines (SVMs) have been increasingly applied to fMRI and sMRI analyses, enabling the detection of distinctive imaging patterns. However, identifying brain regions that significantly contribute to the classification/group separation requires computationally expensive permutation testing. In this paper we show that the results of SVM-permutation testing can be analytically approximated. This approximation leads to more than a...

  14. Analytic estimation of statistical significance maps for support vector machine based multi-variate image analysis and classification.

    Science.gov (United States)

    Gaonkar, Bilwaj; Davatzikos, Christos

    2013-09-01

    Multivariate pattern analysis (MVPA) methods such as support vector machines (SVMs) have been increasingly applied to fMRI and sMRI analyses, enabling the detection of distinctive imaging patterns. However, identifying brain regions that significantly contribute to the classification/group separation requires computationally expensive permutation testing. In this paper we show that the results of SVM-permutation testing can be analytically approximated. This approximation leads to more than a thousandfold speedup of the permutation testing procedure, thereby rendering it feasible to perform such tests on standard computers. The speedup achieved makes SVM based group difference analysis competitive with standard univariate group difference analysis methods.
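
    The procedure being approximated, label-permutation testing of linear SVM weight maps, looks roughly like the sketch below on synthetic data; the array sizes, permutation count, and planted signal are assumptions, and the paper's analytic shortcut replaces exactly this loop.

        # Sketch: permutation null for per-feature linear-SVM weights. Synthetic
        # "subjects x voxels" data; real fMRI/sMRI inputs would replace X and y.
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(5)
        n, d = 60, 200
        X = rng.standard_normal((n, d))
        y = np.repeat([0, 1], n // 2)
        X[y == 1, :5] += 0.8                      # five truly informative voxels

        def svm_weights(labels):
            return LinearSVC(C=1.0, max_iter=10_000).fit(X, labels).coef_[0]

        w_obs = svm_weights(y)
        null = np.array([svm_weights(rng.permutation(y)) for _ in range(200)])
        # two-sided per-voxel permutation p-values (with the usual +1 correction)
        pvals = (1 + (np.abs(null) >= np.abs(w_obs)).sum(axis=0)) / (1 + len(null))
        print("voxels with p < 0.05:", np.where(pvals < 0.05)[0][:10])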

  15. Analysis/plot generation code with significance levels computed using Kolmogorov-Smirnov statistics valid for both large and small samples

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    This report describes a version of the TERPED/P computer code that is very useful for small data sets. A new algorithm for determining the Kolmogorov-Smirnov (KS) statistics is used to extend program applicability. The TERPED/P code facilitates the analysis of experimental data and assists the user in determining its probability distribution function. Graphical and numerical tests are performed interactively in accordance with the user's assumption of normally or log-normally distributed data. Statistical analysis options include computation of the chi-square statistic and the KS one-sample test statistic and the corresponding significance levels. Cumulative probability plots of the user's data are generated either via a local graphics terminal, a local line printer or character-oriented terminal, or a remote high-resolution graphics device such as the FR80 film plotter or the Calcomp paper plotter. Several useful computer methodologies suffer from limitations of their implementations of the KS nonparametric test. This test is one of the more powerful analysis tools for examining the validity of an assumption about the probability distribution of a set of data. KS algorithms are found in other analysis codes, including the Statistical Analysis Subroutine (SAS) package and earlier versions of TERPED. The inability of these algorithms to generate significance levels for sample sizes less than 50 has limited their usefulness. The release of the TERPED code described herein contains algorithms to allow computation of the KS statistic and significance level for data sets of, if the user wishes, as few as three points. Values computed for the KS statistic are within 3% of the correct value for all data set sizes.
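
    The small-sample capability that motivated TERPED/P is available in today's standard libraries; as a hedged example, recent versions of SciPy expose an exact mode for the one-sample KS test, which works even for three data points:

        # Sketch: one-sample KS test against N(5, 1) with an exact p-value.
        from scipy import stats

        data = [4.1, 5.2, 6.3]                   # as few as three points
        res = stats.kstest(data, "norm", args=(5.0, 1.0), method="exact")
        print(res.statistic, res.pvalue)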

  16. The Role of Baryons in Creating Statistically Significant Planes of Satellites around Milky Way-Mass Galaxies

    CERN Document Server

    Ahmed, Sheehan H; Christensen, Charlotte R

    2016-01-01

    We investigate whether the inclusion of baryonic physics influences the formation of thin, coherently rotating planes of satellites such as those seen around the Milky Way and Andromeda. For four Milky Way-mass simulations, each run both as dark matter-only and with baryons included, we are able to identify a planar configuration that significantly maximizes the number of plane satellite members. The maximum plane member satellites are consistently different between the dark matter-only and baryonic versions of the same run due to the fact that satellites are both more likely to be destroyed and to infall later in the baryonic runs. Hence, studying satellite planes in dark matter-only simulations is misleading, because they will be composed of different satellite members than those that would exist if baryons were included. Additionally, the destruction of satellites in the baryonic runs leads to less radially concentrated satellite distributions, a result that is critical to making planes that are statistica...

  17. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    NARCIS (Netherlands)

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, Fisher’s statistic is clearly more sensitive to smaller p-values than to larger ones, and a single small p-value may overrule the other p-values.
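
    The sensitivity being criticised is easy to demonstrate with SciPy's implementation of Fisher's method, where one tiny p-value dominates an otherwise unremarkable set (the p-values below are invented for illustration):

        # Sketch: Fisher's combined test, -2 * sum(ln p) ~ chi-square(2k).
        from scipy import stats

        balanced = [0.20, 0.20, 0.20, 0.20]
        one_small = [0.0001, 0.60, 0.60, 0.60]
        for ps in (balanced, one_small):
            stat, p = stats.combine_pvalues(ps, method="fisher")
            print(ps, "-> combined p =", round(p, 4))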

  18. Comparison of US and FRG post-irradiation examination procedures to measure statistically significant failure fractions of irradiated coated-particle fuels. [HTGR

    Energy Technology Data Exchange (ETDEWEB)

    Kania, M.J.; Homan, F.J.; Mehner, A.W.

    1982-08-01

    Two methods for measuring failure fraction on irradiated coated-particle fuels have been developed, one in the United States (the IMGA system - Irradiated-Microsphere Gamma Analyzer) and one in the Federal Republic of Germany (FRG) (the PIAA procedure - Postirradiation Annealing and Beta Autoradiography). A comparison of the two methods on two standardized sets of irradiated particles was undertaken to evaluate the accuracy, operational procedures, and expense of each method in obtaining statistically significant results. From the comparison, the postirradiation examination method employing the IMGA system was found to be superior to the PIAA procedure for measuring statistically significant failure fractions. Both methods require that the irradiated fuel be in the form of loose particles, each requires extensive remote hot-cell facilities, and each is capable of physically separating failed particles from unfailed particles. Important differences noted in the comparison are described.

  19. A novel complete-case analysis to determine statistical significance between treatments in an intention-to-treat population of randomized clinical trials involving missing data.

    Science.gov (United States)

    Liu, Wei; Ding, Jinhui

    2016-05-25

    The application of the principle of the intention-to-treat (ITT) to the analysis of clinical trials is challenged in the presence of missing outcome data. The consequences of stopping an assigned treatment in a withdrawn subject are unknown. It is difficult to make a single assumption about missing mechanisms for all clinical trials because there are complicated reactions in the human body to drugs due to the presence of complex biological networks, leading to data missing randomly or non-randomly. Currently there is no statistical method that can tell whether a difference between two treatments in the ITT population of a randomized clinical trial with missing data is significant at a pre-specified level. Making no assumptions about the missing mechanisms, we propose a generalized complete-case (GCC) analysis based on the data of completers. An evaluation of the impact of missing data on the ITT analysis reveals that a statistically significant GCC result implies a significant treatment effect in the ITT population at a pre-specified significance level unless, relative to the comparator, the test drug is poisonous to the non-completers as documented in their medical records. Applications of the GCC analysis are illustrated using literature data, and its properties and limits are discussed.

  20. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: an SPSS method to analyze univariate data.

    Science.gov (United States)

    Maric, Marija; de Haan, Else; Hogendoorn, Sanne M; Wolters, Lidewij H; Huizenga, Hilde M

    2015-03-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a data-analytic method to analyze univariate (i.e., one symptom) single-case data using the common package SPSS. This method can help the clinical researcher to investigate whether an intervention works as compared with a baseline period or another intervention type, and to determine whether symptom improvement is clinically significant. First, we describe the statistical method in a conceptual way and show how it can be implemented in SPSS. Simulation studies were performed to determine the number of observation points required per intervention phase. Second, to illustrate this method and its implications, we present a case study of an adolescent with anxiety disorders treated with cognitive-behavioral therapy techniques in an outpatient psychotherapy clinic, whose symptoms were regularly assessed before each session. We provide a description of the data analyses and results of this case study. Finally, we discuss the advantages and shortcomings of the proposed method. Copyright © 2014. Published by Elsevier Ltd.

  1. Quantum mechanically based estimation of perturbed-chain polar statistical associating fluid theory parameters for analyzing their physical significance and predicting properties.

    Science.gov (United States)

    Nhu, Nguyen Van; Singh, Mahendra; Leonhard, Kai

    2008-05-08

    We have computed molecular descriptors for sizes, shapes, charge distributions, and dispersion interactions for 67 compounds using quantum chemical ab initio and density functional theory methods. For the same compounds, we have fitted the three perturbed-chain polar statistical associating fluid theory (PCP-SAFT) equation of state (EOS) parameters to experimental data and have performed a statistical analysis for relations between the descriptors and the EOS parameters. On this basis, an analysis of the physical significance of the parameters, the limits of the present descriptors, and the PCP-SAFT EOS has been performed. The result is a method that can be used to estimate the vapor pressure curve including the normal boiling point, the liquid volume, the enthalpy of vaporization, the critical data, mixture properties, and so on. When only two of the three parameters are predicted and one is adjusted to experimental normal boiling point data, excellent predictions of all investigated pure compound and mixture properties are obtained. We are convinced that the methodology presented in this work will lead to new EOS applications as well as improved EOS models whose predictive performance is likely to surpass that of most present quantum chemically based, quantitative structure-property relationship, and group contribution methods for a broad range of chemical substances.

  2. Clinical progress of human papillomavirus genotypes and their persistent infection in subjects with atypical squamous cells of undetermined significance cytology: Statistical and latent Dirichlet allocation analysis.

    Science.gov (United States)

    Kim, Yee Suk; Lee, Sungin; Zong, Nansu; Kahng, Jimin

    2017-06-01

    The present study aimed to investigate differences in prognosis based on human papillomavirus (HPV) infection, persistent infection and genotype variations for patients exhibiting atypical squamous cells of undetermined significance (ASCUS) in their initial Papanicolaou (PAP) test results. A latent Dirichlet allocation (LDA)-based tool was developed that may facilitate communication during patient-doctor consultations. The present study assessed 491 patients (139 HPV-positive and 352 HPV-negative cases) with a PAP test result of ASCUS with a follow-up period ≥2 years. Patients underwent PAP and HPV DNA chip tests between January 2006 and January 2009. The HPV-positive subjects were followed up with at least 2 instances of PAP and HPV DNA chip tests. The most common genotypes observed were HPV-16 (25.9%, 36/139), HPV-52 (14.4%, 20/139), HPV-58 (13.7%, 19/139), HPV-56 (11.5%, 16/139), HPV-51 (9.4%, 13/139) and HPV-18 (8.6%, 12/139). A total of 33.3% (12/36) of patients positive for HPV-16 had cervical intraepithelial neoplasia (CIN)2 or a worse result, which was significantly higher than the prevalence of CIN2 of 1.8% (8/455) in patients negative for HPV-16 (P<0.001), while no significant association was identified for other genotypes in terms of genotype and clinical progress. There was a significant association between clearance and good prognosis (P<0.001). Persistent infection was higher in patients aged ≥51 years (38.7%) than in those aged ≤50 years (20.4%; P=0.036). Progression from persistent infection to CIN2 or worse (19/34, 55.9%) was higher than clearance (0/105, 0.0%; P<0.001). In the LDA analysis, symmetric Dirichlet priors α=0.1 and β=0.01 with k=5 or 10 clusters provided the most meaningful groupings. Statistical and LDA analyses produced consistent results regarding the association between persistent infection of HPV-16, old age and long infection period with a clinical progression of CIN2 or worse

  3. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: An SPSS method to analyze univariate data

    NARCIS (Netherlands)

    Maric, M.; de Haan, M.; Hogendoorn, S.M.; Wolters, L.H.; Huizenga, H.M.

    2015-01-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a

  4. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: An SPSS method to analyze univariate data

    NARCIS (Netherlands)

    M. Maric; M. de Haan; S.M. Hogendoorn; L.H. Wolters; H.M. Huizenga

    2015-01-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a data-

  5. Which cities produce more excellent papers than can be expected? A new mapping approach, using Google Maps, based on statistical significance testing

    NARCIS (Netherlands)

    L. Bornmann; L. Leydesdorff

    2011-01-01

    The methods presented in this paper allow for a statistical analysis revealing centers of excellence around the world using programs that are freely available. Based on Web of Science data (a fee-based database), field-specific excellence can be identified in cities where highly cited papers were published.

  6. STATISTICAL ESTIMATION OF CONTINUITY OF FORMATION OF SOCIALLY SIGNIFICANT HIERARCHY OF MOTIVES OF THE LEARNING OF SENIOR PUPILS BY MEANS OF MULTIMEDIA TECHNOLOGY OF TRAINING

    National Research Council Canada - National Science Library

    Marina Yepifanova; Boris Zhelezovsky

    2012-01-01

    The possibility of maintaining continuity in the formation of a socially significant hierarchy of learning motives among senior pupils by means of multimedia training technology is substantiated theoretically and demonstrated experimentally.

  7. Which cities produce worldwide more excellent papers than can be expected? A new mapping approach--using Google Maps--based on statistical significance testing

    CERN Document Server

    Bornmann, Lutz

    2011-01-01

    The methods presented in this paper allow for a spatial analysis revealing centers of excellence around the world using programs that are freely available. Based on Web of Science data, field-specific excellence can be identified in cities where highly-cited papers were published. Compared to the mapping approaches published hitherto, our approach is more analytically oriented by allowing the assessment of an observed number of excellent papers for a city against the expected number. With this feature, this approach can identify not only the top performers in output but also the "true jewels." These are cities locating authors who publish significantly more top cited papers than can be expected. As the examples in this paper show for physics, chemistry, and psychology, these cities do not necessarily have a high output of excellent papers.

  8. Assessing the Statistical Significance of the Achieved Classification Error of Classifiers Constructed using Serum Peptide Profiles, and a Prescription for Random Sampling Repeated Studies for Massive High-Throughput Genomic and Proteomic Studies

    Directory of Open Access Journals (Sweden)

    William L Bigbee

    2005-01-01

    Full Text Available source of patient-specific information with high potential impact on the early detection and classification of cancer and other diseases. The new profiling technology comes, however, with numerous challenges and concerns. Particularly important are concerns of reproducibility of classification results and their significance. In this work we describe a computational validation framework, called PACE (Permutation-Achieved Classification Error), that lets us assess, for a given classification model, the significance of the Achieved Classification Error (ACE) on the profile data. The framework compares the performance statistic of the classifier on true data samples and checks if these are consistent with the behavior of the classifier on the same data with randomly reassigned class labels. A statistically significant ACE increases our belief that a discriminative signal was found in the data. The advantage of PACE analysis is that it can be easily combined with any classification model and is relatively easy to interpret. PACE analysis does not protect researchers against confounding in the experimental design, or other sources of systematic or random error. We use PACE analysis to assess the significance of classification results we have achieved on a number of published data sets. The results show that many of these datasets indeed possess a signal that leads to a statistically significant ACE.
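
    A PACE-style check, comparing the achieved classification performance against its label-permutation null, can be sketched with scikit-learn's permutation_test_score; the synthetic "serum profile" matrix, classifier, and permutation count below are assumptions.

        # Sketch: permutation test of achieved classification performance.
        import numpy as np
        from sklearn.model_selection import StratifiedKFold, permutation_test_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(6)
        X = rng.standard_normal((80, 300))       # 80 sera x 300 peptide peaks (toy)
        y = np.repeat([0, 1], 40)
        X[y == 1, :10] += 0.7                    # weak planted signal

        score, perm_scores, pvalue = permutation_test_score(
            SVC(kernel="linear"), X, y,
            cv=StratifiedKFold(5), n_permutations=500, scoring="accuracy")
        print(f"accuracy {score:.2f}, permutation p-value {pvalue:.3f}")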

  9. Statistical mechanics

    CERN Document Server

    Schwabl, Franz

    2006-01-01

    The completely revised new edition of the classical book on Statistical Mechanics covers the basic concepts of equilibrium and non-equilibrium statistical physics. In addition to a deductive approach to equilibrium statistics and thermodynamics based on a single hypothesis - the form of the microcanonical density matrix - this book treats the most important elements of non-equilibrium phenomena. Intermediate calculations are presented in complete detail. Problems at the end of each chapter help students to consolidate their understanding of the material. Beyond the fundamentals, this text demonstrates the breadth of the field and its great variety of applications. Modern areas such as renormalization group theory, percolation, stochastic equations of motion and their applications to critical dynamics, kinetic theories, as well as fundamental considerations of irreversibility, are discussed. The text will be useful for advanced students of physics and other natural sciences; a basic knowledge of quantum mechan...

  10. Tested Demonstrations.

    Science.gov (United States)

    Sands, Robert; And Others

    1982-01-01

    Procedures for two demonstrations are provided. The solubility of ammonia gas in water is demonstrated by introducing water into a closed can filled with the gas, collapsing the can. The second demonstration relates scale of standard reduction potentials to observed behavior of metals in reactions with hydrogen to produce hydrogen gas. (Author/JN)

  11. Data mining-based statistical analysis of biological data uncovers hidden significance: clustering Hashimoto's thyroiditis patients based on the response of their PBMC with IL-2 and IFN-γ secretion to stimulation with Hsp60.

    Science.gov (United States)

    Tonello, Lucio; Conway de Macario, Everly; Marino Gammazza, Antonella; Cocchi, Massimo; Gabrielli, Fabio; Zummo, Giovanni; Cappello, Francesco; Macario, Alberto J L

    2015-03-01

    The pathogenesis of Hashimoto's thyroiditis includes autoimmunity involving thyroid antigens, autoantibodies, and possibly cytokines. It is unclear what role Hsp60 plays, but our recent data indicate that it may contribute to pathogenesis as an autoantigen. Its role in the induction of cytokine production, pro- or anti-inflammatory, had not been elucidated, except that we found that peripheral blood mononuclear cells (PBMC) from patients or from healthy controls did not respond to stimulation by Hsp60 in vitro with cytokine-production patterns that would differentiate patients from controls with statistical significance. This "negative" outcome appeared when the data were pooled and analyzed with conventional statistical methods. We re-analyzed our data with non-conventional statistical methods based on data mining, using the classification and regression tree learning algorithm and clustering methodology. The results indicate that by focusing on IFN-γ and IL-2 levels before and after Hsp60 stimulation of PBMC in each patient, it is possible to differentiate patients from controls. A major general conclusion is that when trying to identify disease markers such as levels of cytokines and Hsp60, reference to standards obtained from pooled data from many patients may be misleading. The chosen biomarker, e.g., production of IFN-γ and IL-2 by PBMC upon stimulation with Hsp60, must be assessed before and after stimulation and the results compared within each patient and analyzed with conventional and data mining statistical methods.
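
    A hedged sketch of the per-patient before/after analysis described, using scikit-learn's CART implementation; the feature layout and all cytokine values are hypothetical:

```python
# Classify patients vs controls from within-subject cytokine responses
# (IFN-gamma and IL-2 before and after Hsp60 stimulation) with a CART tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# columns: IFN_before, IFN_after, IL2_before, IL2_after (hypothetical values)
X = np.array([[12.0, 30.5, 4.1, 9.8],    # patient
              [10.2, 11.0, 3.9, 4.2],    # control
              [14.1, 41.2, 5.0, 12.3],   # patient
              [ 9.8, 10.5, 4.4, 4.0]])   # control
y = np.array([1, 0, 1, 0])

# deltas capture each subject's own response to stimulation
deltas = X[:, [1, 3]] - X[:, [0, 2]]
tree = DecisionTreeClassifier(max_depth=2).fit(deltas, y)
print(tree.predict(deltas))
```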

  12. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L., Ed.

    1983-01-01

    Free radical chlorination of methane is used in organic chemistry to introduce free radical/chain reactions. In spite of its common occurrence, demonstrations of the reaction are uncommon. Therefore, such a demonstration is provided, including background information, preparation of reactants/reaction vessel, introduction of reactants, irradiation,…

  13. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L., Ed.

    1983-01-01

    Discusses a supplement to the "water to rose" demonstration in which a pink color is produced. Also discusses blood buffer demonstrations, including hydrolysis of sodium bicarbonate, simulated blood buffer, metabolic acidosis, natural compensation of metabolic acidosis, metabolic alkalosis, acidosis treatment, and alkalosis treatment. Procedures…

  14. Complete Demonstration.

    Science.gov (United States)

    Yelon, Stephen; Maddocks, Peg

    1986-01-01

    Describes four-step approach to educational demonstration: tell learners they will have to perform; what they should notice; describe each step before doing it; and require memorization of steps. Examples illustrate use of this process to demonstrate a general mental strategy, and industrial design, supervisory, fine motor, and specific…

  15. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L., Ed.

    1987-01-01

    Describes two laboratory demonstrations in chemistry. One uses dry ice, freon, and freezer bags to demonstrate volume changes, vapor-liquid equilibrium, a simulation of a rain forest, and vaporization. The other uses the clock reaction technique to illustrate fast reactions and kinetic problems in releasing carbon dioxide during respiration. (TW)

  16. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L., Ed.

    1986-01-01

    Outlines a simple, inexpensive way of demonstrating electroplating using the reaction between nickel ions and copper metal. Explains how to conduct a demonstration of the electrolysis of water by using a colored Na2SO4 solution as the electrolyte so that students can observe the pH changes. (TW)

  17. Injury Statistics

    Science.gov (United States)

    Injury statistics and technical reports from NEISS (the National Electronic Injury Surveillance System).

  18. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L.

    1990-01-01

    Included are three demonstrations that include the phase change of ice when under pressure, viscoelasticity and colloid systems, and flame tests for metal ions. The materials, procedures, probable results, and applications to real life situations are included. (KR)

  19. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L., Ed.

    1980-01-01

    Presented is a Corridor Demonstration which can be set up in readily accessible areas such as hallways or lobbies. Equipment is listed for a display of three cells (solar cells, fuel cells, and storage cells) which develop electrical energy. (CS)

  20. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L., Ed.

    1987-01-01

    Presents three demonstrations suitable for undergraduate chemistry classes. Focuses on experiments with calcium carbide, the induction by iron of the oxidation of iodide by dichromate, and the classical iodine clock reaction. (ML)

  1. Cosmic Statistics of Statistics

    OpenAIRE

    Szapudi, I.; Colombi, S.; Bernardeau, F.

    1999-01-01

    The errors on statistics measured in finite galaxy catalogs are exhaustively investigated. The theory of errors on factorial moments by Szapudi & Colombi (1996) is applied to cumulants via a series expansion method. All results are subsequently extended to the weakly non-linear regime. Together with previous investigations this yields an analytic theory of the errors for moments and connected moments of counts in cells from highly nonlinear to weakly nonlinear scales. The final analytic formu...

  2. Five-year results from a prospective multicentre study of percutaneous pulmonary valve implantation demonstrate sustained removal of significant pulmonary regurgitation, improved right ventricular outflow tract obstruction and improved quality of life.

    Science.gov (United States)

    Hager, Alfred; Schubert, Stephan; Ewert, Peter; Søndergaard, Lars; Witsenburg, Maarten; Guccione, Paolo; Benson, Lee N; Suárez de Lezo, José; Lung, Te-Hsin; Hess, John; Eicken, Andreas; Berger, Felix

    2017-02-20

    Percutaneous pulmonary valve implantation (PPVI) is used to treat patients with dysfunctional pulmonary valve conduits. Short- and longer-term results from multiple trials have outlined haemodynamic improvements. Our aim was to report the long-term results, including quality of life, from a multicentre trial in Europe and Canada. From October 2007 to April 2009, 71 patients (24 female; median age 19.0 [IQR: 14.0 to 25.0] years) were enrolled in a prospective cohort study. PPVI was performed successfully in 63 patients. At five-year follow-up four patients had died. Moderate and severe pulmonary regurgitation were completely resolved in all except one patient, who needed re-PPVI. Outflow tract obstruction improved significantly from a mean pressure gradient of 37.7±12.1 mmHg before PPVI to 17.3±9.7 mmHg at five-year follow-up; however, 11 patients needed treatment for restenosis. The EQ-5D quality of life utility index and visual analogue scale scores were both significantly improved six months post PPVI and remained so at five years. Five-year results following PPVI demonstrate resolved moderate or severe pulmonary regurgitation, improved right ventricular outflow tract obstruction, and improved quality of life.

  3. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L., Ed.

    1987-01-01

    Describes two demonstrations to illustrate characteristics of substances. Outlines a method to detect the changes in pH levels during the electrolysis of water. Uses water pistols, one filled with methane gas and the other filled with water, to illustrate the differences in these two substances. (TW)

  4. ICT Demonstration

    DEFF Research Database (Denmark)

    Jensen, Tine Wirenfeldt; Bay, Gina

    In this demonstration we present and discuss two interrelated on-line learning resources aimed at supporting international students at Danish universities in building study skills (the Study Metro) and avoiding plagiarism (Stopplagiarism). We emphasize the necessity of designing online learning r...

  5. Arc Statistics

    CERN Document Server

    Meneghetti, M; Dahle, H; Limousin, M

    2013-01-01

    The existence of an arc statistics problem was at the center of a strong debate in the last fifteen years. With the aim to clarify if the optical depth for giant gravitational arcs by galaxy clusters in the so called concordance model is compatible with observations, several studies were carried out which helped to significantly improve our knowledge of strong lensing clusters, unveiling their extremely complex internal structure. In particular, the abundance and the frequency of strong lensing events like gravitational arcs turned out to be a potentially very powerful tool to trace the structure formation. However, given the limited size of observational and theoretical data-sets, the power of arc statistics as a cosmological tool has been only minimally exploited so far. On the other hand, the last years were characterized by significant advancements in the field, and several cluster surveys that are ongoing or planned for the near future seem to have the potential to make arc statistics a competitive cosmo...

  6. The hetero-transplantation of human bone marrow stromal cells carried by hydrogel unexpectedly demonstrates a significant role in the functional recovery in the injured spinal cord of rats.

    Science.gov (United States)

    Raynald; Li, Yanbin; Yu, Hao; Huang, Hua; Guo, Muyao; Hua, Rongrong; Jiang, Fenjun; Zhang, Kaihua; Li, Hailong; Wang, Fei; Li, Lusheng; Cui, FuZhai; An, Yihua

    2016-03-01

    Spinal cord injury (SCI) often causes a disturbance in the microenvironment of the lesion site, resulting in sudden loss of sensory and motor function. Transplantation of stem cells provides a promising strategy in the treatment of SCI, but the limited growth of the stem cells and their immunological incompatibility with the host limit the application of this strategy. In order to achieve better survival and integration with the host, we employed a hyaluronic acid (HA) based scaffold covalently modified by poly-l-lysine (PLL) as a vehicle to deliver human bone marrow stromal cells (BMSCs) to the injured spinal cord of rats. The BMSCs were chosen as an ideal candidate for their advantage of low expression of major histocompatibility complex II. The data unexpectedly showed that the hetero-transplanted cells survived well in the lesion site even at 8 weeks post injury. Both the immunofluorescent and the electrophysiological assays indicated better survival of the transplanted cells and improved axonal growth in SCI rats transplanted with BMSCs in HA-PLL, in contrast to the groups without either BMSCs or the HA scaffold. These improvements may account for the functional recovery assessed by the Basso-Beattie-Bresnahan (BBB) locomotor rating scale in the HA-PLL seeded with BMSCs group. These data suggest that hetero-transplantation of human BMSCs delivered by an HA scaffold plays a significant role in the functional recovery of the injured spinal cord of rats.

  7. GASIS demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Vidas, E.H. [Energy and Environmental Analysis, Inc., Arlington, VA (United States)

    1995-04-01

    A prototype of the GASIS database and retrieval software has been developed and is the subject of this poster session and computer demonstration. The prototype consists of test or preliminary versions of the GASIS Reservoir Data System and Source Directory datasets and the software for query and retrieval. The prototype reservoir database covers the Rocky Mountain region and contains the full GASIS data matrix (all GASIS data elements) that will eventually be included on the CD-ROM. It is populated for development purposes primarily with the information included in the Rocky Mountain Gas Atlas. The software has been developed specifically for GASIS using FoxPro for Windows. The application is an executable file that does not require FoxPro to run. The reservoir database software includes query and retrieval, screen display, report generation, and data export functions. Basic queries by state, basin, or field name will be assisted by scrolling selection lists. A detailed query screen will allow record selection on the basis of any data field, such as depth, cumulative production, or geological age. Logical operators can be applied to any numeric data element or combination of elements. Screen display includes a "browse" display with one record per row and a detailed single-record display. Datasets can be exported in standard formats for manipulation with other software packages. The Source Directory software will allow record retrieval by database type or subject area.
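
    The query behaviour described is easy to picture as a filter over records; a hypothetical Python sketch, not the actual FoxPro application, with invented field names and values:

```python
# Hypothetical sketch of a GASIS-style query: select reservoir records
# by applying logical operators to any numeric field.
reservoirs = [
    {"state": "CO", "basin": "Piceance",    "depth_ft": 6200, "cum_prod_bcf": 410},
    {"state": "WY", "basin": "Green River", "depth_ft": 9800, "cum_prod_bcf": 1250},
    {"state": "CO", "basin": "Denver",      "depth_ft": 4500, "cum_prod_bcf": 95},
]

# "depth greater than 5000 ft AND cumulative production above 100 bcf"
hits = [r for r in reservoirs
        if r["depth_ft"] > 5000 and r["cum_prod_bcf"] > 100]
for r in hits:
    print(r["state"], r["basin"])
```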

  8. Industrial statistics with Minitab

    CERN Document Server

    Cintas, Pere Grima; Llabres, Xavier Tort-Martorell

    2012-01-01

    Industrial Statistics with MINITAB demonstrates the use of MINITAB as a tool for performing statistical analysis in an industrial context. This book covers introductory industrial statistics, exploring the most commonly used techniques alongside those that serve to give an overview of more complex issues. A plethora of examples in MINITAB are featured along with case studies for each of the statistical techniques presented. Industrial Statistics with MINITAB: Provides comprehensive coverage of user-friendly practical guidance to the essential statistical methods applied in industry. Explores

  9. No statistically significant kinematic difference found between a cruciate-retaining and posterior-stabilised Triathlon knee arthroplasty: a laboratory study involving eight cadavers examining soft-tissue laxity.

    Science.gov (United States)

    Hunt, N C; Ghosh, K M; Blain, A P; Rushton, S P; Longstaff, L M; Deehan, D J

    2015-05-01

    The aim of this study was to compare the maximum laxity conferred by the cruciate-retaining (CR) and posterior-stabilised (PS) Triathlon single-radius total knee arthroplasty (TKA) for anterior drawer, varus-valgus opening and rotation in eight cadaver knees through a defined arc of flexion (0º to 110º). The null hypothesis was that the limits of laxity of CR- and PS-TKAs are not significantly different. The investigation was undertaken in eight loaded cadaver knees undergoing subjective stress testing using a measurement rig. First, the native knee was tested prior to preparation for CR-TKA and subsequently for PS-TKA implantation. Surgical navigation was used to track maximal displacements/rotations at 0º, 30º, 60º, 90º and 110° of flexion. Mixed-effects modelling was used to define the behaviour of the TKAs. The laxity measured for the CR- and PS-TKAs revealed no statistically significant differences over the studied flexion arc for the two versions of TKA. Compared with the native knee, both TKAs exhibited slightly increased anterior drawer and decreased varus-valgus and internal-external rotational laxities. We believe further study is required to define the clinical states for which the additional constraint offered by a PS-TKA implant may be beneficial.
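
    A sketch of the kind of mixed-effects model described, fitted with statsmodels on synthetic long-format data; the column names, formula, and data are assumptions, not the authors' specification:

```python
# Mixed-effects model for repeated laxity measurements, with the cadaver
# knee as the random grouping factor. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# 8 knees x 2 implants (CR, PS) x 5 flexion angles
df = pd.DataFrame({
    "knee": np.repeat(np.arange(8), 10),
    "implant": np.tile(np.repeat(["CR", "PS"], 5), 8),
    "flexion": np.tile([0, 30, 60, 90, 110], 16),
})
df["laxity"] = 5 + 0.01 * df["flexion"] + rng.normal(0, 1, size=len(df))

model = smf.mixedlm("laxity ~ implant * flexion", data=df, groups="knee")
print(model.fit().summary())
```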

  10. Algebraic Statistics

    OpenAIRE

    Norén, Patrik

    2013-01-01

    Algebraic statistics brings together ideas from algebraic geometry, commutative algebra, and combinatorics to address problems in statistics and its applications. Computer algebra provides powerful tools for the study of algorithms and software. However, these tools are rarely prepared to address statistical challenges, and therefore new algebraic results often need to be developed. This interplay between algebra and statistics fertilizes both disciplines. Algebraic statistics is a relativ...

  11. Randomization tests and computational software on the statistical significance of community biodiversity and evenness

    Institute of Scientific and Technical Information of China (English)

    张文军, 齐艳红, et al.

    2002-01-01

    Diversity and evenness indices are widely used in community ecology and biodiversity research. However, the shortage of statistical tests for these indices restricts their reliability, and developing such tests is one of the focuses of biodiversity research. In the present study, randomization tests are presented for the statistical significance of diversity and evenness indices, for confidence intervals of diversity and evenness, and for the statistical significance of between-community differences. The Shannon-Wiener, Simpson, McIntosh, Berger-Parker, Hurlbert, and Brillouin diversity indices, and the corresponding evenness indices, are included in the randomization test procedure. A web-based computational software package for these tests, BiodiversityTest, comprising seven Java classes and an HTML file, was developed. It can be run on various operating systems and Java-enabled web browsers, and it can read ODBC-linked databases such as MS Access, Excel, FoxPro, and dBASE. Rice arthropod diversity (15 sampling sites, 125 arthropod species, 17 functional groups) was recorded in September 1996 on the IRRI rice farm using a RiceVac apparatus and bucket enclosure. The data were analysed with BiodiversityTest using the Shannon-Wiener and Berger-Parker indices, and the results showed that changes in diversity and evenness can be effectively detected by these tests. The randomization tests will correct the possibly wrong conclusions arising from the direct comparison of arthropod diversity used in most research to date. The development of randomization tests on biodiversity provides a quantitative tool for stricter statistical comparison of biodiversity between communities and presents an absolute criterion for diversity measurement. BiodiversityTest makes the computation practical and accessible over the Internet.
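
    A minimal sketch of one such randomization test (ours, not the BiodiversityTest code): compare the Shannon-Wiener diversity of two communities by pooling all individuals and reshuffling them into samples of the original sizes:

```python
# Randomization test on the between-community difference in
# Shannon-Wiener diversity.
import numpy as np

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def diversity_randomization_test(c1, c2, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    observed = abs(shannon(c1) - shannon(c2))
    pool = np.repeat(np.arange(len(c1)), c1 + c2)  # one label per individual
    n1, extreme = int(c1.sum()), 0
    for _ in range(n_iter):
        rng.shuffle(pool)
        r1 = np.bincount(pool[:n1], minlength=len(c1))
        r2 = np.bincount(pool[n1:], minlength=len(c1))
        extreme += abs(shannon(r1) - shannon(r2)) >= observed
    return observed, (extreme + 1) / (n_iter + 1)

c1 = np.array([30, 12, 5, 1])   # community 1 species abundances (made up)
c2 = np.array([15, 14, 10, 9])  # community 2
print(diversity_randomization_test(c1, c2))
```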

  12. Bayesian statistics

    OpenAIRE

    新家, 健精

    2013-01-01

    Article Outline: Glossary; Definition of the Subject and Introduction; The Bayesian Statistical Paradigm; Three Examples; Comparison with the Frequentist Statistical Paradigm; Future Directions; Bibliography.

  13. Mathematical statistics

    CERN Document Server

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  14. Significance evaluation in factor graphs

    DEFF Research Database (Denmark)

    Madsen, Tobias; Hobolth, Asger; Jensen, Jens Ledet

    2017-01-01

    Background: Factor graphs provide a flexible and general framework for specifying probability distributions. They can capture a range of popular and recent models for analysis of both genomics data and data from other scientific fields. Owing to the ever larger data sets encountered in genomics and the multiple-testing issues accompanying them, accurate significance evaluation is of great importance. We here address the problem of evaluating the statistical significance of observations from factor graph models. Results: Two novel numerical approximations for evaluation of statistical significance are presented. Conclusions: The applicability of saddlepoint approximation and importance sampling is demonstrated on known models in the factor graph framework. Using the two methods we can substantially reduce computational cost without compromising accuracy. This contribution allows analyses of large datasets...
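
    A toy sketch of the importance-sampling idea the paper applies: estimate a small tail probability by sampling from a proposal shifted into the tail and reweighting by the likelihood ratio (the factor-graph machinery itself is not reproduced here):

```python
# Importance sampling for P(X >= t) with X ~ N(0, 1): sample from N(t, 1)
# and reweight, so rare tail events are hit often and weighted down.
import numpy as np
from scipy.stats import norm

def tail_prob_is(t, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=t, scale=1.0, size=n)        # proposal N(t, 1)
    weights = norm.pdf(x) / norm.pdf(x, loc=t)      # likelihood ratio
    return np.mean((x >= t) * weights)

print(tail_prob_is(4.0), norm.sf(4.0))  # estimate vs exact tail probability
```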

  15. Harmonic statistics

    Science.gov (United States)

    Eliazar, Iddo

    2017-05-01

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.
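
    A small sketch of simulating such a process (our code, not the paper's): with intensity c/t on [a, b] the expected number of points is c·ln(b/a), and inverting the location CDF ln(t/a)/ln(b/a) draws the points:

```python
# Sample a Poisson process with harmonic intensity c/t on [a, b].
import numpy as np

def harmonic_poisson(a, b, c, seed=0):
    rng = np.random.default_rng(seed)
    n = rng.poisson(c * np.log(b / a))      # number of points
    u = rng.uniform(size=n)
    return np.sort(a * (b / a) ** u)        # inverse-CDF point locations

print(harmonic_poisson(1.0, 100.0, c=2.0))
```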

  16. Statistical physics

    CERN Document Server

    Sadovskii, Michael V

    2012-01-01

    This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity and the modern theory of critical phenomena. Beyond that attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.

  17. Statistical methods

    CERN Document Server

    Szulc, Stefan

    1965-01-01

    Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then

  18. Statistical optics

    CERN Document Server

    Goodman, Joseph W

    2015-01-01

    This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems This book covers a variety of statistical problems in optics, including both theory and applications.  The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced i

  19. Histoplasmosis Statistics

    Science.gov (United States)

    How common is histoplasmosis? In the United States, an estimated 60% to ...

  20. Statistical distributions

    CERN Document Server

    Forbes, Catherine; Hastings, Nicholas; Peacock, Brian J.

    2010-01-01

    A new edition of the trusted guide on commonly used statistical distributions Fully updated to reflect the latest developments on the topic, Statistical Distributions, Fourth Edition continues to serve as an authoritative guide on the application of statistical methods to research across various disciplines. The book provides a concise presentation of popular statistical distributions along with the necessary knowledge for their successful use in data modeling and analysis. Following a basic introduction, forty popular distributions are outlined in individual chapters that are complete with re

  1. Harmonic statistics

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    2017-05-15

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.

  2. Choosing Outcomes of Significance.

    Science.gov (United States)

    Spady, William G.

    1994-01-01

    Outcomes are high-quality, culminating demonstrations of significant learning in context. The High Success Network uses the "Demonstration Mountain" to differentiate among three major "learning zones" and six different forms of learning demonstrations that increase in complexity, generalizability, and significance, along with…

  3. Scan Statistics

    CERN Document Server

    Glaz, Joseph

    2009-01-01

    Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.

  4. Statistical Diversions

    Science.gov (United States)

    Petocz, Peter; Sowey, Eric

    2008-01-01

    In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…

  5. Practical Statistics

    CERN Document Server

    Lyons, L

    2016-01-01

    Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.

  6. Introductory statistics

    CERN Document Server

    Ross, Sheldon M

    2005-01-01

    In this revised text, master expositor Sheldon Ross has produced a unique work in introductory statistics. The text's main merits are the clarity of presentation, contemporary examples and applications from diverse areas, and an explanation of intuition and ideas behind the statistical methods. To quote from the preface, "It is only when a student develops a feel or intuition for statistics that she or he is really on the path toward making sense of data." Ross achieves this goal through a coherent mix of mathematical analysis, intuitive discussions and examples. Ross's clear writin

  7. Introductory statistics

    CERN Document Server

    Ross, Sheldon M

    2010-01-01

    In this 3rd edition revised text, master expositor Sheldon Ross has produced a unique work in introductory statistics. The text's main merits are the clarity of presentation, contemporary examples and applications from diverse areas, and an explanation of intuition and ideas behind the statistical methods. Concepts are motivated, illustrated and explained in a way that attempts to increase one's intuition. To quote from the preface, "It is only when a student develops a feel or intuition for statistics that she or he is really on the path toward making sense of data." Ross achieves this

  8. Statistics Clinic

    Science.gov (United States)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  9. Statistical physics

    CERN Document Server

    Wannier, Gregory H

    2010-01-01

    Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to the independent study and research of statistical physics.Designed for

  10. Semiconductor statistics

    CERN Document Server

    Blakemore, J S

    1962-01-01

    Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co

  11. The statistical stability phenomenon

    CERN Document Server

    Gorban, Igor I

    2017-01-01

    This monograph investigates violations of statistical stability of physical events, variables, and processes and develops a new physical-mathematical theory taking into consideration such violations – the theory of hyper-random phenomena. There are five parts. The first describes the phenomenon of statistical stability and its features, and develops methods for detecting violations of statistical stability, in particular when data is limited. The second part presents several examples of real processes of different physical nature and demonstrates the violation of statistical stability over broad observation intervals. The third part outlines the mathematical foundations of the theory of hyper-random phenomena, while the fourth develops the foundations of the mathematical analysis of divergent and many-valued functions. The fifth part contains theoretical and experimental studies of statistical laws where there is violation of statistical stability. The monograph should be of particular interest to engineers...

  12. SEER Statistics

    Science.gov (United States)

    The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute works to provide information on cancer statistics in an effort to reduce the burden of cancer among the U.S. population.

  13. Cancer Statistics

    Science.gov (United States)


  14. CMS Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...

  15. Reversible Statistics

    DEFF Research Database (Denmark)

    Tryggestad, Kjell

    2004-01-01

    The study aims to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved...

  16. Image Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, Laura Jean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-08

    In large datasets, it is time consuming or even impossible to pick out interesting images. Our proposed solution is to find statistics to quantify the information in each image and use those to identify and pick out images of interest.

  17. Accident Statistics

    Data.gov (United States)

    Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...

  18. Multiparametric statistics

    CERN Document Server

    Serdobolskii, Vadim Ivanovich

    2007-01-01

    This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size or even much larger. In this sense, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach, in which the sample size increases along with the number of unknown parameters. This theory opens a way to the solution of central problems of multivariate statistics, which up until now have not been solved. Traditional statistical methods based on the idea of infinite sampling often break down in the solution of real problems and, depending on the data, can be inefficient, unstable and even inapplicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope that they will find a satisfactory solution. The mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...

  19. Contributions to industrial statistics

    OpenAIRE

    2015-01-01

    This thesis is about statistics' contributions to industry. It is an article compendium comprising four articles divided into two blocks: (i) two contributions for a water supply company, and (ii) significance of the effects in Design of Experiments. In the first block, great emphasis is placed on how research design and statistics can be applied to various real problems that a water company raises, and it aims to convince water management companies that statistics can be very useful to impr...

  20. Probability and (Braiding) Statistics

    OpenAIRE

    2016-01-01

    Given recent progress in the realization of Majorana zero modes in semiconducting nanowires with proximity-induced superconductivity, a crucial next step is to attempt an experimental demonstration of the predicted braiding statistics associated with the Majorana mode. Such a demonstration should, in principle, confirm that observed zero-bias anomalies are indeed indicative of the presence of anyonic Majorana zero modes. Moreover, such a demonstration would be a breakthrough at the level of f...

  1. Statistical mechanics

    CERN Document Server

    Jana, Madhusudan

    2015-01-01

    This book on statistical mechanics is self-sufficient and written in a lucid manner, keeping in mind the examination system of the universities. The need to study this subject and its relation to thermodynamics is discussed in detail. Starting from the Liouville theorem, statistical mechanics is developed thoroughly. All three types of statistical distribution functions are derived separately, with their range of applications and limitations. Non-interacting ideal Bose and Fermi gases are discussed thoroughly. Properties of liquid He-II and the corresponding models are depicted. White dwarfs and condensed matter physics, and transport phenomena (thermal and electrical conductivity, Hall effect, magnetoresistance, viscosity, diffusion, etc.) are discussed. A basic understanding of the Ising model is given to explain phase transitions. The book ends with a detailed coverage of the method of ensembles (namely microcanonical, canonical and grand canonical) and their applications. Various numerical and conceptual problems ar...

  2. Statistical inference

    CERN Document Server

    Rohatgi, Vijay K

    2003-01-01

    Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth

  3. Statistical Physics

    CERN Document Server

    Mandl, Franz

    1988-01-01

    The Manchester Physics Series General Editors: D. J. Sandiford; F. Mandl; A. C. Phillips Department of Physics and Astronomy, University of Manchester Properties of Matter B. H. Flowers and E. Mendoza Optics Second Edition F. G. Smith and J. H. Thomson Statistical Physics Second Edition E. Mandl Electromagnetism Second Edition I. S. Grant and W. R. Phillips Statistics R. J. Barlow Solid State Physics Second Edition J. R. Hook and H. E. Hall Quantum Mechanics F. Mandl Particle Physics Second Edition B. R. Martin and G. Shaw The Physics of Stars Second Edition A. C. Phillips Computing for Scient

  4. AP statistics

    CERN Document Server

    Levine-Wissing, Robin

    2012-01-01

    All Access for the AP® Statistics Exam Book + Web + Mobile Everything you need to prepare for the Advanced Placement® exam, in a study system built around you! There are many different ways to prepare for an Advanced Placement® exam. What's best for you depends on how much time you have to study and how comfortable you are with the subject matter. To score your highest, you need a system that can be customized to fit you: your schedule, your learning style, and your current level of knowledge. This book, and the online tools that come with it, will help you personalize your AP® Statistics prep

  5. Statistical methods

    CERN Document Server

    Freund, Rudolf J; Wilson, William J

    2010-01-01

    Statistical Methods, 3e provides students with a working introduction to statistical methods offering a wide range of applications that emphasize the quantitative skills useful across many academic disciplines. This text takes a classic approach emphasizing concepts and techniques for working out problems and interpreting results. The book includes research projects, real-world case studies, numerous examples and data exercises organized by level of difficulty. This text requires that a student be familiar with algebra. New to this edition: NEW expansion of exercises a

  6. Statistical mechanics

    CERN Document Server

    Davidson, Norman

    2003-01-01

    Clear and readable, this fine text assists students in achieving a grasp of the techniques and limitations of statistical mechanics. The treatment follows a logical progression from elementary to advanced theories, with careful attention to detail and mathematical development, and is sufficiently rigorous for introductory or intermediate graduate courses.Beginning with a study of the statistical mechanics of ideal gases and other systems of non-interacting particles, the text develops the theory in detail and applies it to the study of chemical equilibrium and the calculation of the thermody

  7. Addressing mathematics & statistics anxiety

    OpenAIRE

    Kotecha, Meena

    2015-01-01

    This paper should be of interest to mathematics and statistics educators ranging from pre-university to university education sectors. It will discuss some features of the author’s teaching model developed over her longitudinal study conducted to understand and address mathematics and statistics anxiety, which is one of the main barriers to engaging with these subjects especially in non-specialist undergraduates. It will demonstrate how a range of formative assessments are used to kindle, as w...

  8. Statistics; Tilastot

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-31

    For the years 1997 and 1998, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1997, Statistics Finland, Helsinki 1998, ISSN 0784-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-September 1998, Energy exports by recipient country in January-September 1998, Consumer prices of liquid fuels, Consumer prices of hard coal, Natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, Value added taxes and fiscal charges and fees included in consumer prices of some energy sources, Energy taxes and precautionary stock fees, pollution fees on oil products

  9. Statistics; Tilastot

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-31

    For the years 1997 and 1998, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1996, Statistics Finland, Helsinki 1997, ISSN 0784-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1998, Energy exports by recipient country in January-June 1998, Consumer prices of liquid fuels, Consumer prices of hard coal, Natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, Value added taxes and fiscal charges and fees included in consumer prices of some energy sources, Energy taxes and precautionary stock fees, pollution fees on oil products

  10. Statistical Mechanics

    CERN Document Server

    Gallavotti, Giovanni

    2011-01-01

    C. Cercignani: A sketch of the theory of the Boltzmann equation.- O.E. Lanford: Qualitative and statistical theory of dissipative systems.- E.H. Lieb: many particle Coulomb systems.- B. Tirozzi: Report on renormalization group.- A. Wehrl: Basic properties of entropy in quantum mechanics.

  11. Funding source and primary outcome changes in clinical trials registered on ClinicalTrials.gov are associated with the reporting of a statistically significant primary outcome: a cross-sectional study [v2; ref status: indexed, http://f1000r.es/5bj]

    Directory of Open Access Journals (Sweden)

    Sreeram V Ramagopalan

    2015-04-01

    Full Text Available Background: We and others have shown that a significant proportion of interventional trials registered on ClinicalTrials.gov have their primary outcomes altered after the listed study start and completion dates. The objectives of this study were to investigate whether changes made to primary outcomes are associated with the likelihood of reporting a statistically significant primary outcome on ClinicalTrials.gov. Methods: A cross-sectional analysis of all interventional clinical trials registered on ClinicalTrials.gov as of 20 November 2014 was performed. The main outcome was any change made to the initially listed primary outcome and the time of the change in relation to the trial start and end date. Findings: 13,238 completed interventional trials were registered with ClinicalTrials.gov that also had study results posted on the website. 2555 (19.3%) had one or more statistically significant primary outcomes. Statistical analysis showed that registration year, funding source and primary outcome change after trial completion were associated with reporting a statistically significant primary outcome. Conclusions: Funding source and primary outcome change after trial completion are associated with a statistically significant primary outcome report on ClinicalTrials.gov.
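
    The kind of association test used in such a cross-sectional analysis can be sketched with a contingency table; the counts below are hypothetical, not the study's data:

```python
# Chi-square test of association between funding source and reporting a
# statistically significant primary outcome. Counts are hypothetical.
from scipy.stats import chi2_contingency

#        significant, not significant
table = [[120, 380],   # industry funded
         [ 80, 420]]   # non-industry funded
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```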

  12. Experimental statistics

    CERN Document Server

    Natrella, Mary Gibbons

    2005-01-01

    Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations

  13. Depth statistics

    OpenAIRE

    2012-01-01

    In 1975 John Tukey proposed a multivariate median which is the 'deepest' point in a given data cloud in R^d. Later, in measuring the depth of an arbitrary point z with respect to the data, David Donoho and Miriam Gasko considered hyperplanes through z and determined its 'depth' by the smallest portion of data that are separated by such a hyperplane. Since then, these ideas have proved extremely fruitful. A rich statistical methodology has developed that is based on data depth and, more general...

  14. Statistical mechanics

    CERN Document Server

    Sheffield, Scott

    2009-01-01

    In recent years, statistical mechanics has been increasingly recognized as a central domain of mathematics. Major developments include the Schramm-Loewner evolution, which describes two-dimensional phase transitions, random matrix theory, renormalization group theory and the fluctuations of random surfaces described by dimers. The lectures contained in this volume present an introduction to recent mathematical progress in these fields. They are designed for graduate students in mathematics with a strong background in analysis and probability. This book will be of particular interest to graduate students and researchers interested in modern aspects of probability, conformal field theory, percolation, random matrices and stochastic differential equations.

  15. Quantum statistics on graphs

    CERN Document Server

    Harrison, JM; Robbins, JM; 10.1098/rspa.2010.0254

    2011-01-01

    Quantum graphs are commonly used as models of complex quantum systems, for example molecules, networks of wires, and states of condensed matter. We consider quantum statistics for indistinguishable spinless particles on a graph, concentrating on the simplest case of abelian statistics for two particles. In spite of the fact that graphs are locally one-dimensional, anyon statistics emerge in a generalized form. A given graph may support a family of independent anyon phases associated with topologically inequivalent exchange processes. In addition, for sufficiently complex graphs, there appear new discrete-valued phases. Our analysis is simplified by considering combinatorial rather than metric graphs -- equivalently, a many-particle tight-binding model. The results demonstrate that graphs provide an arena in which to study new manifestations of quantum statistics. Possible applications include topological quantum computing, topological insulators, the fractional quantum Hall effect, superconductivity and molec...

  16. Statistics 101 for Radiologists.

    Science.gov (United States)

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced.
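
    A small sketch of the diagnostic-test measures reviewed, computed from a 2x2 confusion matrix with hypothetical counts:

```python
# Diagnostic test measures from a 2x2 table of test result vs disease status.
tp, fn, fp, tn = 90, 10, 20, 80   # hypothetical counts

sensitivity = tp / (tp + fn)               # P(test positive | disease)
specificity = tn / (tn + fp)               # P(test negative | no disease)
accuracy = (tp + tn) / (tp + fn + fp + tn)
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
print(sensitivity, specificity, accuracy, lr_pos, lr_neg)
```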

  17. CMS Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — This reference provides significant summary information about health expenditures and the Centers for Medicare & Medicaid Services' (CMS) programs. The...

  18. Statistical Neurodynamics.

    Science.gov (United States)

    Paine, Gregory Harold

    1982-03-01

    The primary objective of the thesis is to explore the dynamical properties of small nerve networks by means of the methods of statistical mechanics. To this end, a general formalism is developed and applied to elementary groupings of model neurons which are driven by either constant (steady state) or nonconstant (nonsteady state) forces. Neuronal models described by a system of coupled, nonlinear, first-order, ordinary differential equations are considered. A linearized form of the neuronal equations is studied in detail. A Lagrange function corresponding to the linear neural network is constructed which, through a Legendre transformation, provides a constant of motion. By invoking the Maximum-Entropy Principle with the single integral of motion as a constraint, a probability distribution function for the network in a steady state can be obtained. The formalism is implemented for some simple networks driven by a constant force; accordingly, the analysis focuses on a study of fluctuations about the steady state. In particular, a network composed of N noninteracting neurons, termed Free Thinkers, is considered in detail, with a view to interpretation and numerical estimation of the Lagrange multiplier corresponding to the constant of motion. As an archetypical example of a net of interacting neurons, the classical neural oscillator, consisting of two mutually inhibitory neurons, is investigated. It is further shown that in the case of a network driven by a nonconstant force, the Maximum-Entropy Principle can be applied to determine a probability distribution functional describing the network in a nonsteady state. The above examples are reconsidered with nonconstant driving forces which produce small deviations from the steady state. Numerical studies are performed on simplified models of two physical systems: the starfish central nervous system and the mammalian olfactory bulb. Discussions are given as to how statistical neurodynamics can be used to gain a better
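
    The Maximum-Entropy step invoked above has a standard form; as a reminder in our notation (not quoted from the thesis), maximizing entropy subject to a single conserved quantity yields an exponential distribution in that quantity:

```latex
% Maximum-Entropy Principle with one integral of motion E_i per state i:
\max_{\{p_i\}} \Big(-\sum_i p_i \ln p_i\Big)
\quad \text{s.t.} \quad \sum_i p_i = 1, \qquad \sum_i p_i E_i = \langle E \rangle
\;\;\Longrightarrow\;\;
p_i = \frac{e^{-\lambda E_i}}{\sum_j e^{-\lambda E_j}}
```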

  19. Basics of statistical physics

    CERN Document Server

    Müller-Kirsten, Harald J W

    2013-01-01

    Statistics links microscopic and macroscopic phenomena, and requires for this reason a large number of microscopic elements like atoms. The results are values of maximum probability or of averaging. This introduction to statistical physics concentrates on the basic principles, and attempts to explain these in simple terms supplemented by numerous examples. These basic principles include the difference between classical and quantum statistics, a priori probabilities as related to degeneracies, the vital aspect of indistinguishability as compared with distinguishability in classical physics, the differences between conserved and non-conserved elements, the different ways of counting arrangements in the three statistics (Maxwell-Boltzmann, Fermi-Dirac, Bose-Einstein), the difference between maximization of the number of arrangements of elements, and averaging in the Darwin-Fowler method. Significant applications to solids, radiation and electrons in metals are treated in separate chapters, as well as Bose-Eins...
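
    The three ways of counting mentioned lead to the familiar mean occupation numbers; as a quick reference in standard notation (these are textbook results, not quoted from this book):

```latex
% Mean occupation number of a single-particle state of energy \epsilon:
\bar{n}(\epsilon) = \frac{1}{e^{(\epsilon-\mu)/k_B T} + a},
\qquad
a = \begin{cases}
 0 & \text{Maxwell-Boltzmann (classical)}\\
 +1 & \text{Fermi-Dirac}\\
 -1 & \text{Bose-Einstein}
\end{cases}
```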

  20. The statistical significance of hippotherapy for children with psychomotor disabilities

    OpenAIRE

    Anca Nicoleta BNLBA

    2015-01-01

    Topic: The recovery and social integration of children with psychomotor disabilities is an important goal for the integration of Romania into the European Union. Studies conducted in this area reveal that people who practice therapy using the horse, on the recommendation of professionals, benefit from a much faster recovery and at a much higher level. Purpose of study: Identification of results for adaptive areas due to participation in a therapy program with the help of the horse for chil...

  1. Detecting Statistically Significant Communities of Triangle Motifs in Undirected Networks

    Science.gov (United States)

    2016-04-26

    right of the red line correspond to individuals who became associated with the author through marriage. Essentially there are three main clusters...public release. [8] Zachary, W., 1977. “An information flow model for conflict and fission in small groups,” Journal of Anthropological Research 33, pp

  2. The questioned p value: clinical, practical and statistical significance

    Directory of Open Access Journals (Sweden)

    Rosa Jiménez-Paneque

    2016-09-01

    The use of the p-value and of statistical significance has been under question from the early 1980s to the present day. Much has been discussed on this subject in the field of statistics and its applications, in particular in epidemiology and public health. The p-value and its equivalent, statistical significance, are moreover difficult concepts to grasp for the many health professionals involved in some way in research applied to their areas of work. Nevertheless, their meaning should be clear in intuitive terms even though they rest on theoretical concepts from the field of mathematical statistics. This article attempts to present the p-value as a concept that applies to everyday life and is therefore intuitively simple, but whose proper use cannot be separated from theoretical and methodological elements of intrinsic complexity. The reasons behind the criticisms of the p-value and of its use in isolation are also explained intuitively, chiefly the need to distinguish statistical significance from clinical significance, and some of the remedies proposed for these problems are mentioned. The article closes by alluding to the current tendency to vindicate its use, appealing to the convenience of using it in certain situations, and to the recent statement of the American Statistical Association on the subject.
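    In symbols, the p-value the article discusses is the standard tail probability under the null hypothesis: for an observed value $t_{\mathrm{obs}}$ of a test statistic $T$ (one-sided case),

```latex
p = P\left(T \geq t_{\mathrm{obs}} \mid H_0\right).
```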

  3. Detecting Statistically Significant Communities of Triangle Motifs in Undirected Networks

    Science.gov (United States)

    2015-03-16

    level of transitivity are often more stable, balanced and harmonious. For social networks, Granovetter [3] in his work on “strength of weak ties...of schedules for the independent teams, relative to the other conferences in the FBS. Applying the proposed clustering algorithm to the FBS network...correctly identified all 11 conferences, as well as those teams that belong to those conferences. The “independent” teams were also assigned to a conference

  5. Obesity Statistics.

    Science.gov (United States)

    Smith, Kristy Breuhl; Smith, Michael Seth

    2016-03-01

    Obesity is a chronic disease that is strongly associated with an increase in mortality and morbidity including certain types of cancer, cardiovascular disease, disability, diabetes mellitus, hypertension, osteoarthritis, and stroke. In adults, overweight is defined as a body mass index (BMI) of 25 kg/m² to 29 kg/m² and obesity as a BMI of greater than 30 kg/m². If current trends continue, it is estimated that, by the year 2030, 38% of the world's adult population will be overweight and another 20% obese. Significant global health strategies must reduce the morbidity and mortality associated with the obesity epidemic.
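    The cutoffs quoted follow the standard BMI definition, weight in kilograms divided by the square of height in meters:

```latex
\mathrm{BMI} = \frac{\text{mass (kg)}}{\text{height (m)}^2},
\qquad
25 \le \mathrm{BMI} \le 29 \;\; \text{overweight},
\qquad
\mathrm{BMI} > 30 \;\; \text{obese}.
```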

  6. Whither Statistics Education Research?

    Science.gov (United States)

    Watson, Jane

    2016-01-01

    This year marks the 25th anniversary of the publication of a "National Statement on Mathematics for Australian Schools", which was the first curriculum statement this country had including "Chance and Data" as a significant component. It is hence an opportune time to survey the history of the related statistics education…

  7. Nonparametric statistical methods

    CERN Document Server

    Hollander, Myles; Chicken, Eric

    2013-01-01

    Praise for the second edition: "This book should be an essential part of the personal library of every practicing statistician." (Technometrics). Thoroughly revised and updated, the new edition of Nonparametric Statistical Methods includes additional modern topics and procedures, more practical data sets, and new problems from real-life situations. The book continues to emphasize the importance of nonparametric methods as a significant branch of modern statistics and equips readers with the conceptual and technical skills necessary to select and apply the appropriate procedures for any given sit

  8. Clinical significance of adiponectin expression in colon cancer patients

    Directory of Open Access Journals (Sweden)

    Mustafa Canhoroz

    2014-01-01

    Conclusion: Adiponectin, which is secreted by adipose tissue, may have a role in the development and progression of cancer via its pro-apoptotic and/or anti-proliferative effects. Adiponectin expression in tumor tissues is likely to have a negative effect on disease-free survival in patients with stage II/III colon cancer; however, no statistically significant effect was demonstrated.

  9. Exploration Medical System Demonstration

    Science.gov (United States)

    Rubin, D. A.; Watkins, S. D.

    2014-01-01

    BACKGROUND: Exploration class missions will present significant new challenges and hazards to the health of the astronauts. Regardless of the intended destination, beyond low Earth orbit a greater degree of crew autonomy will be required to diagnose medical conditions, develop treatment plans, and implement procedures due to limited communications with ground-based personnel. SCOPE: The Exploration Medical System Demonstration (EMSD) project will act as a test bed on the International Space Station (ISS) to demonstrate to crew and ground personnel that an end-to-end medical system can assist clinician and non-clinician crew members in optimizing medical care delivery and data management during an exploration mission. Challenges facing exploration mission medical care include limited resources, inability to evacuate to Earth during many mission phases, and potential rendering of medical care by non-clinicians. This system demonstrates the integration of medical devices and informatics tools for managing evidence and decision making and can be designed to assist crewmembers in nominal, non-emergent situations and in emergent situations when they may be suffering from performance decrements due to environmental, physiological or other factors. PROJECT OBJECTIVES: The objectives of the EMSD project are to: a. Reduce or eliminate the time required of an on-orbit crew and ground personnel to access, transfer, and manipulate medical data. b. Demonstrate that the on-orbit crew has the ability to access medical data/information via an intuitive and crew-friendly solution to aid in the treatment of a medical condition. c. Develop a common data management framework that can be ubiquitously used to automate repetitive data collection, management, and communications tasks for all activities pertaining to crew health and life sciences. d. Ensure crew access to medical data during periods of restricted ground communication. e. Develop a common data management framework that

  10. READING STATISTICS AND RESEARCH

    Directory of Open Access Journals (Sweden)

    Reviewed by Yavuz Akbulut

    2008-10-01

    The book demonstrates the best and most conservative ways to decipher and critique research reports, particularly for social science researchers. In addition, new editions of the book are always better organized, effectively structured and meticulously updated in line with the developments in the field of research statistics. Even the most trivial issues are revisited and updated in new editions. For instance, purchasers of the previous editions might check the interpretation of skewness and kurtosis indices in the third edition (p. 34) and in the fifth edition (p. 29) to see how the author revisits every single detail. Theory and practice always go hand in hand in all editions of the book. Re-reading previous editions (e.g. the third edition) before reading the fifth edition gives the impression that the author never stops ameliorating his instructional text writing methods. In brief, “Reading Statistics and Research” is among the best sources showing research consumers how to understand and critically assess the statistical information and research results contained in technical research reports. In this respect, the review written by Mirko Savić in Panoeconomicus (2008, 2, pp. 249-252) will help readers get a more detailed overview of each chapter. I cordially urge beginning researchers to pick up a highlighter and conduct a detailed reading of the book. A thorough reading of the source will make researchers quite selective in appreciating the harmony between the data analysis, results and discussion sections of typical journal articles. If interested, beginning researchers might begin with this book to grasp the basics of research statistics, and prop up their critical research reading skills with some statistics package applications through the help of Dr. Andy Field's book, Discovering Statistics Using SPSS (second edition, published by Sage in 2005).

  11. LIMB Demonstration Project Extension and Coolside Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Goots, T.R.; DePero, M.J.; Nolan, P.S.

    1992-11-10

    This report presents results from the Limestone Injection Multistage Burner (LIMB) Demonstration Project Extension. LIMB is a furnace sorbent injection technology designed for the reduction of sulfur dioxide (SO2) and nitrogen oxides (NOx) emissions from coal-fired utility boilers. The testing was conducted on the 105 MWe, coal-fired, Unit 4 boiler at Ohio Edison's Edgewater Station in Lorain, Ohio. In addition to the LIMB Extension activities, the overall project included demonstration of the Coolside process for SO2 removal, for which a separate report has been issued. The primary purpose of the DOE LIMB Extension testing was to demonstrate the generic applicability of LIMB technology. The program sought to characterize the SO2 emissions that result when various calcium-based sorbents are injected into the furnace, while burning coals having sulfur content ranging from 1.6 to 3.8 weight percent. The four sorbents used included calcitic limestone, dolomitic hydrated lime, calcitic hydrated lime, and calcitic hydrated lime with a small amount of added calcium lignosulfonate. The results include those obtained for the various coal/sorbent combinations and the effects of the LIMB process on boiler and plant operations.

  12. SWORDS: A statistical tool for analysing large DNA sequences

    Indian Academy of Sciences (India)

    Probal Chaudhuri; Sandip Das

    2002-02-01

    In this article, we present some simple yet effective statistical techniques for analysing and comparing large DNA sequences. These techniques are based on frequency distributions of DNA words in a large sequence, and have been packaged into a software called SWORDS. Using sequences available in public-domain databases on the Internet, we demonstrate how SWORDS can be conveniently used by molecular biologists and geneticists to unmask biologically important features hidden in large sequences and assess their statistical significance.
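    The word-frequency idea behind SWORDS is easy to make concrete (a minimal Python sketch, not the SWORDS implementation itself): count all overlapping DNA words of a given length in a sequence and inspect the resulting frequency distribution.

```python
from collections import Counter

def dna_word_counts(seq: str, k: int) -> Counter:
    """Count overlapping DNA words (k-mers) of length k in a sequence."""
    seq = seq.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Toy sequence; real analyses would use sequences from public databases.
counts = dna_word_counts("ATGCGATACGCTTGAATGCG", k=3)
print(counts.most_common(5))  # the five most frequent 3-letter words
```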

  13. Worry, Intolerance of Uncertainty, and Statistics Anxiety

    Science.gov (United States)

    Williams, Amanda S.

    2013-01-01

    Statistics anxiety is a problem for most graduate students. This study investigates the relationship between intolerance of uncertainty, worry, and statistics anxiety. Intolerance of uncertainty was significantly related to worry, and worry was significantly related to three types of statistics anxiety. Six types of statistics anxiety were…

  15. Fuel Cell Demonstration Program

    Energy Technology Data Exchange (ETDEWEB)

    Gerald Brun

    2006-09-15

    In an effort to promote clean energy projects and aid in the commercialization of new fuel cell technologies, the Long Island Power Authority (LIPA) initiated a Fuel Cell Demonstration Program in 1999 with six-month deployments of Proton Exchange Membrane (PEM) non-commercial Beta model systems at partnering sites throughout Long Island. These projects facilitated significant developments in the technology, providing operating experience that allowed the manufacturer to produce fuel cells that were half the size of the Beta units and suitable for outdoor installations. In 2001, LIPA embarked on a large-scale effort to identify and develop measures that could improve the reliability and performance of future fuel cell technologies for electric utility applications, and the concept of establishing a fuel cell farm (Farm) of 75 units was developed. By the end of October 2001, 75 Lorax 2.0 fuel cells had been installed at the West Babylon substation on Long Island, making it the first fuel cell demonstration of its kind and size anywhere in the world at the time. Designed to help LIPA study the feasibility of using fuel cells to operate in parallel with LIPA's electric grid system, the Farm operated 120 fuel cells over its lifetime of over 3 years, including 3 generations of Plug Power fuel cells (Lorax 2.0, Lorax 3.0, Lorax 4.5). Of these 120 fuel cells, 20 Lorax 3.0 units operated under this Award from June 2002 to September 2004. In parallel with the operation of the Farm, LIPA recruited government and commercial/industrial customers to demonstrate fuel cells as on-site distributed generation. From December 2002 to February 2005, 17 fuel cells were tested and monitored at various customer sites throughout Long Island. The 37 fuel cells operated under this Award produced a total of 712,635 kWh. As fuel cell technology became more mature, performance improvements included a 1% increase in system efficiency. Including equipment, design, fuel, maintenance

  16. Introductory statistics for engineering experimentation

    CERN Document Server

    Nelson, Peter R; Coffin, Marie

    2003-01-01

    The Accreditation Board for Engineering and Technology (ABET) introduced a criterion, starting with their 1992-1993 site visits, that "Students must demonstrate a knowledge of the application of statistics to engineering problems." Since most engineering curricula are filled with requirements in their own discipline, they generally do not have time for a traditional two semesters of probability and statistics. Attempts to condense that material into a single semester often result in so much time being spent on probability that the statistics useful for designing and analyzing engineering/scientific experiments is never covered. This book was created to satisfy the needs of a one-semester course whose purpose is to introduce engineering/scientific students to the most useful statistical methods. - Provides the statistical design and analysis of engineering experiments & problems - Presents a student-friendly approach through providing statistical models for advanced learning techniques - Cove...

  17. Pain: A Statistical Account

    Science.gov (United States)

    Thacker, Michael A.; Moseley, G. Lorimer

    2017-01-01

    Perception is seen as a process that utilises partial and noisy information to construct a coherent understanding of the world. Here we argue that the experience of pain is no different; it is based on incomplete, multimodal information, which is used to estimate potential bodily threat. We outline a Bayesian inference model, incorporating the key components of cue combination, causal inference, and temporal integration, which highlights the statistical problems in everyday perception. It is from this platform that we are able to review the pain literature, providing evidence from experimental, acute, and persistent phenomena to demonstrate the advantages of adopting a statistical account in pain. Our probabilistic conceptualisation suggests a principles-based view of pain, explaining a broad range of experimental and clinical findings and making testable predictions. PMID:28081134
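    The cue-combination component of such a Bayesian model has a familiar closed form: under Gaussian assumptions, two independent cues $s_1$ and $s_2$ with variances $\sigma_1^2$ and $\sigma_2^2$ are combined by precision weighting, so the more reliable cue dominates the estimate:

```latex
\hat{s} = w_1 s_1 + w_2 s_2,
\qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2},
\qquad
\sigma_{\hat{s}}^2 = \left(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\right)^{-1}.
```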

  18. Applied statistical thermodynamics

    CERN Document Server

    Lucas, Klaus

    1991-01-01

    The book guides the reader from the foundations of statistical thermodynamics including the theory of intermolecular forces to modern computer-aided applications in chemical engineering and physical chemistry. The approach is new. The foundations of quantum and statistical mechanics are presented in a simple way and their applications to the prediction of fluid phase behavior of real systems are demonstrated. A particular effort is made to introduce the reader to explicit formulations of intermolecular interaction models and to show how these models influence the properties of fluid systems. The established methods of statistical mechanics - computer simulation, perturbation theory, and numerical integration - are discussed in a style appropriate for newcomers and are extensively applied. Numerous worked examples illustrate how practical calculations should be carried out.

  19. Statistical aspects of determinantal point processes

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper; Rubak, Ege

    The statistical aspects of determinantal point processes (DPPs) seem largely unexplored. We review the appealing properties of DPPs, demonstrate that they are useful models for repulsiveness, detail a simulation procedure, and provide freely available software for simulation and statistical infer...

  20. Polarized Light Corridor Demonstrations.

    Science.gov (United States)

    Davies, G. R.

    1990-01-01

    Eleven demonstrations of light polarization are presented. Each includes a brief description of the apparatus and the effect demonstrated. Illustrated are strain patterns, reflection, scattering, the Faraday Effect, interference, double refraction, the polarizing microscope, and optical activity. (CW)

  1. The POSEIDON Demonstrator

    NARCIS (Netherlands)

    Laar, P.J.L.J. van de

    2013-01-01

    In this chapter, we discuss the Poseidon demonstrator: a demonstrator that integrates the individual research results of all partners of the Poseidon project. After describing how the Poseidon demonstrator was built, deployed, and operated, we will not only show many results obtained from the demons

  2. Overhead Projector Demonstrations.

    Science.gov (United States)

    Kolb, Doris, Ed.

    1988-01-01

    Details two demonstrations for use with an overhead projector in a chemistry lecture. Includes "A Very Rapidly Growing Silicate Crystal" and "A Colorful Demonstration to Simulate Orbital Hybridization." The materials and directions for each demonstration are included as well as a brief explanation of the essential learning involved. (CW)

  3. Detecting significant changes in protein abundance

    Directory of Open Access Journals (Sweden)

    Kai Kammers

    2015-06-01

    We review and demonstrate how an empirical Bayes method, shrinking a protein's sample variance towards a pooled estimate, leads to far more powerful and stable inference to detect significant changes in protein abundance compared to ordinary t-tests. Using examples from isobaric mass labelled proteomic experiments we show how to analyze data from multiple experiments simultaneously, and discuss the effects of missing data on the inference. We also present easy to use open source software for normalization of mass spectrometry data and inference based on moderated test statistics.
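    The variance-shrinkage idea can be sketched in a few lines of Python (a simplified illustration of a moderated t-statistic; in practice the prior parameters s0_sq and d0 would be estimated from all proteins rather than fixed by hand):

```python
import numpy as np

def moderated_t(x1, x2, s0_sq, d0):
    """Simplified moderated t: shrink a protein's pooled sample variance
    toward a prior variance s0_sq carrying d0 prior degrees of freedom."""
    n1, n2 = len(x1), len(x2)
    d = n1 + n2 - 2                                   # residual df
    sp_sq = ((n1 - 1) * np.var(x1, ddof=1) +
             (n2 - 1) * np.var(x2, ddof=1)) / d       # pooled variance
    s_tilde_sq = (d0 * s0_sq + d * sp_sq) / (d0 + d)  # shrunken variance
    se = np.sqrt(s_tilde_sq * (1 / n1 + 1 / n2))
    return (np.mean(x1) - np.mean(x2)) / se

# Toy log-intensities for one protein in two conditions.
print(moderated_t([1.2, 1.4, 1.1], [0.7, 0.9, 0.8], s0_sq=0.05, d0=4))
```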

  4. Strategy Guideline: Demonstration Home

    Energy Technology Data Exchange (ETDEWEB)

    Savage, C.; Hunt, A.

    2012-12-01

    This guideline will provide a general overview of the different kinds of demonstration home projects, a basic understanding of the different roles and responsibilities involved in the successful completion of a demonstration home, and an introduction into some of the lessons learned from actual demonstration home projects. Also, this guideline will specifically look at the communication methods employed during demonstration home projects. And lastly, we will focus on how to best create a communication plan for including an energy efficient message in a demonstration home project and carry that message to successful completion.

  6. Tidd PFBC demonstration project

    Energy Technology Data Exchange (ETDEWEB)

    Marrocco, M. [American Electric Power, Columbus, OH (United States)

    1997-12-31

    The Tidd project was one of the first joint government-industry ventures to be approved by the US Department of Energy (DOE) in its Clean Coal Technology Program. In March 1987, DOE signed an agreement with the Ohio Power Company, a subsidiary of American Electric Power, to refurbish the then-idle Tidd plant on the banks of the Ohio River with advanced pressurized fluidized bed technology. Testing ended after 49 months of operation, 100 individual tests, and the generation of more than 500,000 megawatt-hours of electricity. The demonstration plant has met its objectives. The project showed that more than 95 percent of sulfur dioxide pollutants could be removed inside the advanced boiler using the advanced combustion technology, giving future power plants an attractive alternative to expensive, add-on scrubber technology. In addition to its sulfur removal effectiveness, the plant's sustained periods of steady-state operation boosted its availability significantly above design projections, heightening confidence that pressurized fluidized bed technology will be a reliable, baseload technology for future power plants. The technology also controlled the release of nitrogen oxides to levels well below the allowable limits set by federal air quality standards. It also produced a dry waste product that is much easier to handle than wastes from conventional power plants and will likely have commercial value when produced by future power plants.

  7. Plastic Surgery Statistics

    Science.gov (United States)

    Plastic surgery procedural statistics from the American Society of Plastic Surgeons, reported by year.

  8. MQSA National Statistics

    Science.gov (United States)

    National statistics from the Mammography Quality Standards Act (MQSA) program, including archived scorecard statistics for 2015-2017.

  9. Statistics Anxiety among Postgraduate Students

    Science.gov (United States)

    Koh, Denise; Zawi, Mohd Khairi

    2014-01-01

    Most postgraduate programmes that have research components require students to take at least one course of research statistics. Not all postgraduate programmes are science based; a significant number of postgraduate students from the social sciences will also be taking statistics courses as they try to complete their…

  10. Statistical Model for Content Extraction

    DEFF Research Database (Denmark)

    2011-01-01

    We present a statistical model for content extraction from HTML documents. The model operates on the Document Object Model (DOM) tree of the corresponding HTML document. It evaluates each tree node and associated statistical features to predict the significance of the node towards the overall content...

  11. Manufacturing Demonstration Facility (MDF)

    Data.gov (United States)

    Federal Laboratory Consortium — The U.S. Department of Energy Manufacturing Demonstration Facility (MDF) at Oak Ridge National Laboratory (ORNL) provides a collaborative, shared infrastructure to...

  12. Predict! Teaching Statistics Using Informational Statistical Inference

    Science.gov (United States)

    Makar, Katie

    2013-01-01

    Statistics is one of the most widely used topics for everyday life in the school mathematics curriculum. Unfortunately, the statistics taught in schools focuses on calculations and procedures before students have a chance to see it as a useful and powerful tool. Researchers have found that a dominant view of statistics is as an assortment of tools…

  13. Performance demonstration by ROC method

    Science.gov (United States)

    Wessel, Hannelore; Nockemann, Christina; Tillack, Gerd-Rüdiger; Mattis, Arne

    1994-12-01

    The question of the efficiency of a material testing system is important when a competing or advanced system appears on the market. The comparison of the different systems can be done partly through the technical specifications of the systems, but not all parameters can be expressed as measured values, especially not the influence of human inspectors. A testing system in the field of NDT - for example weld inspection - often consists of several different devices and components (radiographic film, its irradiation and development, conventional inspection with a light box, human inspector). The performance of such a system can be demonstrated and compared with similar or advanced methods by a statistical method, the ROC method. This quantitative measure of testing performance allows the comparison of complex NDT systems, which will be demonstrated in detail by the comparison of conventional weld inspection with inspection of welds using the digitised image of the radiographs.
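    Independently of the NDT systems described, the ROC construction itself is easy to illustrate: sweep a decision threshold over inspection scores and record the true- and false-positive rates at each step (a minimal sketch with made-up scores and 0/1 flaw labels):

```python
import numpy as np

def roc_points(scores, labels):
    """Empirical ROC: (FPR, TPR) after each threshold step; labels are 0/1."""
    scores, labels = np.asarray(scores), np.asarray(labels, dtype=float)
    order = np.argsort(-scores)                    # descending scores
    labels = labels[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (len(labels) - labels.sum())
    return fpr, tpr

fpr, tpr = roc_points([0.9, 0.8, 0.7, 0.6, 0.4], [1, 1, 0, 1, 0])
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoidal AUC
print(f"AUC = {auc:.3f}")
```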

  14. Toy Demonstrator's "VISIT" Handbook.

    Science.gov (United States)

    Levenstein, Phyllis

    The role of the toy demonstrator in a home-based, mother-involved intervention effort (Verbal Interaction Project) is presented in this handbook for staff members. It is believed that the prerequisites for functioning in the toy demonstrator's role are a sense of responsibility, patience with the children and their mothers, and willingness to be…

  15. Levitation Kits Demonstrate Superconductivity.

    Science.gov (United States)

    Worthy, Ward

    1987-01-01

    Describes the "Project 1-2-3" levitation kit used to demonstrate superconductivity. Summarizes the materials included in the kit. Discusses the effect demonstrated and gives details on how to obtain kits. Gives an overview of the documentation that is included. (CW)

  16. Kinetics and Catalysis Demonstrations.

    Science.gov (United States)

    Falconer, John L.; Britten, Jerald A.

    1984-01-01

    Eleven videotaped kinetics and catalysis demonstrations are described. Demonstrations include the clock reaction, oscillating reaction, hydrogen oxidation in air, hydrogen-oxygen explosion, acid-base properties of solids, high- and low-temperature zeolite reactivity, copper catalysis of ammonia oxidation and sodium peroxide decomposition, ammonia…

  17. Better Ira Remsen Demonstration

    Science.gov (United States)

    Dalby, David K.; Maynard, James H.; Moore, John W.

    2011-01-01

    Many versions of the classic Ira Remsen experience involving copper and concentrated nitric acid have been used as lecture demonstrations. Remsen's original reminiscence from 150 years ago is included in the Supporting Information, and his biography can be found on the Internet. This article presents a new version that makes the demonstration more…

  19. CURRENT STATUS OF NONPARAMETRIC STATISTICS

    Directory of Open Access Journals (Sweden)

    Orlov A. I.

    2015-02-01

    Nonparametric statistics is one of the five growth points of applied mathematical statistics. Despite the large number of publications on specific questions of nonparametric statistics, the internal structure of this research direction has remained undeveloped. The purpose of this article is to delineate its subareas, based on existing scientific practice, and to classify investigations on nonparametric statistical methods. Nonparametric statistics makes it possible to draw statistical inferences, in particular to estimate characteristics of a distribution and to test statistical hypotheses, without the usual weakly justified assumption that the distribution functions of the samples belong to a particular parametric family. A widespread example is the belief that statistical data often follow the normal distribution. Analysis of observational results, in particular of measurement errors, almost always leads to the same conclusion: in most cases the actual distribution differs significantly from normal. Uncritical use of the normality hypothesis often leads to significant errors, for example in the rejection of outlying observations (outliers), in statistical quality control, and in other settings. It is therefore advisable to use nonparametric methods, which impose only weak requirements on the distribution functions of the observations; usually only their continuity is assumed. On the basis of a generalization of numerous studies it can be stated that, to date, nonparametric methods can solve almost the same range of problems as parametric methods. Claims in the literature that nonparametric methods have less power, or require larger sample sizes, than parametric methods are incorrect. Note that in nonparametric statistics, as in mathematical statistics in general, a number of unresolved problems remain
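    A typical nonparametric inference of the kind surveyed here, requiring only continuity rather than normality, is the Wilcoxon-Mann-Whitney rank test; a minimal example using SciPy:

```python
from scipy.stats import mannwhitneyu

# Two small samples; no assumption of normality is needed.
a = [1.1, 2.3, 1.9, 3.0, 2.2]
b = [2.8, 3.5, 3.1, 4.0, 3.7]
stat, p = mannwhitneyu(a, b, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```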

  20. Statistical Mechanics Algorithms and Computations

    CERN Document Server

    Krauth, Werner

    2006-01-01

    This book discusses the computational approach in modern statistical physics, adopting simple language and an attractive format of many illustrations, tables and printed algorithms. The discussion of key subjects in classical and quantum statistical physics will appeal to students, teachers and researchers in physics and related sciences. The focus is on orientation, with implementation details kept to a minimum. The book also demonstrates the close relation of the computational approach to other approaches in theoretical phy

  1. Image quantization: statistics and modeling

    Science.gov (United States)

    Whiting, Bruce R.; Muka, Edward

    1998-07-01

    A method for analyzing the effects of quantization, developed for temporal one-dimensional signals, is extended to two- dimensional radiographic images. By calculating the probability density function for the second order statistics (the differences between nearest neighbor pixels) and utilizing its Fourier transform (the characteristic function), the effect of quantization on image statistics can be studied by the use of standard communication theory. The approach is demonstrated by characterizing the noise properties of a storage phosphor computed radiography system and the image statistics of a simple radiographic object (cylinder) and by comparing the model to experimental measurements. The role of quantization noise and the onset of contouring in image degradation are explained.
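    The second-order statistics described (differences between nearest-neighbor pixels) and their characteristic function are straightforward to compute for a two-dimensional image; a minimal sketch on synthetic data (the original analysis used radiographic images):

```python
import numpy as np

def difference_statistics(img):
    """Empirical PDF of horizontal nearest-neighbor pixel differences
    and its characteristic function (FFT of the sampled PDF)."""
    diffs = np.diff(img.astype(float), axis=1).ravel()
    values = np.arange(diffs.min(), diffs.max() + 1)
    pdf = np.array([(diffs == v).mean() for v in values])
    char_fn = np.fft.fft(pdf)  # sampled characteristic function
    return values, pdf, char_fn

img = np.random.randint(0, 256, size=(64, 64))   # synthetic 8-bit image
values, pdf, cf = difference_statistics(img)
print(values[pdf.argmax()], abs(cf[0]))          # modal difference; |phi(0)| = 1
```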

  2. Nursing student attitudes toward statistics.

    Science.gov (United States)

    Mathew, Lizy; Aktan, Nadine M

    2014-04-01

    Nursing is guided by evidence-based practice. To understand and apply research to practice, nurses must be knowledgeable in statistics; therefore, it is crucial to promote a positive attitude toward statistics among nursing students. The purpose of this quantitative cross-sectional study was to assess differences in attitudes toward statistics among undergraduate nursing, graduate nursing, and undergraduate non-nursing students. The Survey of Attitudes Toward Statistics Scale-36 (SATS-36) was used to measure student attitudes, with higher scores denoting more positive attitudes. The convenience sample was composed of 175 students from a public university in the northeastern United States. Statistically significant relationships were found among some of the key demographic variables. Graduate nursing students had a significantly lower score on the SATS-36, compared with baccalaureate nursing and non-nursing students. Therefore, an innovative nursing curriculum that incorporates knowledge of student attitudes and key demographic variables may result in favorable outcomes.

  3. Methanol Cannon Demonstrations Revisited.

    Science.gov (United States)

    Dolson, David A.; And Others

    1995-01-01

    Describes two variations on the traditional methanol cannon demonstration. The first variation is a chain reaction using real metal chains. The second example involves using easily available components to produce sequential explosions that can be musical in nature. (AIM)

  4. TENCompetence tool demonstration

    NARCIS (Netherlands)

    Kluijfhout, Eric

    2010-01-01

    Kluijfhout, E. (2009). TENCompetence tool demonstration. Presented at Zorgacademie Parkstad (Health Academy Parkstad), Limburg Leisure Academy, Life Long Learning Limburg and a number of regional educational institutions. May, 18, 2009, Heerlen, The Netherlands: Open University of the Netherlands, T

  5. Land Management Research Demonstration

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — In 2002, Neal Smith National Wildlife Refuge became one of the first Land Management and Research Demonstration (LMRD) sites. These sites are intended to serve as...

  6. Pancreaticopleural fistula : CT demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Hahm, Jin Kyeung [Chuncheon Medical Center, ChunChon (Korea, Republic of)

    1997-03-01

    In patients with chronic pancreatitis, the pancreaticopleural fistula is known to cause recurrent exudative or hemorrhagic pleural effusions. These are often large in volume and require treatment, unlike the effusions in acute pancreatitis. Diagnosis can be made either by the finding of elevated pleural fluid amylase level or, using imaging studies, by the direct demonstration of the fistulous tract. We report two cases of pancreaticopleural fistula demonstrated by computed tomography.

  7. Education Payload Operation - Demonstrations

    Science.gov (United States)

    Keil, Matthew

    2009-01-01

    Education Payload Operation - Demonstrations (EPO-Demos) are recorded video education demonstrations performed on the International Space Station (ISS) by crewmembers using hardware already onboard the ISS. EPO-Demos are videotaped, edited, and used to enhance existing NASA education resources and programs for educators and students in grades K-12. EPO-Demos are designed to support the NASA mission to inspire the next generation of explorers.

  8. Edible Astronomy Demonstrations

    Science.gov (United States)

    Lubowich, Donald A.

    2007-12-01

    Astronomy demonstrations with edible ingredients are an effective way to increase student interest and knowledge of astronomical concepts. This approach has been successful with all age groups from elementary school through college students - and the students remember these demonstrations after they are presented. In this poster I describe edible demonstrations I have created to simulate the expansion of the universe (using big-bang chocolate chip cookies); differentiation during the formation of the Earth and planets (using chocolate or chocolate milk with marshmallows, cereal, candy pieces or nuts); and radioactivity/radioactive dating (using popcorn). Other possible demonstrations include: plate tectonics (crackers with peanut butter and jelly); convection (miso soup or hot chocolate); mud flows on Mars (melted chocolate poured over angel food cake); formation of the Galactic disk (pizza); formation of spiral arms (coffee with cream); the curvature of Space (Pringles); constellations patterns with chocolate chips and chocolate chip cookies; planet shaped cookies; star shaped cookies with different colored frostings; coffee or chocolate milk measurement of solar radiation; Oreo cookie lunar phases. Sometimes the students eat the results of the astronomical demonstrations. These demonstrations are an effective teaching tool and can be adapted for cultural, culinary, and ethnic differences among the students.

  9. Testing University Rankings Statistically: Why this Perhaps is not such a Good Idea after All. Some Reflections on Statistical Power, Effect Size, Random Sampling and Imaginary Populations

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2012-01-01

    In this paper we discuss and question the use of statistical significance tests in relation to university rankings, as recently suggested. We outline the assumptions behind, and interpretations of, statistical significance tests and relate these to examples from the recent SCImago Institutions Ranking. By use of statistical power analyses and demonstration of effect sizes, we emphasize that the importance of empirical findings lies in “differences that make a difference” and not in statistical significance tests per se. Finally we discuss the crucial assumption of randomness and question the presumption that randomness is present in the university ranking data. We conclude that the application of statistical significance tests in relation to university rankings, as recently advocated, is problematic and can be misleading.
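    The authors' point about power and effect size can be made concrete: an effect size such as Cohen's d, together with the power to detect it, carries the substantive information that a bare significance test does not. A sketch with illustrative parameter values, using statsmodels:

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

def cohens_d(x, y):
    """Standardized mean difference with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / sp

x = [5.1, 4.9, 5.6, 5.2, 4.8]                 # toy ranking scores, group 1
y = [4.6, 4.9, 4.5, 5.0, 4.4]                 # toy ranking scores, group 2
print(f"Cohen's d = {cohens_d(x, y):.2f}")

# Power to detect a small effect (d = 0.2) with 50 units per group:
power = TTestIndPower().power(effect_size=0.2, nobs1=50, alpha=0.05)
print(f"power = {power:.2f}")   # low power: 'significance' says little here
```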

  10. Gene Cluster Statistics with Gene Families

    Science.gov (United States)

    Durand, Dannie

    2009-01-01

    Identifying genomic regions that descended from a common ancestor is important for understanding the function and evolution of genomes. In distantly related genomes, clusters of homologous gene pairs are evidence of candidate homologous regions. Demonstrating the statistical significance of such “gene clusters” is an essential component of comparative genomic analyses. However, currently there are no practical statistical tests for gene clusters that model the influence of the number of homologs in each gene family on cluster significance. In this work, we demonstrate empirically that failure to incorporate gene family size in gene cluster statistics results in overestimation of significance, leading to incorrect conclusions. We further present novel analytical methods for estimating gene cluster significance that take gene family size into account. Our methods do not require complete genome data and are suitable for testing individual clusters found in local regions, such as contigs in an unfinished assembly. We consider pairs of regions drawn from the same genome (paralogous clusters), as well as regions drawn from two different genomes (orthologous clusters). Determining cluster significance under general models of gene family size is computationally intractable. By assuming that all gene families are of equal size, we obtain analytical expressions that allow fast approximation of cluster probabilities. We evaluate the accuracy of this approximation by comparing the resulting gene cluster probabilities with cluster probabilities obtained by simulating a realistic, power-law distributed model of gene family size, with parameters inferred from genomic data. Surprisingly, despite the simplicity of the underlying assumption, our method accurately approximates the true cluster probabilities. It slightly overestimates these probabilities, yielding a conservative test. We present additional simulation results indicating the best choice of parameter values for data
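    A simulation-style illustration of the kind of null model involved (not the authors' analytical test; all parameter values are hypothetical): estimate by Monte Carlo the chance that two randomly placed windows of w genes share at least m homologous pairs when homologs are assigned at random positions.

```python
import random

def cluster_p_value(n1, n2, n_hom, w, m, trials=2000):
    """Monte Carlo null: P(two random windows of w genes share >= m homologs).
    n_hom genes of genome 1 each get a homolog at a uniform spot in genome 2."""
    hits = 0
    for _ in range(trials):
        partner = {g: random.randrange(n2)
                   for g in random.sample(range(n1), n_hom)}
        s1, s2 = random.randrange(n1 - w), random.randrange(n2 - w)
        shared = sum(1 for g, h in partner.items()
                     if s1 <= g < s1 + w and s2 <= h < s2 + w)
        hits += shared >= m
    return hits / trials

print(cluster_p_value(n1=5000, n2=5000, n_hom=2000, w=50, m=3))
```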

  11. The foundations of statistics

    CERN Document Server

    Savage, Leonard J

    1972-01-01

    Classic analysis of the foundations of statistics and development of personal probability, one of the greatest controversies in modern statistical thought. Revised edition. Calculus, probability, statistics, and Boolean algebra are recommended.

  12. Adrenal Gland Tumors: Statistics

    Science.gov (United States)

    A primary adrenal gland tumor is very uncommon, and exact statistics are not available for this type of tumor.

  13. Blood Facts and Statistics

    Science.gov (United States)

    Facts and statistics about blood: blood needs, the blood supply, and blood components, including whole blood and red blood cells.

  14. Algebraic statistics computational commutative algebra in statistics

    CERN Document Server

    Pistone, Giovanni; Wynn, Henry P

    2000-01-01

    Written by pioneers in this exciting new field, Algebraic Statistics introduces the application of polynomial algebra to experimental design, discrete probability, and statistics. It begins with an introduction to Gröbner bases and a thorough description of their applications to experimental design. A special chapter covers the binary case with new application to coherent systems in reliability and two level factorial designs. The work paves the way, in the last two chapters, for the application of computer algebra to discrete probability and statistical modelling through the important concept of an algebraic statistical model.As the first book on the subject, Algebraic Statistics presents many opportunities for spin-off research and applications and should become a landmark work welcomed by both the statistical community and its relatives in mathematics and computer science.

  15. Solar renovation demonstration projects

    Energy Technology Data Exchange (ETDEWEB)

    Bruun Joergensen, O. [ed.

    1998-10-01

    In the framework of the IEA SHC Programme, a Task on building renovation was initiated, "Task 20, Solar Energy in Building Renovation". In a part of the task, Subtask C "Design of Solar Renovation Projects", different solar renovation demonstration projects were developed. The objective of Subtask C was to demonstrate the application of advanced solar renovation concepts on real buildings. This report documents 16 different solar renovation demonstration projects, including the design processes of the projects. The projects include the renovation of houses, schools, laboratories, and factories. Several solar techniques were used: building-integrated solar collectors, glazed balconies, ventilated solar walls, transparent insulation, second-skin facades, daylight elements and photovoltaic systems. These techniques are used in several simple as well as more complex system designs.

  16. Weed Identification Field Training Demonstrations.

    Science.gov (United States)

    Murdock, Edward C.; And Others

    1986-01-01

    Reviews efforts undertaken in weed identification field training sessions for agriprofessionals in South Carolina. Data over a four year period (1980-1983) revealed that participants showed significant improvement in their ability to identify weeds. Reaffirms the value of the field demonstration technique. (ML)

  18. PROBABILITY AND STATISTICS.

    Science.gov (United States)

    Keywords: statistical analysis, probability, information theory, differential equations, statistical processes, stochastic processes, multivariate analysis, distribution theory, decision theory, measure theory, optimization.

  19. Demonstrating marketing accountability.

    Science.gov (United States)

    Gombeski, William R; Britt, Jason; Taylor, Jan; Riggs, Karen; Wray, Tanya; Adkins, Wanda; Springate, Suzanne

    2008-01-01

    Pressure on health care marketers to demonstrate effectiveness of their strategies and show their contribution to organizational goals is growing. A seven-tiered model based on the concepts of structure (having the right people, systems), process (doing the right things in the right way), and outcomes (results) is discussed. Examples of measures for each tier are provided and the benefits of using the model as a tool for measuring, organizing, tracking, and communicating appropriate information are provided. The model also provides a framework for helping management understand marketing's value and can serve as a vehicle for demonstrating marketing accountability.

  20. Demonstrating Supernova Remnant Evolution

    Science.gov (United States)

    Leahy, Denis A.; Williams, Jacqueline

    2017-01-01

    We have created a software tool to calculate and display supernova remnant evolution, including all stages from the early ejecta-dominated phase to late-time merging with the interstellar medium. The software was created using Python, and can be distributed as Python code or as an executable file. The purpose of the software is to demonstrate the different phases and transitions that a supernova remnant undergoes, and it will be used in upper-level undergraduate astrophysics courses as a teaching tool. The usage of the software and its graphical user interface will be demonstrated.

  1. Gigashot Optical Laser Demonstrator

    Energy Technology Data Exchange (ETDEWEB)

    Deri, R. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-13

    The Gigashot Optical Laser Demonstrator (GOLD) project has demonstrated a novel optical amplifier for high energy pulsed lasers operating at high repetition rates. The amplifier stores enough pump energy to support >10 J of laser output, and employs conduction cooling for thermal management to avoid the need for expensive and bulky high-pressure helium subsystems. A prototype amplifier was fabricated, pumped with diode light at 885 nm, and characterized. Experimental results show that the amplifier provides sufficient small-signal gain and sufficiently low wavefront and birefringence impairments to prove useful in laser systems, at repetition rates up to 60 Hz.

  2. Criteria evolution of autism spectrum disorders and the significance of the criteria in the Diagnostic and Statistical Manual of Mental Disorders, 5th edition

    Institute of Scientific and Technical Information of China (English)

    陈艳妮

    2014-01-01

    Autism spectrum disorders have gradually become widely known, and the concept continues to evolve; grasping these changes and their significance in a timely and accurate manner is important for clinical work. This paper reviews the origin and evolution of the term, as well as the changes to the criteria in the latest Diagnostic and Statistical Manual of Mental Disorders, 5th edition, and their significance.

  3. Monty Roberts’ public demonstrations

    NARCIS (Netherlands)

    Loftus, Loni; Marks, Kelly; Jones-McVey, Rosie; Gonzales, Jose L.; Fowler, Veronica L.

    2016-01-01

    Effective training of horses relies on the trainer’s awareness of learning theory and equine ethology, and should be undertaken with skill and time. Some trainers, such as Monty Roberts, share their methods through the medium of public demonstrations. This paper describes the opportunistic analys

  4. Arctic Craft Demonstration Report

    Science.gov (United States)

    2012-11-01

    it received a lot of attention from the local population. Demonstration personnel, both Coast Guard and contractors, were asked to be receptive to...www.uscg.mil/top/missions/. Counter-Drug Interdiction and Alien Migrant Interdiction operations are currently not included. In the non-Polar regions

  5. Participatory Lecture Demonstrations.

    Science.gov (United States)

    Battino, Rubin

    1979-01-01

    The use of participatory lecture demonstrations in the classroom is described. Examples are given for the following topics: chromatography, chemical kinetics, balancing equations, the gas laws, kinetic molecular theory, Henry's law of gas solubility, electronic energy levels in atoms, and translational, vibrational, and rotational energies of…

  6. Demonstrating the Gas Laws.

    Science.gov (United States)

    Holko, David A.

    1982-01-01

    Presents a complete computer program demonstrating the relationship between volume and pressure for Boyle's Law, volume and temperature for Charles' Law, and volume and moles of gas for Avogadro's Law. The programming reinforces students' application of the gas laws and equates a simulated moving piston to theoretical values derived using the ideal gas law.…

  7. Polarized Light: Three Demonstrations.

    Science.gov (United States)

    Goehmann, Ruth; Welty, Scott

    1984-01-01

    Describes three demonstrations used in the Chicago Museum of Science and Industry polarized light show. The procedures employed are suitable for the classroom by using smaller polarizers and an overhead projector. Topic areas include properties of cellophane tape, nondisappearing arrows, and rope through a picket fence. (JN)

  8. Passive damping technology demonstration

    Science.gov (United States)

    Holman, Robert E.; Spencer, Susan M.; Austin, Eric M.; Johnson, Conor D.

    1995-05-01

    A Hughes Space Company study was undertaken to (1) acquire the analytical capability to design effective passive damping treatments and to predict the damped dynamic performance with reasonable accuracy; (2) demonstrate reasonable test and analysis agreement for both baseline and damped baseline hardware; and (3) achieve a 75% reduction in peak transmissibility and 50% reduction in rms random vibration response. Hughes Space Company teamed with CSA Engineering to learn how to apply passive damping technology to their products successfully in a cost-effective manner. Existing hardware was selected for the demonstration because (1) previous designs were lightly damped and had difficulty in vibration test; (2) multiple damping concepts could be investigated; (3) the finite element model, hardware, and test fixture would be available; and (4) damping devices could be easily implemented. Bracket, strut, and sandwich panel damping treatments that met the performance goals were developed by analysis. The baseline, baseline with damped bracket, and baseline with damped strut designs were built and tested. The test results were in reasonable agreement with the analytical predictions and demonstrated that the desired reduction in dynamic response could be achieved. Having successfully demonstrated this approach, it can now be used with confidence for future designs as a means for reducing weight and enhancing reliability.

  9. PHARUS ASAR demonstrator

    NARCIS (Netherlands)

    Smith, A.J.E.; Bree, R.J.P. van; Calkoen, C.J.; Dekker, R.J.; Otten, M.P.G.; Rossum, W.L. van

    2001-01-01

    PHARUS is a polarimetric phased array C-band Synthetic Aperture Radar (SAR), designed and built for airborne use. Advanced SAR (ASAR) data in image and alternating polarization mode have been simulated with PHARUS to demonstrate the use of Envisat for a number of typical SAR applications that are no

  10. Distance Learning Environment Demonstration.

    Science.gov (United States)

    1996-11-01

    The Distance Learning Environment Demonstration (DLED) was a comparative study of distributed multimedia computer-based training using low cost high...measurement. The DLED project provides baseline research in the effective use of distance learning and multimedia communications over a wide area ATM/SONET

  11. Calculus Demonstrations Using MATLAB

    Science.gov (United States)

    Dunn, Peter K.; Harman, Chris

    2002-01-01

    The note discusses ways in which technology can be used in the calculus learning process. In particular, five MATLAB programs are detailed for use by instructors or students that demonstrate important concepts in introductory calculus: Newton's method, differentiation and integration. Two of the programs are animated. The programs and the…
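    Of the concepts listed, Newton's method is the easiest to reproduce outside MATLAB; a minimal Python equivalent of such a demonstration (illustrative, not the authors' program):

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Root of x^2 - 2 from x0 = 1: converges quadratically to sqrt(2).
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))
```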

  12. Palpability Support Demonstrated

    DEFF Research Database (Denmark)

    Brønsted, Jeppe; Grönvall, Erik; Fors, David

    2007-01-01

    is based on the Active Surfaces concept in which therapists rehabilitate physically and mentally impaired children by means of an activity that stimulates the children both physically and cognitively. In this paper we demonstrate how palpability can be supported in a prototype of the Active Surfaces...

  14. Wave Mechanics or Wave Statistical Mechanics

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    By comparing the equations of motion of geometrical optics with those of classical statistical mechanics, this paper finds that the proper analogy is between geometrical optics and classical statistical mechanics, not between geometrical optics and classical mechanics. Furthermore, by comparing the classical limit of quantum mechanics with classical statistical mechanics, it finds that the classical limit of quantum mechanics is classical statistical mechanics, not classical mechanics, and hence that quantum mechanics is a natural generalization of classical statistical mechanics rather than of classical mechanics. Quantum mechanics in its true appearance is therefore a wave statistical mechanics rather than a wave mechanics.

  15. Oscillations in counting statistics

    CERN Document Server

    Wilk, Grzegorz

    2016-01-01

    The very large transverse momenta and large multiplicities available in present LHC experiments on pp collisions allow a much closer look at the corresponding distributions. Some time ago we discussed a possible physical meaning of apparent log-periodic oscillations showing up in p_T distributions (suggesting that the exponent of the observed power-like behavior is complex). In this talk we concentrate on another example of oscillations, this time connected with multiplicity distributions P(N). We argue that some combinations of the experimentally measured values of P(N) (satisfying the recurrence relations used in the description of cascade-stochastic processes in quantum optics) exhibit distinct oscillatory behavior, not observed in the usual Negative Binomial Distributions used to fit data. These oscillations provide yet another example of oscillations seen in counting statistics in many different, apparently very disparate branches of physics further demonstrating the universality of this phenomenon.
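
    The recurrence relations referred to above are of the form (N+1)P(N+1) = g(N)P(N); for a Negative Binomial Distribution g(N) is exactly linear in N, which is why the NBD itself shows no oscillations. A short Python sketch with illustrative (not fitted) parameters:

      import numpy as np
      from scipy.stats import nbinom

      k, p = 10, 0.3                      # NBD shape parameters (illustrative)
      N = np.arange(0, 60)
      P = nbinom.pmf(N, k, p)             # multiplicity distribution P(N)

      # Recurrence coefficient g(N) = (N+1) P(N+1) / P(N).
      g = (N[:-1] + 1) * P[1:] / P[:-1]

      # For an NBD the increments of g(N) are constant: linear, no oscillation.
      print("g(N) increments:", np.round(np.diff(g)[:5], 6))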

  16. Statistical Computing in Information Society

    Directory of Open Access Journals (Sweden)

    Domański Czesław

    2015-12-01

    Full Text Available In the presence of massive, highly heterogeneous data we need to change our statistical thinking and statistical education in order to adapt both classical statistics and software development to the new challenges. Significant developments include open data, big data and data visualisation; they are changing the nature of the evidence that is available, the ways in which it is presented and the skills needed for its interpretation. The amount of information is not the most important issue – the real challenge is the combination of the amount and the complexity of data. Moreover, a need arises to know how uncertain situations should be dealt with and what decisions should be taken when information is insufficient (which can also be observed for large datasets). In the paper we discuss the idea of computational statistics as a new approach to statistical teaching, and we try to answer the question of how best to prepare the next generation of statisticians.

  17. [Comment on] Statistical discrimination

    Science.gov (United States)

    Chinn, Douglas

    In the December 8, 1981, issue of Eos, a news item reported the conclusion of a National Research Council study that sexual discrimination against women with Ph.D.'s exists in the field of geophysics. Basically, the item reported that even when allowances are made for motherhood the percentage of female Ph.D.'s holding high university and corporate positions is significantly lower than the percentage of male Ph.D.'s holding the same types of positions. The sexual discrimination conclusion, based only on these statistics, assumes that there are no basic psychological differences between men and women that might cause different populations in the employment group studied. Therefore, the reasoning goes, after taking into account possible effects from differences related to anatomy, such as women stopping their careers in order to bear and raise children, the statistical distributions of positions held by male and female Ph.D.'s ought to be very similar to one another. Any significant differences between the distributions must be caused primarily by sexual discrimination.

  18. Explorations in statistics: statistical facets of reproducibility.

    Science.gov (United States)

    Curran-Everett, Douglas

    2016-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eleventh installment of Explorations in Statistics explores statistical facets of reproducibility. If we obtain an experimental result that is scientifically meaningful and statistically unusual, we would like to know that our result reflects a general biological phenomenon that another researcher could reproduce if (s)he repeated our experiment. But more often than not, we may learn this researcher cannot replicate our result. The National Institutes of Health and the Federation of American Societies for Experimental Biology have created training modules and outlined strategies to help improve the reproducibility of research. These particular approaches are necessary, but they are not sufficient. The principles of hypothesis testing and estimation are inherent to the notion of reproducibility in science. If we want to improve the reproducibility of our research, then we need to rethink how we apply fundamental concepts of statistics to our science.

  19. Statistics Poster Challenge for Schools

    Science.gov (United States)

    Payne, Brad; Freeman, Jenny; Stillman, Eleanor

    2013-01-01

    The analysis and interpretation of data are important life skills. A poster challenge for schoolchildren provides an innovative outlet for these skills and demonstrates their relevance to daily life. We discuss our Statistics Poster Challenge and the lessons we have learned.

  20. A Simple Statistical Thermodynamics Experiment

    Science.gov (United States)

    LoPresto, Michael C.

    2010-01-01

    Comparing the predicted and actual rolls of combinations of both two and three dice can help to introduce many of the basic concepts of statistical thermodynamics, including multiplicity, probability, microstates, and macrostates, and demonstrate that entropy is indeed a measure of randomness, that disordered states (those of higher entropy) are…
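
    A brute-force Python version of the counting exercise described above, enumerating the microstates (ordered die faces) behind each macrostate (sum):

      from itertools import product
      from collections import Counter

      for n_dice in (2, 3):
          # Each ordered tuple of faces is one microstate; the sum is the macrostate.
          multiplicity = Counter(sum(roll) for roll in product(range(1, 7), repeat=n_dice))
          total = 6 ** n_dice
          print(f"{n_dice} dice ({total} microstates):")
          for s in sorted(multiplicity):
              print(f"  sum {s:2d}: {multiplicity[s]:3d} microstates, p = {multiplicity[s] / total:.4f}")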

  1. Statistical Topics Concerning Radiometer Theory

    CERN Document Server

    Hunter, Todd R

    2015-01-01

    We present a derivation of the radiometer equation based on the original references and fundamental statistical concepts. We then perform numerical simulations of white noise to illustrate the radiometer equation in action. Finally, we generate 1/f and 1/f^2 noise, demonstrate that it is non-stationary, and use it to simulate the effect of gain fluctuations on radiometer performance.
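
    A minimal white-noise simulation in the spirit of the one described: averaging N independent samples reduces the rms by 1/sqrt(N), which is the content of the radiometer equation Delta T = T_sys / sqrt(B*tau). All numbers below are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)
      t_sys = 50.0                        # system temperature in K (illustrative)

      # N plays the role of the time-bandwidth product B*tau.
      for n in (16, 256, 4096):
          # 1000 independent averages of n white-noise samples each.
          averages = rng.normal(0.0, t_sys, size=(1000, n)).mean(axis=1)
          print(f"N = {n:5d}: measured rms = {averages.std():7.3f} K, "
                f"predicted T_sys/sqrt(N) = {t_sys / np.sqrt(n):7.3f} K")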

  2. Nucla CFB Demonstration Project

    Energy Technology Data Exchange (ETDEWEB)

    1990-12-01

    This report documents Colorado-Ute Electric Association's Nucla Circulating Atmospheric Fluidized-Bed Combustion (AFBC) demonstration project. It describes the plant equipment and system design for the first US utility-size circulating AFBC boiler and its support systems. Included are equipment and system descriptions, design/background information and appendices with an equipment list and selected information plus process flow and instrumentation drawings. The purpose of this report is to share the information gathered during the Nucla circulating AFBC demonstration project and present it so that the general public can evaluate the technical feasibility and cost effectiveness of replacing pulverized or stoker-fired boiler units with circulating fluidized-bed boiler units. (VC)

  3. Medical Statistics – Mathematics or Oracle? Farewell Lecture

    Directory of Open Access Journals (Sweden)

    Gaus, Wilhelm

    2005-06-01

    Full Text Available Certainty is rare in medicine. This is a direct consequence of the individuality of each and every human being and the reason why we need medical statistics. However, statistics have their pitfalls, too. Fig. 1 shows that the suicide rate peaks in youth, while in Fig. 2 the rate is highest in midlife and Fig. 3 in old age. Which of these contradictory messages is right? After an introduction to the principles of statistical testing, this lecture examines the probability with which statistical test results are correct. For this purpose the level of significance and the power of the test are compared with the sensitivity and specificity of a diagnostic procedure. The probability of obtaining correct statistical test results is the same as that for the positive and negative correctness of a diagnostic procedure and therefore depends on prevalence. The focus then shifts to the problem of multiple statistical testing. The lecture demonstrates that for each data set of reasonable size at least one test result proves to be significant - even if the data set is produced by a random number generator. It is extremely important that a hypothesis is generated independently from the data used for its testing. These considerations enable us to understand the gradation of "lame excuses, lies and statistics" and the difference between pure truth and the full truth. Finally, two historical oracles are cited.
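
    The lecture's warning about multiple testing is easy to reproduce: run enough significance tests on pure noise and some will come out "significant". A Python sketch (our construction, not the lecture's):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n_tests, n_per_group = 40, 30

      # Both groups come from the same distribution, so every "significant"
      # result below is a false positive.
      p_values = [stats.ttest_ind(rng.normal(size=n_per_group),
                                  rng.normal(size=n_per_group)).pvalue
                  for _ in range(n_tests)]
      n_sig = sum(p < 0.05 for p in p_values)
      print(f"{n_sig} of {n_tests} tests significant at the 5% level "
            f"(about {0.05 * n_tests:.0f} expected by chance)")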

  4. IGCC technology and demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Palonen, J. [A. Ahlstrom Corporation, Karhula (Finland). Hans Ahlstrom Lab.; Lundqvist, R.G. [A. Ahlstrom Corporation, Helsinki (Finland); Staahl, K. [Sydkraft AB, Malmoe (Sweden)

    1996-12-31

    Future energy production will rely on advanced technologies that are more efficient, more environmentally friendly and less expensive than current ones. Integrated gasification combined cycle (IGCC) power plants have been proposed as one such system. Utilising biofuels in future energy production will also be emphasised, since biomass is a renewable form of energy and its use substantially lowers carbon dioxide emissions into the atmosphere. Combining advanced technology and biomass utilisation is for this reason something that should and will be encouraged. A. Ahlstrom Corporation of Finland and Sydkraft AB of Sweden have, as one part of their company strategies, adopted this approach for the future. The companies have joined their resources in developing a biomass-based IGCC system with the gasification part based on pressurised circulating fluidized-bed technology. With this kind of technology, electrical efficiency can be substantially increased compared to conventional power plants. As a first concrete step, a decision has been made to build a demonstration plant. This plant, located in Vaernamo, Sweden, has already been built and is now in the commissioning and demonstration stage. The system comprises a fuel drying plant, a pressurised CFB gasifier with gas cooling and cleaning, a gas turbine, a waste heat recovery unit and a steam turbine. The plant is the first in the world where the integration of a pressurised gasifier with a gas turbine will be realised utilising a low calorific gas produced from biomass. The capacity of the Vaernamo plant is 6 MW of electricity and 9 MW of district heating. Technology development is in progress for the design of plants of sizes from 20 to 120 MWe. The paper describes the Bioflow IGCC system, the Vaernamo demonstration plant and experiences from the commissioning and demonstration stages. (orig.)

  5. The Majorana Demonstrator

    CERN Document Server

    Aguayo, E; Hoppe, E W; Keillor, M E; Kephart, J D; Kouzes, R T; LaFerriere, B D; Merriman, J; Orrell, J L; Overman, N R; Avignone, F T; Back, H O; Combs, D C; Leviner, L E; Young, A R; Barabash, A S; Konovalov, S I; Vanyushin, I; Yumatov, V; Bergevin, M; Chan, Y-D; Detwiler, J A; Loach, J C; Martin, R D; Poon, A W P; Prior, G; Vetter, K; Bertrand, F E; Cooper, R J; Radford, D C; Varner, R L; Yu, C -H; Boswell, M; Elliott, S R; Gehman, V M; Hime, A; Kidd, M F; LaRoque, B H; Rielage, K; Ronquest, M C; Steele, D; Brudanin, V; Egorov, V; Gusey, K; Kochetov, O; Shirchenko, M; Timkin, V; Yakushev, E; Busch, M; Esterline, J; Tornow, W; Christofferson, C D; Horton, M; Howard, S; Sobolev, V; Collar, J I; Fields, N; Creswick, R J; Doe, P J; Johnson, R A; Knecht, A; Leon, J; Marino, M G; Miller, M L; Robertson, R G H; Schubert, A G; Wolfe, B A; Efremenko, Yu; Ejiri, H; Hazama, R; Nomachi, M; Shima, T; Finnerty, P; Fraenkle, F M; Giovanetti, G K; Green, M P; Henning, R; Howe, M A; MacMullin, S; Phillips, D G; Snavely, K J; Strain, J; Vorren, K; Guiseppe, V E; Keller, C; Mei, D -M; Perumpilly, G; Thomas, K; Zhang, C; Hallin, A L; Keeter, K J; Mizouni, L; Wilkerson, J F

    2011-01-01

    A brief review of the history and neutrino physics of double beta decay is given. A description of the MAJORANA DEMONSTRATOR research and development program including background reduction techniques is presented in some detail. The application of point contact (PC) detectors to the experiment is discussed, including the effectiveness of pulse shape analysis. The predicted sensitivity of a PC detector array enriched to 86% in 76Ge is given.

  6. The Majorana Demonstrator

    Energy Technology Data Exchange (ETDEWEB)

    Aguayo, Estanislao; Fast, James E.; Hoppe, Eric W.; Keillor, Martin E.; Kephart, Jeremy D.; Kouzes, Richard T.; LaFerriere, Brian D.; Merriman, Jason H.; Orrell, John L.; Overman, Nicole R.; Avignone, Frank T.; Back, Henning O.; Combs, Dustin C.; Leviner, L.; Young, A.; Barabash, Alexander S.; Konovalov, S.; Vanyushin, I.; Yumatov, Vladimir; Bergevin, M.; Chan, Yuen-Dat; Detwiler, Jason A.; Loach, J. C.; Martin, R. D.; Poon, Alan; Prior, Gersende; Vetter, Kai; Bertrand, F.; Cooper, R. J.; Radford, D. C.; Varner, R. L.; Yu, Chang-Hong; Boswell, M.; Elliott, S.; Gehman, Victor M.; Hime, Andrew; Kidd, M. F.; LaRoque, B. H.; Rielage, Keith; Ronquest, M. C.; Steele, David; Brudanin, V.; Egorov, Viatcheslav; Gusey, K.; Kochetov, Oleg; Shirchenko, M.; Timkin, V.; Yakushev, E.; Busch, Matthew; Esterline, James H.; Tornow, Werner; Christofferson, Cabot-Ann; Horton, Mark; Howard, S.; Sobolev, V.; Collar, J. I.; Fields, N.; Creswick, R.; Doe, Peter J.; Johnson, R. A.; Knecht, A.; Leon, Jonathan D.; Marino, Michael G.; Miller, M. L.; Robertson, R. G. H.; Schubert, Alexis G.; Wolfe, B. A.; Efremenko, Yuri; Ejiri, H.; Hazama, R.; Nomachi, Masaharu; Shima, T.; Finnerty, P.; Fraenkle, Florian; Giovanetti, G. K.; Green, M.; Henning, Reyco; Howe, M. A.; MacMullin, S.; Phillips, D.; Snavely, Kyle J.; Strain, J.; Vorren, Kris R.; Guiseppe, Vincente; Keller, C.; Mei, Dong-Ming; Perumpilly, Gopakumar; Thomas, K.; Zhang, C.; Hallin, A. L.; Keeter, K.; Mizouni, Leila; Wilkerson, J. F.

    2011-09-03

    A brief review of the history and neutrino physics of double beta decay is given. A description of the MAJORANA DEMONSTRATOR research and development program, including background reduction techniques, is presented in some detail. The application of point contact (PC) detectors to the experiment is discussed, including the effectiveness of pulse shape analysis. The predicted sensitivity of a PC detector array enriched to 86% in 76Ge is given.

  7. Statistics using R

    CERN Document Server

    Purohit, Sudha G; Deshmukh, Shailaja R

    2015-01-01

    STATISTICS USING R will be useful at different levels, from an undergraduate course in statistics, through graduate courses in biological sciences, engineering, management and so on. The book introduces statistical terminology and defines it for the benefit of a novice. For a practicing statistician, it will serve as a guide to R language for statistical analysis. For a researcher, it is a dual guide, simultaneously explaining appropriate statistical methods for the problems at hand and indicating how these methods can be implemented using the R language. For a software developer, it is a guide in a variety of statistical methods for development of a suite of statistical procedures.

  8. Statistical learning across development: Flexible yet constrained

    Directory of Open Access Journals (Sweden)

    Lauren eKrogh

    2013-01-01

    Full Text Available Much research in the past two decades has documented infants’ and adults' ability to extract statistical regularities from auditory input. Importantly, recent research has extended these findings to the visual domain, demonstrating learners' sensitivity to statistical patterns within visual arrays and sequences of shapes. In this review we discuss both auditory and visual statistical learning to elucidate both the generality of and constraints on statistical learning. The review first outlines the major findings of the statistical learning literature with infants, followed by discussion of statistical learning across domains, modalities, and development. The second part of this review considers constraints on statistical learning. The discussion focuses on two categories of constraint: constraints on the types of input over which statistical learning operates and constraints based on the state of the learner. The review concludes with a discussion of possible mechanisms underlying statistical learning.

  9. Learning From Demonstration?

    DEFF Research Database (Denmark)

    Koch, Christian; Bertelsen, Niels Haldor

    2014-01-01

    . This paper reports on an early demonstration project, the Building of a passive house dormitory in the Central Region of Denmark in 2006-2009. The project was supposed to deliver value, lean design, prefabrication, quality in sustainability, certification according to German standards for passive houses...... of control, driven by such challenges as complying with cost goals, the need to choose a German prefab supplier, and local contractors. Energy calculations, indoor climate, issues related to square meter requirements, and the hydrogen element became problematic. The aim to obtain passive house certification...

  10. Learning From Demonstration?

    DEFF Research Database (Denmark)

    Koch, Christian; Bertelsen, Niels Haldor

    2014-01-01

    , and micro combined heat and power using hydrogen. Using sociological and business economic theories of innovation, the paper discusses how early movers of innovation tend to obtain only partial success when demonstrating their products and often feel obstructed by minor details. The empirical work...... encompasses both an evaluation of the design and Construction process as well as a post-occupancy evaluation. Process experiences include the use of a multidisciplinary competence group and performance measurement. The commencement of the project was enthusiastic, but it was forced into more traditional forms...

  11. Visual Electricity Demonstrator

    Science.gov (United States)

    Lincoln, James

    2017-09-01

    The Visual Electricity Demonstrator (VED) is a linear diode array that serves as a dynamic alternative to an ammeter. A string of 48 red light-emitting diodes (LEDs) blink one after another to create the illusion of a moving current. Having the current represented visually builds an intuitive and qualitative understanding about what is happening in a circuit. In this article, I describe several activities for this device and explain how using this technology in the classroom can enhance the understanding and appreciation of physics.

  12. NAVAJO ELECTRIFICATION DEMONSTRATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    Terry W. Battiest

    2008-06-11

    The Navajo Electrification Demonstration Project (NEDP) is a multi-year project which addresses the electricity needs of the unserved and underserved Navajo Nation, the largest American Indian tribe in the United States. The program serves to cumulatively provide off-grid electricity for families living away from the electricity infrastructure, line extensions for unserved families living nearby (less than 1/2 mile away from the electricity infrastructure), and, under the current project called NEDP-4, the construction of a substation to increase the capacity and improve the quality of service into the central core region of the Navajo Nation.

  13. Education Demonstration Equipment

    Science.gov (United States)

    Nagy, A.; Lee, R. L.

    2003-10-01

    The General Atomics fusion education program "Scientist in the Classroom" (SIC), now in its sixth year, uses scientists and engineers to present plasma as a state of matter to students in the classroom. Using hands-on equipment, students see how magnets, gas pressure changes, and different gases are turned into plasmas. A piston, sealed volume, and vacuum chamber illuminate ideal gas laws. Liquid nitrogen is used to explore thermodynamic temperature effects and changes in states of matter. Light bulbs are excited with a Tesla coil to ionize gases, thus becoming inexpensive plasma devices, and a plasma tube shows magnetic interactions with plasma. The demonstration equipment used in this program is built with simple designs and common commercial equipment, keeping in mind a teacher's tight budget. The SIC program (~25 school presentations per year) has become very popular and has acquired an enthusiastic group of regular teacher clientele requesting repeat visits. In addition, three very popular and successful "Build-It" days, sponsored by the General Atomics Fusion Education Outreach Program, enable teachers to build and keep in their classroom some of this equipment. The demonstration devices will be presented along with their "build-it" details.

  14. Inseparable phone books demonstration

    Science.gov (United States)

    Balta, Nuri; Çetin, Ali

    2017-05-01

    This study is aimed at first introducing a well-known discrepant event, the inseparable phone books, and second, turning it into an experiment for high school or middle school students. This discrepant event can be used especially to show how friction force can produce an unexpected result. Demonstration, discussion, explanation and experiment steps are presented on how to turn a simple discrepant event into an instructional activity. Results showed the relationships between the number of pages and the force, as well as between the amount of interleave and the force. In addition, the mathematical equation for the total force between all interleaved pages is derived. As a conclusion, this study demonstrated that not only phone books but also ordinary books can be used to investigate this discrepant event. This experiment can be conducted as an example to show the agreement between theoretical and experimental results, along with the confounding variables. This discrepant event can be used to create a cognitive conflict in students' minds about the concepts of 'force and motion' and 'friction force'.

  15. PFBC Utility Demonstration Project

    Energy Technology Data Exchange (ETDEWEB)

    1992-11-01

    This report provides a summary of activities by American Electric Power Service Corporation during the first budget period of the PFBC Utility Demonstration Project. In April 1990, AEP signed a Cooperative Agreement with the US Department of Energy to repower the Philip Sporn Plant, Units 3 & 4, in New Haven, West Virginia, with a 330 MW PFBC plant. The purpose of the program was to demonstrate and verify PFBC in a full-scale commercial plant. The technical and cost baselines of the Cooperative Agreement were based on a preliminary engineering design and a cost estimate developed by AEP subsequent to AEP's proposal submittal in May 1988, and prior to the signing of the Cooperative Agreement. The Statement of Work in the first budget period of the Cooperative Agreement included a task to develop a preliminary design and cost estimate for erecting a Greenfield plant and to conduct a comparison with the repowering option. The comparative assessment of the options concluded that erecting a Greenfield plant rather than repowering the existing Sporn Plant could be the technically and economically superior alternative. The Greenfield plant would have a capacity of 340 MW. The ten additional MW of output are due to the ability to better match the steam cycle to the PFBC system with a new balance-of-plant design. In addition to this study, the conceptual design of the Sporn repowering led to several items which warranted optimization studies with the goal of developing a more cost-effective design.

  16. Smart Grid Demonstration Project

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Craig [National Rural Electric Cooperative Association, Arlington, VA (United States); Carroll, Paul [National Rural Electric Cooperative Association, Arlington, VA (United States); Bell, Abigail [National Rural Electric Cooperative Association, Arlington, VA (United States)

    2015-03-11

    The National Rural Electric Cooperative Association (NRECA) organized the NRECA-U.S. Department of Energy (DOE) Smart Grid Demonstration Project (DE-OE0000222) to install and study a broad range of advanced smart grid technologies in a demonstration that spanned 23 electric cooperatives in 12 states. More than 205,444 pieces of electronic equipment and more than 100,000 minor items (bracket, labels, mounting hardware, fiber optic cable, etc.) were installed to upgrade and enhance the efficiency, reliability, and resiliency of the power networks at the participating co-ops. The objective of this project was to build a path for other electric utilities, and particularly electrical cooperatives, to adopt emerging smart grid technology when it can improve utility operations, thus advancing the co-ops’ familiarity and comfort with such technology. Specifically, the project executed multiple subprojects employing a range of emerging smart grid technologies to test their cost-effectiveness and, where the technology demonstrated value, provided case studies that will enable other electric utilities—particularly electric cooperatives— to use these technologies. NRECA structured the project according to the following three areas: Demonstration of smart grid technology; Advancement of standards to enable the interoperability of components; and Improvement of grid cyber security. We termed these three areas Technology Deployment Study, Interoperability, and Cyber Security. Although the deployment of technology and studying the demonstration projects at coops accounted for the largest portion of the project budget by far, we see our accomplishments in each of the areas as critical to advancing the smart grid. All project deliverables have been published. Technology Deployment Study: The deliverable was a set of 11 single-topic technical reports in areas related to the listed technologies. Each of these reports has already been submitted to DOE, distributed to co-ops, and

  17. Does It Matter If Non-Powerful Significance Tests Are Used in Dissertation Research?

    Directory of Open Access Journals (Sweden)

    Heping Deng

    2005-09-01

    Full Text Available This study examines the statistical power levels presented in the dissertations completed in the field of educational leadership or educational administration. Eighty out of 221 reviewed dissertations were analyzed, and overall statistical power levels were calculated for 2,629 significance tests. The statistical power levels demonstrated in the dissertations were satisfactory for detecting Cohen's large effect (d=0.80) and medium effect (d=0.50) but quite low for the small effect (d=0.20). Therefore, the authors of the analyzed dissertations had a very low probability of finding true significance when looking for Cohen's small effect.
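
    For readers who want to reproduce this kind of power calculation, here is a sketch using statsmodels; the per-group sample size of 64 is our illustrative choice, not a figure from the study:

      from statsmodels.stats.power import TTestIndPower

      # Power of a two-sample t-test at alpha = 0.05 for Cohen's effect sizes.
      analysis = TTestIndPower()
      for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
          power = analysis.power(effect_size=d, nobs1=64, alpha=0.05)
          print(f"{label:6s} effect (d = {d}): power = {power:.2f}")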

  18. Statistics For Dummies

    CERN Document Server

    Rumsey, Deborah

    2011-01-01

    The fun and easy way to get down to business with statistics Stymied by statistics? No fear ? this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more.Tracks to a typical first semester statistics cou

  19. Jennings Demonstration Plant

    Energy Technology Data Exchange (ETDEWEB)

    Russ Heissner

    2010-08-31

    Verenium operated a demonstration plant with a capacity to produce 1.4 million gallons of cellulosic ethanol from agricultural residues for about two years. During this time, the plant was able to evaluate the technical issues in producing ethanol from three different cellulosic feedstocks: sugar cane bagasse, energy cane, and sorghum. The project was intended to develop a better understanding of the operating parameters that would inform a commercial-sized operation. Issues related to feedstock variability, the use of hydrolytic enzymes, and the viability of fermentative organisms were evaluated. Considerable success was achieved with pretreatment processes and the use of enzymes, but challenges were encountered with feedstock variability and fermentation systems. Limited amounts of cellulosic ethanol were produced.

  20. (Errors in statistical tests)³

    Directory of Open Access Journals (Sweden)

    Kaufman Jay S

    2008-07-01

    Full Text Available Abstract In 2004, Garcia-Berthou and Alcaraz published "Incongruence between test statistics and P values in medical papers," a critique of statistical errors that received a tremendous amount of attention. One of their observations was that the final reported digit of p-values in articles published in the journal Nature departed substantially from the uniform distribution that they suggested should be expected. In 2006, Jeng critiqued that critique, observing that the statistical analysis of those terminal digits had been based on comparing the actual distribution to a uniform continuous distribution, when digits obviously are discretely distributed. Jeng corrected the calculation and reported statistics that did not so clearly support the claim of a digit preference. However delightful it may be to read a critique of statistical errors in a critique of statistical errors, we nevertheless found several aspects of the whole exchange to be quite troubling, prompting our own meta-critique of the analysis. The previous discussion emphasized statistical significance testing. But there are various reasons to expect departure from the uniform distribution in terminal digits of p-values, so that simply rejecting the null hypothesis is not terribly informative. Much more importantly, Jeng found that the original p-value of 0.043 should have been 0.086, and suggested this represented an important difference because it was on the other side of 0.05. Among the most widely reiterated (though often ignored) tenets of modern quantitative research methods is that we should not treat statistical significance as a bright line test of whether we have observed a phenomenon. Moreover, it sends the wrong message about the role of statistics to suggest that a result should be dismissed because of limited statistical precision when it is so easy to gather more data. In response to these limitations, we gathered more data to improve the statistical precision, and
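
    The correction Jeng applied amounts to a chi-square goodness-of-fit test of terminal-digit counts against a discrete uniform distribution over the ten digits. A Python sketch with made-up counts (the real counts are in the cited papers):

      import numpy as np
      from scipy.stats import chisquare

      # Hypothetical counts of the final reported digit (0-9) of p-values.
      observed = np.array([31, 28, 25, 30, 24, 35, 27, 22, 29, 26])

      # chisquare defaults to a uniform expected distribution, i.e. the
      # discrete uniform that terminal digits should follow under the null.
      stat, p = chisquare(observed)
      print(f"chi-square = {stat:.2f}, p = {p:.3f}")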

  1. CMS Program Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Office of Enterprise Data and Analytics has developed CMS Program Statistics, which includes detailed summary statistics on national health care, Medicare...

  2. Alcohol Facts and Statistics

    Science.gov (United States)


  3. Bureau of Labor Statistics

    Science.gov (United States)


  4. Recreational Boating Statistics 2012

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  5. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...

  6. Mathematical and statistical analysis

    Science.gov (United States)

    Houston, A. Glen

    1988-01-01

    The goal of the mathematical and statistical analysis component of RICIS is to research, develop, and evaluate mathematical and statistical techniques for aerospace technology applications. Specific research areas of interest include modeling, simulation, experiment design, reliability assessment, and numerical analysis.

  7. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...

  8. Neuroendocrine Tumor: Statistics

    Science.gov (United States)


  9. Experiment in Elementary Statistics

    Science.gov (United States)

    Fernando, P. C. B.

    1976-01-01

    Presents an undergraduate laboratory exercise in elementary statistics in which students verify empirically the various aspects of the Gaussian distribution. Sampling techniques and other commonly used statistical procedures are introduced. (CP)

  10. Overweight and Obesity Statistics

    Science.gov (United States)


  11. Uterine Cancer Statistics

    Science.gov (United States)


  12. School Violence: Data & Statistics

    Science.gov (United States)

    This fact sheet provides up-to-date data and statistics on youth violence.

  13. Recreational Boating Statistics 2013

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  14. On two methods of statistical image analysis

    NARCIS (Netherlands)

    Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, KL

    1999-01-01

    The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition, smooth

  15. Ranald Macdonald and statistical inference.

    Science.gov (United States)

    Smith, Philip T

    2009-05-01

    Ranald Roderick Macdonald (1945-2007) was an important contributor to mathematical psychology in the UK, as a referee and action editor for British Journal of Mathematical and Statistical Psychology and as a participant and organizer at the British Psychological Society's Mathematics, statistics and computing section meetings. This appreciation argues that his most important contribution was to the foundations of significance testing, where his concern about what information was relevant in interpreting the results of significance tests led him to be a persuasive advocate for the 'Weak Fisherian' form of hypothesis testing.

  16. NASA Bioreactor Demonstration System

    Science.gov (United States)

    2002-01-01

    Leland W. K. Chung (left), Director, Molecular Urology Therapeutics Program at the Winship Cancer Institute at Emory University, is principal investigator for the NASA bioreactor demonstration system (BDS-05). With him is Dr. Jun Shu, an assistant professor of Orthopedics Surgery from Kuming Medical University China. The NASA Bioreactor provides a low turbulence culture environment which promotes the formation of large, three-dimensional cell clusters. Due to their high level of cellular organization and specialization, samples constructed in the bioreactor more closely resemble the original tumor or tissue found in the body. The Bioreactor is rotated to provide gentle mixing of fresh and spent nutrient without inducing shear forces that would damage the cells. The work is sponsored by NASA's Office of Biological and Physical Research. The bioreactor is managed by the Biotechnology Cell Science Program at NASA's Johnson Space Center (JSC). NASA-sponsored bioreactor research has been instrumental in helping scientists to better understand normal and cancerous tissue development. In cooperation with the medical community, the bioreactor design is being used to prepare better models of human colon, prostate, breast and ovarian tumors. Cartilage, bone marrow, heart muscle, skeletal muscle, pancreatic islet cells, liver and kidney are just a few of the normal tissues being cultured in rotating bioreactors by investigators. Credit: Emory University.

  17. Nuclear power demonstrating

    Energy Technology Data Exchange (ETDEWEB)

    Basmajian, V. V.; Haldeman, C. W.

    1980-08-12

    Apparatus for demonstrating the operation of a closed loop nuclear steam electric generating plant includes a transparent boiler assembly having immersion heating elements, which may be quartz lamps or stainless steel encased resistive immersion heating units with a quartz iodide lamp providing a source of visible radiation when using the encased immersion heating units. A variable voltage autotransformer is geared to a support rod for simulated reactor control rods for controlling the energy delivered to the heating elements and arranged so that when the voltage is high, the rods are withdrawn from the boiler to produce increased heating and illumination proportional to rod position, thereby simulating a nuclear reaction. A relief valve, steam outlet pipe and water inlet pipe are connected to the boiler, with a small stainless steel resistive heating element in the steam outlet pipe providing superheat. This heater is connected in series with a rheostat mounted on the front panel to provide superheat adjustments and an interlock switch that prevents the superheater from being energized when the steam valve is off, with no flow through the superheater. A heavy blue plastic radiation shield surrounds the boiler inside a bell jar.

  18. A Demonstration of Lusail

    KAUST Repository

    Mansour, Essam

    2017-05-10

    There has been a proliferation of datasets available as interlinked RDF data accessible through SPARQL endpoints. This has led to the emergence of various applications in life science, distributed social networks, and Internet of Things that need to integrate data from multiple endpoints. We will demonstrate Lusail, a system that supports the needs of emerging applications to access tens to hundreds of geo-distributed datasets. Lusail is a geo-distributed graph engine for querying linked RDF data. Lusail delivers outstanding performance using (i) a novel locality-aware query decomposition technique that minimizes the intermediate data to be accessed by the subqueries, and (ii) selectivity awareness and parallel query execution to reduce network latency and to increase parallelism. During the demo, the audience will be able to query actually deployed RDF endpoints as well as large synthetic and real benchmarks that we have deployed in the public cloud. The demo will also show that Lusail outperforms state-of-the-art systems by orders of magnitude in terms of scalability and response time.

  19. Software for Spatial Statistics

    Directory of Open Access Journals (Sweden)

    Edzer Pebesma

    2015-02-01

    Full Text Available We give an overview of the papers published in this special issue on spatial statistics of the Journal of Statistical Software. 21 papers address issues covering visualization (micromaps, links to Google Maps or Google Earth), point pattern analysis, geostatistics, analysis of areal aggregated or lattice data, spatio-temporal statistics, Bayesian spatial statistics, and Laplace approximations. We also point to earlier publications in this journal on the same topic.

  20. Software for Spatial Statistics

    OpenAIRE

    Edzer Pebesma; Roger Bivand; Paulo Justiniano Ribeiro

    2015-01-01

    We give an overview of the papers published in this special issue on spatial statistics, of the Journal of Statistical Software. 21 papers address issues covering visualization (micromaps, links to Google Maps or Google Earth), point pattern analysis, geostatistics, analysis of areal aggregated or lattice data, spatio-temporal statistics, Bayesian spatial statistics, and Laplace approximations. We also point to earlier publications in this journal on the same topic.

  1. Significance analysis of prognostic signatures.

    Directory of Open Access Journals (Sweden)

    Andrew H Beck

    Full Text Available A major goal in translational cancer research is to identify biological signatures driving cancer progression and metastasis. A common technique applied in genomics research is to cluster patients using gene expression data from a candidate prognostic gene set, and if the resulting clusters show statistically significant outcome stratification, to associate the gene set with prognosis, suggesting its biological and clinical importance. Recent work has questioned the validity of this approach by showing in several breast cancer data sets that "random" gene sets tend to cluster patients into prognostically variable subgroups. This work suggests that new rigorous statistical methods are needed to identify biologically informative prognostic gene sets. To address this problem, we developed Significance Analysis of Prognostic Signatures (SAPS), which integrates standard prognostic tests with a new prognostic significance test based on stratifying patients into prognostic subtypes with random gene sets. SAPS ensures that a significant gene set is not only able to stratify patients into prognostically variable groups, but is also enriched for genes showing strong univariate associations with patient prognosis, and performs significantly better than random gene sets. We use SAPS to perform a large meta-analysis (the largest completed to date) of prognostic pathways in breast and ovarian cancer and their molecular subtypes. Our analyses show that only a small subset of the gene sets found statistically significant using standard measures achieve significance by SAPS. We identify new prognostic signatures in breast and ovarian cancer and their corresponding molecular subtypes, and we show that prognostic signatures in ER negative breast cancer are more similar to prognostic signatures in ovarian cancer than to prognostic signatures in ER positive breast cancer. SAPS is a powerful new method for deriving robust prognostic biological signatures from clinically
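
    SAPS itself is not reproduced here, but its central move, benchmarking an observed stratification statistic against random gene sets of the same size, can be sketched on synthetic data. Everything below (the data, the score function, the set size) is a toy stand-in for the published method:

      import numpy as np

      rng = np.random.default_rng(2)
      n_patients, n_genes, set_size = 200, 1000, 50
      expression = rng.normal(size=(n_patients, n_genes))
      outcome = rng.normal(size=n_patients)      # stand-in for a survival outcome

      def stratification_score(gene_idx):
          """Split patients at the median signature score; return the absolute
          difference in mean outcome between the two groups."""
          signature = expression[:, gene_idx].mean(axis=1)
          high = signature > np.median(signature)
          return abs(outcome[high].mean() - outcome[~high].mean())

      candidate = rng.choice(n_genes, size=set_size, replace=False)
      observed = stratification_score(candidate)

      # Null distribution: the same statistic for random gene sets of equal size.
      null = [stratification_score(rng.choice(n_genes, set_size, replace=False))
              for _ in range(1000)]
      p_emp = (1 + sum(s >= observed for s in null)) / (1 + len(null))
      print(f"observed = {observed:.3f}, empirical p vs random gene sets = {p_emp:.3f}")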

  2. Solar Thermal Demonstration Project

    Energy Technology Data Exchange (ETDEWEB)

    Biesinger, K; Cuppett, D; Dyer, D

    2012-01-30

    HVAC Retrofit and Energy Efficiency Upgrades at Clark High School, Las Vegas, Nevada. The overall objectives of this project are to increase usage of alternative/renewable fuels, create a better and more reliable learning environment for the students, and reduce energy costs. Utilizing the grant resources and local bond revenues, the District proposes to reduce electricity consumption by installing, within the existing limited space, one principal energy-efficient 100-ton adsorption chiller working in concert with two 500-ton electric chillers. The main heating source will come primarily from low nitrogen oxide (NOX), high-efficiency natural gas fired boilers. With the use of this type of chiller, the electric power and cost requirements will be greatly reduced. To provide cooling to the information technology centers and equipment rooms of the school during off-peak hours, the District will install water source heat pumps. In another measure to reduce the cooling requirements at Clark High School, the District will replace single-pane glass and metal panels with Kalwall building panels. An added feature of the Kalwall system is that it will allow for natural day lighting in the student center. This system will significantly reduce thermal heat/cooling loss and control solar heat gain, thus delivering significant savings in heating, ventilation and air conditioning (HVAC) costs.

  3. Ethics in Statistics

    Science.gov (United States)

    Lenard, Christopher; McCarthy, Sally; Mills, Terence

    2014-01-01

    There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…

  4. Selling statistics [Statistics in scientific progress]

    Energy Technology Data Exchange (ETDEWEB)

    Bridle, S. [Astrophysics Group, University College London (United Kingdom)]. E-mail: sarah@star.ucl.ac.uk

    2006-09-15

    From Cosmos to Chaos - Peter Coles, 2006, Oxford University Press, 224pp. To confirm or refute a scientific theory you have to make a measurement. Unfortunately, however, measurements are never perfect: the rest is statistics. Indeed, statistics is at the very heart of scientific progress, but it is often poorly taught and badly received; for many, the very word conjures up half-remembered nightmares of 'null hypotheses' and 'Student's t-tests'. From Cosmos to Chaos by Peter Coles, a cosmologist at Nottingham University, is an approachable antidote that places statistics in a range of catchy contexts. Using this book you will be able to calculate the probabilities in a game of bridge or in a legal trial based on DNA fingerprinting, impress friends by talking confidently about entropy, and stretch your mind thinking about quantum mechanics. (U.K.)
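
    The DNA-fingerprinting calculation the book covers is, at heart, Bayes' theorem; a toy version with assumed numbers (ours, not the book's) shows why a tiny random-match probability does not by itself imply near-certain guilt:

      # All probabilities below are assumptions chosen for illustration.
      prior = 1 / 100_000              # suspect drawn from a pool of 100,000 people
      p_match_given_innocent = 1e-6    # random-match probability of the profile
      p_match_given_guilty = 1.0

      posterior = (p_match_given_guilty * prior) / (
          p_match_given_guilty * prior + p_match_given_innocent * (1 - prior))
      print(f"P(guilty | DNA match) = {posterior:.3f}")   # ~0.91, not 0.999999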

  5. Applied multivariate statistics with R

    CERN Document Server

    Zelterman, Daniel

    2015-01-01

    This book brings the power of multivariate statistics to graduate-level practitioners, making these analytical methods accessible without lengthy mathematical derivations. Using the open source, shareware program R, Professor Zelterman demonstrates the process and outcomes for a wide array of multivariate statistical applications. Chapters cover graphical displays, linear algebra, univariate, bivariate and multivariate normal distributions, factor methods, linear regression, discrimination and classification, clustering, time series models, and additional methods. Zelterman uses practical examples from diverse disciplines to welcome readers from a variety of academic specialties. Those with backgrounds in statistics will learn new methods while they review more familiar topics. Chapters include exercises, real data sets, and R implementations. The data are interesting, real-world topics, particularly from health and biology-related contexts. As an example of the approach, the text examines a sample from the B...

  6. Statistics Essentials For Dummies

    CERN Document Server

    Rumsey, Deborah

    2010-01-01

    Statistics Essentials For Dummies not only provides students enrolled in Statistics I with an excellent high-level overview of key concepts, but it also serves as a reference or refresher for students in upper-level statistics courses. Free of review and ramp-up material, Statistics Essentials For Dummies sticks to the point, with content focused on key course topics only. It provides discrete explanations of essential concepts taught in a typical first semester college-level statistics course, from odds and error margins to confidence intervals and conclusions. This guide is also a perfect re

  7. Statistics & probaility for dummies

    CERN Document Server

    Rumsey, Deborah J

    2013-01-01

    Two complete eBooks for one low price! Created and compiled by the publisher, this Statistics I and Statistics II bundle brings together two math titles in one, e-only bundle. With this special bundle, you'll get the complete text of the following two titles: Statistics For Dummies, 2nd Edition  Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tra

  8. Head First Statistics

    CERN Document Server

    Griffiths, Dawn

    2009-01-01

    Wouldn't it be great if there were a statistics book that made histograms, probability distributions, and chi square analysis more enjoyable than going to the dentist? Head First Statistics brings this typically dry subject to life, teaching you everything you want and need to know about statistics through engaging, interactive, and thought-provoking material, full of puzzles, stories, quizzes, visual aids, and real-world examples. Whether you're a student, a professional, or just curious about statistical analysis, Head First's brain-friendly formula helps you get a firm grasp of statistics

  9. Business statistics for dummies

    CERN Document Server

    Anderson, Alan

    2013-01-01

    Score higher in your business statistics course? Easy. Business statistics is a common course for business majors and MBA candidates. It examines common data sets and the proper way to use such information when conducting research and producing informational reports such as profit and loss statements, customer satisfaction surveys, and peer comparisons. Business Statistics For Dummies tracks to a typical business statistics course offered at the undergraduate and graduate levels and provides clear, practical explanations of business statistical ideas, techniques, formulas, and calculations, w

  10. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2010-01-01

    Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference.-Eugenia Stoimenova, Journal of Applied Statistics, June 2012… one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students.-Biometrics, 67, September 2011This excellently presente

  11. Statistics in a nutshell

    CERN Document Server

    Boslaugh, Sarah

    2013-01-01

    Need to learn statistics for your job? Want help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference for anyone new to the subject. Thoroughly revised and expanded, this edition helps you gain a solid understanding of statistics without the numbing complexity of many college texts. Each chapter presents easy-to-follow descriptions, along with graphics, formulas, solved examples, and hands-on exercises. If you want to perform common statistical analyses and learn a wide range of techniques without getting in over your head, this is your book.

  12. Reducing statistics anxiety and enhancing statistics learning achievement: effectiveness of a one-minute strategy.

    Science.gov (United States)

    Chiou, Chei-Chang; Wang, Yu-Min; Lee, Li-Tze

    2014-08-01

    Statistical knowledge is widely used in academia; however, statistics teachers struggle with the issue of how to reduce students' statistics anxiety and enhance students' statistics learning. This study assesses the effectiveness of a "one-minute paper strategy" in reducing students' statistics-related anxiety and in improving students' statistics-related achievement. Participants were 77 undergraduates from two classes enrolled in applied statistics courses. An experiment was implemented according to a pretest/posttest comparison group design. The quasi-experimental design showed that the one-minute paper strategy significantly reduced students' statistics anxiety and improved students' statistics learning achievement. The strategy was a better instructional tool than the textbook exercise for reducing students' statistics anxiety and improving students' statistics achievement.

  13. Lectures on algebraic statistics

    CERN Document Server

    Drton, Mathias; Sullivant, Seth

    2009-01-01

    How does an algebraic geometer studying secant varieties further the understanding of hypothesis tests in statistics? Why would a statistician working on factor analysis raise open problems about determinantal varieties? Connections of this type are at the heart of the new field of "algebraic statistics". In this field, mathematicians and statisticians come together to solve statistical inference problems using concepts from algebraic geometry as well as related computational and combinatorial techniques. The goal of these lectures is to introduce newcomers from the different camps to algebraic statistics. The introduction will be centered around the following three observations: many important statistical models correspond to algebraic or semi-algebraic sets of parameters; the geometry of these parameter spaces determines the behaviour of widely used statistical inference procedures; computational algebraic geometry can be used to study parameter spaces and other features of statistical models.

  14. Statistics for economics

    CERN Document Server

    Naghshpour, Shahdad

    2012-01-01

    Statistics is the branch of mathematics that deals with real-life problems. As such, it is an essential tool for economists. Unfortunately, the way you and many other economists learn the concept of statistics is not compatible with the way economists think and learn. The problem is worsened by the use of mathematical jargon and complex derivations. Here's a book that proves none of this is necessary. All the examples and exercises in this book are constructed within the field of economics, thus eliminating the difficulty of learning statistics with examples from fields that have no relation to business, politics, or policy. Statistics is, in fact, not more difficult than economics. Anyone who can comprehend economics can understand and use statistics successfully within this field, including you! This book utilizes Microsoft Excel to obtain statistical results, as well as to perform additional necessary computations. Microsoft Excel is not the software of choice for performing sophisticated statistical analy...

  15. Estimation and inferential statistics

    CERN Document Server

    Sahu, Pradip Kumar; Das, Ajit Kumar

    2015-01-01

    This book focuses on the meaning of statistical inference and estimation. Statistical inference is concerned with the problems of estimation of population parameters and testing hypotheses. Primarily aimed at undergraduate and postgraduate students of statistics, the book is also useful to professionals and researchers in statistical, medical, social and other disciplines. It discusses current methodological techniques used in statistics and related interdisciplinary areas. Every concept is supported with relevant research examples to help readers to find the most suitable application. Statistical tools have been presented by using real-life examples, removing the “fear factor” usually associated with this complex subject. The book will help readers to discover diverse perspectives of statistical theory followed by relevant worked-out examples. Keeping in mind the needs of readers, as well as constantly changing scenarios, the material is presented in an easy-to-understand form.

  16. Baseline Statistics of Linked Statistical Data

    NARCIS (Netherlands)

    Scharnhorst, Andrea; Meroño-Peñuela, Albert; Guéret, Christophe

    2014-01-01

    We are surrounded by an ever-increasing ocean of information; everybody will agree to that. We build sophisticated strategies to govern this information: designing data models, developing infrastructures for data sharing, and building tools for data analysis. Statistical datasets curated by National Statistica

  17. Fermi breakup and the statistical multifragmentation model

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, B.V., E-mail: brett@ita.br [Departamento de Fisica, Instituto Tecnologico de Aeronautica - CTA, 12228-900 Sao Jose dos Campos (Brazil); Donangelo, R. [Instituto de Fisica, Universidade Federal do Rio de Janeiro, Cidade Universitaria, CP 68528, 21941-972, Rio de Janeiro (Brazil); Instituto de Fisica, Facultad de Ingenieria, Universidad de la Republica, Julio Herrera y Reissig 565, 11.300 Montevideo (Uruguay); Souza, S.R. [Instituto de Fisica, Universidade Federal do Rio de Janeiro, Cidade Universitaria, CP 68528, 21941-972, Rio de Janeiro (Brazil); Instituto de Fisica, Universidade Federal do Rio Grande do Sul, Av. Bento Goncalves 9500, CP 15051, 91501-970, Porto Alegre (Brazil); Lynch, W.G.; Steiner, A.W.; Tsang, M.B. [Joint Institute for Nuclear Astrophysics, National Superconducting Cyclotron Laboratory and the Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States)

    2012-02-15

    We demonstrate the equivalence of a generalized Fermi breakup model, in which densities of excited states are taken into account, to the microcanonical statistical multifragmentation model used to describe the disintegration of highly excited fragments of nuclear reactions. We argue that such a model better fulfills the hypothesis of statistical equilibrium than the Fermi breakup model generally used to describe statistical disintegration of light mass nuclei.

  18. Gender Issues in Labour Statistics.

    Science.gov (United States)

    Greenwood, Adriana Mata

    1999-01-01

    Presents the main features needed for labor statistics to reflect the respective situations for women and men in the labor market. Identifies topics to be covered and detail needed for significant distinctions to emerge. Explains how the choice of measurement method and data presentation can influence the final result. (Author/JOW)

  19. Significant Scales in Community Structure

    CERN Document Server

    Traag, V A; Van Dooren, P

    2013-01-01

    Many complex networks show signs of modular structure, uncovered by community detection. Although many methods succeed in revealing various partitions, it remains difficult to detect at what scale some partition is significant. This problem shows foremost in multi-resolution methods. We here introduce an efficient method for scanning for resolutions in one such method. Additionally, we introduce the notion of "significance" of a partition, based on subgraph probabilities. Significance is independent of the exact method used, so could also be applied in other methods, and can be interpreted as the gain in encoding a graph by making use of a partition. Using significance, we can determine "good" resolution parameters, which we demonstrate on benchmark networks. Moreover, optimizing significance itself also shows excellent performance. We demonstrate our method on voting data from the European Parliament. Our analysis suggests the European Parliament has become increasingly ideologically divided and that nationa...

  20. Quantum Informatics View of Statistical Data Processing

    OpenAIRE

    Bogdanov, Yu. I.; Bogdanova, N. A.

    2011-01-01

    Application of the root density estimator to problems of statistical data analysis is demonstrated. Four sets of basis functions based on Chebyshev-Hermite, Laguerre, Kravchuk and Charlier polynomials are considered. The sets may be used for numerical analysis in problems of reconstructing statistical distributions from experimental data. Examples of numerical modeling are given.

  1. Teaching Social Statistics with Simulated Data.

    Science.gov (United States)

    Halley, Fred S.

    1991-01-01

    Suggests using simulated data to teach students about the nature and use of statistical tests and measures. Observes that simulated data contains built-in pure relationships with no poor response rates or coding or sampling errors. Recommends suitable software. Includes information on using data sets, demonstrating statistical principles, and…
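
    To make the idea concrete, here is a minimal sketch (hypothetical Python, not from the record itself) that builds a dataset with a known, built-in relationship and checks that a standard statistic recovers it:

        # Simulated teaching data with a built-in "pure" relationship (assumed example).
        import random
        import statistics

        random.seed(42)
        n = 200
        x = [random.gauss(0, 1) for _ in range(n)]
        y = [2 * xi + random.gauss(0, 1) for xi in x]  # true slope = 2, plus noise

        mean_x, mean_y = statistics.mean(x), statistics.mean(y)
        cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / (n - 1)
        r = cov / (statistics.stdev(x) * statistics.stdev(y))
        print(f"sample correlation r = {r:.3f}")  # large by construction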

  2. Significance analysis of lexical bias in microarray data

    Directory of Open Access Journals (Sweden)

    Falkow Stanley

    2003-04-01

    Background: Genes that are determined to be significantly differentially regulated in microarray analyses often appear to have functional commonalities, such as being components of the same biochemical pathway. This results in certain words being under- or overrepresented in the list of genes. Distinguishing between biologically meaningful trends and artifacts of annotation and analysis procedures is of the utmost importance, as only true biological trends are of interest for further experimentation. A number of sophisticated methods for identification of significant lexical trends are currently available, but these methods are generally too cumbersome for practical use by most microarray users. Results: We have developed a tool, LACK, for calculating the statistical significance of apparent lexical bias in microarray datasets. The frequency of a user-specified list of search terms in a list of genes which are differentially regulated is assessed for statistical significance by comparison to randomly generated datasets. The simplicity of the input files and user interface targets the average microarray user who wishes to have a statistical measure of apparent lexical trends in analyzed datasets without the need for bioinformatics skills. The software is available as Perl source or a Windows executable. Conclusion: We have used LACK in our laboratory to generate biological hypotheses based on our microarray data. We demonstrate the program's utility using an example in which we confirm significant upregulation of the SPI-2 pathogenicity island of Salmonella enterica serovar Typhimurium by the cation chelator dipyridyl.
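
    The comparison-to-random-datasets step can be sketched as a simple permutation test (hypothetical Python illustrating the idea only; LACK itself is distributed as Perl source or a Windows executable, and its internals may differ):

        # Permutation sketch of lexical-bias testing (assumption: this mirrors
        # the comparison of observed term counts to randomly generated gene lists).
        import random

        def term_count(genes, annotations, term):
            """Count genes whose annotation text contains the search term."""
            return sum(term in annotations[g] for g in genes)

        def lexical_bias_p(regulated, all_genes, annotations, term,
                           n_perm=10000, seed=0):
            rng = random.Random(seed)
            observed = term_count(regulated, annotations, term)
            k = len(regulated)
            hits = 0
            for _ in range(n_perm):
                sample = rng.sample(all_genes, k)  # random gene list, same size
                if term_count(sample, annotations, term) >= observed:
                    hits += 1
            return (hits + 1) / (n_perm + 1)  # one-sided empirical p-value

        # Toy data (invented for illustration):
        ann = {"g1": "secretion system", "g2": "ribosome",
               "g3": "secretion apparatus", "g4": "metabolism",
               "g5": "secretion effector", "g6": "transport"}
        p = lexical_bias_p(["g1", "g3", "g5"], list(ann), ann, "secretion",
                           n_perm=2000)
        print(f"empirical p = {p:.4f}")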

  3. Lectures on statistical mechanics

    CERN Document Server

    Bowler, M G

    1982-01-01

    Anyone dissatisfied with the almost ritual dullness of many 'standard' texts in statistical mechanics will be grateful for the lucid explanation and generally reassuring tone. Aimed at securing firm foundations for equilibrium statistical mechanics, topics of great subtlety are presented transparently and enthusiastically. Very little mathematical preparation is required beyond elementary calculus and prerequisites in physics are limited to some elementary classical thermodynamics. Suitable as a basis for a first course in statistical mechanics, the book is an ideal supplement to more convent

  4. Statistics at square one

    CERN Document Server

    Campbell, M J

    2011-01-01

    The new edition of this international bestseller continues to throw light on the world of statistics for health care professionals and medical students. Revised throughout, the 11th edition features new material on relative risk, absolute risk and numbers needed to treat; diagnostic tests, sensitivity, specificity and ROC curves; and free statistical software. The popular self-testing exercises at the end of every chapter are strengthened by the addition of new sections on reading and reporting statistics and formula appreciation.

  5. Optimization techniques in statistics

    CERN Document Server

    Rustagi, Jagdish S

    1994-01-01

    Statistics help guide us to optimal decisions under uncertainty. A large variety of statistical problems are essentially solutions to optimization problems. The mathematical techniques of optimization are fundamental to statistical theory and practice. In this book, Jagdish Rustagi provides full-spectrum coverage of these methods, ranging from classical optimization and Lagrange multipliers, to numerical techniques using gradients or direct search, to linear, nonlinear, and dynamic programming using the Kuhn-Tucker conditions or the Pontryagin maximal principle. Variational methods and optimiza

  6. Equilibrium statistical mechanics

    CERN Document Server

    Jackson, E Atlee

    2000-01-01

    Ideal as an elementary introduction to equilibrium statistical mechanics, this volume covers both classical and quantum methodology for open and closed systems. Introductory chapters familiarize readers with probability and microscopic models of systems, while additional chapters describe the general derivation of the fundamental statistical mechanics relationships. The final chapter contains 16 sections, each dealing with a different application, ordered according to complexity, from classical through degenerate quantum statistical mechanics. Key features include an elementary introduction t

  7. Applied statistics for economists

    CERN Document Server

    Lewis, Margaret

    2012-01-01

    This book is an undergraduate text that introduces students to commonly-used statistical methods in economics. Using examples based on contemporary economic issues and readily-available data, it not only explains the mechanics of the various methods, it also guides students to connect statistical results to detailed economic interpretations. Because the goal is for students to be able to apply the statistical methods presented, online sources for economic data and directions for performing each task in Excel are also included.

  8. Equilibrium statistical mechanics

    CERN Document Server

    Mayer, J E

    1968-01-01

    The International Encyclopedia of Physical Chemistry and Chemical Physics, Volume 1: Equilibrium Statistical Mechanics covers the fundamental principles and the development of theoretical aspects of equilibrium statistical mechanics. Statistical mechanics is the study of the connection between the macroscopic behavior of bulk matter and the microscopic properties of its constituent atoms and molecules. This book contains eight chapters, and begins with a presentation of the master equation used for the calculation of the fundamental thermodynamic functions. The succeeding chapters highlight t

  9. Mathematical statistics with applications

    CERN Document Server

    Wackerly, Dennis D; Scheaffer, Richard L

    2008-01-01

    In their bestselling MATHEMATICAL STATISTICS WITH APPLICATIONS, premiere authors Dennis Wackerly, William Mendenhall, and Richard L. Scheaffer present a solid foundation in statistical theory while conveying the relevance and importance of the theory in solving practical problems in the real world. The authors' use of practical applications and excellent exercises helps you discover the nature of statistics and understand its essential role in scientific research.

  10. Contributions to statistics

    CERN Document Server

    Mahalanobis, P C

    1965-01-01

    Contributions to Statistics focuses on the processes, methodologies, and approaches involved in statistics. The book is presented to Professor P. C. Mahalanobis on the occasion of his 70th birthday. The selection first offers information on the recovery of ancillary information and combinatorial properties of partially balanced designs and association schemes. Discussions focus on combinatorial applications of the algebra of association matrices, sample size analogy, association matrices and the algebra of association schemes, and conceptual statistical experiments. The book then examines latt

  11. Statistical discrete geometry

    CERN Document Server

    Ariwahjoedi, Seramika; Kosasih, Jusak Sali; Rovelli, Carlo; Zen, Freddy Permana

    2016-01-01

    Following our earlier work, we construct statistical discrete geometry by applying statistical mechanics to discrete (Regge) gravity. We propose a coarse-graining method for discrete geometry under the assumptions of atomism and background independence. To maintain these assumptions, restrictions are given to the theory by introducing cut-offs in both the ultraviolet and infrared regimes. Having a well-defined statistical picture of discrete Regge geometry, we take the limit of infinite degrees of freedom (large n). We argue that the correct limit consistent with the restrictions and the background independence concept is not the continuum limit of statistical mechanics, but the thermodynamical limit.

  12. Improved Statistics Handling

    OpenAIRE

    2009-01-01

    Ericsson is a global provider of telecommunications systems equipment and related services for mobile and fixed network operators. 3Gsim is a tool used by Ericsson in tests of the 3G RNC node. In order to validate the tests, statistics are constantly gathered within 3Gsim, and users can use telnet to access the statistics via some system-specific 3Gsim commands. The statistics can be retrieved but are unstructured to the human eye and need parsing and arranging to be readable. The statist...

  13. Annual Statistical Supplement, 2008

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2008 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  14. Annual Statistical Supplement, 2004

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2004 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  15. Annual Statistical Supplement, 2006

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2006 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  16. Annual Statistical Supplement, 2016

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2016 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  17. Annual Statistical Supplement, 2010

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2010 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  18. Annual Statistical Supplement, 2002

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2002 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  19. Annual Statistical Supplement, 2003

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2003 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  20. Annual Statistical Supplement, 2011

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2011 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  1. Annual Statistical Supplement, 2000

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2000 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  2. Annual Statistical Supplement, 2015

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2015 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  3. Annual Statistical Supplement, 2009

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2009 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  4. Annual Statistical Supplement, 2014

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2014 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  5. Annual Statistical Supplement, 2007

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2007 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  6. Annual Statistical Supplement, 2005

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2005 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  7. Annual Statistical Supplement, 2001

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2001 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  8. 100 statistical tests

    CERN Document Server

    Kanji, Gopal K

    2006-01-01

    This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.

  9. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  10. Statistics in a Nutshell

    CERN Document Server

    Boslaugh, Sarah

    2008-01-01

    Need to learn statistics as part of your job, or want some help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference that's perfect for anyone with no previous background in the subject. This book gives you a solid understanding of statistics without being too simple, yet without the numbing complexity of most college texts. You get a firm grasp of the fundamentals and a hands-on understanding of how to apply them before moving on to the more advanced material that follows. Each chapter presents you with easy-to-follow descriptions illustrat

  11. Record Statistics and Dynamics

    DEFF Research Database (Denmark)

    Sibani, Paolo; Jensen, Henrik J.

    2009-01-01

    The term record statistics covers the statistical properties of records within an ordered series of numerical data obtained from observations or measurements. A record within such a series is simply a value larger (or smaller) than all preceding values. The mathematical properties of records strongly...... fluctuations of, e.g., the energy are able to push the system past some sort of ‘edge of stability’, inducing irreversible configurational changes, whose statistics then closely follows the statistics of record fluctuations....
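
    As a concrete illustration (our own sketch, not from the source): for an i.i.d. series of n observations, the expected number of records is the harmonic number H_n, which a short simulation confirms.

        # Records in a series: a value larger than all preceding values.
        import random

        def count_records(series):
            records, best = 0, float("-inf")
            for x in series:
                if x > best:
                    records, best = records + 1, x
            return records

        random.seed(1)
        n, trials = 1000, 2000
        avg = sum(count_records([random.random() for _ in range(n)])
                  for _ in range(trials)) / trials
        harmonic = sum(1 / k for k in range(1, n + 1))  # ~ ln(n) + 0.5772
        print(f"simulated mean records = {avg:.2f}, H_n = {harmonic:.2f}")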

  12. Statistics is Easy

    CERN Document Server

    Shasha, Dennis

    2010-01-01

    Statistics is the activity of inferring results about a population given a sample. Historically, statistics books assume an underlying distribution to the data (typically, the normal distribution) and derive results under that assumption. Unfortunately, in real life, one cannot normally be sure of the underlying distribution. For that reason, this book presents a distribution-independent approach to statistics based on a simple computational counting idea called resampling. This book explains the basic concepts of resampling, then systematically presents the standard statistical measures along
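
    The resampling idea fits in a few lines (a hypothetical sketch of the general technique, not code from the book): bootstrap a confidence interval for a mean without assuming any underlying distribution.

        # Distribution-independent inference by resampling (bootstrap sketch).
        import random

        def bootstrap_ci_mean(data, n_boot=10000, alpha=0.05, seed=0):
            rng = random.Random(seed)
            n = len(data)
            means = sorted(
                sum(rng.choice(data) for _ in range(n)) / n  # resample w/ replacement
                for _ in range(n_boot)
            )
            lo = means[int((alpha / 2) * n_boot)]
            hi = means[int((1 - alpha / 2) * n_boot) - 1]
            return lo, hi

        sample = [2.1, 3.4, 2.8, 5.0, 4.2, 3.9, 2.5, 4.8, 3.1, 3.7]
        lo, hi = bootstrap_ci_mean(sample)
        print(f"95% CI for the mean: {lo:.2f} .. {hi:.2f}")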

  13. Principles of statistics

    CERN Document Server

    Bulmer, M G

    1979-01-01

    There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo

  15. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment.With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the

  16. Statistical Mechanics of Zooplankton.

    Science.gov (United States)

    Hinow, Peter; Nihongi, Ai; Strickler, J Rudi

    2015-01-01

    Statistical mechanics provides the link between microscopic properties of many-particle systems and macroscopic properties such as pressure and temperature. Observations of similar "microscopic" quantities exist for the motion of zooplankton, as well as many species of other social animals. Herein, we propose to take average squared velocities as the definition of the "ecological temperature" of a population under different conditions on nutrients, light, oxygen and others. We test the usefulness of this definition on observations of the crustacean zooplankton Daphnia pulicaria. In one set of experiments, D. pulicaria is infested with the pathogen Vibrio cholerae, the causative agent of cholera. We find that infested D. pulicaria under light exposure have a significantly greater ecological temperature, which puts them at a greater risk of detection by visual predators. In a second set of experiments, we observe D. pulicaria in cold and warm water, and in darkness and under light exposure. Overall, our ecological temperature is a good discriminator of the crustacean's swimming behavior.
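
    A hedged sketch of the proposed quantity (hypothetical Python; the data layout and the numbers below are our own illustration, not the authors' data):

        # "Ecological temperature" as the population-average squared speed.
        def ecological_temperature(velocities):
            """velocities: iterable of (vx, vy, vz) samples from tracked animals."""
            sq = [vx * vx + vy * vy + vz * vz for vx, vy, vz in velocities]
            return sum(sq) / len(sq)

        infested = [(1.2, 0.4, -0.3), (0.9, -0.8, 0.5), (1.5, 0.2, 0.1)]
        healthy = [(0.6, 0.3, -0.2), (0.5, -0.4, 0.3), (0.7, 0.1, 0.2)]
        print(f"T_eco infested = {ecological_temperature(infested):.3f}")
        print(f"T_eco healthy  = {ecological_temperature(healthy):.3f}")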

  18. A Statistical Programme Assignment Model

    DEFF Research Database (Denmark)

    Rosholm, Michael; Staghøj, Jonas; Svarer, Michael

    When treatment effects of active labour market programmes are heterogeneous in an observable way across the population, the allocation of the unemployed into different programmes becomes a particularly important issue. In this paper, we present a statistical model designed to improve the present...... assignment mechanism, which is based on the discretionary choice of case workers. This is done in a duration model context, using the timing-of-events framework to identify causal effects. We compare different assignment mechanisms, and the results suggest that a significant reduction in the average...... duration of unemployment spells may result if a statistical programme assignment model is introduced. We discuss several issues regarding the implementation of such a system, especially the interplay between the statistical model and case workers....

  20. Applied Statistics with SPSS

    Science.gov (United States)

    Huizingh, Eelko K. R. E.

    2007-01-01

    Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…

  1. Statistical Hadronization and Holography

    DEFF Research Database (Denmark)

    Bechi, Jacopo

    2009-01-01

    In this paper we consider some issues about the statistical model of the hadronization in a holographic approach. We introduce a Rindler like horizon in the bulk and we understand the string breaking as a tunneling event under this horizon. We calculate the hadron spectrum and we get a thermal......, and so statistical, shape for it....

  2. Handbook of Spatial Statistics

    CERN Document Server

    Gelfand, Alan E

    2010-01-01

    Offers an introduction detailing the evolution of the field of spatial statistics. This title focuses on the three main branches of spatial statistics: continuous spatial variation (point referenced data); discrete spatial variation, including lattice and areal unit data; and, spatial point patterns.

  3. Practical statistics simply explained

    CERN Document Server

    Langley, Dr Russell A

    1971-01-01

    For those who need to know statistics but shy away from math, this book teaches how to extract truth and draw valid conclusions from numerical data using logic and the philosophy of statistics rather than complex formulae. Lucid discussion of averages and scatter, investigation design, more. Problems with solutions.

  4. Statistical methods in astronomy

    OpenAIRE

    Long, James P.; de Souza, Rafael S.

    2017-01-01

    We present a review of data types and statistical methods often encountered in astronomy. The aim is to provide an introduction to statistical applications in astronomy for statisticians and computer scientists. We highlight the complex, often hierarchical, nature of many astronomy inference problems and advocate for cross-disciplinary collaborations to address these challenges.

  5. Thiele. Pioneer in statistics

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt

    This book studies the brilliant Danish 19th Century astronomer, T.N. Thiele who made important contributions to statistics, actuarial science, astronomy and mathematics. The most important of these contributions in statistics are translated into English for the first time, and the text includes...

  7. Inductive Logic and Statistics

    NARCIS (Netherlands)

    Romeijn, J. -W.

    2009-01-01

    This chapter concerns inductive logic in relation to mathematical statistics. I start by introducing a general notion of probabilistic inductive inference. Then I introduce Carnapian inductive logic, and I show that it can be related to Bayesian statistical inference via de Finetti's representatio

  8. Statistical mechanics of pluripotency.

    Science.gov (United States)

    MacArthur, Ben D; Lemischka, Ihor R

    2013-08-01

    Recent reports using single-cell profiling have indicated a remarkably dynamic view of pluripotent stem cell identity. Here, we argue that the pluripotent state is not well defined at the single-cell level but rather is a statistical property of stem cell populations, amenable to analysis using the tools of statistical mechanics and information theory.

  10. Application Statistics 1987.

    Science.gov (United States)

    Council of Ontario Universities, Toronto.

    Summary statistics on application and registration patterns of applicants wishing to pursue full-time study in first-year places in Ontario universities (for the fall of 1987) are given. Data on registrations were received indirectly from the universities as part of their annual submission of USIS/UAR enrollment data to Statistics Canada and MCU.…

  11. Deconstructing Statistical Analysis

    Science.gov (United States)

    Snell, Joel

    2014-01-01

    Using a very complex statistical analysis and research method for the sake of enhancing the prestige of an article, or of making a new product or service appear legitimate, needs to be monitored and questioned for accuracy. 1) The more complicated the statistical analysis and research, the fewer learned readers can understand it. This adds a…

  12. Practical statistics for educators

    CERN Document Server

    Ravid, Ruth

    2014-01-01

    Practical Statistics for Educators, Fifth Edition, is a clear and easy-to-follow text written specifically for education students in introductory statistics courses and in action research courses. It is also a valuable resource and guidebook for educational practitioners who wish to study their own settings.

  13. Designing Statistical Language Learners: Experiments on Noun Compounds

    CERN Document Server

    Lauer, M

    1995-01-01

    The goal of this thesis is to advance the exploration of the statistical language learning design space. In pursuit of that goal, the thesis makes two main theoretical contributions: (i) it identifies a new class of designs by specifying an architecture for natural language analysis in which probabilities are given to semantic forms rather than to more superficial linguistic elements; and (ii) it explores the development of a mathematical theory to predict the expected accuracy of statistical language learning systems in terms of the volume of data used to train them. The theoretical work is illustrated by applying statistical language learning designs to the analysis of noun compounds. Both syntactic and semantic analysis of noun compounds are attempted using the proposed architecture. Empirical comparisons demonstrate that the proposed syntactic model is significantly better than those previously suggested, approaching the performance of human judges on the same task, and that the proposed semantic model, t...

  14. Statistical corrections to numerical predictions. IV. [of weather

    Science.gov (United States)

    Schemm, Jae-Kyung; Faller, Alan J.

    1986-01-01

    The National Meteorological Center Barotropic-Mesh Model has been used to test a statistical correction procedure, designated as M-II, that was developed in Schemm et al. (1981). In the present application, statistical corrections at 12 h resulted in significant reductions of the mean-square errors of both vorticity and the Laplacian of thickness. Predictions to 48 h demonstrated the feasibility of applying corrections at every 12 h in extended forecasts. In addition to these improvements, however, the statistical corrections resulted in a shift of error from smaller to larger-scale motions, improving the smallest scales dramatically but deteriorating the largest scales. This effect is shown to be a consequence of randomization of the residual errors by the regression equations and can be corrected by spatially high-pass filtering the field of corrections before they are applied.

  15. Statistical laws in linguistics

    CERN Document Server

    Altmann, Eduardo G

    2015-01-01

    Zipf's law is just one out of many universal laws proposed to describe statistical regularities in language. Here we review and critically discuss how these laws can be statistically interpreted, fitted, and tested (falsified). The modern availability of large databases of written text allows for tests with an unprecedented statistical accuracy and also a characterization of the fluctuations around the typical behavior. We find that fluctuations are usually much larger than expected based on simplifying statistical assumptions (e.g., independence and lack of correlations between observations). These simplifications appear also in usual statistical tests, so that the large fluctuations can be erroneously interpreted as a falsification of the law. Instead, here we argue that linguistic laws are only meaningful (falsifiable) if accompanied by a model for which the fluctuations can be computed (e.g., a generative model of the text). The large fluctuations we report show that the constraints imposed by linguistic laws...
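
    A minimal sketch of the kind of rank-frequency check under discussion (hypothetical Python on a toy text; as the abstract stresses, a real test also needs a model for the fluctuations):

        # Rank-frequency check of Zipf's law (toy sketch; real tests need
        # large corpora and a fluctuation model).
        from collections import Counter
        import math

        text = "the quick brown fox jumps over the lazy dog the fox the dog".split()
        freqs = sorted(Counter(text).values(), reverse=True)

        # Crude slope estimate in log-log coordinates (Zipf predicts ~ -1):
        xs = [math.log(r) for r in range(1, len(freqs) + 1)]
        ys = [math.log(f) for f in freqs]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        print(f"log-log slope = {slope:.2f}")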

  16. Introduction to Bayesian statistics

    CERN Document Server

    Bolstad, William M

    2017-01-01

    There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for Multivariate Normal mean vector; Bayesian inference for Multiple Linear Regression Model; and Computati...

  17. Statistical Methods for Astronomy

    CERN Document Server

    Feigelson, Eric D

    2012-01-01

    This review outlines concepts of mathematical statistics, elements of probability theory, hypothesis tests and point estimation for use in the analysis of modern astronomical data. Least squares, maximum likelihood, and Bayesian approaches to statistical inference are treated. Resampling methods, particularly the bootstrap, provide valuable procedures when distribution functions of statistics are not known. Several approaches to model selection and goodness of fit are considered. Applied statistics relevant to astronomical research are briefly discussed: nonparametric methods for use when little is known about the behavior of the astronomical populations or processes; data smoothing with kernel density estimation and nonparametric regression; unsupervised clustering and supervised classification procedures for multivariate problems; survival analysis for astronomical datasets with nondetections; time- and frequency-domain time series analysis for light curves; and spatial statistics to interpret the spati...

  19. Root approach for estimation of statistical distributions

    CERN Document Server

    Bogdanov, Yu I

    2014-01-01

    Application of the root density estimator to problems of statistical data analysis is demonstrated. Four sets of basis functions based on Chebyshev-Hermite, Laguerre, Kravchuk and Charlier polynomials are considered. The sets may be used for numerical analysis in problems of reconstructing statistical distributions from experimental data. Based on the root approach to reconstruction of statistical distributions and quantum states, we study a family of statistical distributions in which the probability density is the product of a Gaussian distribution and an even-degree polynomial. Examples of numerical modeling are given. The results of the present paper are of interest for the development of tomography of quantum states and processes.
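
    The family of densities described can be sketched directly (hypothetical Python; following the root-approach idea, the even-degree polynomial arises here as the square of a polynomial expansion of the "root" of the density, which also keeps the density nonnegative):

        # Density = Gaussian times even-degree polynomial, built (our assumption)
        # as p(x) = psi(x)^2 with psi expanded in orthonormal Hermite functions.
        import math

        def hermite(k, x):
            """Physicists' Hermite polynomial H_k(x) by recurrence."""
            if k == 0:
                return 1.0
            h_prev, h = 1.0, 2.0 * x
            for n in range(1, k):
                h_prev, h = h, 2.0 * x * h - 2.0 * n * h_prev
            return h

        def density(x, coeffs):
            """p(x) = [sum_k c_k phi_k(x)]^2; normalized when sum c_k^2 = 1."""
            psi = sum(
                c * hermite(k, x) * math.exp(-x * x / 2.0)
                / math.sqrt(2.0 ** k * math.factorial(k) * math.sqrt(math.pi))
                for k, c in enumerate(coeffs)
            )
            return psi * psi

        coeffs = [math.sqrt(0.8), 0.0, math.sqrt(0.2)]  # c0^2 + c2^2 = 1
        xs = [i * 0.01 for i in range(-800, 801)]
        print("integral ~", sum(density(x, coeffs) for x in xs) * 0.01)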

  20. The Role of Previous Experience and Attitudes toward Statistics in Statistics Assessment Outcomes among Undergraduate Psychology Students

    Science.gov (United States)

    Dempster, Martin; McCorry, Noleen K.

    2009-01-01

    Previous research has demonstrated that students' cognitions about statistics are related to their performance in statistics assessments. The purpose of this research is to examine the nature of the relationships between undergraduate psychology students' previous experiences of maths, statistics and computing; their attitudes toward statistics;…

  1. Exploration Medical System Demonstration Project

    Science.gov (United States)

    Chin, D. A.; McGrath, T. L.; Reyna, B.; Watkins, S. D.

    2011-01-01

    A near-Earth Asteroid (NEA) mission will present significant new challenges including hazards to crew health created by exploring a beyond low earth orbit destination, traversing the terrain of asteroid surfaces, and the effects of variable gravity environments. Limited communications with ground-based personnel for diagnosis and consultation of medical events require increased crew autonomy when diagnosing conditions, creating treatment plans, and executing procedures. Scope: The Exploration Medical System Demonstration (EMSD) project will be a test bed on the International Space Station (ISS) to show an end-to-end medical system assisting the Crew Medical Officers (CMO) in optimizing medical care delivery and medical data management during a mission. NEA medical care challenges include resource and resupply constraints limiting the extent to which medical conditions can be treated, inability to evacuate to Earth during many mission phases, and rendering of medical care by a non-clinician. The system demonstrates the integration of medical technologies and medical informatics tools for managing evidence and decision making. Project Objectives: The objectives of the EMSD project are to: a) Reduce and possibly eliminate the time required for a crewmember and ground personnel to manage medical data from one application to another. b) Demonstrate crewmember's ability to access medical data/information via a software solution to assist/aid in the treatment of a medical condition. c) Develop a common data management architecture that can be ubiquitously used to automate repetitive data collection, management, and communications tasks for all crew health and life sciences activities. d) Develop a common data management architecture that allows for scalability, extensibility, and interoperability of data sources and data users. e) Lower total cost of ownership for development and sustainment of peripheral hardware and software that use EMSD for data management f) Provide

  2. The Edgewater Coolside process demonstration

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, D.C.; Scandrol, R.O.; Statnick, R.M.; Stouffer, M.R.; Winschel, R.A.; Withum, J.A.; Wu, M.M.; Yoon, H. [CONSOL, Inc., Pittsburgh, PA (United States)

    1992-02-01

    The Edgewater Coolside process demonstration met the program objectives which were to determine Coolside SO₂ removal performance, establish short-term process operability, and evaluate the economics of the process versus a limestone wet scrubber. On a flue gas produced from the combustion of 3% sulfur coal, the Coolside process achieved 70% SO₂ removal using commercially-available hydrated lime as the sorbent. The operating conditions were Ca/S mol ratio 2.0, Na/Ca mol ratio 0.2, and 20°F approach to adiabatic saturation temperature (ΔT). During tests using fresh plus recycle sorbent, the recycle sorbent exhibited significant capacity for additional SO₂ removal. The longest steady state operation was eleven days at nominally Ca/S = 2, Na/Ca = 0.22, ΔT = 20–22°F, and 70% SO₂ removal. The operability results achieved during the demonstration indicate that with the recommended process modifications, which are discussed in the Coolside process economic analysis, the process could be designed as a reliable system for utility application. Based on the demonstration program, the Coolside process capital cost for a hypothetical commercial installation was minimized. The optimization consisted of a single, large humidifier, no spare air compressor, no isolation dampers, and a 15 day on-site hydrated lime storage. The levelized costs of the Coolside and the wet limestone scrubbing processes were compared. The Coolside process is generally economically competitive with wet scrubbing for coals containing up to 2.5% sulfur and plants under 350 MWe. Site-specific factors such as plant capacity factor, SO₂ emission limit, remaining plant life, retrofit difficulty, and delivered sorbent cost affect the scrubber-Coolside process economic comparison.

  4. Statistics a complete introduction

    CERN Document Server

    Graham, Alan

    2013-01-01

    Statistics: A Complete Introduction is the most comprehensive yet easy-to-use introduction to using Statistics. Written by a leading expert, this book will help you if you are studying for an important exam or essay, or if you simply want to improve your knowledge. The book covers all the key areas of Statistics including graphs, data interpretation, spreadsheets, regression, correlation and probability. Everything you will need is here in this one book. Each chapter includes not only an explanation of the knowledge and skills you need, but also worked examples and test questions.

  5. Statistics of football dynamics

    CERN Document Server

    Mendes, R S; Anteneodo, C

    2007-01-01

    We investigate the dynamics of football matches. Our goal is to characterize statistically the temporal sequence of ball movements in this collective sport game, searching for traits of complex behavior. Data were collected over a variety of matches in South American, European and World championships throughout 2005 and 2006. We show that the statistics of ball touches presents power-law tails and can be described by $q$-gamma distributions. To explain such behavior we propose a model that provides information on the characteristics of football dynamics. Furthermore, we discuss the statistics of duration of out-of-play intervals, not directly related to the previous scenario.
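
    For reference, in the usual Tsallis-statistics notation (our reading of the standard form; the exact parametrization in the paper may differ), a q-gamma density generalizes the gamma density by replacing the exponential with a q-exponential:

        % q-exponential and q-gamma density (assumed Tsallis parametrization)
        e_q(x) = \left[ 1 + (1-q)\,x \right]_{+}^{1/(1-q)}, \qquad
        f(t) \propto t^{\gamma - 1}\, e_q(-t/\theta), \qquad t > 0.

    For q → 1 the q-exponential reduces to the ordinary exponential and f(t) recovers the usual gamma density.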

  6. Practical business statistics

    CERN Document Server

    Siegel, Andrew

    2011-01-01

    Practical Business Statistics, Sixth Edition, is a conceptual, realistic, and matter-of-fact approach to managerial statistics that carefully maintains-but does not overemphasize-mathematical correctness. The book offers a deep understanding of how to learn from data and how to deal with uncertainty while promoting the use of practical computer applications. This teaches present and future managers how to use and understand statistics without an overdose of technical detail, enabling them to better understand the concepts at hand and to interpret results. The text uses excellent examples with

  7. Multivariate Statistical Process Control

    DEFF Research Database (Denmark)

    Kulahci, Murat

    2013-01-01

    As sensor and computer technology continues to improve, it becomes a normal occurrence that we are confronted with high-dimensional data sets. As in many areas of industrial statistics, this brings forth various challenges in statistical process control (SPC) and monitoring, for which the aim...... is to identify the “out-of-control” state of a process using control charts in order to reduce the excessive variation caused by so-called assignable causes. In practice, the most common method of monitoring multivariate data is through a statistic akin to Hotelling's T2. For high-dimensional data with excessive...
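
    A hedged sketch of the monitoring statistic mentioned (hypothetical Python; the data and the shift below are illustrative only):

        # Hotelling's T^2 for multivariate monitoring:
        # T^2 = (x - mu)' S^{-1} (x - mu); large values signal "out of control".
        import numpy as np

        rng = np.random.default_rng(0)
        reference = rng.normal(size=(200, 3))  # in-control (phase I) data
        mu = reference.mean(axis=0)
        S_inv = np.linalg.inv(np.cov(reference, rowvar=False))

        def t2(x):
            d = x - mu
            return float(d @ S_inv @ d)

        print("in-control point:", round(t2(np.zeros(3)), 2))
        print("shifted point   :", round(t2(np.array([3.0, 3.0, 0.0])), 2))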

  8. The nature of statistics

    CERN Document Server

    Wallis, W Allen

    2014-01-01

    Focusing on everyday applications as well as those of scientific research, this classic of modern statistical methods requires little to no mathematical background. Readers develop basic skills for evaluating and using statistical data. Lively, relevant examples include applications to business, government, social and physical sciences, genetics, medicine, and public health. ""W. Allen Wallis and Harry V. Roberts have made statistics fascinating."" - The New York Times ""The authors have set out with considerable success, to write a text which would be of interest and value to the student who,

  9. Statistical deception at work

    CERN Document Server

    Mauro, John

    2013-01-01

    Written to reveal statistical deceptions often thrust upon unsuspecting journalists, this book views the use of numbers from a public perspective. Illustrating how the statistical naivete of journalists often nourishes quantitative misinformation, the author's intent is to make journalists more critical appraisers of numerical data so that in reporting them they do not deceive the public. The book frequently uses actual reported examples of misused statistical data reported by mass media and describes how journalists can avoid being taken in by them. Because reports of survey findings seldom g

  10. Statistical Pattern Recognition

    CERN Document Server

    Webb, Andrew R

    2011-01-01

    Statistical pattern recognition relates to the use of statistical techniques for analysing data measurements in order to extract information and make justified decisions.  It is a very active area of study and research, which has seen many advances in recent years. Applications such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition, all require robust and efficient pattern recognition techniques. This third edition provides an introduction to statistical pattern theory and techniques, with material drawn from a wide range of fields,

  11. Statistical Engine Knock Control

    DEFF Research Database (Denmark)

    Stotsky, Alexander A.

    2008-01-01

    A new statistical concept of the knock control of a spark ignition automotive engine is proposed. The control aim is associated with the statistical hypothesis test which compares the threshold value to the average value of the maximal amplitude of the knock sensor signal at a given frequency....... Control algorithm which is used for minimization of the regulation error realizes a simple count-up-count-down logic. A new adaptation algorithm for the knock detection threshold is also developed. Confidence interval method is used as the basis for adaptation. A simple statistical model...
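
    A hedged sketch of the count-up/count-down logic described (hypothetical Python; the step sizes, window length, and decision rule are our assumptions, not taken from the paper):

        # Count-up/count-down knock control (sketch; all constants assumed).
        def control_step(retard, threshold, window, step_up=2.0, step_down=0.5):
            """Compare the average maximal knock amplitude over a window of
            cycles to the threshold; count up (retard ignition) on suspected
            knock, count down (advance) otherwise."""
            avg = sum(window) / len(window)
            if avg > threshold:  # hypothesis "knock present" accepted
                return retard + step_up
            return max(0.0, retard - step_down)

        retard = 0.0
        for window in ([0.8, 0.9, 1.1], [1.4, 1.6, 1.5], [0.7, 0.6, 0.9]):
            retard = control_step(retard, threshold=1.0, window=window)
            print(f"ignition retard -> {retard:.1f} deg")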

  12. Statistical Group Comparison

    CERN Document Server

    Liao, Tim Futing

    2011-01-01

    An incomparably useful examination of statistical methods for comparisonThe nature of doing science, be it natural or social, inevitably calls for comparison. Statistical methods are at the heart of such comparison, for they not only help us gain understanding of the world around us but often define how our research is to be carried out. The need to compare between groups is best exemplified by experiments, which have clearly defined statistical methods. However, true experiments are not always possible. What complicates the matter more is a great deal of diversity in factors that are not inde

  13. Informal Statistics Help Desk

    Science.gov (United States)

    Young, M.; Koslovsky, M.; Schaefer, Caroline M.; Feiveson, A. H.

    2017-01-01

    Back by popular demand, the JSC Biostatistics Laboratory and LSAH statisticians are offering an opportunity to discuss your statistical challenges and needs. Take the opportunity to meet the individuals offering expert statistical support to the JSC community. Join us for an informal conversation about any questions you may have encountered with issues of experimental design, analysis, or data visualization. Get answers to common questions about sample size, repeated measures, statistical assumptions, missing data, multiple testing, time-to-event data, and when to trust the results of your analyses.

  14. Approximating Stationary Statistical Properties

    Institute of Scientific and Technical Information of China (English)

    Xiaoming WANG

    2009-01-01

    It is well-known that physical laws for large chaotic dynamical systems are revealed statistically. Many times these statistical properties of the system must be approximated numerically. The main contribution of this manuscript is to provide simple and natural criteria on numerical methods (temporal and spatial discretization) that are able to capture the stationary statistical properties of the underlying dissipative chaotic dynamical systems asymptotically. The result on temporal approximation is a recent finding of the author, and the result on spatial approximation is a new one. Applications to the infinite Prandtl number model for convection and the barotropic quasi-geostrophic model are also discussed.

  15. Commentary: statistics for biomarkers.

    Science.gov (United States)

    Lovell, David P

    2012-05-01

    This short commentary discusses Biomarkers' requirements for the reporting of statistical analyses in submitted papers. It is expected that submitters will follow the general instructions of the journal, the more detailed guidance given by the International Committee of Medical Journal Editors, the specific guidelines developed by the EQUATOR network, and those of various specialist groups. Biomarkers expects that the study design and subsequent statistical analyses are clearly reported and that the data reported can be made available for independent assessment. The journal recognizes that there is continuing debate about different approaches to statistical science. Biomarkers appreciates that the field continues to develop rapidly and encourages the use of new methodologies.

  16. Evolutionary Statistical Procedures

    CERN Document Server

    Baragona, Roberto; Poli, Irene

    2011-01-01

    This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions a

  17. AP statistics crash course

    CERN Document Server

    D'Alessio, Michael

    2012-01-01

    AP Statistics Crash Course - Gets You a Higher Advanced Placement Score in Less Time Crash Course is perfect for the time-crunched student, the last-minute studier, or anyone who wants a refresher on the subject. AP Statistics Crash Course gives you: Targeted, Focused Review - Study Only What You Need to Know Crash Course is based on an in-depth analysis of the AP Statistics course description outline and actual Advanced Placement test questions. It covers only the information tested on the exam, so you can make the most of your valuable study time. Our easy-to-read format covers: exploring da

  18. Contaminant analysis automation demonstration proposal

    Energy Technology Data Exchange (ETDEWEB)

    Dodson, M.G.; Schur, A.; Heubach, J.G.

    1993-10-01

    The nation-wide and global need for environmental restoration and waste remediation (ER&WR) presents significant challenges to the analytical chemistry laboratory. The expansion of ER&WR programs forces an increase in the volume of samples processed and the demand for analysis data. To handle this expanding volume, productivity must be increased. However, the need for significantly increased productivity runs up against a contaminant analysis process that is costly in time, labor, equipment, and safety protection. Laboratory automation offers a cost-effective approach to meeting current and future contaminant analytical laboratory needs. The proposed demonstration will present a proof-of-concept automated laboratory conducting varied sample preparations. This automated process also highlights a graphical user interface that provides supervisory control and monitoring of the automated process. The demonstration provides affirming answers to the following questions about laboratory automation: Can preparation of contaminant samples be successfully automated? Can a full-scale working proof-of-concept automated laboratory be developed that is capable of preparing contaminant and hazardous chemical samples? Can the automated processes be seamlessly integrated and controlled? Can the automated laboratory be customized through a readily convertible design? And can automated sample preparation concepts be extended to the other phases of the sample analysis process? To fully reap the benefits of automation, four human factors areas should be studied and the outputs used to increase the efficiency of laboratory automation: (1) laboratory configuration, (2) procedures, (3) receptacles and fixtures, and (4) the human-computer interface for the full automated system and complex laboratory information management systems.

  19. Introduction to Statistically Designed Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Heaney, Mike

    2016-09-13

    Statistically designed experiments can save researchers time and money by reducing the number of necessary experimental trials while yielding more conclusive experimental results. Surprisingly, many researchers are still not aware of this efficient and effective experimental methodology. As reported in a 2013 article from Chemical & Engineering News, there has been a resurgence of this methodology in recent years (http://cen.acs.org/articles/91/i13/Design-Experiments-Makes-Comeback.html?h=2027056365). This presentation will provide a brief introduction to statistically designed experiments. The main advantages will be reviewed along with some basic concepts such as factorial and fractional factorial designs; a minimal illustration of these designs follows below. The recommended sequential approach to experiments will be introduced, and finally a case study will be presented to demonstrate the methodology.
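
    As a concrete illustration of the factorial and fractional factorial designs mentioned above, here is a minimal sketch (not from the presentation; the factor names are hypothetical) that enumerates a two-level full factorial design and takes the half fraction defined by I = ABC:

```python
from itertools import product

# Two-level full factorial design for three hypothetical factors.
# Levels are coded -1 (low) and +1 (high), the usual DOE convention.
factors = ["temperature", "pressure", "catalyst"]
full_factorial = list(product([-1, 1], repeat=len(factors)))
print(f"full factorial: {len(full_factorial)} runs")  # 2^3 = 8 runs

# 2^(3-1) half fraction: keep the runs satisfying the defining relation
# I = ABC, i.e. the product of the three coded levels equals +1, so the
# third factor is aliased with the two-factor interaction (C = AB).
half_fraction = [run for run in full_factorial
                 if run[0] * run[1] * run[2] == 1]
print(f"half fraction: {len(half_fraction)} runs")    # 4 runs
for run in half_fraction:
    print(dict(zip(factors, run)))
```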

  20. Universal Grammar, statistics or both?

    Science.gov (United States)

    Yang, Charles D

    2004-10-01

    Recent demonstrations of statistical learning in infants have reinvigorated the innateness versus learning debate in language acquisition. This article addresses these issues from both computational and developmental perspectives. First, I argue that statistical learning using transitional probabilities cannot reliably segment words when scaled to a realistic setting (e.g. child-directed English). To be successful, it must be constrained by knowledge of phonological structure. Then, turning to the bona fide theory of innateness--the Principles and Parameters framework--I argue that a full explanation of children's grammar development must abandon the domain-specific learning model of triggering, in favor of probabilistic learning mechanisms that might be domain-general but nevertheless operate in the domain-specific space of syntactic parameters.
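
    To make the transitional-probability mechanism concrete, here is a toy sketch of the classic segment-at-low-TP heuristic (the syllabified mini-corpus and threshold are hypothetical; this illustrates the mechanism the article critiques, not Yang's own model):

```python
from collections import Counter

# Toy syllabified corpus: each utterance is a list of syllables.
utterances = [["pre", "ty", "ba", "by"],
              ["pre", "ty", "dog", "gy"],
              ["ba", "by", "dog", "gy"]]

# Transitional probability TP(y | x) = count(x followed by y) / count(x).
unigrams, bigrams = Counter(), Counter()
for utt in utterances:
    unigrams.update(utt)
    bigrams.update(zip(utt, utt[1:]))

def tp(x, y):
    return bigrams[(x, y)] / unigrams[x]

def segment(utt, threshold=0.75):
    """Posit a word boundary wherever TP drops below the threshold."""
    words, current = [], [utt[0]]
    for x, y in zip(utt, utt[1:]):
        if tp(x, y) < threshold:
            words.append("".join(current))
            current = []
        current.append(y)
    words.append("".join(current))
    return words

# Within-word transitions here have TP = 1.0; across words TP falls to 0.5.
print(segment(["pre", "ty", "ba", "by"]))  # ['prety', 'baby']
```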

  1. STATISTICS IN SERVICE QUALITY ASSESSMENT

    Directory of Open Access Journals (Sweden)

    Dragana Gardašević

    2012-09-01

    Full Text Available For any quality evaluation in sports, science, education, and so on, it is useful to collect data in order to construct a strategy for improving the quality of services offered to the user. For this purpose, we use statistical software packages to process the data collected, with the aim of increasing customer satisfaction. The principle is demonstrated with the example of student satisfaction ratings at Belgrade Polytechnic, where students, as users, rate the quality of the institution. The emphasis here is on statistical analysis as a tool for quality control aimed at improvement, rather than on the interpretation of results. The above can therefore be used as a model in sport to improve overall results.

  2. Breast cancer statistics, 2011.

    Science.gov (United States)

    DeSantis, Carol; Siegel, Rebecca; Bandi, Priti; Jemal, Ahmedin

    2011-01-01

    In this article, the American Cancer Society provides an overview of female breast cancer statistics in the United States, including trends in incidence, mortality, survival, and screening. Approximately 230,480 new cases of invasive breast cancer and 39,520 breast cancer deaths are expected to occur among US women in 2011. Breast cancer incidence rates were stable among all racial/ethnic groups from 2004 to 2008. Breast cancer death rates have been declining since the early 1990s for all women except American Indians/Alaska Natives, among whom rates have remained stable. Disparities in breast cancer death rates are evident by state, socioeconomic status, and race/ethnicity. While significant declines in mortality rates were observed for 36 states and the District of Columbia over the past 10 years, rates for 14 states remained level. Analyses by county-level poverty rates showed that the decrease in mortality rates began later and was slower among women residing in poor areas. As a result, the highest breast cancer death rates shifted from the affluent areas to the poor areas in the early 1990s. Screening rates continue to be lower in poor women compared with non-poor women, despite much progress in increasing mammography utilization. In 2008, 51.4% of poor women had undergone a screening mammogram in the past 2 years compared with 72.8% of non-poor women. Encouraging patients aged 40 years and older to have annual mammography and a clinical breast examination is the single most important step that clinicians can take to reduce suffering and death from breast cancer. Clinicians should also ensure that patients at high risk of breast cancer are identified and offered appropriate screening and follow-up. Continued progress in the control of breast cancer will require sustained and increased efforts to provide high-quality screening, diagnosis, and treatment to all segments of the population.

  3. Elements of statistical thermodynamics

    CERN Document Server

    Nash, Leonard K

    2006-01-01

    Encompassing essentially all aspects of statistical mechanics that appear in undergraduate texts, this concise, elementary treatment shows how an atomic-molecular perspective yields new insights into macroscopic thermodynamics. 1974 edition.

  4. LBVs and Statistical Inference

    CERN Document Server

    Davidson, Kris; Weis, Kerstin

    2016-01-01

    Smith and Tombleson (2015) asserted that statistical tests disprove the standard view of LBVs and proposed a far more complex scenario to replace it. But Humphreys et al. (2016) showed that Smith and Tombleson's Magellanic "LBV" sample was a mixture of physically different classes of stars, and that genuine LBVs are in fact statistically consistent with the standard view. Smith (2016) recently objected at great length to this result. Here we note that he misrepresented some of the arguments, altered the test criteria, ignored some long-recognized observational facts, and employed inadequate statistical procedures. This case illustrates the dangers of careless statistical sampling, as well as the need to be wary of unstated assumptions.

  5. Ehrlichiosis: Statistics and Epidemiology

    Science.gov (United States)

    CDC web resource on the statistics and epidemiology of tick-borne ehrlichiosis (navigation residue removed); the page cites Holman RC, McQuiston JH, Krebs JW, Swerdlow DL, Epidemiology of human ehrlichiosis and anaplasmosis in the United States.

  6. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...... that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical......, identify interest rate models, value bonds, estimate parameters, and much more. This textbook will help students understand and manage empirical research in financial engineering. It includes examples of how the statistical tools can be used to improve value-at-risk calculations and other issues...

  7. CMS Statistics Reference Booklet

    Data.gov (United States)

    U.S. Department of Health & Human Services — The annual CMS Statistics reference booklet provides a quick reference for summary information about health expenditures and the Medicare and Medicaid health...

  8. Statistical theory of heat

    CERN Document Server

    Scheck, Florian

    2016-01-01

    Scheck’s textbook starts with a concise introduction to classical thermodynamics, including its geometrical aspects. A short introduction to probabilities and statistics then lays the basis for the statistical interpretation of thermodynamics. Phase transitions, discrete models and the stability of matter are explained in great detail. Thermodynamics has a special role in theoretical physics: owing to its general approach, the field serves a bridging function between several areas such as the theory of condensed matter, elementary particle physics, astrophysics and cosmology. Classical thermodynamics predominantly describes averaged properties of matter, ranging from few-particle systems and states of matter to stellar objects. Statistical thermodynamics covers the same fields but explores them in greater depth, unifying classical statistical mechanics with the quantum theory of many-particle systems. The content is presented as two tracks: the fast track for master students, providing the essen...

  9. Elements of Statistics

    Science.gov (United States)

    Grégoire, G.

    2016-05-01

    This chapter is devoted to two objectives. The first is to answer the request expressed by attendees of the first Astrostatistics School (Annecy, October 2013) to be provided with an elementary vademecum of statistics that would facilitate understanding of the courses given. In this spirit we recall very basic notions, that is, definitions and properties that we think sufficient to benefit from the courses given in the Astrostatistical School. Thus we briefly give definitions and elementary properties of random variables and vectors, distributions, estimation and tests, and maximum likelihood methodology. We intend to present basic ideas in a hopefully comprehensible way. We do not attempt a rigorous presentation and, given the space devoted to this chapter, can cover only a rather limited field of statistics. The second aim is to focus on some statistical tools that are useful in classification: a basic introduction to Bayesian statistics, maximum likelihood methodology, Gaussian vectors and Gaussian mixture models.
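
    As a one-line anchor for the Bayesian material mentioned above (a standard statement, not quoted from the chapter):

```latex
% Bayes' theorem for a parameter \theta given data x: the posterior is
% proportional to the likelihood times the prior.
p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}
                        {\int p(x \mid \theta')\, p(\theta')\, d\theta'}
```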

  10. Statistical mechanics of superconductivity

    CERN Document Server

    Kita, Takafumi

    2015-01-01

    This book provides a theoretical, step-by-step comprehensive explanation of superconductivity for undergraduate and graduate students who have completed elementary courses on thermodynamics and quantum mechanics. To this end, it adopts the unique approach of starting with the statistical mechanics of quantum ideal gases and successively adding and clarifying elements and techniques indispensable for understanding it. These include the spin-statistics theorem, second quantization, density matrices, the Bloch–De Dominicis theorem, the variational principle in statistical mechanics, attractive interaction, and bound states. Ample examples of their usage are also provided in terms of topics from advanced statistical mechanics such as two-particle correlations of quantum ideal gases, derivation of the Hartree–Fock equations, and Landau’s Fermi-liquid theory, among others. With these preliminaries, the fundamental mean-field equations of superconductivity are derived with maximum mathematical clarity based on ...

  11. Childhood Cancer Statistics

    Science.gov (United States)

    Web resource on childhood cancer statistics (navigation residue removed); the page lists statistics alongside resources for cancer types including lymphoma, neuroblastoma, osteosarcoma, retinoblastoma, rhabdomyosarcoma, skin cancer, soft tissue sarcoma, and thyroid cancer.

  12. Boating Accident Statistics

    Data.gov (United States)

    Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...

  13. Playing at Statistical Mechanics

    Science.gov (United States)

    Clark, Paul M.; And Others

    1974-01-01

    Discussed are applications of the counting techniques of a sorting game to distributions and concepts in statistical mechanics. Included are the Fermi-Dirac, Bose-Einstein, and most-probable distributions. (RH)
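
    For reference, the occupation-number forms of the two quantum distributions the sorting game leads to are standard (stated here for context, not quoted from the article):

```latex
% Mean occupation of a single-particle state of energy \epsilon at
% temperature T, with chemical potential \mu and Boltzmann constant k_B:
\bar{n}_{\mathrm{FD}}(\epsilon) = \frac{1}{e^{(\epsilon-\mu)/k_B T} + 1}
  \quad \text{(Fermi--Dirac)},
\qquad
\bar{n}_{\mathrm{BE}}(\epsilon) = \frac{1}{e^{(\epsilon-\mu)/k_B T} - 1}
  \quad \text{(Bose--Einstein)}
```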

  14. Transport statistics 1996

    CSIR Research Space (South Africa)

    Shepperson, L

    1997-12-01

    Full Text Available This publication contains transport and related statistics on roads, vehicles, infrastructure, passengers, freight, rail, air, maritime and road traffic, and international comparisons. The information compiled in this publication has been gathered...

  15. Boosted Statistical Mechanics

    CERN Document Server

    Testa, Massimo

    2015-01-01

    Based on the fundamental principles of Relativistic Quantum Mechanics, we give a rigorous, but completely elementary, proof of the relation between fundamental observables of a statistical system when measured relatively to two inertial reference frames, connected by a Lorentz transformation.

  17. Bureau of Labor Statistics

    Science.gov (United States)

    U.S. Bureau of Labor Statistics website (only footer navigation recoverable): Postal Square Building, 2 Massachusetts Avenue NE, Washington, DC.

  18. Statistics For Neuroscientists

    Directory of Open Access Journals (Sweden)

    Subbakrishna D.K

    2000-01-01

    Full Text Available The role statistical methods play in medicine in the interpretation of empirical data is well recognized by researchers. With modern computing facilities and software packages there is little need for familiarity with the computational details of statistical calculations. However, for the researcher to understand whether these calculations are valid and appropriate, it is necessary to be aware of the rudiments of the statistical methodology. It also needs to be emphasized that no amount of advanced analysis can substitute for a properly planned and executed study. An attempt is made in this communication to discuss some of the theoretical issues that are important for the valid analysis and interpretation of the precious data that are gathered. The article summarizes some basic statistical concepts, followed by illustrations from live data generated by various research projects of the Department of Neurology of this institute.

  19. Information theory and statistics

    CERN Document Server

    Kullback, Solomon

    1997-01-01

    Highly useful text studies logarithmic measures of information and their application to testing statistical hypotheses. Includes numerous worked examples and problems. References. Glossary. Appendix. 1968 2nd, revised edition.

  20. Statistics of the sagas

    Science.gov (United States)

    Richfield, Jon; bookfeller

    2016-07-01

    In reply to Ralph Kenna and Pádraig Mac Carron's feature article “Maths meets myths” in which they describe how they are using techniques from statistical physics to characterize the societies depicted in ancient Icelandic sagas.

  1. Statistics of extremes

    CERN Document Server

    Gumbel, E J

    2012-01-01

    This classic text covers order statistics and their exceedances; exact distribution of extremes; the 1st asymptotic distribution; uses of the 1st, 2nd, and 3rd asymptotes; more. 1958 edition. Includes 44 tables and 97 graphs.
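
    For context, the first asymptotic distribution treated in the book is what is now called the Gumbel distribution; its standard double-exponential form (notation assumed here) is:

```latex
% Type I (Gumbel) extreme-value CDF, with location \mu and scale \beta > 0:
F(x) = \exp\!\left( -e^{-(x-\mu)/\beta} \right), \qquad -\infty < x < \infty
```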

  2. Medicaid Drug Claims Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicaid Drug Claims Statistics CD is a useful tool that conveniently breaks up Medicaid claim counts and separates them by quarter and includes an annual count.

  3. EDI Performance Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — This section contains statistical information and reports related to the percentage of electronic transactions being sent to Medicare contractors in the formats...

  4. Plague Maps and Statistics

    Science.gov (United States)

    CDC maps and statistics for plague (navigation residue removed): plague in the United States, including reported cases per year 1900-2012, and plague worldwide, with epidemics recorded in Africa and Asia.

  5. The SAPS crime statistics

    African Journals Online (AJOL)

    Every year, the South African Minister of Police releases the crime statistics in ... prove an invaluable source of information for those who seek to better understand and respond to crime ... of Social Development in the JCPS may suggest a.

  6. CDC WONDER: Cancer Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The United States Cancer Statistics (USCS) online databases in WONDER provide cancer incidence and mortality data for the United States for the years since 1999, by...

  7. Probability and Statistical Inference

    OpenAIRE

    Prosper, Harrison B.

    2006-01-01

    These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.

  8. German cancer statistics 2004

    OpenAIRE

    2010-01-01

    Background: For years the Robert Koch Institute (RKI) has been annually pooling and reviewing the data from the German population-based cancer registries and evaluating them together with the cause-of-death statistics provided by the statistical offices. Traditionally, the RKI periodically estimates the number of new cancer cases in Germany on the basis of the available data from the regional cancer registries in which registration is complete; this figure, in turn, forms the basis fo...

  9. Dominican Republic; Statistical Appendix

    OpenAIRE

    International Monetary Fund

    2003-01-01

    In this paper, statistical data for the Dominican Republic are presented for the real, public, financial, and external sectors. For the real sector, GDP by sector at constant prices, savings, investment, the consumer price index, petroleum statistics, and so on are outlined. The public sector section summarizes the operations of the consolidated public sector, the central government, and revenues. A summary of the banking system, claims, interest rates, financial indicators, and reserve requirements were described in t...

  10. Introductory statistical inference

    CERN Document Server

    Mukhopadhyay, Nitis

    2014-01-01

    This gracefully organized text reveals the rigorous theory of probability and statistical inference in the style of a tutorial, using worked examples, exercises, figures, tables, and computer simulations to develop and illustrate concepts. Drills and boxed summaries emphasize and reinforce important ideas and special techniques.Beginning with a review of the basic concepts and methods in probability theory, moments, and moment generating functions, the author moves to more intricate topics. Introductory Statistical Inference studies multivariate random variables, exponential families of dist
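
    As a pointer to the moment-generating-function material listed above, the standard definition and its moment-recovery property (stated for context, not quoted from the book) are:

```latex
% Moment generating function of a random variable X; when it exists in a
% neighborhood of 0, its derivatives at 0 recover the raw moments of X:
M_X(t) = \mathbb{E}\!\left[ e^{tX} \right], \qquad
\mathbb{E}\!\left[ X^k \right] = M_X^{(k)}(0)
```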

  11. Business statistics I essentials

    CERN Document Server

    Clark, Louise

    2014-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Business Statistics I includes descriptive statistics, introduction to probability, probability distributions, sampling and sampling distributions, interval estimation, and hypothesis t
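
    As a taste of the interval-estimation topic in the outline above, here is a minimal sketch (the sample summary is hypothetical, not from the book) of a large-sample 95% confidence interval for a mean:

```python
import math

# Hypothetical sample summary (not from the book): n observations,
# sample mean xbar, sample standard deviation s.
n, xbar, s = 64, 49.2, 8.0

# Large-sample 95% confidence interval for the mean:
# xbar +/- z * s / sqrt(n), with z = 1.96 for 95% coverage.
z = 1.96
half_width = z * s / math.sqrt(n)
print(f"95% CI for the mean: ({xbar - half_width:.2f}, {xbar + half_width:.2f})")
# -> 95% CI for the mean: (47.24, 51.16)
```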

  13. Significant NRC Enforcement Actions

    Data.gov (United States)

    Nuclear Regulatory Commission — This dataset provides a list of Nuclear Regulatory Commission (NRC) issued significant enforcement actions. These actions, referred to as "escalated", are issued by...

  14. Breakthroughs in statistics

    CERN Document Server

    Johnson, Norman

    This is the third volume of a collection of seminal papers in the statistical sciences written during the past 110 years. These papers have each had an outstanding influence on the development of statistical theory and practice over the last century. Each paper is preceded by an introduction written by an authority in the field, providing background information and assessing its influence. Volume III concentrates on articles from the 1980s while including some earlier articles not included in Volumes I and II. Samuel Kotz is Professor of Statistics in the College of Business and Management at the University of Maryland. Norman L. Johnson is Professor Emeritus of Statistics at the University of North Carolina. Also available: Breakthroughs in Statistics Volume I: Foundations and Basic Theory, Samuel Kotz and Norman L. Johnson, Editors, 1993. 631 pp. Softcover. ISBN 0-387-94037-5. Breakthroughs in Statistics Volume II: Methodology and Distribution, Samuel Kotz and Norman L. Johnson, Edi...

  15. Practical Statistics for Particle Physicists

    CERN Document Server

    Lista, Luca

    2016-01-01

    These three lectures provide an introduction to the main concepts of statistical data analysis useful for precision measurements and searches for new signals in High Energy Physics. The frequentist and Bayesian approaches to probability theory will be introduced and, for both approaches, inference methods will be presented. Hypothesis tests will be discussed, then significance and upper-limit evaluation will be presented, with an overview of the modern and most advanced techniques adopted for data analysis at the Large Hadron Collider.
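
    As one concrete instance of the significance evaluation mentioned above, here is a toy sketch (standard counting-experiment arithmetic with hypothetical numbers, not taken from the lectures) converting a Poisson p-value into a Gaussian-equivalent significance Z:

```python
from scipy.stats import norm, poisson

# Toy counting experiment: expected background b, observed count n_obs.
b, n_obs = 3.2, 10

# One-sided p-value: probability of observing n_obs or more events
# under the background-only hypothesis N ~ Poisson(b).
p_value = poisson.sf(n_obs - 1, b)   # sf(k, b) = P(N > k) = P(N >= k + 1)

# Express the p-value as a Gaussian-equivalent significance in sigma.
z_score = norm.isf(p_value)
print(f"p = {p_value:.2e}, Z = {z_score:.2f} sigma")
```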

  16. UN Data: Environment Statistics: Waste

    Data.gov (United States)

    World Wide Human Geography Data Working Group — The Environment Statistics Database contains selected water and waste statistics by country. Statistics on water and waste are based on official statistics supplied...

  18. IGESS: a statistical approach to integrating individual-level genotype data and summary statistics in genome-wide association studies.

    Science.gov (United States)

    Dai, Mingwei; Ming, Jingsi; Cai, Mingxuan; Liu, Jin; Yang, Can; Wan, Xiang; Xu, Zongben

    2017-09-15

    Results from genome-wide association studies (GWAS) suggest that a complex phenotype is often affected by many variants with small effects, a phenomenon known as 'polygenicity'. Tens of thousands of samples are often required to ensure the statistical power to identify these variants with small effects. However, it is often the case that a research group can only get approval for access to individual-level genotype data with a limited sample size (e.g. a few hundred or thousand). Meanwhile, summary statistics generated using single-variant-based analysis are becoming publicly available, and the sample sizes associated with these summary statistics datasets are usually quite large. How to make the most efficient use of existing abundant data resources largely remains an open question. In this study, we propose a statistical approach, IGESS, to increase the statistical power of identifying risk variants and improve the accuracy of risk prediction by integrating individual-level genotype data and summary statistics. An efficient algorithm based on variational inference is developed to handle the genome-wide analysis. Through comprehensive simulation studies, we demonstrated the advantages of IGESS over methods which take either individual-level data or summary statistics as input. We applied IGESS to perform an integrative analysis of Crohn's disease, combining individual-level data from WTCCC with summary statistics from other studies. IGESS was able to significantly increase the statistical power of identifying risk variants and improve the risk prediction accuracy from 63.2% (±0.4%) to 69.4% (±0.1%) using about 240 000 variants. The IGESS software is available at https://github.com/daviddaigithub/IGESS . Contact: zbxu@xjtu.edu.cn or xwan@comp.hkbu.edu.hk or eeyang@hkbu.edu.hk. Supplementary data are available at Bioinformatics online.
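
    For intuition about why adding summary statistics raises power, here is a toy sketch of inverse-variance meta-analysis of per-variant effect estimates (hypothetical numbers; this illustrates the data-integration idea only and is not the IGESS variational algorithm):

```python
# Per-variant effect estimate from a small individual-level dataset:
beta_ind, se_ind = 0.12, 0.08        # underpowered on its own
# Estimate for the same variant from large published summary statistics:
beta_sum, se_sum = 0.10, 0.02

# Inverse-variance weighted combination of the two independent estimates.
w_ind, w_sum = 1 / se_ind**2, 1 / se_sum**2
beta_comb = (w_ind * beta_ind + w_sum * beta_sum) / (w_ind + w_sum)
se_comb = (w_ind + w_sum) ** -0.5

# The combined z-score far exceeds the individual-level one, which is
# the sense in which integrating data sources raises statistical power.
print(f"z individual = {beta_ind / se_ind:.2f}")    # 1.50
print(f"z combined   = {beta_comb / se_comb:.2f}")  # 5.21
```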

  19. An Introduction to Statistical Concepts

    CERN Document Server

    Lomax, Richard G

    2012-01-01

    This comprehensive, flexible text is used in both one- and two-semester courses to review introductory through intermediate statistics. Instructors select the topics that are most appropriate for their course. Its conceptual approach helps students more easily understand the concepts and interpret SPSS and research results. Key concepts are simply stated and occasionally reintroduced and related to one another for reinforcement. Numerous examples demonstrate their relevance. This edition features more explanation to increase understanding of the concepts. Only crucial equations are included. I

  20. Statistical distribution of Chinese names

    Institute of Scientific and Technical Information of China (English)

    Guo Jin-Zhong; Chen Qing-Hua; Wang You-Gui

    2011-01-01

    This paper studies the statistical characteristics of Chinese surnames, first names, and full names based on a credible sample. The distribution of Chinese surnames, unlike that in any other country, shows an exponential pattern in the top part and a power-law pattern in the tail. The distributions of Chinese first names and full names have power-law characteristics with different exponents. Finally, the interrelation of the first name and the surname is demonstrated using a computer simulation and an exhibition of the name network: Chinese people take the surname into account when they choose a first name for somebody. A toy illustration of the exponential-head, power-law-tail shape follows below.
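
    To see what such a mixed distribution looks like in rank-frequency form, here is a toy sketch (synthetic frequencies with assumed parameters, not the paper's sample) that recovers the two regimes by regression on the appropriate axes:

```python
import numpy as np

# Synthetic rank-frequency data: exponential decay over the top 100
# ranks, then a power law (assumed parameters, not the paper's data).
ranks = np.arange(1, 1001)
freqs = np.where(
    ranks <= 100,
    1e6 * np.exp(-0.05 * ranks),                   # exponential head
    1e6 * np.exp(-5.0) * (ranks / 100.0) ** -2.5,  # power-law tail
)

# The head is linear in rank on a semilog scale; the tail is linear in
# log(rank) on a log-log scale. Recover both slopes by least squares.
head_slope = np.polyfit(ranks[:100], np.log(freqs[:100]), 1)[0]
tail_slope = np.polyfit(np.log(ranks[100:]), np.log(freqs[100:]), 1)[0]
print(f"exponential head slope: {head_slope:.3f}")   # ~ -0.05
print(f"power-law tail exponent: {tail_slope:.2f}")  # ~ -2.5
```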