WorldWideScience

Sample records for statistically significant comparisons

  1. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Full Text Available Abstract Background In recent years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
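    The Monte-Carlo Z-score mentioned above is obtained by re-aligning the query against shuffled versions of the subject sequence and expressing the real score in standard deviations of that null distribution. A minimal, illustrative Python sketch of the idea (toy scoring parameters and sequences, not the CluSTr or SSEARCH implementation):

    ```python
    import random

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
        """Minimal local-alignment score (no traceback), row by row."""
        prev = [0] * (len(b) + 1)
        best = 0
        for i in range(1, len(a) + 1):
            curr = [0] * (len(b) + 1)
            for j in range(1, len(b) + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                curr[j] = max(0, prev[j - 1] + s, prev[j] + gap, curr[j - 1] + gap)
                best = max(best, curr[j])
            prev = curr
        return best

    def monte_carlo_z(query, subject, n_shuffles=200, seed=0):
        """Z-score of the real score against scores of shuffled subjects."""
        rng = random.Random(seed)
        observed = smith_waterman(query, subject)
        null_scores = []
        for _ in range(n_shuffles):
            shuffled = list(subject)
            rng.shuffle(shuffled)          # preserves composition, destroys order
            null_scores.append(smith_waterman(query, "".join(shuffled)))
        mean = sum(null_scores) / n_shuffles
        var = sum((s - mean) ** 2 for s in null_scores) / (n_shuffles - 1)
        return (observed - mean) / var ** 0.5

    print(monte_carlo_z("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
                        "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEK"))
    ```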

  2. Statistical Group Comparison

    CERN Document Server

    Liao, Tim Futing

    2011-01-01

    An incomparably useful examination of statistical methods for comparison. The nature of doing science, be it natural or social, inevitably calls for comparison. Statistical methods are at the heart of such comparison, for they not only help us gain understanding of the world around us but often define how our research is to be carried out. The need to compare between groups is best exemplified by experiments, which have clearly defined statistical methods. However, true experiments are not always possible. What complicates the matter more is a great deal of diversity in factors that are not inde

  3. Statistically significant relational data mining :

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  4. Estimates of statistical significance for comparison of individual positions in multiple sequence alignments

    Directory of Open Access Journals (Sweden)

    Sadreyev Ruslan I

    2004-08-01

    Full Text Available Abstract Background Profile-based analysis of multiple sequence alignments (MSA) allows for accurate comparison of protein families. Here, we address the problems of detecting statistically confident dissimilarities between (1) an MSA position and a set of predicted residue frequencies, and (2) between two MSA positions. These problems are important for (i) evaluation and optimization of methods predicting residue occurrence at protein positions; (ii) detection of potentially misaligned regions in automatically produced alignments and their further refinement; and (iii) detection of sites that determine functional or structural specificity in two related families. Results For problems (1) and (2), we propose analytical estimates of P-value and apply them to the detection of significant positional dissimilarities in various experimental situations. (a) We compare structure-based predictions of residue propensities at a protein position to the actual residue frequencies in the MSA of homologs. (b) We evaluate our method by the ability to detect erroneous position matches produced by an automatic sequence aligner. (c) We compare MSA positions that correspond to residues aligned by automatic structure aligners. (d) We compare MSA positions that are aligned by high-quality manual superposition of structures. Detected dissimilarities reveal shortcomings of the automatic methods for residue frequency prediction and alignment construction. For the high-quality structural alignments, the dissimilarities suggest sites of potential functional or structural importance. Conclusion The proposed computational method is of significant potential value for the analysis of protein families.

  5. cocor: a comprehensive solution for the statistical comparison of correlations.

    Directory of Open Access Journals (Sweden)

    Birk Diedenhofen

    Full Text Available A valid comparison of the magnitude of two correlations requires researchers to directly contrast the correlations using an appropriate statistical test. In many popular statistics packages, however, tests for the significance of the difference between correlations are missing. To close this gap, we introduce cocor, a free software package for the R programming language. The cocor package covers a broad range of tests including the comparisons of independent and dependent correlations with either overlapping or nonoverlapping variables. The package also includes an implementation of Zou's confidence interval for all of these comparisons. The platform-independent cocor package enhances the R statistical computing environment and is available for scripting. Two different graphical user interfaces, a plugin for RKWard and a web interface, make cocor a convenient and user-friendly tool.
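    For readers who want the flavour of such a test outside of R, the sketch below applies the classic Fisher r-to-z procedure for two independent correlations, one of the comparisons cocor covers (illustrative Python with made-up correlations and sample sizes, not the cocor package itself):

    ```python
    from math import atanh, sqrt
    from scipy.stats import norm

    def compare_independent_correlations(r1, n1, r2, n2):
        """Fisher r-to-z test for the difference of two independent correlations."""
        z1, z2 = atanh(r1), atanh(r2)               # Fisher transformation
        se = sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of z1 - z2
        z = (z1 - z2) / se
        p = 2 * norm.sf(abs(z))                     # two-sided p-value
        return z, p

    z, p = compare_independent_correlations(r1=0.52, n1=103, r2=0.31, n2=110)
    print(f"z = {z:.2f}, p = {p:.3f}")
    ```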

  6. Statistical significance of cis-regulatory modules

    Directory of Open Access Journals (Sweden)

    Smith Andrew D

    2007-01-01

    Full Text Available Abstract Background It is becoming increasingly important for researchers to be able to scan through large genomic regions for transcription factor binding sites or clusters of binding sites forming cis-regulatory modules. Correspondingly, there has been a push to develop algorithms for the rapid detection and assessment of cis-regulatory modules. While various algorithms for this purpose have been introduced, most are not well suited for rapid, genome scale scanning. Results We introduce methods designed for the detection and statistical evaluation of cis-regulatory modules, modeled as either clusters of individual binding sites or as combinations of sites with constrained organization. In order to determine the statistical significance of module sites, we first need a method to determine the statistical significance of single transcription factor binding site matches. We introduce a straightforward method of estimating the statistical significance of single site matches using a database of known promoters to produce data structures that can be used to estimate p-values for binding site matches. We next introduce a technique to calculate the statistical significance of the arrangement of binding sites within a module using a max-gap model. If the module scanned for has defined organizational parameters, the probability of the module is corrected to account for organizational constraints. The statistical significance of single site matches and the architecture of sites within the module can be combined to provide an overall estimation of statistical significance of cis-regulatory module sites. Conclusion The methods introduced in this paper allow for the detection and statistical evaluation of single transcription factor binding sites and cis-regulatory modules. The features described are implemented in the Search Tool for Occurrences of Regulatory Motifs (STORM) and MODSTORM software.
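    The single-site p-value estimation described above boils down to asking how often a background set of sequences (here, known promoters) produces a match score at least as high as the observed one. A small Python sketch of that empirical p-value idea, with simulated background scores standing in for STORM's precomputed data structures:

    ```python
    import numpy as np

    def empirical_site_pvalue(observed_score, background_scores):
        """P-value of a binding-site match score, estimated as the fraction of
        background (e.g. promoter-database) match scores at least as large."""
        background_scores = np.asarray(background_scores)
        exceed = np.sum(background_scores >= observed_score)
        return (exceed + 1) / (background_scores.size + 1)   # add-one avoids p = 0

    # toy background: scores of the same motif scanned over unrelated promoters
    rng = np.random.default_rng(0)
    background = rng.normal(loc=2.0, scale=1.5, size=100_000)
    print(empirical_site_pvalue(8.0, background))
    ```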

  7. A Note on Comparing the Power of Test Statistics at Low Significance Levels.

    Science.gov (United States)

    Morris, Nathan; Elston, Robert

    2011-01-01

    It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
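    This is easy to explore numerically. The hypothetical sketch below computes the power of chi-square tests with 1 and 2 degrees of freedom at a traditional alpha and at a GWAS-scale alpha, for an assumed noncentrality parameter; changing alpha changes how much the extra degree of freedom costs:

    ```python
    from scipy.stats import chi2, ncx2

    def power(df, ncp, alpha):
        """Power of a chi-square test with given df, noncentrality and alpha."""
        critical = chi2.ppf(1 - alpha, df)      # rejection threshold under the null
        return ncx2.sf(critical, df, ncp)       # tail probability under the alternative

    ncp = 30.0                                  # assumed signal strength (noncentrality)
    for alpha in (0.05, 5e-8):
        print(f"alpha={alpha:g}: "
              f"power(1 df)={power(1, ncp, alpha):.3f}, "
              f"power(2 df)={power(2, ncp, alpha):.3f}")
    ```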

  8. Robot Trajectories Comparison: A Statistical Approach

    Directory of Open Access Journals (Sweden)

    A. Ansuategui

    2014-01-01

    Full Text Available The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to perform robot trajectories comparison that could be applied to any kind of trajectories and in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph which helps to better understand the obtained results is provided. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners.

  9. Robot Trajectories Comparison: A Statistical Approach

    Science.gov (United States)

    Ansuategui, A.; Arruti, A.; Susperregi, L.; Yurramendi, Y.; Jauregi, E.; Lazkano, E.; Sierra, B.

    2014-01-01

    The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to perform robot trajectories comparison that could be applied to any kind of trajectories and in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph which helps to better understand the obtained results is provided. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners. PMID:25525618

  10. The thresholds for statistical and clinical significance

    DEFF Research Database (Denmark)

    Jakobsen, Janus Christian; Gluud, Christian; Winkel, Per

    2014-01-01

    BACKGROUND: Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore … of the probability that a given trial result is compatible with a 'null' effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance...

  11. The insignificance of statistical significance testing

    Science.gov (United States)

    Johnson, Douglas H.

    1999-01-01

    Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.

  12. Significance levels for studies with correlated test statistics.

    Science.gov (United States)

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
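    A common starting point for the overall significance level discussed above is a permutation test on the maximum statistic, which preserves the correlation among tests because whole samples are relabelled. The sketch below (simulated data, plain max-|z| statistic; it does not implement the authors' conditioning on the spread of the histogram) illustrates that baseline procedure:

    ```python
    import numpy as np

    def max_stat_permutation_p(data, labels, n_perm=2000, seed=0):
        """Family-wise p-value based on the largest |two-sample z| across features.
        Permuting whole sample labels preserves the correlation among features."""
        rng = np.random.default_rng(seed)

        def max_abs_z(y):
            a, b = data[y == 0], data[y == 1]
            se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
            return np.max(np.abs((a.mean(axis=0) - b.mean(axis=0)) / se))

        observed = max_abs_z(labels)
        count = sum(max_abs_z(rng.permutation(labels)) >= observed for _ in range(n_perm))
        return (count + 1) / (n_perm + 1)

    # toy gene-expression-like data: 40 samples x 500 correlated features
    rng = np.random.default_rng(1)
    shared = rng.normal(size=(40, 1))
    data = 0.6 * shared + rng.normal(size=(40, 500))   # features share a common factor
    labels = np.repeat([0, 1], 20)
    print(max_stat_permutation_p(data, labels))
    ```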

  13. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

    This article raises concerns about the advantages of using statistical significance tests in research assessments, as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice … We argue that applying statistical significance tests and mechanically adhering to their results are highly problematic and detrimental to critical thinking. We claim that the use of such tests does not provide any advantages in relation to deciding whether differences between citation indicators …

  14. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

  15. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    Science.gov (United States)

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method accounts for the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density correction for 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively, then non-parametric statistical tests were performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's rank and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using the density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods.
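    The analysis pipeline described above (normality check, omnibus Friedman test, post-hoc paired Wilcoxon tests) is straightforward to reproduce. A hedged Python sketch with fabricated monitor-unit values, purely to show the sequence of tests rather than the study's data:

    ```python
    import numpy as np
    from scipy.stats import friedmanchisquare, wilcoxon, shapiro

    # toy monitor-unit values for the same 62 fields computed by three algorithms
    rng = np.random.default_rng(0)
    pbc = rng.normal(200, 20, size=62)
    mb = pbc * rng.normal(0.95, 0.02, size=62)     # ~5% lower on average
    etar = pbc * rng.normal(0.953, 0.02, size=62)

    print("normality (PBC):", shapiro(pbc))

    # omnibus test across the three paired methods
    print("Friedman:", friedmanchisquare(pbc, mb, etar))

    # post-hoc paired comparisons against the reference method
    print("PBC vs MB:  ", wilcoxon(pbc, mb))
    print("PBC vs ETAR:", wilcoxon(pbc, etar))
    ```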

  16. Conducting tests for statistically significant differences using forest inventory data

    Science.gov (United States)

    James A. Westfall; Scott A. Pugh; John W. Coulston

    2013-01-01

    Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...

  17. Health significance and statistical uncertainty. The value of P-value.

    Science.gov (United States)

    Consonni, Dario; Bertazzi, Pier Alberto

    2017-10-27

    The P-value is widely used as a summary statistic of scientific results. Unfortunately, there is a widespread tendency to dichotomize its value into "P < 0.05" ("statistically significant") and "P > 0.05" ("statistically not significant"), with the former implying a "positive" result and the latter a "negative" one. To show the unsuitability of such an approach when evaluating the effects of environmental and occupational risk factors. We provide examples of distorted use of P-value and of the negative consequences for science and public health of such a black-and-white vision. The rigid interpretation of P-value as a dichotomy favors the confusion between health relevance and statistical significance, discourages thoughtful thinking, and distracts attention from what really matters, the health significance. A much better way to express and communicate scientific results involves reporting effect estimates (e.g., risks, risk ratios or risk differences) and their confidence intervals (CI), which summarize and convey both health significance and statistical uncertainty. Unfortunately, many researchers do not usually consider the whole interval of the CI but only examine whether it includes the null value, therefore degrading this procedure to the same P-value dichotomy (statistical significance or not). In reporting statistical results of scientific research, present effect estimates with their confidence intervals and do not qualify the P-value as "significant" or "not significant".
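    As an illustration of the recommended reporting style, the sketch below computes a risk ratio and its 95% confidence interval for a made-up exposed/unexposed comparison (hypothetical counts, Wald interval on the log scale); the interval conveys both the size of the effect and its uncertainty, without reducing the result to "significant or not":

    ```python
    import numpy as np
    from scipy.stats import norm

    def risk_ratio_ci(a, n1, b, n0, level=0.95):
        """Risk ratio of exposed vs. unexposed with a Wald (log-scale) CI.
        a/n1: cases and total among exposed; b/n0: cases and total among unexposed."""
        rr = (a / n1) / (b / n0)
        se_log = np.sqrt(1/a - 1/n1 + 1/b - 1/n0)      # SE of log(RR)
        z = norm.ppf(0.5 + level / 2)
        return rr, (rr * np.exp(-z * se_log), rr * np.exp(z * se_log))

    rr, (lo, hi) = risk_ratio_ci(a=24, n1=1200, b=15, n0=1300)
    print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```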

  18. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper PMID:25878958

  19. The choice of statistical methods for comparisons of dosimetric data in radiotherapy

    International Nuclear Information System (INIS)

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-01-01

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method accounts for the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density correction for 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively, then non-parametric statistical tests were performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman’s test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman’s rank and Kendall’s rank tests. Friedman’s test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses as compared to PBC, by on average (−5 ± 4.4 SD) for MB and (−4.7 ± 5 SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using the density-corrected methods as compared to the reference method.

  20. On the statistical comparison of climate model output and climate data

    International Nuclear Information System (INIS)

    Solow, A.R.

    1991-01-01

    Some broad issues arising in the statistical comparison of the output of climate models with the corresponding climate data are reviewed. Particular attention is paid to the question of detecting climate change. The purpose of this paper is to review some statistical approaches to the comparison of the output of climate models with climate data. There are many statistical issues arising in such a comparison. The author will focus on some of the broader issues, although some specific methodological questions will arise along the way. One important potential application of the approaches discussed in this paper is the detection of climate change. Although much of the discussion will be fairly general, he will try to point out the appropriate connections to the detection question. 9 refs

  1. On the statistical comparison of climate model output and climate data

    International Nuclear Information System (INIS)

    Solow, A.R.

    1990-01-01

    Some broad issues arising in the statistical comparison of the output of climate models with the corresponding climate data are reviewed. Particular attention is paid to the question of detecting climate change. The purpose of this paper is to review some statistical approaches to the comparison of the output of climate models with climate data. There are many statistical issues arising in such a comparison. The author will focus on some of the broader issues, although some specific methodological questions will arise along the way. One important potential application of the approaches discussed in this paper is the detection of climate change. Although much of the discussion will be fairly general, he will try to point out the appropriate connections to the detection question

  2. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Science.gov (United States)

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  3. Swiss solar power statistics 2007 - Significant expansion

    International Nuclear Information System (INIS)

    Hostettler, T.

    2008-01-01

    This article presents and discusses the 2007 statistics for solar power in Switzerland. A significant number of new installations is noted, as are the high production figures from newer installations. The basics behind the compilation of the Swiss solar power statistics are briefly reviewed and an overview for the period 1989 to 2007 is presented which includes figures on the number of photovoltaic plant in service and installed peak power. Typical production figures in kilowatt-hours (kWh) per installed kilowatt-peak power (kWp) are presented and discussed for installations of various sizes. Increased production after inverter replacement in older installations is noted. Finally, the general political situation in Switzerland as far as solar power is concerned is briefly discussed, as are international developments.

  4. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in quantifying coronary calcium.

    Science.gov (United States)

    Takahashi, Masahiro; Kimura, Fumiko; Umezawa, Tatsuya; Watanabe, Yusuke; Ogawa, Harumi

    2016-01-01

    Adaptive statistical iterative reconstruction (ASIR) has been used to reduce radiation dose in cardiac computed tomography. However, changes in image parameters caused by ASIR as compared to filtered back projection (FBP) may influence the quantification of coronary calcium. The aim was to investigate the influence of ASIR on calcium quantification in comparison to FBP. In 352 patients, CT images were reconstructed using FBP alone, FBP combined with ASIR 30%, 50%, 70%, and ASIR 100%, based on the same raw data. Image noise, plaque density, Agatston scores and calcium volumes were compared among the techniques. Image noise, Agatston score, and calcium volume decreased significantly with ASIR compared to FBP; ASIR reduced the Agatston score by 10.5% to 31.0%. In calcified plaques of both patients and a phantom, ASIR decreased maximum CT values and calcified plaque size. In comparison to FBP, adaptive statistical iterative reconstruction (ASIR) may significantly decrease Agatston scores and calcium volumes. Copyright © 2016 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  5. Test for the statistical significance of differences between ROC curves

    International Nuclear Information System (INIS)

    Metz, C.E.; Kronman, H.B.

    1979-01-01

    A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions

  6. StAR: a simple tool for the statistical comparison of ROC curves

    Directory of Open Access Journals (Sweden)

    Melo Francisco

    2008-06-01

    Full Text Available Abstract Background As in many different areas of science and technology, most important problems in bioinformatics rely on the proper development and assessment of binary classifiers. A generalized assessment of the performance of binary classifiers is typically carried out through the analysis of their receiver operating characteristic (ROC) curves. The area under the ROC curve (AUC) constitutes a popular indicator of the performance of a binary classifier. However, the assessment of the statistical significance of the difference between any two classifiers based on this measure is not a straightforward task, since not many freely available tools exist. Most existing software is either not free, difficult to use or not easy to automate when a comparative assessment of the performance of many binary classifiers is intended. This constitutes the typical scenario for the optimization of parameters when developing new classifiers and also for their performance validation through the comparison to previous art. Results In this work we describe and release new software to assess the statistical significance of the observed difference between the AUCs of any two classifiers for a common task estimated from paired data or unpaired balanced data. The software is able to perform a pairwise comparison of many classifiers in a single run, without requiring any expert or advanced knowledge to use it. The software relies on a non-parametric test for the difference of the AUCs that accounts for the correlation of the ROC curves. The results are displayed graphically and can be easily customized by the user. A human-readable report is generated and the complete data resulting from the analysis are also available for download, which can be used for further analysis with other software. The software is released as a web server that can be used in any client platform and also as a standalone application for the Linux operating system. Conclusion A new software for
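    To make the underlying question concrete, the sketch below compares the AUCs of two classifiers evaluated on the same test cases using a simple paired bootstrap (simulated scores and labels; StAR itself uses a non-parametric test that accounts for the correlation of the ROC curves, which this rough bootstrap only approximates):

    ```python
    import numpy as np

    def auc(scores, labels):
        """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
        order = np.argsort(scores)
        ranks = np.empty(len(scores))
        ranks[order] = np.arange(1, len(scores) + 1)
        pos = labels == 1
        n_pos, n_neg = pos.sum(), (~pos).sum()
        return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

    def paired_auc_bootstrap_p(s1, s2, labels, n_boot=2000, seed=0):
        """Two-sided bootstrap p-value for the AUC difference of two classifiers
        scored on the same (paired) cases, which preserves their correlation."""
        rng = np.random.default_rng(seed)
        observed = auc(s1, labels) - auc(s2, labels)
        n = len(labels)
        diffs = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)
            diffs.append(auc(s1[idx], labels[idx]) - auc(s2[idx], labels[idx]))
        diffs = np.asarray(diffs)
        # center the bootstrap distribution to represent the null of equal AUCs
        p = np.mean(np.abs(diffs - diffs.mean()) >= abs(observed))
        return observed, p

    rng = np.random.default_rng(1)
    labels = np.repeat([0, 1], 200)
    signal = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.0, 1, 200)])
    s1 = signal + rng.normal(0, 0.5, 400)          # classifier 1
    s2 = 0.7 * signal + rng.normal(0, 0.8, 400)    # weaker classifier 2
    print(paired_auc_bootstrap_p(s1, s2, labels))
    ```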

  7. Sigsearch: a new term for post hoc unplanned search for statistically significant relationships with the intent to create publishable findings.

    Science.gov (United States)

    Hashim, Muhammad Jawad

    2010-09-01

    Post-hoc secondary data analysis with no prespecified hypotheses has been discouraged by textbook authors and journal editors alike. Unfortunately no single term describes this phenomenon succinctly. I would like to coin the term "sigsearch" to define this practice and bring it within the teaching lexicon of statistics courses. Sigsearch would include any unplanned, post-hoc search for statistical significance using multiple comparisons of subgroups. It would also include data analysis with outcomes other than the prespecified primary outcome measure of a study as well as secondary data analyses of earlier research.

  8. On detection and assessment of statistical significance of Genomic Islands

    Directory of Open Access Journals (Sweden)

    Chaudhuri Probal

    2008-04-01

    Full Text Available Abstract Background Many of the available methods for detecting Genomic Islands (GIs in prokaryotic genomes use markers such as transposons, proximal tRNAs, flanking repeats etc., or they use other supervised techniques requiring training datasets. Most of these methods are primarily based on the biases in GC content or codon and amino acid usage of the islands. However, these methods either do not use any formal statistical test of significance or use statistical tests for which the critical values and the P-values are not adequately justified. We propose a method, which is unsupervised in nature and uses Monte-Carlo statistical tests based on randomly selected segments of a chromosome. Such tests are supported by precise statistical distribution theory, and consequently, the resulting P-values are quite reliable for making the decision. Results Our algorithm (named Design-Island, an acronym for Detection of Statistically Significant Genomic Island) runs in two phases. Some 'putative GIs' are identified in the first phase, and those are refined into smaller segments containing horizontally acquired genes in the refinement phase. This method is applied to Salmonella typhi CT18 genome leading to the discovery of several new pathogenicity, antibiotic resistance and metabolic islands that were missed by earlier methods. Many of these islands contain mobile genetic elements like phage-mediated genes, transposons, integrase and IS elements confirming their horizontal acquirement. Conclusion The proposed method is based on statistical tests supported by precise distribution theory and reliable P-values along with a technique for visualizing statistically significant islands. The performance of our method is better than many other well known methods in terms of their sensitivity and accuracy, and in terms of specificity, it is comparable to other methods.

  9. Increasing the statistical significance of entanglement detection in experiments.

    Science.gov (United States)

    Jungnitsch, Bastian; Niekamp, Sönke; Kleinmann, Matthias; Gühne, Otfried; Lu, He; Gao, Wei-Bo; Chen, Yu-Ao; Chen, Zeng-Bing; Pan, Jian-Wei

    2010-05-28

    Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. Experimentally, we observe this phenomenon in a four-photon experiment, testing the Mermin and Ardehali inequality for different levels of noise. Furthermore, we provide a way to develop entanglement tests with high statistical significance.

  10. Statistical comparison of the geometry of second-phase particles

    Energy Technology Data Exchange (ETDEWEB)

    Benes, Viktor, E-mail: benesv@karlin.mff.cuni.cz [Charles University in Prague, Faculty of Mathematics and Physics, Department of Probability and Mathematical Statistics, Sokolovska 83, 186 75 Prague 8-Karlin (Czech Republic); Lechnerova, Radka, E-mail: radka.lech@seznam.cz [Private College on Economical Studies, Ltd., Lindnerova 575/1, 180 00 Prague 8-Liben (Czech Republic); Klebanov, Lev [Charles University in Prague, Faculty of Mathematics and Physics, Department of Probability and Mathematical Statistics, Sokolovska 83, 186 75 Prague 8-Karlin (Czech Republic); Slamova, Margarita, E-mail: slamova@vyzkum-kovu.cz [Research Institute for Metals, Ltd., Panenske Brezany 50, 250 70 Odolena Voda (Czech Republic); Slama, Peter [Research Institute for Metals, Ltd., Panenske Brezany 50, 250 70 Odolena Voda (Czech Republic)

    2009-10-15

    In microscopic studies of materials, there is often a need to provide a statistical test as to whether two microstructures are different or not. Typically, there are some random objects (particles, grains, pores) and the comparison concerns their density, individual geometrical parameters and their spatial distribution. The problem is that neighbouring objects observed in a single window cannot be assumed to be stochastically independent, therefore classical statistical testing based on random sampling is not applicable. The aim of the present paper is to develop a test based on N-distances in probability theory. Using the measurements from a few independent windows, we consider a two-sample test, which involves a large amount of information collected from each window. An application is presented consisting in a comparison of metallographic samples of aluminium alloys, and the results are interpreted.
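    The N-distance idea can be illustrated with the energy distance between two empirical distributions of a particle parameter. The sketch below runs a naive permutation version in Python (scipy's 1-D energy distance on simulated particle sizes). Note that it permutes individual particles, i.e. it assumes independence, which is precisely the assumption the paper avoids by working with a few independent windows; it is shown only to convey what an N-distance two-sample statistic looks like:

    ```python
    import numpy as np
    from scipy.stats import energy_distance

    def permutation_energy_test(x, y, n_perm=5000, seed=0):
        """Two-sample permutation test using the energy distance (an N-distance)
        between the empirical distributions of a 1-D geometric parameter."""
        rng = np.random.default_rng(seed)
        observed = energy_distance(x, y)
        pooled = np.concatenate([x, y])
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            if energy_distance(pooled[:len(x)], pooled[len(x):]) >= observed:
                count += 1
        return observed, (count + 1) / (n_perm + 1)

    # toy particle sizes (micrometres) measured in two alloy samples
    rng = np.random.default_rng(2)
    sample_a = rng.lognormal(mean=0.0, sigma=0.35, size=300)
    sample_b = rng.lognormal(mean=0.08, sigma=0.35, size=280)
    print(permutation_energy_test(sample_a, sample_b))
    ```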

  11. Statistical comparison of the geometry of second-phase particles

    International Nuclear Information System (INIS)

    Benes, Viktor; Lechnerova, Radka; Klebanov, Lev; Slamova, Margarita; Slama, Peter

    2009-01-01

    In microscopic studies of materials, there is often a need to provide a statistical test as to whether two microstructures are different or not. Typically, there are some random objects (particles, grains, pores) and the comparison concerns their density, individual geometrical parameters and their spatial distribution. The problem is that neighbouring objects observed in a single window cannot be assumed to be stochastically independent, therefore classical statistical testing based on random sampling is not applicable. The aim of the present paper is to develop a test based on N-distances in probability theory. Using the measurements from a few independent windows, we consider a two-sample test, which involves a large amount of information collected from each window. An application is presented consisting in a comparison of metallographic samples of aluminium alloys, and the results are interpreted.

  12. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    Science.gov (United States)

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  13. Statistical Significance for Hierarchical Clustering

    Science.gov (United States)

    Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.

    2017-01-01

    Summary Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990

  14. Statistical significance of trends in monthly heavy precipitation over the US

    KAUST Repository

    Mahajan, Salil

    2011-05-11

    Trends in monthly heavy precipitation, defined by a return period of one year, are assessed for statistical significance in observations and Global Climate Model (GCM) simulations over the contiguous United States using Monte Carlo non-parametric and parametric bootstrapping techniques. The results from the two Monte Carlo approaches are found to be similar to each other, and also to the traditional non-parametric Kendall's τ test, implying the robustness of the approach. Two different observational data-sets are employed to test for trends in monthly heavy precipitation and are found to exhibit consistent results. Both data-sets demonstrate upward trends, one of which is found to be statistically significant at the 95% confidence level. Upward trends similar to observations are observed in some climate model simulations of the twentieth century, but their statistical significance is marginal. For projections of the twenty-first century, a statistically significant upward trend is observed in most of the climate models analyzed. The change in the simulated precipitation variance appears to be more important in the twenty-first century projections than changes in the mean precipitation. Stochastic fluctuations of the climate system are found to dominate monthly heavy precipitation, as some GCM simulations show a downward trend even in the twenty-first century projections when the greenhouse gas forcings are strong. © 2011 Springer-Verlag.
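    A minimal illustration of assessing trend significance by resampling, alongside the traditional Kendall's τ test, is sketched below (synthetic yearly series with an imposed weak trend; this is only the basic non-parametric bootstrap idea, not the paper's full return-period analysis):

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    def bootstrap_trend_p(series, n_boot=5000, seed=0):
        """Bootstrap test of a linear trend: resampling the series with replacement
        destroys any time ordering and yields a null distribution of slopes."""
        rng = np.random.default_rng(seed)
        t = np.arange(len(series))
        observed = np.polyfit(t, series, 1)[0]
        null_slopes = [np.polyfit(t, rng.choice(series, size=len(series)), 1)[0]
                       for _ in range(n_boot)]
        p = (np.sum(np.abs(null_slopes) >= abs(observed)) + 1) / (n_boot + 1)
        return observed, p

    rng = np.random.default_rng(3)
    years = 60
    series = rng.gamma(shape=2.0, scale=20.0, size=years) + 0.3 * np.arange(years)

    print("bootstrap slope, p:", bootstrap_trend_p(series))
    print("Kendall tau:", kendalltau(np.arange(years), series))
    ```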

  15. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance.

    Science.gov (United States)

    Kramer, Karen L; Veile, Amanda; Otárola-Castillo, Erik

    2016-01-01

    Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children's growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children's monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children's growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children's growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children's growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance.

  16. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance.

    Directory of Open Access Journals (Sweden)

    Karen L Kramer

    Full Text Available Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children's growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children's monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children's growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children's growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children's growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance.

  17. Increasing the statistical significance of entanglement detection in experiments

    Energy Technology Data Exchange (ETDEWEB)

    Jungnitsch, Bastian; Niekamp, Soenke; Kleinmann, Matthias; Guehne, Otfried [Institut fuer Quantenoptik und Quanteninformation, Innsbruck (Austria); Lu, He; Gao, Wei-Bo; Chen, Zeng-Bing [Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei (China); Chen, Yu-Ao; Pan, Jian-Wei [Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei (China); Physikalisches Institut, Universitaet Heidelberg (Germany)

    2010-07-01

    Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. We show this to be the case for an error model in which the variance of an observable is interpreted as its error and for the standard error model in photonic experiments. Specifically, we demonstrate that the Mermin inequality yields a Bell test which is statistically more significant than the Ardehali inequality in the case of a photonic four-qubit state that is close to a GHZ state. Experimentally, we observe this phenomenon in a four-photon experiment, testing the above inequalities for different levels of noise.

  18. Statistical Network Analysis for Functional MRI: Mean Networks and Group Comparisons.

    Directory of Open Access Journals (Sweden)

    Cedric E Ginestet

    2014-05-01

    Full Text Available Comparing networks in neuroscience is hard, because the topological properties of a given network are necessarily dependent on the number of edges of that network. This problem arises in the analysis of both weighted and unweighted networks. The term density is often used in this context, in order to refer to the mean edge weight of a weighted network, or to the number of edges in an unweighted one. Comparing families of networks is therefore statistically difficult because differences in topology are necessarily associated with differences in density. In this review paper, we consider this problem from two different perspectives, which include (i) the construction of summary networks, such as how to compute and visualize the mean network from a sample of network-valued data points; and (ii) how to test for topological differences, when two families of networks also exhibit significant differences in density. In the first instance, we show that the issue of summarizing a family of networks can be conducted by either adopting a mass-univariate approach, which produces a statistical parametric network (SPN), or by directly computing the mean network, provided that a metric has been specified on the space of all networks with a given number of nodes. In the second part of this review, we then highlight the inherent problems associated with the comparison of topological functions of families of networks that differ in density. In particular, we show that a wide range of topological summaries, such as global efficiency and network modularity, are highly sensitive to differences in density. Moreover, these problems are not restricted to unweighted metrics, as we demonstrate that the same issues remain present when considering the weighted versions of these metrics. We conclude by encouraging caution, when reporting such statistical comparisons, and by emphasizing the importance of constructing summary networks.

  19. Reporting effect sizes as a supplement to statistical significance ...

    African Journals Online (AJOL)

    The purpose of the article is to review the statistical significance reporting practices in reading instruction studies and to provide guidelines for when to calculate and report effect sizes in educational research. A review of six readily accessible (online) and accredited journals publishing research on reading instruction ...

  20. More 'mapping' in brain mapping: statistical comparison of effects

    DEFF Research Database (Denmark)

    Jernigan, Terry Lynne; Gamst, Anthony C.; Fennema-Notestine, Christine

    2003-01-01

    The term 'mapping' in the context of brain imaging conveys to most the concept of localization; that is, a brain map is meant to reveal a relationship between some condition or parameter and specific sites within the brain. However, in reality, conventional voxel-based maps of brain function, or for that matter of brain structure, are generally constructed using analyses that yield no basis for inferences regarding the spatial nonuniformity of the effects. In the normal analysis path for functional images, for example, there is nowhere a statistical comparison of the observed effect in any voxel relative to that in any other voxel. Under these circumstances, strictly speaking, the presence of significant activation serves as a legitimate basis only for inferences about the brain as a unit. In their discussion of results, investigators rarely are content to confirm the brain's role, and instead generally prefer

  1. Your Chi-Square Test Is Statistically Significant: Now What?

    Science.gov (United States)

    Sharpe, Donald

    2015-01-01

    Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
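    The follow-up strategies evaluated in the paper all start from the same place: locating the cells that drive a significant omnibus chi-square. A short Python sketch of the residual-based approach, using a made-up contingency table and Haberman-style adjusted standardized residuals:

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency, norm

    # toy 3x2 contingency table (e.g. three groups x two outcomes)
    table = np.array([[30, 70],
                      [45, 55],
                      [60, 40]])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")

    # adjusted standardized residuals: which cells drive a significant result?
    n = table.sum()
    row = table.sum(axis=1, keepdims=True) / n
    col = table.sum(axis=0, keepdims=True) / n
    adj_resid = (table - expected) / np.sqrt(expected * (1 - row) * (1 - col))
    print(adj_resid)

    # cells whose |residual| exceeds the two-sided normal critical value stand out
    print(np.abs(adj_resid) > norm.ppf(0.975))
    ```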

  2. EPA/NMED/LANL 1998 water quality results: Statistical analysis and comparison to regulatory standards

    International Nuclear Information System (INIS)

    Gallaher, B.; Mercier, T.; Black, P.; Mullen, K.

    2000-01-01

    Four governmental agencies conducted a round of groundwater, surface water, and spring water sampling at the Los Alamos National Laboratory during 1998. Samples were split among the four parties and sent to independent analytical laboratories. Results from three of the agencies were available for this study. Comparisons of analytical results that were paired by location and date were made between the various analytical laboratories. The results for over 50 split samples analyzed for inorganic chemicals, metals, and radionuclides were compared. Statistical analyses included non-parametric (sign test and signed-ranks test) and parametric (paired t-test and linear regression) methods. The data pairs were tested for statistically significant differences, defined by an observed significance level, or p-value, less than 0.05. The main conclusion is that the laboratories' performances are similar across most of the analytes that were measured. In some 95% of the laboratory measurements there was agreement on whether contaminant levels exceeded regulatory limits. The most significant differences in performance were noted for the radioactive suite, particularly for gross alpha particle activity and Sr-90

  3. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

    Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant, and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.
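    The kind of result pair used in these studies is easy to construct. The sketch below (entirely fictitious numbers) computes 95% confidence intervals and p-values for two studies with nearly identical effect estimates, only one of which crosses the conventional significance threshold; the overlapping intervals show the results are consistent rather than conflicting:

    ```python
    from scipy.stats import norm

    def mean_diff_ci(diff, se, level=0.95):
        """Confidence interval and two-sided p-value for an estimated difference."""
        z = norm.ppf(0.5 + level / 2)
        p = 2 * norm.sf(abs(diff / se))
        return (diff - z * se, diff + z * se), p

    # two fictitious studies with almost identical effects but different precision
    for name, diff, se in [("Study A", 3.0, 1.4), ("Study B", 2.8, 1.6)]:
        (lo, hi), p = mean_diff_ci(diff, se)
        verdict = "significant" if p < 0.05 else "not significant"
        print(f"{name}: diff = {diff:.1f}, 95% CI [{lo:.1f}, {hi:.1f}], "
              f"p = {p:.3f} ({verdict})")
    # The overlapping intervals make clear the two studies are consistent,
    # even though only one crosses the conventional significance threshold.
    ```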

  4. Statistical significance versus clinical relevance.

    Science.gov (United States)

    van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G

    2017-04-01

    In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.

  5. Statistical significance of epidemiological data. Seminar: Evaluation of epidemiological studies

    International Nuclear Information System (INIS)

    Weber, K.H.

    1993-01-01

    In stochastic damages, the numbers of events, e.g. the persons who are affected by or have died of cancer, and thus the relative frequencies (incidence or mortality) are binomially distributed random variables. Their statistical fluctuations can be characterized by confidence intervals. For epidemiologic questions, especially for the analysis of stochastic damages in the low dose range, the following issues are interesting: - Is a sample (a group of persons) with a definite observed damage frequency part of the whole population? - Is an observed frequency difference between two groups of persons random or statistically significant? - Is an observed increase or decrease of the frequencies with increasing dose random or statistically significant and how large is the regression coefficient (= risk coefficient) in this case? These problems can be solved by statistical tests. So-called distribution-free tests and tests which are not bound to the supposition of normal distribution are of particular interest, such as: - χ²-independence test (test in contingency tables); - Fisher-Yates-test; - trend test according to Cochran; - rank correlation test given by Spearman. These tests are explained in terms of selected epidemiologic data, e.g. of leukaemia clusters, of the cancer mortality of the Japanese A-bomb survivors especially in the low dose range as well as on the sample of the cancer mortality in the high background area in Yangjiang (China). (orig.) [de
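
    The distribution-free tests listed above are available in standard statistical libraries. The sketch below applies a χ²-independence test and Fisher's exact test to an invented 2×2 exposure table, and uses Spearman's rank correlation as a simple trend check; the numbers are illustrative only, and a dedicated Cochran-Armitage trend test is not shown.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, spearmanr

table = np.array([[12, 488],    # exposed:  cases, non-cases (hypothetical)
                  [ 5, 495]])   # control:  cases, non-cases

chi2, chi2_p, _, _ = chi2_contingency(table)
odds_ratio, fisher_p = fisher_exact(table)

# Rank correlation of incidence with dose as a simple dose-response check.
dose      = [0.0, 0.5, 1.0, 2.0, 4.0]
incidence = [0.010, 0.011, 0.013, 0.016, 0.022]
rho, trend_p = spearmanr(dose, incidence)

print(f"chi-square p = {chi2_p:.3f}, Fisher exact p = {fisher_p:.3f}")
print(f"Spearman rho = {rho:.2f}, p = {trend_p:.3f}")
```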

  6. Statistical Significance and Effect Size: Two Sides of a Coin.

    Science.gov (United States)

    Fan, Xitao

    This paper suggests that statistical significance testing and effect size are two sides of the same coin; they complement each other, but do not substitute for one another. Good research practice requires that both should be taken into consideration to make sound quantitative decisions. A Monte Carlo simulation experiment was conducted, and a…

  7. A method for statistical comparison of data sets and its uses in analysis of nuclear physics data

    International Nuclear Information System (INIS)

    Bityukov, S.I.; Smirnova, V.V.; Krasnikov, N.V.; Maksimushkina, A.V.; Nikitenko, A.N.

    2014-01-01

    The authors propose a method for the statistical comparison of two data sets, based on the statistical comparison of histograms. As an estimator of the quality of the decision made, they propose using a value that can be interpreted as the probability that the decision (that the data sets differ) is correct [ru

  8. Publication of statistically significant research findings in prosthodontics & implant dentistry in the context of other dental specialties.

    Science.gov (United States)

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-10-01

    To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with similar studies in dentistry with random-effects meta-analyses. Of the 2323 included studies, 71% reported statistically significant results, with the significant results ranging from 47% to 86%. Multivariable modeling identified that geographical area and involvement of a statistician were predictors of statistically significant results. Compared to interventional studies, the odds that in vitro and observational studies would report statistically significant results were increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared to various study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower compared to other dental specialties, but this result did not reach statistical significance (P>0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Significant Statistics: Viewed with a Contextual Lens

    Science.gov (United States)

    Tait-McCutcheon, Sandi

    2010-01-01

    This paper examines the pedagogical and organisational changes three lead teachers made to their statistics teaching and learning programs. The lead teachers posed the research question: What would the effect of contextually integrating statistical investigations and literacies into other curriculum areas be on student achievement? By finding the…

  10. An ANOVA approach for statistical comparisons of brain networks.

    Science.gov (United States)

    Fraiman, Daniel; Fraiman, Ricardo

    2018-03-16

    The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool in resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep in the days before the scan is a relevant variable that must be controlled. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kinds of networks, such as protein interaction networks, gene networks or social networks.

  11. Statistics Using Just One Formula

    Science.gov (United States)

    Rosenthal, Jeffrey S.

    2018-01-01

    This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc) from that one formula. It is argued that this approach will…

  12. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  13. Statistical vs. Economic Significance in Economics and Econometrics: Further comments on McCloskey & Ziliak

    DEFF Research Database (Denmark)

    Engsted, Tom

    I comment on the controversy between McCloskey & Ziliak and Hoover & Siegler on statistical versus economic significance, in the March 2008 issue of the Journal of Economic Methodology. I argue that while McCloskey & Ziliak are right in emphasizing 'real error', i.e. non-sampling error that cannot...... be eliminated through specification testing, they fail to acknowledge those areas in economics, e.g. rational expectations macroeconomics and asset pricing, where researchers clearly distinguish between statistical and economic significance and where statistical testing plays a relatively minor role in model...

  14. Reducing statistics anxiety and enhancing statistics learning achievement: effectiveness of a one-minute strategy.

    Science.gov (United States)

    Chiou, Chei-Chang; Wang, Yu-Min; Lee, Li-Tze

    2014-08-01

    Statistical knowledge is widely used in academia; however, statistics teachers struggle with the issue of how to reduce students' statistics anxiety and enhance students' statistics learning. This study assesses the effectiveness of a "one-minute paper strategy" in reducing students' statistics-related anxiety and in improving students' statistics-related achievement. Participants were 77 undergraduates from two classes enrolled in applied statistics courses. An experiment was implemented according to a pretest/posttest comparison group design. The quasi-experimental design showed that the one-minute paper strategy significantly reduced students' statistics anxiety and improved students' statistics learning achievement. The strategy was a better instructional tool than the textbook exercise for reducing students' statistics anxiety and improving students' statistics achievement.

  15. Comparison of long-term Moscow and Danish NLC observations: statistical results

    Directory of Open Access Journals (Sweden)

    P. Dalin

    2006-11-01

    Full Text Available Noctilucent clouds (NLC) are the highest clouds in the Earth's atmosphere, observed close to the mesopause at 80–90 km altitudes. Systematic NLC observations conducted in Moscow for the period of 1962–2005 and in Denmark for 1983–2005 are compared and statistical results both for seasonally summarized NLC parameters and for individual NLC appearances are described. Careful attention is paid to the weather conditions during each season of observations. This turns out to be a very important factor both for the NLC case study and for long-term data set analysis. Time series of seasonal values show moderate similarity (taking into account the weather conditions) but, at the same time, the comparison of individual cases of NLC occurrence reveals substantial differences. There are positive trends in the Moscow and Danish normalized NLC brightness as well as nearly zero trend in the Moscow normalized NLC occurrence frequency but these long-term changes are not statistically significant. The quasi-ten-year cycle in NLC parameters is about 1 year shorter than the solar cycle during the same period. The characteristic scale of NLC fields is estimated for the first time and it is found to be less than 800 km.

  16. Distinguishing between statistical significance and practical/clinical meaningfulness using statistical inference.

    Science.gov (United States)

    Wilkinson, Michael

    2014-03-01

    Decisions about support for predictions of theories in light of data are made using statistical inference. The dominant approach in sport and exercise science is the Neyman-Pearson (N-P) significance-testing approach. When applied correctly it provides a reliable procedure for making dichotomous decisions for accepting or rejecting zero-effect null hypotheses with known and controlled long-run error rates. Type I and type II error rates must be specified in advance and the latter controlled by conducting an a priori sample size calculation. The N-P approach does not provide the probability of hypotheses or indicate the strength of support for hypotheses in light of data, yet many scientists believe it does. Outcomes of analyses allow conclusions only about the existence of non-zero effects, and provide no information about the likely size of true effects or their practical/clinical value. Bayesian inference can show how much support data provide for different hypotheses, and how personal convictions should be altered in light of data, but the approach is complicated by formulating probability distributions about prior subjective estimates of population effects. A pragmatic solution is magnitude-based inference, which allows scientists to estimate the true magnitude of population effects and how likely they are to exceed an effect magnitude of practical/clinical importance, thereby integrating elements of subjective Bayesian-style thinking. While this approach is gaining acceptance, progress might be hastened if scientists appreciate the shortcomings of traditional N-P null hypothesis significance testing.

  17. Codon Deviation Coefficient: a novel measure for estimating codon usage bias and its statistical significance

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2012-03-01

    Full Text Available Abstract Background Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts the bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.

  18. Comparisons between physics-based, engineering, and statistical learning models for outdoor sound propagation.

    Science.gov (United States)

    Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T

    2016-05-01

    Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations, and more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated by using a particular physics-based model, namely, a Crank-Nicholson parabolic equation (CNPE), as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set sound propagation conditions span from downward refracting to upward refracting, for acoustically hard and soft boundaries, and low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method, Harmonoise, and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.
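
    As a rough illustration of how a statistical learning model can be scored against a fixed reference, the sketch below trains a random forest regressor on synthetic inputs and computes a skill score of the common form 1 − MSE_model / MSE_reference. Both the data and this particular skill-score definition are assumptions for illustration, not values or definitions taken from Hart et al.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 4))                                  # stand-in meteorological/boundary inputs
y = 20 * X[:, 0] - 10 * X[:, 1] ** 2 + rng.normal(0, 1, 500)    # stand-in transmission loss (dB)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mse_model = mean_squared_error(y_te, model.predict(X_te))

# Reference prediction: a constant baseline in place of the "homogeneous atmosphere" case.
mse_ref = mean_squared_error(y_te, np.full_like(y_te, y_tr.mean()))

print(f"skill score = {1 - mse_model / mse_ref:.3f}")
```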

  19. Statistics Refresher for Molecular Imaging Technologists, Part 2: Accuracy of Interpretation, Significance, and Variance.

    Science.gov (United States)

    Farrell, Mary Beth

    2018-06-01

    This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic is, the higher is the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability ( P ) value. Calculation of P value is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around a mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs. Confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being
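
    The interreader agreement statistics mentioned above are straightforward to compute. The sketch below contrasts raw percent agreement with the chance-corrected Cohen κ for two hypothetical readers; the labels are invented.

```python
from sklearn.metrics import cohen_kappa_score

reader_1 = ["pos", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg", "neg"]
reader_2 = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "neg"]

# Raw agreement ignores agreement expected by chance; kappa corrects for it.
percent_agreement = sum(a == b for a, b in zip(reader_1, reader_2)) / len(reader_1)
kappa = cohen_kappa_score(reader_1, reader_2)

print(f"agreement = {percent_agreement:.0%}, Cohen kappa = {kappa:.2f}")
```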

  20. Systematic reviews of anesthesiologic interventions reported as statistically significant

    DEFF Research Database (Denmark)

    Imberger, Georgina; Gluud, Christian; Boylan, John

    2015-01-01

    statistically significant meta-analyses of anesthesiologic interventions, we used TSA to estimate power and imprecision in the context of sparse data and repeated updates. METHODS: We conducted a search to identify all systematic reviews with meta-analyses that investigated an intervention that may......: From 11,870 titles, we found 682 systematic reviews that investigated anesthesiologic interventions. In the 50 sampled meta-analyses, the median number of trials included was 8 (interquartile range [IQR], 5-14), the median number of participants was 964 (IQR, 523-1736), and the median number...

  1. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
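
    A minimal sketch of the proposed procedure, assuming synthetic samples in place of the cloud-object footprint data: the Euclidean distance between two normalized summary histograms serves as the test statistic, and resampling from the pooled data builds its null distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
sample_a = rng.gamma(2.0, 1.0, size=5000)   # values from size category A (synthetic)
sample_b = rng.gamma(2.2, 1.0, size=4000)   # values from size category B (synthetic)
bins = np.linspace(0, 12, 25)

def norm_hist(x):
    h, _ = np.histogram(x, bins=bins)
    return h / h.sum()

observed_d = np.linalg.norm(norm_hist(sample_a) - norm_hist(sample_b))

# Null distribution: resample two pseudo-samples of the original sizes from the
# pooled data and recompute the distance statistic each time.
pooled = np.concatenate([sample_a, sample_b])
null_d = np.empty(2000)
for i in range(null_d.size):
    a = rng.choice(pooled, size=sample_a.size, replace=True)
    b = rng.choice(pooled, size=sample_b.size, replace=True)
    null_d[i] = np.linalg.norm(norm_hist(a) - norm_hist(b))

p_value = np.mean(null_d >= observed_d)
print(f"Euclidean distance = {observed_d:.4f}, bootstrap p = {p_value:.3f}")
```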

  2. P-Value, a true test of statistical significance? a cautionary note ...

    African Journals Online (AJOL)

    While it was not the intention of the founders of significance testing and hypothesis testing to have the two ideas intertwined as if they were complementary, the inconvenient marriage of the two practices into one coherent, convenient, incontrovertible and misinterpreted practice has dotted our standard statistics textbooks and ...

  3. Codon Deviation Coefficient: A novel measure for estimating codon usage bias and its statistical significance

    KAUST Repository

    Zhang, Zhang

    2012-03-22

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis.Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts the bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance.Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. 2012 Zhang et al; licensee BioMed Central Ltd.

  4. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…

  5. Measuring individual significant change on the Beck Depression Inventory-II through IRT-based statistics.

    NARCIS (Netherlands)

    Brouwer, D.; Meijer, R.R.; Zevalkink, D.J.

    2013-01-01

    Several researchers have emphasized that item response theory (IRT)-based methods should be preferred over classical approaches in measuring change for individual patients. In the present study we discuss and evaluate the use of IRT-based statistics to measure statistically significant individual

  6. Statistical comparison of two or more SAGE libraries: one tag at a time

    NARCIS (Netherlands)

    Schaaf, Gerben J.; van Ruissen, Fred; van Kampen, Antoine; Kool, Marcel; Ruijter, Jan M.

    2008-01-01

    Several statistical tests have been introduced for the comparison of serial analysis of gene expression (SAGE) libraries to quantitatively analyze the differential expression of genes. As each SAGE library is only one measurement, the necessary information on biological variation or experimental

  7. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    Science.gov (United States)

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  8. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons

    Science.gov (United States)

    2014-01-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829

  9. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.

    Science.gov (United States)

    Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C

    2015-02-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  10. Thresholds for statistical and clinical significance in systematic reviews with meta-analytic methods

    DEFF Research Database (Denmark)

    Jakobsen, Janus Christian; Wetterslev, Jorn; Winkel, Per

    2014-01-01

    BACKGROUND: Thresholds for statistical significance when assessing meta-analysis results are being insufficiently demonstrated by traditional 95% confidence intervals and P-values. Assessment of intervention effects in systematic reviews with meta-analysis deserves greater rigour. METHODS......: Methodologies for assessing statistical and clinical significance of intervention effects in systematic reviews were considered. Balancing simplicity and comprehensiveness, an operational procedure was developed, based mainly on The Cochrane Collaboration methodology and the Grading of Recommendations...... Assessment, Development, and Evaluation (GRADE) guidelines. RESULTS: We propose an eight-step procedure for better validation of meta-analytic results in systematic reviews (1) Obtain the 95% confidence intervals and the P-values from both fixed-effect and random-effects meta-analyses and report the most...

  11. Statistical Comparison of the Baseline Mechanical Properties of NBG-18 and PCEA Graphite

    Energy Technology Data Exchange (ETDEWEB)

    Mark C. Carroll; David T. Rohrbaugh

    2013-08-01

    High-purity graphite is the core structural material of choice in the Very High Temperature Reactor (VHTR), a graphite-moderated, helium-cooled design that is capable of producing process heat for power generation and for industrial processes that require temperatures higher than the outlet temperatures of present nuclear reactors. The Baseline Graphite Characterization Program is endeavoring to minimize the conservative estimates of as-manufactured mechanical and physical properties by providing comprehensive data that captures the level of variation in measured values. In addition to providing a comprehensive comparison between these values in different nuclear grades, the program is also carefully tracking individual specimen source, position, and orientation information in order to provide comparisons and variations between different lots, different billets, and different positions from within a single billet. This report is a preliminary comparison between the two grades of graphite that were initially favored in the two main VHTR designs. NBG-18, a medium-grain pitch coke graphite from SGL formed via vibration molding, was the favored structural material in the pebble-bed configuration, while PCEA, a smaller grain, petroleum coke, extruded graphite from GrafTech, was favored for the prismatic configuration. An analysis of the comparison between these two grades will include not only the differences in fundamental and statistically-significant individual strength levels, but also the differences in variability in properties within each of the grades that will ultimately provide the basis for the prediction of in-service performance. The comparative performance of the different types of nuclear grade graphites will continue to evolve as thousands more specimens are fully characterized from the numerous grades of graphite being evaluated.

  12. Comparison of Artificial Neural Networks and ARIMA statistical models in simulations of target wind time series

    Science.gov (United States)

    Kolokythas, Kostantinos; Vasileios, Salamalikis; Athanassios, Argiriou; Kazantzidis, Andreas

    2015-04-01

    The wind is a result of complex interactions of numerous mechanisms taking place at small or large scales, so better knowledge of its behavior is essential in a variety of applications, especially in the field of power production coming from wind turbines. In the literature there is a considerable number of models, either physical or statistical ones, dealing with the problem of simulation and prediction of wind speed. Among others, Artificial Neural Networks (ANNs) are widely used for the purpose of wind forecasting and, in the great majority of cases, outperform other conventional statistical models. In this study, a number of ANNs with different architectures, which have been created and applied to a dataset of wind time series, are compared to Auto Regressive Integrated Moving Average (ARIMA) statistical models. The data consist of mean hourly wind speeds coming from a wind farm in a hilly Greek region and cover a period of one year (2013). The main goal is to evaluate the models' ability to successfully simulate the wind speed at a significant point (target). Goodness-of-fit statistics are performed for the comparison of the different methods. In general, the ANNs showed the best performance in the estimation of wind speed, prevailing over the ARIMA models.
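
    As a small, self-contained stand-in for the comparison above, the sketch below fits an ARIMA model to a simulated hourly wind-speed series with statsmodels and reports simple goodness-of-fit measures; the data and the model order are assumptions, not the Greek wind-farm dataset.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
n = 500
wind = np.empty(n)
wind[0] = 6.0
for t in range(1, n):                       # simple AR(1)-like wind-speed series (m/s)
    wind[t] = 0.8 * wind[t - 1] + 1.2 + rng.normal(0, 0.5)

train, test = wind[:450], wind[450:]

fit = ARIMA(train, order=(1, 0, 0)).fit()
forecast = fit.forecast(steps=len(test))

rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"AIC = {fit.aic:.1f}, out-of-sample RMSE = {rmse:.2f} m/s")
```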

  13. Erroneous analyses of interactions in neuroscience: a problem of significance

    NARCIS (Netherlands)

    Nieuwenhuis, S.; Forstmann, B.U.; Wagenmakers, E.-J.

    2011-01-01

    In theory, a comparison of two experimental effects requires a statistical test on their difference. In practice, this comparison is often based on an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (P < 0.05) but the
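
    The correct procedure described above can be illustrated with simulated data: fit a single model with a group-by-condition interaction and read the difference between the two effects off the interaction term, rather than comparing two separate significance verdicts.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 40
df = pd.DataFrame({
    "group":     ["A"] * n + ["B"] * n,
    "condition": (["ctrl"] * (n // 2) + ["treat"] * (n // 2)) * 2,
})
effect = {"A": 0.8, "B": 0.4}               # true treatment effects per group (simulated)
df["y"] = [effect[g] * (c == "treat") + rng.normal(0, 1)
           for g, c in zip(df["group"], df["condition"])]

model = smf.ols("y ~ C(group) * C(condition)", data=df).fit()
print(model.summary().tables[1])
# The interaction row tests whether the two effects differ; its p-value,
# not the pair of per-group p-values, answers that question.
```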

  14. Statistical approaches in published ophthalmic clinical science papers: a comparison to statistical practice two decades ago.

    Science.gov (United States)

    Zhang, Harrison G; Ying, Gui-Shuang

    2018-02-09

    The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in British Journal of Ophthalmology ( BJO ) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches for analysing primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of observation, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at ocular level, one study (1%) analysed the overall summary of ocular finding per individual and three (3%) studies used the paired comparison. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the intereye correlation, as compared with in 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at an ocular level, 33 (92%) of 36 studies completely ignored the intereye correlation in 2017, as compared with in 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis did not improve in the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
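
    One standard way to use data from both eyes while respecting the intereye correlation is a generalized estimating equations (GEE) model clustered on patient. The sketch below is a hypothetical example with simulated intraocular pressures; the variable names and the exchangeable working correlation are assumptions for illustration, not the analyses reviewed above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for pid in range(60):
    patient_effect = rng.normal(0, 1)            # shared between the two eyes
    treated = int(rng.integers(0, 2))
    for eye in ("OD", "OS"):
        iop = 16 + 2 * treated + patient_effect + rng.normal(0, 1)
        rows.append({"patient": pid, "eye": eye, "treated": treated, "iop": iop})
df = pd.DataFrame(rows)

# Cluster on patient so the two eyes are not treated as independent observations.
gee = smf.gee("iop ~ treated", groups="patient", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary().tables[1])
```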

  15. Scaling images using their background ratio. An application in statistical comparisons of images

    International Nuclear Information System (INIS)

    Kalemis, A; Binnie, D; Bailey, D L; Flower, M A; Ott, R J

    2003-01-01

    Comparison of two medical images often requires image scaling as a pre-processing step. This is usually done with the scaling-to-the-mean or scaling-to-the-maximum techniques which, under certain circumstances, in quantitative applications may contribute a significant amount of bias. In this paper, we present a simple scaling method which assumes only that the most predominant values in the corresponding images belong to their background structure. The ratio of the two images to be compared is calculated and its frequency histogram is plotted. The scaling factor is given by the position of the peak in this histogram which belongs to the background structure. The method was tested against the traditional scaling-to-the-mean technique on simulated planar gamma-camera images which were compared using pixelwise statistical parametric tests. Both sensitivity and specificity for each condition were measured over a range of different contrasts and sizes of inhomogeneity for the two scaling techniques. The new method was found to preserve sensitivity in all cases while the traditional technique resulted in significant degradation of sensitivity in certain cases
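
    The scaling method itself is only a few lines: form the ratio of the two images, histogram it, and take the location of the dominant (background) peak as the scaling factor. The sketch below uses synthetic planar-camera-like arrays rather than the simulated gamma-camera data of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
image_a = rng.poisson(100, size=(128, 128)).astype(float)     # background ~100 counts
image_b = 1.3 * image_a + rng.normal(0, 5, size=(128, 128))   # same scene, scaled
image_b[40:60, 40:60] += 80                                    # small focal change

ratio = image_b / np.clip(image_a, 1, None)
counts, edges = np.histogram(ratio, bins=200)
peak_bin = np.argmax(counts)                   # background dominates the histogram
scale = 0.5 * (edges[peak_bin] + edges[peak_bin + 1])

print(f"estimated scaling factor = {scale:.3f}")   # close to the true 1.3
image_b_scaled = image_b / scale               # images now comparable voxelwise
```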

  16. Scaling images using their background ratio. An application in statistical comparisons of images.

    Science.gov (United States)

    Kalemis, A; Binnie, D; Bailey, D L; Flower, M A; Ott, R J

    2003-06-07

    Comparison of two medical images often requires image scaling as a pre-processing step. This is usually done with the scaling-to-the-mean or scaling-to-the-maximum techniques which, under certain circumstances, in quantitative applications may contribute a significant amount of bias. In this paper, we present a simple scaling method which assumes only that the most predominant values in the corresponding images belong to their background structure. The ratio of the two images to be compared is calculated and its frequency histogram is plotted. The scaling factor is given by the position of the peak in this histogram which belongs to the background structure. The method was tested against the traditional scaling-to-the-mean technique on simulated planar gamma-camera images which were compared using pixelwise statistical parametric tests. Both sensitivity and specificity for each condition were measured over a range of different contrasts and sizes of inhomogeneity for the two scaling techniques. The new method was found to preserve sensitivity in all cases while the traditional technique resulted in significant degradation of sensitivity in certain cases.

  17. Change of diffusion anisotropy in patients with acute cerebral infarction using statistical parametric analysis

    International Nuclear Information System (INIS)

    Morita, Naomi; Harada, Masafumi; Uno, Masaaki; Furutani, Kaori; Nishitani, Hiromu

    2006-01-01

    We conducted statistical parametric comparison of fractional anisotropy (FA) images and quantified FA values to determine whether significant change occurs in the ischemic region. The subjects were 20 patients seen within 24 h after onset of ischemia. For statistical comparison of FA images, a sample FA image was coordinated by the Talairach template, and each FA map was normalized. Statistical comparison was conducted using statistical parametric mapping (SPM) 99. Regions of interest were set in the same region on apparent diffusion coefficient (ADC) and FA maps, the region being consistent with the hyperintense region on diffusion-weighted images (DWIs). The contralateral region was also measured to obtain asymmetry ratios of ADC and FA. Regions with areas of statistical significance on FA images were found only in the white matter of three patients, although the regions were smaller than hyperintense regions on DWIs. The mean ADC and FA ratios were 0.64±0.16 and 0.93±0.09, respectively, and the degree of FA change was less than that of the ADC change. Significant change in diffusion anisotropy was limited to the severely infarcted core of the white matter. We believe statistical comparison of FA maps to be useful for detecting different regions of diffusion anisotropy. (author)

  18. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study.

    Science.gov (United States)

    Song, Fujian; Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G

    2011-08-16

    To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Meta-epidemiological study based on a sample of meta-analyses of randomised controlled trials. Data sources: Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria: systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Inconsistency was measured by the difference in the log odds ratio between the direct and indirect methods. The study included 112 independent trial networks (including 1552 trials with 478,775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
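
    The inconsistency measure described above can be sketched with a Bucher-style adjusted indirect comparison: the indirect log odds ratio of A versus B is obtained through the common comparator C, and its difference from the direct estimate is tested with a z statistic. The odds ratios and standard errors below are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

# Log odds ratios and their standard errors from hypothetical meta-analyses.
log_or_AC, se_AC = np.log(0.70), 0.12                  # A vs C
log_or_BC, se_BC = np.log(0.90), 0.10                  # B vs C
log_or_AB_direct, se_AB_direct = np.log(0.85), 0.15    # A vs B, head-to-head

# Indirect estimate of A vs B via the common comparator C.
log_or_AB_indirect = log_or_AC - log_or_BC
se_AB_indirect = np.sqrt(se_AC**2 + se_BC**2)

# Inconsistency: difference between the direct and indirect log odds ratios.
delta = log_or_AB_direct - log_or_AB_indirect
se_delta = np.sqrt(se_AB_direct**2 + se_AB_indirect**2)
z = delta / se_delta
p = 2 * (1 - norm.cdf(abs(z)))

print(f"direct OR = {np.exp(log_or_AB_direct):.2f}, "
      f"indirect OR = {np.exp(log_or_AB_indirect):.2f}, inconsistency p = {p:.3f}")
```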

  19. Markov model plus k-word distributions: a synergy that produces novel statistical measures for sequence comparison.

    Science.gov (United States)

    Dai, Qi; Yang, Yanchun; Wang, Tianming

    2008-10-15

    Many proposed statistical measures can efficiently compare biological sequences to further infer their structures, functions and evolutionary information. They are related in spirit because all the ideas for sequence comparison try to use the information in k-word distributions, a Markov model, or both. Motivated by adding k-word distributions to a Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences and phylogenetic analysis. This offers a systematic and quantitative experimental assessment of our measures. Moreover, we compared our results with those of alignment-based and alignment-free approaches. We grouped our experiments into two sets. The first one, performed via ROC (receiver operating curve) analysis, aims at assessing the intrinsic ability of our statistical measures to search for similar sequences from a database and discriminate functionally related regulatory sequences from unrelated sequences. The second one aims at assessing how well our statistical measures perform in phylogenetic analysis. The experimental assessment demonstrates that our similarity measures, which incorporate k-word distributions into a Markov model, are more efficient.

  20. Intensive inpatient treatment for bulimia nervosa: Statistical and clinical significance of symptom changes.

    Science.gov (United States)

    Diedrich, Alice; Schlegl, Sandra; Greetfeld, Martin; Fumi, Markus; Voderholzer, Ulrich

    2018-03-01

    This study examines the statistical and clinical significance of symptom changes during an intensive inpatient treatment program with a strong psychotherapeutic focus for individuals with severe bulimia nervosa. 295 consecutively admitted bulimic patients were administered the Structured Interview for Anorexic and Bulimic Syndromes-Self-Rating (SIAB-S), the Eating Disorder Inventory-2 (EDI-2), the Brief Symptom Inventory (BSI), and the Beck Depression Inventory-II (BDI-II) at treatment intake and discharge. Results indicated statistically significant symptom reductions with large effect sizes regarding severity of binge eating and compensatory behavior (SIAB-S), overall eating disorder symptom severity (EDI-2), overall psychopathology (BSI), and depressive symptom severity (BDI-II) even when controlling for antidepressant medication. The majority of patients showed either reliable (EDI-2: 33.7%, BSI: 34.8%, BDI-II: 18.1%) or even clinically significant symptom changes (EDI-2: 43.2%, BSI: 33.9%, BDI-II: 56.9%). Patients with clinically significant improvement were less distressed at intake and less likely to suffer from a comorbid borderline personality disorder when compared with those who did not improve to a clinically significant extent. Findings indicate that intensive psychotherapeutic inpatient treatment may be effective in about 75% of severely affected bulimic patients. For the remaining non-responding patients, inpatient treatment might be improved through an even stronger focus on the reduction of comorbid borderline personality traits.
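
    A common way to operationalize "reliable" individual change, in the spirit of the analysis above, is the Jacobson-Truax reliable change index (RCI). The reliability, pre-treatment SD and scores below are illustrative assumptions, not values from the study.

```python
import numpy as np

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax RCI: (post - pre) / standard error of the difference."""
    se_measurement = sd_pre * np.sqrt(1 - reliability)
    se_difference = np.sqrt(2) * se_measurement
    return (post - pre) / se_difference

pre_scores  = np.array([34, 28, 40, 22])    # e.g. BDI-II at intake (hypothetical)
post_scores = np.array([15, 25, 18, 21])    # at discharge
rci = reliable_change_index(pre_scores, post_scores, sd_pre=10.0, reliability=0.91)

# |RCI| > 1.96 is conventionally read as reliable (p < .05) individual change.
print(np.round(rci, 2), np.abs(rci) > 1.96)
```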

  1. Model Accuracy Comparison for High Resolution Insar Coherence Statistics Over Urban Areas

    Science.gov (United States)

    Zhang, Yue; Fu, Kun; Sun, Xian; Xu, Guangluan; Wang, Hongqi

    2016-06-01

    The interferometric coherence map derived from the cross-correlation of two complex registered synthetic aperture radar (SAR) images is the reflection of imaged targets. In many applications, it can act as an independent information source, or give additional information complementary to the intensity image. Specially, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the amount of work on the statistical characters of SAR intensity, there are quite fewer researches on interferometric SAR (InSAR) coherence statistics. And to our knowledge, all of the existing work that focuses on InSAR coherence statistics, models the coherence with Gaussian distribution with no discrimination on data resolutions or scene types. But the properties of coherence may be different for different data resolutions and scene types. In this paper, we investigate on the coherence statistics for high resolution data over urban areas, by making a comparison of the accuracy of several typical statistical models. Four typical land classes including buildings, trees, shadow and roads are selected as the representatives of urban areas. Firstly, several regions are selected from the coherence map manually and labelled with their corresponding classes respectively. Then we try to model the statistics of the pixel coherence for each type of region, with different models including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. The experiments on TanDEM-X data show that the Beta model has a better performance than other distributions.
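
    The model-accuracy comparison above can be sketched by fitting the candidate distributions to coherence samples from one land-cover class and ranking them by log-likelihood and a Kolmogorov-Smirnov statistic. The coherence values below are simulated, not TanDEM-X pixels, and the Nakagami case is omitted for brevity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
coherence = rng.beta(5, 2, size=2000)      # stand-in for "building" pixels, values in (0, 1)

candidates = {
    "normal":   stats.norm,
    "rayleigh": stats.rayleigh,
    "weibull":  stats.weibull_min,
    "beta":     stats.beta,
}

for name, dist in candidates.items():
    params = dist.fit(coherence)                              # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(coherence, *params))
    ks = stats.kstest(coherence, dist.cdf, args=params).statistic
    print(f"{name:8s} log-likelihood = {loglik:9.1f}   KS = {ks:.3f}")
```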

  2. MODEL ACCURACY COMPARISON FOR HIGH RESOLUTION INSAR COHERENCE STATISTICS OVER URBAN AREAS

    Directory of Open Access Journals (Sweden)

    Y. Zhang

    2016-06-01

    Full Text Available The interferometric coherence map derived from the cross-correlation of two complex registered synthetic aperture radar (SAR) images is the reflection of imaged targets. In many applications, it can act as an independent information source, or give additional information complementary to the intensity image. Specially, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the amount of work on the statistical characters of SAR intensity, there are quite fewer researches on interferometric SAR (InSAR) coherence statistics. And to our knowledge, all of the existing work that focuses on InSAR coherence statistics, models the coherence with Gaussian distribution with no discrimination on data resolutions or scene types. But the properties of coherence may be different for different data resolutions and scene types. In this paper, we investigate on the coherence statistics for high resolution data over urban areas, by making a comparison of the accuracy of several typical statistical models. Four typical land classes including buildings, trees, shadow and roads are selected as the representatives of urban areas. Firstly, several regions are selected from the coherence map manually and labelled with their corresponding classes respectively. Then we try to model the statistics of the pixel coherence for each type of region, with different models including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. The experiments on TanDEM-X data show that the Beta model has a better performance than other distributions.

  3. Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.

    Science.gov (United States)

    Deegear, James

    This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…

  4. Network-based statistical comparison of citation topology of bibliographic databases

    Science.gov (United States)

    Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko

    2014-01-01

    Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or a scientific evaluation guideline for governments and research agencies. PMID:25263231

  5. Diagnostic value of Doppler echocardiography for identifying hemodynamic significant pulmonary valve regurgitation in tetralogy of Fallot: comparison with cardiac MRI.

    Science.gov (United States)

    Beurskens, Niek E G; Gorter, Thomas M; Pieper, Petronella G; Hoendermis, Elke S; Bartelds, Beatrijs; Ebels, Tjark; Berger, Rolf M F; Willems, Tineke P; van Melle, Joost P

    2017-11-01

    Quantification of pulmonary regurgitation (PR) is essential in the management of patients with repaired tetralogy of Fallot (TOF). We sought to evaluate the accuracy of first-line Doppler echocardiography in comparison with cardiac magnetic resonance imaging (MRI) to identify hemodynamic significant PR. Paired cardiac MRI and echocardiographic studies (n = 97) in patients with repaired TOF were retrospectively analyzed. Pressure half time (PHT) and pulmonary regurgitation index (PRi) were measured using continuous wave Doppler. The ratio of the color flow Doppler regurgitation jet width to pulmonary valve (PV) annulus (jet/annulus ratio) and diastolic to systolic time velocity integral (DSTVI; pulsed wave Doppler) were assessed. Accuracy of echocardiographic measurements was tested to identify significant PR as determined by phase-contrast MRI (PR fraction [PRF] ≥ 20%). Mean PRF was 29.4 ± 15.7%. PHT < 100 ms had a sensitivity of 93%, specificity 75%, positive predictive value (PPV) 92% and negative predictive value (NPV) 78% for identifying significant PR (C-statistic 0.82). PRi < 0.77 had sensitivity and specificity of 66% and 54%, respectively (C-statistic 0.63). Jet/annulus ratio ≥1/3 had sensitivity 96%, specificity 75%, PPV 92% and NPV 82% (C-statistic 0.87). DSTVI had sensitivity 84%, specificity 33%, PPV 84% and NPV 40%, (C-statistic 0.56). Combined jet/annulus ratio ≥1/3 and PHT < 100 ms was highly accurate in identifying PRF ≥ 20%, with sensitivity 97% and specificity 100%. PHT and jet/annulus ratio on Doppler echocardiography, especially when combined, are highly accurate in identifying significant PR and therefore seem useful in the follow-up of patients with repaired TOF.
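
    The diagnostic-accuracy figures quoted above all derive from a 2×2 table of the echocardiographic criterion against the MRI reference (PRF ≥ 20%). The sketch below computes sensitivity, specificity, PPV and NPV from such a table; the counts are invented, not the study data.

```python
# Rows: criterion positive/negative (e.g. PHT < 100 ms); columns: MRI PRF >= 20% yes/no.
tp, fp = 55, 3    # hypothetical counts
fn, tn = 4, 9

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)

print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}, "
      f"PPV = {ppv:.0%}, NPV = {npv:.0%}")
```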

  6. MISR Aerosol Product Attributes and Statistical Comparisons with MODIS

    Science.gov (United States)

    Kahn, Ralph A.; Nelson, David L.; Garay, Michael J.; Levy, Robert C.; Bull, Michael A.; Diner, David J.; Martonchik, John V.; Paradise, Susan R.; Hansen, Earl G.; Remer, Lorraine A.

    2009-01-01

    In this paper, Multi-angle Imaging SpectroRadiometer (MISR) aerosol product attributes are described, including geometry and algorithm performance flags. Actual retrieval coverage is mapped and explained in detail using representative global monthly data. Statistical comparisons are made with coincident aerosol optical depth (AOD) and Angstrom exponent (ANG) retrieval results from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument. The relationship between these results and the ones previously obtained for MISR and MODIS individually, based on comparisons with coincident ground-truth observations, is established. For the data examined, MISR and MODIS each obtain successful aerosol retrievals about 15% of the time, and coincident MISR-MODIS aerosol retrievals are obtained for about 6%-7% of the total overlap region. Cloud avoidance, glint and oblique-Sun exclusions, and other algorithm physical limitations account for these results. For both MISR and MODIS, successful retrievals are obtained for over 75% of locations where attempts are made. Where coincident AOD retrievals are obtained over ocean, the MISR-MODIS correlation coefficient is about 0.9; over land, the correlation coefficient is about 0.7. Differences are traced to specific known algorithm issues or conditions. Over-ocean ANG comparisons yield a correlation of 0.67, showing consistency in distinguishing aerosol air masses dominated by coarse-mode versus fine-mode particles. Sampling considerations imply that care must be taken when assessing monthly global aerosol direct radiative forcing and AOD trends with these products, but they can be used directly for many other applications, such as regional AOD gradient and aerosol air mass type mapping and aerosol transport model validation. Users are urged to take seriously the published product data-quality statements.

  7. Statistical significance approximation in local trend analysis of high-throughput time-series data using the theory of Markov chains.

    Science.gov (United States)

    Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu

    2015-09-21

    Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its application to high-throughput time series data analysis, e.g., data from studies based on next-generation sequencing technology. By extending the theories for the tail probability of the range of sum of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps) in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large-scale all-versus-all comparisons possible. We also propose a hybrid approach by integrating the approximation and permutations to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data and find interesting dynamic co-occurrence patterns among organisms. The software tool is integrated into the eLSA software package that now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.
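
    The following Python sketch illustrates the kind of permutation test that the approximation replaces, using a deliberately simplified local trend score (sign of successive differences, longest matching run); the actual eLSA statistic and its Markov-chain approximation differ in detail.

        import numpy as np

        def trend(series):
            # Discretize a series into local trends: sign of successive differences.
            return np.sign(np.diff(series))

        def local_trend_score(x, y):
            # Longest run of identical local trends (a simplified stand-in
            # for the local trend score discussed above).
            match = trend(x) == trend(y)
            best = run = 0
            for m in match:
                run = run + 1 if m else 0
                best = max(best, run)
            return best

        def permutation_pvalue(x, y, n_perm=2000, seed=0):
            rng = np.random.default_rng(seed)
            obs = local_trend_score(x, y)
            hits = sum(local_trend_score(rng.permutation(x), y) >= obs
                       for _ in range(n_perm))
            return (hits + 1) / (n_perm + 1)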

  8. Cloud-based solution to identify statistically significant MS peaks differentiating sample categories.

    Science.gov (United States)

    Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B

    2013-03-23

    Mass spectrometry (MS) has evolved to become the primary high-throughput tool for proteomics-based biomarker discovery. Multiple challenges in protein MS data analysis remain: management of large-scale and complex data sets; MS peak identification and indexing; and high-dimensional differential peak analysis with control of the false discovery rate (FDR) across concurrent statistical tests. "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets to identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution, which provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. The presented web application supports large-scale online MS data uploading and analysis through a simple user interface. This bioinformatic tool will facilitate the discovery of potential protein biomarkers using MS.
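
    The abstract mentions controlling the false discovery rate across many concurrent peak tests; a common way to do this is the Benjamini-Hochberg step-up procedure. A self-contained Python sketch (not the portal's actual implementation):

        import numpy as np

        def benjamini_hochberg(pvals, alpha=0.05):
            # Benjamini-Hochberg step-up procedure: returns a boolean mask of
            # peaks declared significant at FDR level 'alpha'.
            p = np.asarray(pvals, dtype=float)
            m = p.size
            order = np.argsort(p)
            thresholds = alpha * np.arange(1, m + 1) / m
            below = p[order] <= thresholds
            keep = np.zeros(m, dtype=bool)
            if below.any():
                cutoff = np.nonzero(below)[0].max()
                keep[order[:cutoff + 1]] = True
            return keep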

  9. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    Science.gov (United States)

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  10. Statistical comparison of a hybrid approach with approximate and exact inference models for Fusion 2+

    Science.gov (United States)

    Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew

    2007-04-01

    One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian Networks, and Possibility Theory, in the form of Fuzzy Logic systems, has recently been introduced to provide a rigorous framework for high-level inference. In previous research, the theoretical basis and benefits of the hybrid approach have been developed. However, lacking is a concrete experimental comparison of the hybrid framework with traditional fusion methods, to demonstrate and quantify this benefit. The goal of this research, therefore, is to provide a statistical analysis on the comparison of the accuracy and performance of hybrid network theory, with pure Bayesian and Fuzzy systems and an inexact Bayesian system approximated using Particle Filtering. To accomplish this task, domain specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo Simulation, in comparison to situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results will be performed, to quantify the benefit of hybrid inference to other fusion tools.

  11. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at a constant undersampling factor and at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  12. Measuring radioactive half-lives via statistical sampling in practice

    Science.gov (United States)

    Lorusso, G.; Collins, S. M.; Jagan, K.; Hitt, G. W.; Sadek, A. M.; Aitken-Smith, P. M.; Bridi, D.; Keightley, J. D.

    2017-10-01

    The statistical sampling method for the measurement of radioactive decay half-lives exhibits intriguing features such as that the half-life is approximately the median of a distribution closely resembling a Cauchy distribution. Whilst initial theoretical considerations suggested that in certain cases the method could have significant advantages, accurate measurements by statistical sampling have proven difficult, for they require an exercise in non-standard statistical analysis. As a consequence, no half-life measurement using this method has yet been reported and no comparison with traditional methods has ever been made. We used a Monte Carlo approach to address these analysis difficulties, and present the first experimental measurement of a radioisotope half-life (211Pb) by statistical sampling in good agreement with the literature recommended value. Our work also focused on the comparison between statistical sampling and exponential regression analysis, and concluded that exponential regression achieves generally the highest accuracy.
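
    A toy Python illustration of the two estimators compared above: exponential regression on simulated decay counts, and a schematic "statistical sampling" estimate taken as the median of half-lives computed from randomly chosen pairs of measurements (a simplified stand-in for the paper's Monte Carlo analysis, not its exact procedure; the nominal 211Pb half-life of roughly 36 minutes is used only to generate the data).

        import numpy as np

        rng = np.random.default_rng(0)
        true_half_life = 36.1 * 60                      # seconds, nominal value for the simulation
        lam = np.log(2) / true_half_life
        t = np.linspace(0, 4 * true_half_life, 40)
        counts = rng.poisson(1e4 * np.exp(-lam * t))    # simulated decay measurements

        # Exponential regression: straight-line fit of log(counts) against time.
        slope, _ = np.polyfit(t, np.log(counts), 1)
        print("half-life (regression):", np.log(2) / -slope)

        # Schematic statistical sampling: pairwise half-life estimates form a
        # heavy-tailed, Cauchy-like distribution whose median is the estimate.
        i, j = rng.integers(0, t.size, (2, 5000))
        valid = (i != j) & (counts[i] != counts[j])
        i, j = i[valid], j[valid]
        estimates = np.log(2) * (t[j] - t[i]) / np.log(counts[i] / counts[j])
        print("half-life (sampling):", np.median(estimates))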

  13. Examining reproducibility in psychology : A hybrid method for combining a statistically significant original study and a replication

    NARCIS (Netherlands)

    Van Aert, R.C.M.; Van Assen, M.A.L.M.

    2018-01-01

    The unrealistically high rate of positive results within psychology has increased the attention to replication research. However, researchers who conduct a replication and want to statistically combine the results of their replication with a statistically significant original study encounter

  14. A tutorial on hunting statistical significance by chasing N

    Directory of Open Access Journals (Sweden)

    Denes Szucs

    2016-09-01

    Full Text Available There is increasing concern about the replicability of studies in psychology and cognitive neuroscience. Hidden data dredging (also called p-hacking) is a major contributor to this crisis because it substantially increases Type I error, resulting in a much larger proportion of false positive findings than the usually expected 5%. In order to build better intuition to avoid, detect and criticise some typical problems, here I systematically illustrate the large impact of some easy-to-implement and therefore perhaps frequent data-dredging techniques on boosting false positive findings. I illustrate several forms of two special cases of data dredging. First, researchers may violate the data collection stopping rules of null hypothesis significance testing by repeatedly checking for statistical significance with various numbers of participants. Second, researchers may group participants post hoc along potential but unplanned independent grouping variables. The first approach 'hacks' the number of participants in studies; the second approach 'hacks' the number of variables in the analysis. I demonstrate the high number of false positive findings generated by these techniques with data from true null distributions. I also illustrate that it is extremely easy to introduce strong bias into data by very mild selection and re-testing. Similar, usually undocumented data-dredging steps can easily lead to 20-50% or more false positives.
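
    The first technique (violating the stopping rules by repeatedly testing as participants accrue) is easy to reproduce. A short Python simulation under a true null, illustrative of the argument rather than taken from the article:

        import numpy as np
        from scipy.stats import ttest_1samp

        rng = np.random.default_rng(1)
        n_sim, alpha, false_pos = 2000, 0.05, 0
        for _ in range(n_sim):
            data = rng.normal(0.0, 1.0, 100)     # true null: population mean is 0
            # "Peek" after every 10 participants and stop as soon as p < .05.
            for n in range(10, 101, 10):
                if ttest_1samp(data[:n], 0.0).pvalue < alpha:
                    false_pos += 1
                    break
        print(f"False-positive rate with optional stopping: {false_pos / n_sim:.2f}")
        # Substantially above the nominal 5% level.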

  15. Statistical evaluation of an interlaboratory comparison for the determination of uranium by potentiometric titration

    International Nuclear Information System (INIS)

    Ketema, D.J.; Harry, R.J.S.; Zijp, W.L.

    1990-09-01

    Upon request of the ESARDA working group 'Low enriched uranium conversion - and fuel fabrication plants' an interlaboratory comparison was organized, to assess the precision and accuracy concerning the determination of uranium by the potentiometric titration method. This report presents the results of a statistical evaluation on the data of the first phase of this exercise. (author). 9 refs.; 5 figs.; 24 tabs

  16. Metabolome Comparison of Transgenic and Non-transgenic Rice by Statistical Analysis of FTIR and NMR Spectra

    Directory of Open Access Journals (Sweden)

    Keykhosrow Keymanesh

    2009-06-01

    Full Text Available Modern biotechnology, based on recombinant DNA techniques, has made it possible to introduce new traits with great potential for crop improvement. However, concerns about unintended effects of gene transformation that possibly threaten environment or consumer health have persuaded scientists to set up pre-release tests on genetically modified organisms. Assessment of the ‘substantial equivalence’ concept, established by comparing a genetically modified organism with a comparator that has a history of safe use, could be the first step of a comprehensive risk assessment. The metabolome is the level at which changes stemming from genetic or environmental factors are expressed most richly. Since assessment of all metabolites in detail is very costly and practically impossible, statistical evaluation of processed spectroscopic data from grain could be a time- and cost-effective substitute for complex chemical analysis. To investigate the ability of multivariate statistical techniques to compare metabolomes, and to test a method for such comparisons with available tools, a transgenic rice and its traditionally bred parent were used as test material. Discriminant analysis was applied as the supervised method and principal component analysis as the unsupervised classification method to processed Fourier transform infrared and nuclear magnetic resonance spectral data of powdered rice, rice extract and barley grain samples, of which the last was considered the control. The results confirmed the capability of statistical methods, even with initial data processing, in metabolome studies. Meanwhile, this study confirms that the supervised method yields more distinctive results.
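
    A minimal Python sketch of the unsupervised part of such a comparison: principal component scores computed directly from mean-centred spectra via the singular value decomposition. The data below are random placeholders, not FTIR or NMR measurements.

        import numpy as np

        def pca_scores(spectra, n_components=2):
            # Rows = samples, columns = spectral variables; returns PC scores.
            X = spectra - spectra.mean(axis=0)
            _, _, vt = np.linalg.svd(X, full_matrices=False)
            return X @ vt[:n_components].T

        rng = np.random.default_rng(0)
        transgenic = rng.normal(0.0, 1.0, (10, 500))   # placeholder "spectra"
        parent = rng.normal(0.2, 1.0, (10, 500))
        scores = pca_scores(np.vstack([transgenic, parent]))
        print(scores[:3])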

  17. Applying Statistical Mechanics to pixel detectors

    International Nuclear Information System (INIS)

    Pindo, Massimiliano

    2002-01-01

    Pixel detectors, being made of a large number of active cells of the same kind, can be considered as significant sets to which Statistical Mechanics variables and methods can be applied. By properly redefining well-known statistical parameters so that they match those that actually characterize pixel detectors, the way these detectors work can be analysed from a totally new perspective. A deeper understanding of pixel detectors is attained, helping in the evaluation and comparison of their intrinsic characteristics and performance

  18. LHCb: Statistical Comparison of CPU performance for LHCb applications on the Grid

    CERN Multimedia

    Graciani, R

    2009-01-01

    The usage of CPU resources by LHCb on the Grid is dominated by two different applications: Gauss and Brunel. Gauss is the application performing the Monte Carlo simulation of proton-proton collisions. Brunel is the application responsible for the reconstruction of the signals recorded by the detector, converting them into objects that can be used for later physics analysis of the data (tracks, clusters,…). Both applications are based on the Gaudi and LHCb software frameworks. Gauss uses Pythia and Geant as underlying libraries for the simulation of the collision and the later passage of the generated particles through the LHCb detector, while Brunel makes use of LHCb-specific code to process the data from each sub-detector. Both applications are CPU bound. Large Monte Carlo productions or data reconstructions running on the Grid are an ideal benchmark to compare the performance of the different CPU models for each case. Since the processed events are only statistically comparable, only statistical comparison of the...

  19. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement for NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis

  20. ASYMPTOTIC COMPARISONS OF U-STATISTICS, V-STATISTICS AND LIMITS OF BAYES ESTIMATES BY DEFICIENCIES

    OpenAIRE

    Toshifumi, Nomachi; Hajime, Yamato; Graduate School of Science and Engineering, Kagoshima University:Miyakonojo College of Technology; Faculty of Science, Kagoshima University

    2001-01-01

    As estimators of estimable parameters, we consider three statistics: the U-statistic, the V-statistic and the limit of the Bayes estimate. This limit of the Bayes estimate, called the LB-statistic in this paper, is obtained from the Bayes estimate of an estimable parameter based on the Dirichlet process, by letting its parameter tend to zero. For an estimable parameter with a non-degenerate kernel, the asymptotic relative efficiencies of the LB-statistic with respect to the U-statistic and the V-statistic and that of the V-statistic w...
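
    For a kernel of degree two, the difference between the first two estimators is easy to see numerically. A small Python example with the kernel h(x1, x2) = (x1 - x2)^2 / 2, whose estimable parameter is the variance (the LB-statistic is omitted, as it depends on the Dirichlet-process construction described in the paper):

        import itertools
        import numpy as np

        def u_statistic(x, kernel):
            # Average of the kernel over all unordered pairs (unbiased).
            return np.mean([kernel(a, b) for a, b in itertools.combinations(x, 2)])

        def v_statistic(x, kernel):
            # Average of the kernel over all ordered pairs, including i = j.
            return np.mean([kernel(a, b) for a in x for b in x])

        kernel = lambda a, b: 0.5 * (a - b) ** 2
        x = np.random.default_rng(0).normal(size=50)
        print(u_statistic(x, kernel), np.var(x, ddof=1))   # equal: unbiased variance
        print(v_statistic(x, kernel), np.var(x))           # equal: biased variance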

  1. How Many Subjects are Needed for a Visual Field Normative Database? A Comparison of Ground Truth and Bootstrapped Statistics.

    Science.gov (United States)

    Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K

    2018-03-01

    The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance for the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different 'x' and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different to the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
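
    A bare-bones Python version of the bootstrap described above, resampling a pool of visual-field sensitivities at a chosen set size and averaging the descriptive statistics over resamples (the pool below is simulated, not HFA data):

        import numpy as np

        def bootstrap_limits(sensitivities, set_size, n_resamples=1000, seed=0):
            rng = np.random.default_rng(seed)
            stats = []
            for _ in range(n_resamples):
                sample = rng.choice(sensitivities, size=set_size, replace=True)
                stats.append((sample.mean(),
                              np.percentile(sample, 5),
                              np.percentile(sample, 95),
                              sample.std(ddof=1)))
            return np.mean(stats, axis=0)   # mean, 5th percentile, 95th percentile, SD

        pool = np.random.default_rng(1).normal(30, 2, 500)   # simulated sensitivities (dB)
        print(bootstrap_limits(pool, set_size=150))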

  2. Wave Mechanics or Wave Statistical Mechanics

    International Nuclear Information System (INIS)

    Qian Shangwu; Xu Laizi

    2007-01-01

    By comparison between the equations of motion of geometrical optics and those of classical statistical mechanics, this paper finds that there should be an analogy between geometrical optics and classical statistical mechanics instead of between geometrical mechanics and classical mechanics. Furthermore, by comparison between the classical limit of quantum mechanics and classical statistical mechanics, it finds that the classical limit of quantum mechanics is classical statistical mechanics, not classical mechanics; hence it demonstrates that quantum mechanics is a natural generalization of classical statistical mechanics instead of classical mechanics. Thus quantum mechanics, in its true appearance, is a wave statistical mechanics instead of a wave mechanics.

  3. Statistical significance estimation of a signal within the GooFit framework on GPUs

    Directory of Open Access Journals (Sweden)

    Cristella Leonardo

    2017-01-01

    Full Text Available In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented both in the ROOT/RooFit and GooFit frameworks with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open data-analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on NVIDIA GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-up performance with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service with a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which the Wilks Theorem may or may not apply because its regularity conditions are not satisfied.
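
    A drastically simplified Python stand-in for the toy Monte Carlo idea: the GooFit/RooFit implementations fit each pseudo-experiment with likelihood models, whereas here a plain counting experiment is used to show how background-only pseudo-experiments translate into a p-value and a Gaussian-equivalent significance. All numbers are illustrative.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(42)
        n_observed, background_mean, n_toys = 135, 100.0, 200_000
        toys = rng.poisson(background_mean, n_toys)        # background-only pseudo-experiments
        p_value = (np.sum(toys >= n_observed) + 1) / (n_toys + 1)
        significance = norm.isf(p_value)                   # one-sided Gaussian equivalent
        print(f"p = {p_value:.2e}, Z = {significance:.2f} sigma")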

  4. Is statistical significance clinically important?--A guide to judge the clinical relevance of study findings

    NARCIS (Netherlands)

    Sierevelt, Inger N.; van Oldenrijk, Jakob; Poolman, Rudolf W.

    2007-01-01

    In this paper we describe several issues that influence the reporting of statistical significance in relation to clinical importance, since misinterpretation of p values is a common issue in orthopaedic literature. Orthopaedic research is tormented by the risks of false-positive (type I error) and

  5. Statistical significance of theoretical predictions: A new dimension in nuclear structure theories (I)

    International Nuclear Information System (INIS)

    DUDEK, J; SZPAK, B; FORNAL, B; PORQUET, M-G

    2011-01-01

    In this and the follow-up article we briefly discuss what we believe represents one of the most serious problems in contemporary nuclear structure: the question of statistical significance of parametrizations of nuclear microscopic Hamiltonians and the implied predictive power of the underlying theories. In the present Part I, we introduce the main lines of reasoning of the so-called Inverse Problem Theory, an important sub-field in the contemporary Applied Mathematics, here illustrated on the example of the Nuclear Mean-Field Approach.

  6. Food consumption and the actual statistics of cardiovascular diseases: an epidemiological comparison of 42 European countries

    Directory of Open Access Journals (Sweden)

    Pavel Grasgruber

    2016-09-01

    Full Text Available Background: The aim of this ecological study was to identify the main nutritional factors related to the prevalence of cardiovascular diseases (CVDs) in Europe, based on a comparison of international statistics. Design: The mean consumption of 62 food items from the FAOSTAT database (1993–2008) was compared with the actual statistics of five CVD indicators in 42 European countries. Several other exogenous factors (health expenditure, smoking, body mass index) and the historical stability of results were also examined. Results: We found exceptionally strong relationships between some of the examined factors, the highest being a correlation between raised cholesterol in men and the combined consumption of animal fat and animal protein (r=0.92, p<0.001). The most significant dietary correlate of low CVD risk was high total fat and animal protein consumption. Additional statistical analyses further highlighted citrus fruits, high-fat dairy (cheese) and tree nuts. Among other non-dietary factors, health expenditure showed by far the highest correlation coefficients. The major correlate of high CVD risk was the proportion of energy from carbohydrates and alcohol, or from potato and cereal carbohydrates. Similar patterns were observed between food consumption and CVD statistics from the period 1980–2000, which shows that these relationships are stable over time. However, we found striking discrepancies in men's CVD statistics from 1980 and 1990, which can probably explain the origin of the ‘saturated fat hypothesis’ that influenced public health policies in the following decades. Conclusion: Our results do not support the association between CVDs and saturated fat, which is still contained in official dietary guidelines. Instead, they agree with data accumulated from recent studies that link CVD risk with the high glycaemic index/load of carbohydrate-based diets. In the absence of any scientific evidence connecting saturated fat with CVDs, these

  7. Comparison of Tsallis statistics with the Tsallis-factorized statistics in the ultrarelativistic pp collisions

    International Nuclear Information System (INIS)

    Parvan, A.S.

    2016-01-01

    The Tsallis statistics was applied to describe the experimental data on the transverse momentum distributions of hadrons. We considered the energy dependence of the parameters of the Tsallis-factorized statistics, which is now widely used for the description of the experimental transverse momentum distributions of hadrons, and the Tsallis statistics for the charged pions produced in pp collisions at high energies. We found that the results of the Tsallis-factorized statistics deviate from the results of the Tsallis statistics only at low NA61/SHINE energies, when the value of the entropic parameter is close to unity. At higher energies, when the value of the entropic parameter deviates considerably from unity, the Tsallis-factorized statistics satisfactorily recovers the results of the Tsallis statistics. (orig.)

  8. Instrumental and statistical methods for the comparison of class evidence

    Science.gov (United States)

    Liszewski, Elisa Anne

    Trace evidence is a major field within forensic science. Association of trace evidence samples can be problematic due to sample heterogeneity and a lack of quantitative criteria for comparing spectra or chromatograms. The aim of this study is to evaluate different types of instrumentation for their ability to discriminate among samples of various types of trace evidence. Chemometric analysis, including techniques such as Agglomerative Hierarchical Clustering, Principal Components Analysis, and Discriminant Analysis, was employed to evaluate instrumental data. First, automotive clear coats were analyzed by using microspectrophotometry to collect UV absorption data. In total, 71 samples were analyzed with classification accuracy of 91.61%. An external validation was performed, resulting in a prediction accuracy of 81.11%. Next, fiber dyes were analyzed using UV-Visible microspectrophotometry. While several physical characteristics of cotton fiber can be identified and compared, fiber color is considered to be an excellent source of variation, and thus was examined in this study. Twelve dyes were employed, some being visually indistinguishable. Several different analyses and comparisons were done, including an inter-laboratory comparison and external validations. Lastly, common plastic samples and other polymers were analyzed using pyrolysis-gas chromatography/mass spectrometry, and their pyrolysis products were then analyzed using multivariate statistics. The classification accuracy varied dependent upon the number of classes chosen, but the plastics were grouped based on composition. The polymers were used as an external validation and misclassifications occurred with chlorinated samples all being placed into the category containing PVC.

  9. Knowledge fusion: Comparison of fuzzy curve smoothers to statistically motivated curve smoothers

    International Nuclear Information System (INIS)

    Burr, T.; Strittmatter, R.B.

    1996-03-01

    This report describes work during FY 95 that was sponsored by the Department of Energy, Office of Nonproliferation and National Security (NN) Knowledge Fusion (KF) Project. The project team selected satellite sensor data to use as the one main example to which its analysis algorithms would be applied. The specific sensor-fusion problem has many generic features, which make it a worthwhile problem to attempt to solve in a general way. The generic problem is to recognize events of interest from multiple time series that define a possibly noisy background. By implementing a suite of time series modeling and forecasting methods and using well-chosen alarm criteria, we reduce the number of false alarms. We then further reduce the number of false alarms by analyzing all suspicious sections of data, as judged by the alarm criteria, with pattern recognition methods. This report gives a detailed comparison of two of the forecasting methods (fuzzy forecaster and statistically motivated curve smoothers as forecasters). The two methods are compared on five simulated and five real data sets. One of the five real data sets is satellite sensor data. The conclusion is that the statistically motivated curve smoother is superior on simulated data of the type we studied. The statistically motivated method is also superior on most real data. In defense of the fuzzy-logic motivated methods, we point out that fuzzy-logic methods were never intended to compete with statistical methods on numeric data. Fuzzy logic was developed to handle real-world situations where real data was either not available or was supplemented with 'expert opinion' or some sort of linguistic information

  10. Statistical Significance of the Contribution of Variables to the PCA Solution: An Alternative Permutation Strategy

    Science.gov (United States)

    Linting, Marielle; van Os, Bart Jan; Meulman, Jacqueline J.

    2011-01-01

    In this paper, the statistical significance of the contribution of variables to the principal components in principal components analysis (PCA) is assessed nonparametrically by the use of permutation tests. We compare a new strategy to a strategy used in previous research consisting of permuting the columns (variables) of a data matrix…
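
    A compact Python sketch of one such permutation strategy (permuting one variable at a time and recomputing its squared loading on the first principal component); the strategies compared in the paper differ in exactly what is permuted, so this is only illustrative.

        import numpy as np

        def pc1_squared_loadings(X):
            # Squared loadings of each variable on the first principal component.
            _, _, vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
            return vt[0] ** 2

        def loading_permutation_pvalues(X, n_perm=500, seed=0):
            rng = np.random.default_rng(seed)
            observed = pc1_squared_loadings(X)
            pvals = np.zeros(X.shape[1])
            for j in range(X.shape[1]):
                exceed = 0
                for _ in range(n_perm):
                    Xp = X.copy()
                    Xp[:, j] = rng.permutation(Xp[:, j])   # break variable j's association
                    exceed += pc1_squared_loadings(Xp)[j] >= observed[j]
                pvals[j] = (exceed + 1) / (n_perm + 1)
            return pvals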

  11. Isotopic safeguards statistics

    International Nuclear Information System (INIS)

    Timmerman, C.L.; Stewart, K.B.

    1978-06-01

    The methods and results of our statistical analysis of isotopic data using isotopic safeguards techniques are illustrated using example data from the Yankee Rowe reactor. The statistical methods used in this analysis are the paired comparison and the regression analyses. A paired comparison results when a sample from a batch is analyzed by two different laboratories. Paired comparison techniques can be used with regression analysis to detect and identify outlier batches. The second analysis tool, linear regression, involves comparing various regression approaches. These approaches use two basic types of models: the intercept model (y = α + βx) and the initial point model [y - y₀ = β(x - x₀)]. The intercept model fits strictly the exposure or burnup values of isotopic functions, while the initial point model utilizes the exposure values plus the initial or fabricator's data values in the regression analysis. Two fitting methods are applied to each of these models. These methods are: (1) the usual least squares fitting approach where x is measured without error, and (2) Deming's approach, which uses the variance estimates obtained from the paired comparison results and considers that x and y are both measured with error. The Yankee Rowe data were first measured by Nuclear Fuel Services (NFS) and remeasured by Nuclear Audit and Testing Company (NATCO). The ratio of Pu/U versus 235D (in which 235D is the amount of depleted 235U expressed in weight percent) using actual numbers is the isotopic function illustrated. Statistical results using the Yankee Rowe data indicate the attractiveness of Deming's regression model over the usual approach by simple comparison of the given regression variances with the random variance from the paired comparison results
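
    Deming's approach, unlike ordinary least squares, treats both variables as measured with error. A small Python sketch of the standard Deming slope and intercept, with the error-variance ratio as an input (in the report this ratio would come from the paired-comparison variance estimates):

        import numpy as np

        def deming_fit(x, y, variance_ratio=1.0):
            # variance_ratio = (error variance of y) / (error variance of x)
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
            sxy = np.cov(x, y, ddof=1)[0, 1]
            d = variance_ratio
            slope = (syy - d * sxx +
                     np.sqrt((syy - d * sxx) ** 2 + 4 * d * sxy ** 2)) / (2 * sxy)
            intercept = y.mean() - slope * x.mean()
            return slope, intercept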

  12. ClusterSignificance: A bioconductor package facilitating statistical analysis of class cluster separations in dimensionality reduced data

    DEFF Research Database (Denmark)

    Serviss, Jason T.; Gådin, Jesper R.; Eriksson, Per

    2017-01-01

    , e.g. genes in a specific pathway, alone can separate samples into these established classes. Despite this, the evaluation of class separations is often subjective and performed via visualization. Here we present the ClusterSignificance package: a set of tools designed to assess the statistical significance of class separations downstream of dimensionality reduction algorithms. In addition, we demonstrate the design and utility of the ClusterSignificance package and utilize it to determine the importance of long non-coding RNA expression in the identity of multiple hematological malignancies....
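
    A toy Python analogue of the underlying question (is the separation between two labelled groups in a low-dimensional projection larger than expected by chance?), answered here with a simple centroid-distance score and label permutations; the ClusterSignificance algorithms themselves are more elaborate.

        import numpy as np

        def separation_score(points, labels):
            # Distance between class centroids scaled by pooled within-class spread.
            a, b = points[labels == 0], points[labels == 1]
            spread = np.sqrt((a.var(axis=0).mean() + b.var(axis=0).mean()) / 2)
            return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)) / spread

        def separation_pvalue(points, labels, n_perm=1000, seed=0):
            rng = np.random.default_rng(seed)
            labels = np.asarray(labels)
            obs = separation_score(points, labels)
            perm = sum(separation_score(points, rng.permutation(labels)) >= obs
                       for _ in range(n_perm))
            return (perm + 1) / (n_perm + 1)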

  13. Statistical inference: an integrated Bayesian/likelihood approach

    CERN Document Server

    Aitkin, Murray

    2010-01-01

    Filling a gap in current Bayesian theory, Statistical Inference: An Integrated Bayesian/Likelihood Approach presents a unified Bayesian treatment of parameter inference and model comparisons that can be used with simple diffuse prior specifications. This novel approach provides new solutions to difficult model comparison problems and offers direct Bayesian counterparts of frequentist t-tests and other standard statistical methods for hypothesis testing.After an overview of the competing theories of statistical inference, the book introduces the Bayes/likelihood approach used throughout. It pre

  14. Statistical significance versus clinical importance: trials on exercise therapy for chronic low back pain as example.

    NARCIS (Netherlands)

    van Tulder, M.W.; Malmivaara, A.; Hayden, J.; Koes, B.

    2007-01-01

    STUDY DESIGN. Critical appraisal of the literature. OBJECTIVES. The objective of this study was to assess if results of back pain trials are statistically significant and clinically important. SUMMARY OF BACKGROUND DATA. There seems to be a discrepancy between conclusions reported by authors and

  15. Applying contemporary statistical techniques

    CERN Document Server

    Wilcox, Rand R

    2003-01-01

    Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible.* Assumes no previous training in statistics * Explains how and why modern statistical methods provide more accurate results than conventional methods* Covers the latest developments on multiple comparisons * Includes recent advanc

  16. Business statistics for dummies

    CERN Document Server

    Anderson, Alan

    2013-01-01

    Score higher in your business statistics course? Easy. Business statistics is a common course for business majors and MBA candidates. It examines common data sets and the proper way to use such information when conducting research and producing informational reports such as profit and loss statements, customer satisfaction surveys, and peer comparisons. Business Statistics For Dummies tracks to a typical business statistics course offered at the undergraduate and graduate levels and provides clear, practical explanations of business statistical ideas, techniques, formulas, and calculations, w

  17. Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, Gregory D. [University of California, Physics and Astronomy Department, Los Angeles, CA (United States); McKay, James; Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Farmer, Ben; Conrad, Jan [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Roebber, Elinore [McGill University, Department of Physics, Montreal, QC (Canada); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Collaboration: The GAMBIT Scanner Workgroup

    2017-11-15

    We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm T-Walk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, T-Walk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and T-Walk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics. (orig.)

  18. Performance in College Chemistry: a Statistical Comparison Using Gender and Jungian Personality Type

    Science.gov (United States)

    Greene, Susan V.; Wheeler, Henry R.; Riley, Wayne D.

    This study sorted college introductory chemistry students by gender and Jungian personality type. It recognized differences from the general population distribution and statistically compared the students' grades with their Jungian personality types. Data from 577 female students indicated that ESFP (extroverted, sensory, feeling, perceiving) and ENFP (extroverted, intuitive, feeling, perceiving) profiles performed poorly at statistically significant levels when compared with the distribution of females enrolled in introductory chemistry. The comparable analysis using data from 422 male students indicated that the poorly performing male profiles were ISTP (introverted, sensory, thinking, perceiving) and ESTP (extroverted, sensory, thinking, perceiving). ESTJ (extroverted, sensory, thinking, judging) female students withdrew from the course at a statistically significant level. For both genders, INTJ (introverted, intuitive, thinking, judging) students were the best performers. By examining the documented characteristics of Jungian profiles that correspond with poorly performing students in chemistry, one may more effectively assist the learning process and the retention of these individuals in the fields of natural science, engineering, and technology.

  19. Tract-oriented statistical group comparison of diffusion in sheet-like white matter

    DEFF Research Database (Denmark)

    Lyksborg, Mark; Dyrby, T. B.; Sorensen, P. S.

    2013-01-01

    tube-like shapes, not always suitable for modelling the white matter tracts of the brain. The tract-oriented technique, aimed at group studies, integrates the use of multivariate features and outputs a single value of significance indicating tract-specific differences. This is in contrast to voxel-based analysis techniques, which output a significance value per voxel and require multiple-comparison correction. We demonstrate our technique by comparing a group of controls with a group of Multiple Sclerosis subjects, obtaining significant differences on 11 different fascicle structures....

  20. Using assemblage data in ecological indicators: A comparison and evaluation of commonly available statistical tools

    Science.gov (United States)

    Smith, Joseph M.; Mather, Martha E.

    2012-01-01

    Ecological indicators are science-based tools used to assess how human activities have impacted environmental resources. For monitoring and environmental assessment, existing species assemblage data can be used to make these comparisons through time or across sites. An impediment to using assemblage data, however, is that these data are complex and need to be simplified in an ecologically meaningful way. Because multivariate statistics are mathematical relationships, statistical groupings may not make ecological sense and will not have utility as indicators. Our goal was to define a process to select defensible and ecologically interpretable statistical simplifications of assemblage data in which researchers and managers can have confidence. For this, we chose a suite of statistical methods, compared the groupings that resulted from these analyses, identified convergence among groupings, then we interpreted the groupings using species and ecological guilds. When we tested this approach using a statewide stream fish dataset, not all statistical methods worked equally well. For our dataset, logistic regression (Log), detrended correspondence analysis (DCA), cluster analysis (CL), and non-metric multidimensional scaling (NMDS) provided consistent, simplified output. Specifically, the Log, DCA, CL-1, and NMDS-1 groupings were ≥60% similar to each other, overlapped with the fluvial-specialist ecological guild, and contained a common subset of species. Groupings based on number of species (e.g., Log, DCA, CL and NMDS) outperformed groupings based on abundance [e.g., principal components analysis (PCA) and Poisson regression]. Although the specific methods that worked on our test dataset have generality, here we are advocating a process (e.g., identifying convergent groupings with redundant species composition that are ecologically interpretable) rather than the automatic use of any single statistical tool. We summarize this process in step-by-step guidance for the
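
    One way to make the "convergence among groupings" step concrete is a chance-corrected agreement measure between the cluster labels produced by different methods, as sketched below in Python with illustrative labels (the adjusted Rand index is one option; the paper's own similarity measure may differ).

        from sklearn.metrics import adjusted_rand_score

        groupings = {                      # illustrative labels for eight sites
            "cluster":  [0, 0, 1, 1, 2, 2, 2, 0],
            "nmds":     [0, 0, 1, 1, 2, 2, 1, 0],
            "logistic": [1, 1, 0, 0, 2, 2, 2, 1],
        }
        methods = list(groupings)
        for i, a in enumerate(methods):
            for b in methods[i + 1:]:
                print(a, b, round(adjusted_rand_score(groupings[a], groupings[b]), 2))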

  1. Indirectional statistics and the significance of an asymmetry discovered by Birch

    International Nuclear Information System (INIS)

    Kendall, D.G.; Young, G.A.

    1984-01-01

    Birch (1982, Nature, 298, 451) reported an apparent 'statistical asymmetry of the Universe'. The authors here develop 'indirectional analysis' as a technique for investigating statistical effects of this kind and conclude that the reported effect (whatever may be its origin) is strongly supported by the observations. The estimated pole of the asymmetry is at RA 13h 30m, Dec. -37deg. The angular error in its estimation is unlikely to exceed 20-30deg. (author)

  2. Data-driven inference for the spatial scan statistic.

    Science.gov (United States)

    Almeida, Alexandre C L; Duarte, Anderson R; Duczmal, Luiz H; Oliveira, Fernando L P; Takahashi, Ricardo H C

    2011-08-02

    Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is proposed, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under the null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, regarding the correctness of the decision based on this inference. A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.

  3. Statistical comparison of models for estimating the monthly average daily diffuse radiation at a subtropical African site

    International Nuclear Information System (INIS)

    Bashahu, M.

    2003-01-01

    Nine correlations have been developed in this paper to estimate the monthly average diffuse radiation for Dakar, Senegal. A 16-year period of data on the global (H) and diffuse (Hd) radiation, together with data on the bright sunshine hours (N), the fraction of the sky covered by cloud (Ne/8), the water vapour pressure in the air (e) and the ambient temperature (T), has been used for that purpose. A model inter-comparison based on the MBE, RMSE and t statistical tests has shown that estimates from any of the obtained correlations are not significantly different from their measured counterparts, thus all nine models are recommended for the aforesaid location. Three of them should be particularly selected for their simplicity, universal applicability and high accuracy. Those are simple linear correlations between Kd and N/Nd, Ne/8 or Kt. Even presenting adequate performance, the remaining correlations are either simple but less accurate, or multiple or nonlinear regressions needing one or two input variables. (author)
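
    The MBE, RMSE and t-statistic used for the model inter-comparison can be computed as below; the t-statistic here follows the form commonly used in solar-radiation model validation, which may differ slightly from the paper's exact formulation.

        import numpy as np
        from scipy.stats import t as t_dist

        def validation_stats(estimated, measured):
            e = np.asarray(estimated, float) - np.asarray(measured, float)
            n = e.size
            mbe = e.mean()                         # mean bias error
            rmse = np.sqrt(np.mean(e ** 2))        # root mean square error
            t_stat = np.sqrt((n - 1) * mbe ** 2 / (rmse ** 2 - mbe ** 2))
            p = 2 * t_dist.sf(t_stat, df=n - 1)    # two-sided p-value
            return mbe, rmse, t_stat, p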

  4. CT of the chest with model-based, fully iterative reconstruction: comparison with adaptive statistical iterative reconstruction.

    Science.gov (United States)

    Ichikawa, Yasutaka; Kitagawa, Kakuya; Nagasawa, Naoki; Murashima, Shuichi; Sakuma, Hajime

    2013-08-09

    The recently developed model-based iterative reconstruction (MBIR) enables significant reduction of image noise and artifacts, compared with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP). The purpose of this study was to evaluate lesion detectability of low-dose chest computed tomography (CT) with MBIR in comparison with ASIR and FBP. Chest CT was acquired with 64-slice CT (Discovery CT750HD) with standard-dose (5.7 ± 2.3 mSv) and low-dose (1.6 ± 0.8 mSv) conditions in 55 patients (aged 72 ± 7 years) who were suspected of lung disease on chest radiograms. Low-dose CT images were reconstructed with MBIR, ASIR 50% and FBP, and standard-dose CT images were reconstructed with FBP, using a reconstructed slice thickness of 0.625 mm. Two observers evaluated the image quality of abnormal lung and mediastinal structures on a 5-point scale (Score 5 = excellent and score 1 = non-diagnostic). The objective image noise was also measured as the standard deviation of CT intensity in the descending aorta. The image quality score of enlarged mediastinal lymph nodes on low-dose MBIR CT (4.7 ± 0.5) was significantly improved in comparison with low-dose FBP and ASIR CT (3.0 ± 0.5, p = 0.004; 4.0 ± 0.5, p = 0.02, respectively), and was nearly identical to the score of standard-dose FBP image (4.8 ± 0.4, p = 0.66). Concerning decreased lung attenuation (bulla, emphysema, or cyst), the image quality score on low-dose MBIR CT (4.9 ± 0.2) was slightly better compared to low-dose FBP and ASIR CT (4.5 ± 0.6, p = 0.01; 4.6 ± 0.5, p = 0.01, respectively). There were no significant differences in image quality scores of visualization of consolidation or mass, ground-glass attenuation, or reticular opacity among low- and standard-dose CT series. Image noise with low-dose MBIR CT (11.6 ± 1.0 Hounsfield units (HU)) was significantly lower than with low-dose ASIR (21.1 ± 2.6 HU) and standard-dose FBP CT (16.6 ± 2.3 HU). Even at a dose reduction of about 70%, MBIR can provide

  5. A Comparison of the Performance of Advanced Statistical Techniques for the Refinement of Day-ahead and Longer NWP-based Wind Power Forecasts

    Science.gov (United States)

    Zack, J. W.

    2015-12-01

    Predictions from Numerical Weather Prediction (NWP) models are the foundation for wind power forecasts for day-ahead and longer forecast horizons. The NWP models directly produce three-dimensional wind forecasts on their respective computational grids. These can be interpolated to the location and time of interest. However, these direct predictions typically contain significant systematic errors ("biases"). This is due to a variety of factors including the limited space-time resolution of the NWP models and shortcomings in the model's representation of physical processes. It has become common practice to attempt to improve the raw NWP forecasts by statistically adjusting them through a procedure that is widely known as Model Output Statistics (MOS). The challenge is to identify complex patterns of systematic errors and then use this knowledge to adjust the NWP predictions. The MOS-based improvements are the basis for much of the value added by commercial wind power forecast providers. There are an enormous number of statistical approaches that can be used to generate the MOS adjustments to the raw NWP forecasts. In order to obtain insight into the potential value of some of the newer and more sophisticated statistical techniques often referred to as "machine learning methods", a MOS-method comparison experiment has been performed for wind power generation facilities in 6 wind resource areas of California. The underlying NWP models that provided the raw forecasts were the two primary operational models of the US National Weather Service: the GFS and NAM models. The focus was on 1- and 2-day ahead forecasts of the hourly wind-based generation. The statistical methods evaluated included: (1) screening multiple linear regression, which served as a baseline method, (2) artificial neural networks, (3) a decision-tree approach called random forests, (4) gradient boosted regression based upon a decision-tree algorithm, (5) support vector regression and (6) analog ensemble
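
    The baseline MOS approach mentioned above (screening multiple linear regression) amounts to fitting a linear map from raw NWP predictors to observed generation and applying it to new forecasts. A minimal Python sketch with synthetic data standing in for NWP output and observations:

        import numpy as np

        rng = np.random.default_rng(0)
        nwp = rng.normal(size=(500, 3))                # raw NWP predictors (synthetic)
        obs = 2.0 * nwp[:, 0] + 0.5 * nwp[:, 1] + rng.normal(0, 0.3, 500)

        X = np.column_stack([np.ones(len(nwp)), nwp])  # add intercept column
        coef, *_ = np.linalg.lstsq(X, obs, rcond=None)

        new_nwp = rng.normal(size=(24, 3))             # e.g. next-day hourly forecasts
        corrected = np.column_stack([np.ones(24), new_nwp]) @ coef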

  6. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry.

    Science.gov (United States)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo

    2018-06-05

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  7. Assessing Statistically Significant Heavy-Metal Concentrations in Abandoned Mine Areas via Hot Spot Analysis of Portable XRF Data.

    Science.gov (United States)

    Kim, Sung-Min; Choi, Yosoon

    2017-06-18

    To develop appropriate measures to prevent soil contamination in abandoned mining areas, an understanding of the spatial variation of the potentially toxic trace elements (PTEs) in the soil is necessary. For the purpose of effective soil sampling, this study uses hot spot analysis, which calculates a z-score based on the Getis-Ord Gi* statistic to identify a statistically significant hot spot sample. To constitute a statistically significant hot spot, a feature with a high value should also be surrounded by other features with high values. Using relatively cost- and time-effective portable X-ray fluorescence (PXRF) analysis, sufficient input data are acquired from the Busan abandoned mine and used for hot spot analysis. To calibrate the PXRF data, which have a relatively low accuracy, the PXRF analysis data are transformed using the inductively coupled plasma atomic emission spectrometry (ICP-AES) data. The transformed PXRF data of the Busan abandoned mine are classified into four groups according to their normalized content and z-scores: high content with a high z-score (HH), high content with a low z-score (HL), low content with a high z-score (LH), and low content with a low z-score (LL). The HL and LH cases may be due to measurement errors. Additional or complementary surveys are required for the areas surrounding these suspect samples or for significant hot spot areas. The soil sampling is conducted according to a four-phase procedure in which the hot spot analysis and proposed group classification method are employed to support the development of a sampling plan for the following phase. Overall, 30, 50, 80, and 100 samples are investigated and analyzed in phases 1-4, respectively. The method implemented in this case study may be utilized in the field for the assessment of statistically significant soil contamination and the identification of areas for which an additional survey is required.
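
    The Getis-Ord Gi* z-score for each sample can be written directly from the global mean and standard deviation of the measured concentrations and a spatial-weights matrix that includes each location in its own neighbourhood. A Python sketch of that calculation (the weights matrix here is an assumed input):

        import numpy as np

        def getis_ord_gi_star(values, weights):
            # 'weights' is an n x n spatial-weights matrix including self-neighbourhood.
            x = np.asarray(values, float)
            w = np.asarray(weights, float)
            n = x.size
            mean = x.mean()
            s = np.sqrt((x ** 2).mean() - mean ** 2)       # global standard deviation
            wsum = w.sum(axis=1)
            num = w @ x - mean * wsum
            den = s * np.sqrt((n * (w ** 2).sum(axis=1) - wsum ** 2) / (n - 1))
            return num / den                               # z-scores, one per location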

  8. Assessing Statistically Significant Heavy-Metal Concentrations in Abandoned Mine Areas via Hot Spot Analysis of Portable XRF Data

    Directory of Open Access Journals (Sweden)

    Sung-Min Kim

    2017-06-01

    Full Text Available To develop appropriate measures to prevent soil contamination in abandoned mining areas, an understanding of the spatial variation of the potentially toxic trace elements (PTEs) in the soil is necessary. For the purpose of effective soil sampling, this study uses hot spot analysis, which calculates a z-score based on the Getis-Ord Gi* statistic to identify a statistically significant hot spot sample. To constitute a statistically significant hot spot, a feature with a high value should also be surrounded by other features with high values. Using relatively cost- and time-effective portable X-ray fluorescence (PXRF) analysis, sufficient input data are acquired from the Busan abandoned mine and used for hot spot analysis. To calibrate the PXRF data, which have a relatively low accuracy, the PXRF analysis data are transformed using the inductively coupled plasma atomic emission spectrometry (ICP-AES) data. The transformed PXRF data of the Busan abandoned mine are classified into four groups according to their normalized content and z-scores: high content with a high z-score (HH), high content with a low z-score (HL), low content with a high z-score (LH), and low content with a low z-score (LL). The HL and LH cases may be due to measurement errors. Additional or complementary surveys are required for the areas surrounding these suspect samples or for significant hot spot areas. The soil sampling is conducted according to a four-phase procedure in which the hot spot analysis and proposed group classification method are employed to support the development of a sampling plan for the following phase. Overall, 30, 50, 80, and 100 samples are investigated and analyzed in phases 1–4, respectively. The method implemented in this case study may be utilized in the field for the assessment of statistically significant soil contamination and the identification of areas for which an additional survey is required.

  9. Transport statistics 1996

    CSIR Research Space (South Africa)

    Shepperson, L

    1997-12-01

    Full Text Available This publication contains transport and related statistics on roads, vehicles, infrastructure, passengers, freight, rail, air, maritime and road traffic, and international comparisons. The information compiled in this publication has been gathered...

  10. Significance analysis of lexical bias in microarray data

    Directory of Open Access Journals (Sweden)

    Falkow Stanley

    2003-04-01

    Full Text Available Abstract Background Genes that are determined to be significantly differentially regulated in microarray analyses often appear to have functional commonalities, such as being components of the same biochemical pathway. This results in certain words being under- or overrepresented in the list of genes. Distinguishing between biologically meaningful trends and artifacts of annotation and analysis procedures is of the utmost importance, as only true biological trends are of interest for further experimentation. A number of sophisticated methods for identification of significant lexical trends are currently available, but these methods are generally too cumbersome for practical use by most microarray users. Results We have developed a tool, LACK, for calculating the statistical significance of apparent lexical bias in microarray datasets. The frequency of a user-specified list of search terms in a list of genes which are differentially regulated is assessed for statistical significance by comparison to randomly generated datasets. The simplicity of the input files and user interface targets the average microarray user who wishes to have a statistical measure of apparent lexical trends in analyzed datasets without the need for bioinformatics skills. The software is available as Perl source or a Windows executable. Conclusion We have used LACK in our laboratory to generate biological hypotheses based on our microarray data. We demonstrate the program's utility using an example in which we confirm significant upregulation of SPI-2 pathogenicity island of Salmonella enterica serovar Typhimurium by the cation chelator dipyridyl.
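
    The core idea of the LACK approach, checking whether a search term occurs in a differentially regulated gene list more often than in randomly generated gene lists, can be sketched as a simple permutation test. The gene names, annotations and term below are invented for illustration and do not reproduce the LACK software itself.

```python
import random

def lexical_bias_p_value(all_genes, annotations, regulated, term,
                         n_permutations=10000, seed=1):
    """One-sided permutation p-value for over-representation of `term`
    in the annotations of the regulated gene list."""
    rng = random.Random(seed)
    observed = sum(term in annotations.get(g, "") for g in regulated)
    k = len(regulated)
    exceed = 0
    for _ in range(n_permutations):
        sample = rng.sample(all_genes, k)   # random gene list of the same size
        count = sum(term in annotations.get(g, "") for g in sample)
        if count >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_permutations + 1)

# Hypothetical annotations (gene -> free-text description).
annotations = {f"gene{i}": ("secretion system effector" if i < 40 else "metabolic enzyme")
               for i in range(400)}
all_genes = list(annotations)
regulated = [f"gene{i}" for i in range(25)]          # list enriched for effectors
obs, p = lexical_bias_p_value(all_genes, annotations, regulated, "effector")
print(f"observed term count = {obs}, permutation p = {p:.4f}")
```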

  11. [Risk stratification of patients with diabetes mellitus undergoing coronary artery bypass grafting--a comparison of statistical methods].

    Science.gov (United States)

    Arnrich, B; Albert, A; Walter, J

    2006-01-01

    Among the coronary bypass patients from our Datamart database, we found a prevalence of 29.6% of diagnosed diabetics. 5.2% of the patients without a diagnosis of diabetes mellitus and a fasting plasma glucose level > 125 mg/dl were defined as undiagnosed diabetics. The objective of this paper was to compare univariate methods and techniques for risk stratification to determine whether undiagnosed diabetes is per se a risk factor for increased ventilation time and length of ICU stay, and for increased prevalence of resuscitation, reintubation and 30-d mortality for diabetics in heart surgery. Univariate comparisons reveal that undiagnosed diabetics needed resuscitation significantly more often and had an increased ventilation time, while the length of ICU stay was significantly reduced. The significantly different distribution between the diabetic groups of 11 of the 32 attributes examined demands the use of methods for risk stratification. Both risk-adjusted methods, regression and matching, confirm that undiagnosed diabetics had an increased ventilation time and an increased prevalence of resuscitation, while the length of ICU stay was not significantly reduced. A homogeneous distribution of the patient characteristics in the two diabetic groups could be achieved through a statistical matching method using the propensity score. In contrast to the regression analysis, a significantly increased prevalence of reintubation in undiagnosed diabetics was found. Based on an example of undiagnosed diabetics in heart surgery, the presented study reveals the necessity and the possibilities of techniques for risk stratification in retrospective analysis and shows how the potential of data collection from daily clinical practice can be used in an effective way.
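
    The propensity score matching used above for risk adjustment can be sketched as follows: a logistic model estimates each patient's probability of belonging to the undiagnosed group from baseline covariates, and patients are then matched 1:1 on that score before comparing outcomes. The cohort, covariates and outcome below are simulated and purely illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Hypothetical cohort: 1 = undiagnosed diabetic, 0 = diagnosed diabetic,
# with two baseline covariates (age, ejection fraction) that differ between groups.
n = 600
group = rng.integers(0, 2, n)
age = rng.normal(66 + 3 * group, 8)
ef = rng.normal(52 - 2 * group, 9)
ventilation_h = rng.normal(10 + 2 * group + 0.05 * age, 4)   # outcome of interest

# 1) Propensity score: P(undiagnosed | covariates) via logistic regression.
X = sm.add_constant(np.column_stack([age, ef]))
ps = sm.Logit(group, X).fit(disp=False).predict(X)

# 2) Greedy 1:1 nearest-neighbour matching on the propensity score.
treated = np.where(group == 1)[0]
controls = list(np.where(group == 0)[0])
pairs = []
for t in treated:
    if not controls:
        break
    j = int(np.argmin(np.abs(ps[controls] - ps[t])))
    pairs.append((t, controls.pop(j)))

# 3) Outcome comparison within the matched sample.
diff = np.array([ventilation_h[t] - ventilation_h[c] for t, c in pairs])
print(f"matched pairs: {len(pairs)}, mean difference in ventilation time: {diff.mean():.2f} h")
```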

  12. Critical analysis of adsorption data statistically

    Science.gov (United States)

    Kaushal, Achla; Singh, S. K.

    2017-10-01

    Experimental data can be presented, computed, and critically analysed in a different way using statistics. A variety of statistical tests are used to make decisions about the significance and validity of the experimental data. In the present study, adsorption was carried out to remove zinc ions from contaminated aqueous solution using mango leaf powder. The experimental data were analysed statistically by hypothesis testing, applying the t test, paired t test and Chi-square test to (a) test the optimum value of the process pH, (b) verify the success of the experiment and (c) study the effect of adsorbent dose on zinc ion removal from aqueous solutions. Comparison of calculated and tabulated values of t and χ2 showed the results in favour of the data collected from the experiment, and this has been shown on probability charts. The K value obtained for the Langmuir isotherm was 0.8582 and the m value for the Freundlich adsorption isotherm was 0.725, both for mango leaf powder.
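
    The hypothesis tests named in this abstract are standard and available in SciPy; a minimal sketch with made-up adsorption figures shows the one-sample t test, the paired t test and the chi-square test being applied in the same spirit.

```python
import numpy as np
from scipy import stats

# Hypothetical % removal of Zn(II) at a candidate optimum pH, five replicates.
removal = np.array([88.2, 90.1, 87.5, 89.4, 91.0])
t1, p1 = stats.ttest_1samp(removal, popmean=85.0)      # is removal above a target of 85%?

# Paired t test: the same batches measured before/after doubling the adsorbent dose.
before = np.array([72.0, 75.3, 70.8, 74.1, 73.5])
after  = np.array([81.5, 84.0, 79.9, 83.2, 82.7])
t2, p2 = stats.ttest_rel(after, before)

# Chi-square goodness of fit: observed vs. expected counts in four dose classes.
observed = np.array([18, 22, 27, 33])
expected = np.array([20, 20, 30, 30])
chi2, p3 = stats.chisquare(observed, f_exp=expected)

print(f"one-sample t: t={t1:.2f}, p={p1:.3f}")
print(f"paired t:     t={t2:.2f}, p={p2:.3f}")
print(f"chi-square:   chi2={chi2:.2f}, p={p3:.3f}")
```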

  13. Development of free statistical software enabling researchers to calculate confidence levels, clinical significance curves and risk-benefit contours

    International Nuclear Information System (INIS)

    Shakespeare, T.P.; Mukherjee, R.K.; Gebski, V.J.

    2003-01-01

    Confidence levels, clinical significance curves, and risk-benefit contours are tools improving analysis of clinical studies and minimizing misinterpretation of published results; however, no software has been available for their calculation. The objective was to develop software to help clinicians utilize these tools. Excel 2000 spreadsheets were designed using only built-in functions, without macros. The workbook was protected and encrypted so that users can modify only input cells. The workbook has 4 spreadsheets for use in studies comparing two patient groups. Sheet 1 comprises instructions and graphic examples for use. Sheet 2 allows the user to input the main study results (e.g. survival rates) into a 2-by-2 table. Confidence intervals (95%), p-value and the confidence level for Treatment A being better than Treatment B are automatically generated. An additional input cell allows the user to determine the confidence associated with a specified level of benefit. For example, if the user wishes to know the confidence that Treatment A is at least 10% better than B, 10% is entered. Sheet 2 automatically displays clinical significance curves, graphically illustrating confidence levels for all possible benefits of one treatment over the other. Sheet 3 allows input of toxicity data, and calculates the confidence that one treatment is more toxic than the other. It also determines the confidence that the relative toxicity of the most effective arm does not exceed user-defined tolerability. Sheet 4 automatically calculates risk-benefit contours, displaying the confidence associated with a specified scenario of minimum benefit and maximum risk of one treatment arm over the other. The spreadsheet is freely downloadable at www.ontumor.com/professional/statistics.htm. A simple, self-explanatory, freely available spreadsheet calculator was developed using Excel 2000. The incorporated decision-making tools can be used for data analysis and improve the reporting of results of any
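
    The 'confidence level for a specified benefit' generated by Sheet 2 can be approximated outside a spreadsheet with a normal approximation to the difference of two proportions. The sketch below is an independent illustration of that calculation with hypothetical survival counts, not a port of the authors' workbook.

```python
from math import sqrt
from scipy.stats import norm

def confidence_of_benefit(events_a, n_a, events_b, n_b, min_benefit=0.0):
    """P(true rate A - true rate B > min_benefit), normal approximation."""
    pa, pb = events_a / n_a, events_b / n_b
    se = sqrt(pa * (1 - pa) / n_a + pb * (1 - pb) / n_b)
    z = (pa - pb - min_benefit) / se
    return norm.cdf(z)

# Hypothetical trial: 70/100 survive with Treatment A, 55/100 with Treatment B.
# Evaluating several minimum benefits traces out a clinical significance curve.
for benefit in (0.0, 0.05, 0.10):
    c = confidence_of_benefit(70, 100, 55, 100, benefit)
    print(f"confidence that A is at least {benefit:.0%} better than B: {c:.1%}")
```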

  14. Statistically significant faunal differences among Middle Ordovician age, Chickamauga Group bryozoan bioherms, central Alabama

    Energy Technology Data Exchange (ETDEWEB)

    Crow, C.J.

    1985-01-01

    Middle Ordovician age Chickamauga Group carbonates crop out along the Birmingham and Murphrees Valley anticlines in central Alabama. The macrofossil contents on exposed surfaces of seven bioherms have been counted to determine their various paleontologic characteristics. Twelve groups of organisms are present in these bioherms. Dominant organisms include bryozoans, algae, brachiopods, sponges, pelmatozoans, stromatoporoids and corals. Minor accessory fauna include predators, scavengers and grazers such as gastropods, ostracods, trilobites, cephalopods and pelecypods. Vertical and horizontal niche zonation has been detected for some of the bioherm dwelling fauna. No one bioherm of those studied exhibits all 12 groups of organisms; rather, individual bioherms display various subsets of the total diversity. Statistical treatment (G-test) of the diversity data indicates a lack of statistical homogeneity of the bioherms, both within and between localities. Between-locality population heterogeneity can be ascribed to differences in biologic responses to such gross environmental factors as water depth and clarity, and energy levels. At any one locality, gross aspects of the paleoenvironments are assumed to have been more uniform. Significant differences among bioherms at any one locality may have resulted from patchy distribution of species populations, differential preservation and other factors.
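
    The G-test of homogeneity applied to such count data is a likelihood-ratio chi-square; in SciPy it is available through chi2_contingency with the log-likelihood option. The bioherm counts below are invented solely to show the call.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of four faunal groups at three bioherms.
counts = np.array([
    [120, 45, 30, 12],   # bioherm 1: bryozoans, algae, brachiopods, pelmatozoans
    [ 95, 60, 18, 25],   # bioherm 2
    [140, 30, 40,  8],   # bioherm 3
])

g, p, dof, expected = chi2_contingency(counts, lambda_="log-likelihood")
print(f"G = {g:.2f}, dof = {dof}, p = {p:.4f}")  # small p => heterogeneous bioherms
```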

  15. Detecting Statistically Significant Communities of Triangle Motifs in Undirected Networks

    Science.gov (United States)

    2016-04-26

    [Report front matter only: Systems, Statistics & Management Science, University of Alabama, USA; Distribution A, approved for public release. The table of contents indicates a summary, the method, and an application to real networks, including the 2012 FBS college football schedule network.]

  16. Data-driven inference for the spatial scan statistic

    Directory of Open Access Journals (Sweden)

    Duczmal Luiz H

    2011-08-01

    Full Text Available Abstract Background Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. Results A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is done, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, regarding the correctness of the decision based in this inference. Conclusions A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.

  17. Statistical significant changes in ground thermal conditions of alpine Austria during the last decade

    Science.gov (United States)

    Kellerer-Pirklbauer, Andreas

    2016-04-01

    Longer data series (e.g. >10 a) of ground temperatures in alpine regions are helpful to improve the understanding regarding the effects of present climate change on distribution and thermal characteristics of seasonal frost- and permafrost-affected areas. Beginning in 2004 - and more intensively since 2006 - a permafrost and seasonal frost monitoring network was established in Central and Eastern Austria by the University of Graz. This network consists of c.60 ground temperature (surface and near-surface) monitoring sites which are located at 1922-3002 m a.s.l., at latitude 46°55'-47°22'N and at longitude 12°44'-14°41'E. These data allow conclusions about general ground thermal conditions, potential permafrost occurrence, trend during the observation period, and regional pattern of changes. Calculations and analyses of several different temperature-related parameters were accomplished. At an annual scale a region-wide statistical significant warming during the observation period was revealed by e.g. an increase in mean annual temperature values (mean, maximum) or the significant lowering of the surface frost number (F+). At a seasonal scale no significant trend of any temperature-related parameter was in most cases revealed for spring (MAM) and autumn (SON). Winter (DJF) shows only a weak warming. In contrast, the summer (JJA) season reveals in general a significant warming as confirmed by several different temperature-related parameters such as e.g. mean seasonal temperature, number of thawing degree days, number of freezing degree days, or days without night frost. On a monthly basis August shows the statistically most robust and strongest warming of all months, although regional differences occur. Despite the fact that the general ground temperature warming during the last decade is confirmed by the field data in the study region, complications in trend analyses arise by temperature anomalies (e.g. warm winter 2006/07) or substantial variations in the winter

  18. Statistical analysis applied to safety culture self-assessment

    International Nuclear Information System (INIS)

    Macedo Soares, P.P.

    2002-01-01

    Interviews and opinion surveys are instruments used to assess the safety culture in an organization as part of the Safety Culture Enhancement Programme. Specific statistical tools are used to analyse the survey results. This paper presents an example of an opinion survey with the corresponding application of the statistical analysis and the conclusions obtained. Survey validation, frequency statistics, the Kolmogorov-Smirnov non-parametric test, Student's t-test and ANOVA means-comparison tests, and the LSD post-hoc multiple comparison test are discussed. (author)
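
    The tests listed above are all routine; a compact SciPy sketch with invented survey scores for three departments shows a Kolmogorov-Smirnov normality check, a one-way ANOVA and LSD-style unadjusted pairwise t-tests as the post-hoc step.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical safety-culture scores (mean Likert ratings) for three departments.
groups = {
    "operations":  rng.normal(3.8, 0.5, 40),
    "maintenance": rng.normal(3.5, 0.5, 35),
    "engineering": rng.normal(4.0, 0.5, 30),
}

# Normality check for one group (Kolmogorov-Smirnov against a standard normal).
ops = groups["operations"]
ks, p_ks = stats.kstest((ops - ops.mean()) / ops.std(ddof=1), "norm")
print(f"KS normality (operations): D={ks:.3f}, p={p_ks:.3f}")

# One-way ANOVA across the three departments.
f, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f:.2f}, p={p_anova:.4f}")

# LSD-style follow-up: unadjusted pairwise t-tests only if the ANOVA is significant.
if p_anova < 0.05:
    for (na, a), (nb, b) in combinations(groups.items(), 2):
        t, p = stats.ttest_ind(a, b)
        print(f"  {na} vs {nb}: t={t:.2f}, p={p:.4f}")
```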

  19. Statistical comparison of models for estimating the monthly average daily diffuse radiation at a subtropical African site

    Energy Technology Data Exchange (ETDEWEB)

    Bashahu, M. [University of Burundi, Bujumbura (Burundi). Institute of Applied Pedagogy, Department of Physics and Technology

    2003-07-01

    Nine correlations have been developed in this paper to estimate the monthly average diffuse radiation for Dakar, Senegal. A 16-year period of data on the global (H) and diffuse (Hd) radiation, together with data on the bright sunshine hours (N), the fraction of the sky covered by clouds (Ne/8), the water vapour pressure in the air (e) and the ambient temperature (T), have been used for that purpose. A model inter-comparison based on the MBE, RMSE and t statistical tests has shown that estimates from any of the obtained correlations are not significantly different from their measured counterparts, thus all nine models are recommended for the aforesaid location. Three of them should be particularly selected for their simplicity, universal applicability and high accuracy. Those are simple linear correlations between Kd and N/Nd, Ne/8 or Kt. Even though they present adequate performance, the remaining correlations are either simple but less accurate, or multiple or nonlinear regressions needing one or two input variables. (author)
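
    The MBE/RMSE/t inter-comparison mentioned here is a common scheme for radiation models; one widely used formulation combines the two error measures into t = sqrt((n-1)*MBE^2/(RMSE^2-MBE^2)). The sketch below assumes that formulation and uses made-up measured and estimated monthly values.

```python
import numpy as np

def model_error_stats(measured, estimated):
    """MBE, RMSE and the t-statistic often used to compare radiation
    models with measurements (small t suggests no significant bias)."""
    measured = np.asarray(measured, float)
    estimated = np.asarray(estimated, float)
    n = measured.size
    diff = estimated - measured
    mbe = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    t = np.sqrt((n - 1) * mbe ** 2 / (rmse ** 2 - mbe ** 2))
    return mbe, rmse, t

# Hypothetical monthly average daily diffuse radiation (MJ/m^2), 12 months.
measured  = [6.1, 6.8, 7.4, 7.9, 8.2, 8.0, 7.7, 7.5, 7.2, 6.9, 6.3, 6.0]
estimated = [6.3, 6.6, 7.6, 7.8, 8.4, 7.9, 7.9, 7.3, 7.1, 7.0, 6.2, 6.2]
mbe, rmse, t = model_error_stats(measured, estimated)
print(f"MBE={mbe:+.3f}, RMSE={rmse:.3f}, t={t:.2f}  (compare with t_crit at n-1 dof)")
```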

  20. Conversion factors and oil statistics

    International Nuclear Information System (INIS)

    Karbuz, Sohbet

    2004-01-01

    World oil statistics, in scope and accuracy, are often far from perfect. They can easily lead to misguided conclusions regarding the state of market fundamentals. Without proper attention directed at statistic caveats, the ensuing interpretation of oil market data opens the door to unnecessary volatility, and can distort perception of market fundamentals. Among the numerous caveats associated with the compilation of oil statistics, conversion factors, used to produce aggregated data, play a significant role. Interestingly enough, little attention is paid to conversion factors, i.e. to the relation between different units of measurement for oil. Additionally, the underlying information regarding the choice of a specific factor when trying to produce measurements of aggregated data remains scant. The aim of this paper is to shed some light on the impact of conversion factors for two commonly encountered issues, mass to volume equivalencies (barrels to tonnes) and for broad energy measures encountered in world oil statistics. This paper will seek to demonstrate how inappropriate and misused conversion factors can yield wildly varying results and ultimately distort oil statistics. Examples will show that while discrepancies in commonly used conversion factors may seem trivial, their impact on the assessment of a world oil balance is far from negligible. A unified and harmonised convention for conversion factors is necessary to achieve accurate comparisons and aggregate oil statistics for the benefit of both end-users and policy makers

  1. A method for statistically comparing spatial distribution maps

    Directory of Open Access Journals (Sweden)

    Reynolds Mary G

    2009-01-01

    Full Text Available Abstract Background Ecological niche modeling is a method for estimation of species distributions based on certain ecological parameters. Thus far, empirical determination of significant differences between independently generated distribution maps for a single species (maps which are created through equivalent processes, but with different ecological input parameters), has been challenging. Results We describe a method for comparing model outcomes, which allows a statistical evaluation of whether the strength of prediction and breadth of predicted areas is measurably different between projected distributions. To create ecological niche models for statistical comparison, we utilized GARP (Genetic Algorithm for Rule-Set Production) software to generate ecological niche models of human monkeypox in Africa. We created several models, keeping constant the case location input records for each model but varying the ecological input data. In order to assess the relative importance of each ecological parameter included in the development of the individual predicted distributions, we performed pixel-to-pixel comparisons between model outcomes and calculated the mean difference in pixel scores. We used a two-sample Student's t-test (assuming as null hypothesis that both maps were identical to each other regardless of which input parameters were used) to examine whether the mean difference in corresponding pixel scores from one map to another was greater than would be expected by chance alone. We also utilized weighted kappa statistics, frequency distributions, and percent difference to look at the disparities in pixel scores. Multiple independent statistical tests indicated precipitation as the single most important independent ecological parameter in the niche model for human monkeypox disease. Conclusion In addition to improving our understanding of the natural factors influencing the distribution of human monkeypox disease, such pixel-to-pixel comparison
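
    A pixel-to-pixel comparison of two model output maps (mean difference in pixel scores, a two-sample t-test, a weighted kappa on the score categories, and the percent of differing pixels) can be sketched as below. The two arrays stand in for GARP output grids and are randomly generated.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
# Hypothetical prediction maps (pixel score 0-10 = number of agreeing GARP models).
map_a = rng.integers(0, 11, size=(120, 150))
map_b = np.clip(map_a + rng.integers(-2, 3, size=map_a.shape), 0, 10)

a, b = map_a.ravel(), map_b.ravel()

mean_diff = float(np.mean(a - b))                     # mean difference in pixel scores
t, p = stats.ttest_ind(a, b)                          # two-sample t-test on pixel scores
kappa = cohen_kappa_score(a, b, weights="quadratic")  # weighted agreement on categories
pct_diff = 100.0 * np.mean(a != b)                    # percent of pixels that differ

print(f"mean diff={mean_diff:+.3f}, t={t:.2f} (p={p:.3g}), "
      f"weighted kappa={kappa:.3f}, {pct_diff:.1f}% pixels differ")
```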

  2. Causality Statistical Perspectives and Applications

    CERN Document Server

    Berzuini, Carlo; Bernardinell, Luisa

    2012-01-01

    A state of the art volume on statistical causality Causality: Statistical Perspectives and Applications presents a wide-ranging collection of seminal contributions by renowned experts in the field, providing a thorough treatment of all aspects of statistical causality. It covers the various formalisms in current use, methods for applying them to specific problems, and the special requirements of a range of examples from medicine, biology and economics to political science. This book:Provides a clear account and comparison of formal languages, concepts and models for statistical causality. Addr

  3. 40 CFR Appendix IV to Part 265 - Tests for Significance

    Science.gov (United States)

    2010-07-01

    ... introductory statistics texts. ... student's t-test involves calculation of the value of a t-statistic for each comparison of the mean... parameter with its initial background concentration or value. The calculated value of the t-statistic must...

  4. Are Statistics Labs Worth the Effort?--Comparison of Introductory Statistics Courses Using Different Teaching Methods

    Directory of Open Access Journals (Sweden)

    Jose H. Guardiola

    2010-01-01

    Full Text Available This paper compares the academic performance of students in three similar elementary statistics courses taught by the same instructor, but with the lab component differing among the three. One course is traditionally taught without a lab component; the second with a lab component using scenarios and an extensive use of technology, but without explicit coordination between lab and lecture; and the third using a lab component with an extensive use of technology that carefully coordinates the lab with the lecture. Extensive use of technology means, in this context, using Minitab software in the lab section, doing homework and quizzes using MyMathlab ©, and emphasizing interpretation of computer output during lectures. Initially, an online instrument based on Gardner’s multiple intelligences theory, is given to students to try to identify students’ learning styles and intelligence types as covariates. An analysis of covariance is performed in order to compare differences in achievement. In this study there is no attempt to measure difference in student performance across the different treatments. The purpose of this study is to find indications of associations among variables that support the claim that statistics labs could be associated with superior academic achievement in one of these three instructional environments. Also, this study tries to identify individual student characteristics that could be associated with superior academic performance. This study did not find evidence of any individual student characteristics that could be associated with superior achievement. The response variable was computed as percentage of correct answers for the three exams during the semester added together. The results of this study indicate a significant difference across these three different instructional methods, showing significantly higher mean scores for the response variable on students taking the lab component that was carefully coordinated with

  5. The distribution of P-values in medical research articles suggested selective reporting associated with statistical significance.

    Science.gov (United States)

    Perneger, Thomas V; Combescure, Christophe

    2017-07-01

    Published P-values provide a window into the global enterprise of medical research. The aim of this study was to use the distribution of published P-values to estimate the relative frequencies of null and alternative hypotheses and to seek irregularities suggestive of publication bias. This cross-sectional study included P-values published in 120 medical research articles in 2016 (30 each from the BMJ, JAMA, Lancet, and New England Journal of Medicine). The observed distribution of P-values was compared with expected distributions under the null hypothesis (i.e., uniform between 0 and 1) and the alternative hypothesis (strictly decreasing from 0 to 1). P-values were categorized according to conventional levels of statistical significance and in one-percent intervals. Among 4,158 recorded P-values, 26.1% were highly significant (P values values equal to 1, and (3) about twice as many P-values less than 0.05 compared with those more than 0.05. The latter finding was seen in both randomized trials and observational studies, and in most types of analyses, excepting heterogeneity tests and interaction tests. Under plausible assumptions, we estimate that about half of the tested hypotheses were null and the other half were alternative. This analysis suggests that statistical tests published in medical journals are not a random sample of null and alternative hypotheses but that selective reporting is prevalent. In particular, significant results are about twice as likely to be reported as nonsignificant results. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Personality assessment and model comparison with behavioral data: A statistical framework and empirical demonstration with bonobos (Pan paniscus).

    Science.gov (United States)

    Martin, Jordan S; Suarez, Scott A

    2017-08-01

    Interest in quantifying consistent among-individual variation in primate behavior, also known as personality, has grown rapidly in recent decades. Although behavioral coding is the most frequently utilized method for assessing primate personality, limitations in current statistical practice prevent researchers from utilizing the full potential of their coding datasets. These limitations include the use of extensive data aggregation, not modeling biologically relevant sources of individual variance during repeatability estimation, not partitioning between-individual (co)variance prior to modeling personality structure, the misuse of principal component analysis, and an over-reliance upon exploratory statistical techniques to compare personality models across populations, species, and data collection methods. In this paper, we propose a statistical framework for primate personality research designed to address these limitations. Our framework synthesizes recently developed mixed-effects modeling approaches for quantifying behavioral variation with an information-theoretic model selection paradigm for confirmatory personality research. After detailing a multi-step analytic procedure for personality assessment and model comparison, we employ this framework to evaluate seven models of personality structure in zoo-housed bonobos (Pan paniscus). We find that differences between sexes, ages, zoos, time of observation, and social group composition contributed to significant behavioral variance. Independently of these factors, however, personality nonetheless accounted for a moderate to high proportion of variance in average behavior across observational periods. A personality structure derived from past rating research receives the strongest support relative to our model set. This model suggests that personality variation across the measured behavioral traits is best described by two correlated but distinct dimensions reflecting individual differences in affiliation and

  7. Efficacy of a Word- and Text-Based Intervention for Students With Significant Reading Difficulties.

    Science.gov (United States)

    Vaughn, Sharon; Roberts, Garrett J; Miciak, Jeremy; Taylor, Pat; Fletcher, Jack M

    2018-05-01

    We examine the efficacy of an intervention to improve word reading and reading comprehension in fourth- and fifth-grade students with significant reading problems. Using a randomized control trial design, we compare the fourth- and fifth-grade reading outcomes of students with severe reading difficulties who were provided a researcher-developed treatment with reading outcomes of students in a business-as-usual (BAU) comparison condition. A total of 280 fourth- and fifth-grade students were randomly assigned within school in a 1:1 ratio to either the BAU comparison condition (n = 139) or the treatment condition (n = 141). Treatment students were provided small-group tutoring for 30 to 45 minutes for an average of 68 lessons (mean hours of instruction = 44.4, SD = 11.2). Treatment students performed statistically significantly higher than BAU students on a word reading measure (effect size [ES] = 0.58) and a measure of reading fluency (ES = 0.46). Though not statistically significant, effect sizes for students in the treatment condition were consistently higher than BAU students for decoding measures (ES = 0.06, 0.08), and mixed for comprehension (ES = -0.02, 0.14).

  8. Proper interpretation of chronic toxicity studies and their statistics: A critique of "Which level of evidence does the US National Toxicology Program provide? Statistical considerations using the Technical Report 578 on Ginkgo biloba as an example".

    Science.gov (United States)

    Kissling, Grace E; Haseman, Joseph K; Zeiger, Errol

    2015-09-02

    A recent article by Gaus (2014) demonstrates a serious misunderstanding of the NTP's statistical analysis and interpretation of rodent carcinogenicity data as reported in Technical Report 578 (Ginkgo biloba) (NTP, 2013), as well as a failure to acknowledge the abundant literature on false positive rates in rodent carcinogenicity studies. The NTP reported Ginkgo biloba extract to be carcinogenic in mice and rats. Gaus claims that, in this study, 4800 statistical comparisons were possible, and that 209 of them were statistically significant (p<0.05) compared with 240 (4800×0.05) expected by chance alone; thus, the carcinogenicity of Ginkgo biloba extract cannot be definitively established. However, his assumptions and calculations are flawed since he incorrectly assumes that the NTP uses no correction for multiple comparisons, and that significance tests for discrete data operate at exactly the nominal level. He also misrepresents the NTP's decision making process, overstates the number of statistical comparisons made, and ignores the fact that the mouse liver tumor effects were so striking (e.g., p<0.0000000000001) that it is virtually impossible that they could be false positive outcomes. Gaus' conclusion that such obvious responses merely "generate a hypothesis" rather than demonstrate a real carcinogenic effect has no scientific credibility. Moreover, his claims regarding the high frequency of false positive outcomes in carcinogenicity studies are misleading because of his methodological misconceptions and errors. Published by Elsevier Ireland Ltd.

  9. Statistics for nuclear engineers and scientists. Part 1. Basic statistical inference

    Energy Technology Data Exchange (ETDEWEB)

    Beggs, W.J.

    1981-02-01

    This report is intended for the use of engineers and scientists working in the nuclear industry, especially at the Bettis Atomic Power Laboratory. It serves as the basis for several Bettis in-house statistics courses. The objectives of the report are to introduce the reader to the language and concepts of statistics and to provide a basic set of techniques to apply to problems of the collection and analysis of data. Part 1 covers subjects of basic inference. The subjects include: descriptive statistics; probability; simple inference for normally distributed populations, and for non-normal populations as well; comparison of two populations; the analysis of variance; quality control procedures; and linear regression analysis.

  10. Representative volume size: A comparison of statistical continuum mechanics and statistical physics

    Energy Technology Data Exchange (ETDEWEB)

    AIDUN,JOHN B.; TRUCANO,TIMOTHY G.; LO,CHI S.; FYE,RICHARD M.

    1999-05-01

    In this combination background and position paper, the authors argue that careful work is needed to develop accurate methods for relating the results of fine-scale numerical simulations of material processes to meaningful values of macroscopic properties for use in constitutive models suitable for finite element solid mechanics simulations. To provide a definite context for this discussion, the problem is couched in terms of the lack of general objective criteria for identifying the size of the representative volume (RV) of a material. The objective of this report is to lay out at least the beginnings of an approach for applying results and methods from statistical physics to develop concepts and tools necessary for determining the RV size, as well as alternatives to RV volume-averaging for situations in which the RV is unmanageably large. The background necessary to understand the pertinent issues and statistical physics concepts is presented.

  11. Statistical multistep direct and statistical multistep compound models for calculations of nuclear data for applications

    International Nuclear Information System (INIS)

    Seeliger, D.

    1993-01-01

    This contribution contains a brief presentation and comparison of the different Statistical Multistep Approaches, presently available for practical nuclear data calculations. (author). 46 refs, 5 figs

  12. A simple statistical method for catch comparison studies

    DEFF Research Database (Denmark)

    Holst, René; Revill, Andrew

    2009-01-01

    For analysing catch comparison data, we propose a simple method based on Generalised Linear Mixed Models (GLMM) and use polynomial approximations to fit the proportions caught in the test codend. The method provides comparisons of fish catch at length by the two gears through a continuous curve...... with a realistic confidence band. We demonstrate the versatility of this method, on field data obtained from the first known testing in European waters of the Rhode Island (USA) 'Eliminator' trawl. These data are interesting as they include a range of species with different selective patterns. Crown Copyright (C...

  13. Comparisons of significant parameters for a standard 20% enriched and FLIP 70% enriched TRIGA core

    International Nuclear Information System (INIS)

    Ringle, John C.; Anderson, Terrance V.; Johnson, Arthur G.

    1978-01-01

    A comparison is made between the 20% and 70% enriched cores. The initial start-up data for both cores show the FLIP needs ∼3.8 times the 235U mass as the 20% core just to go critical. Operational configurations for both cores indicate a need for ∼33% additional fuel above initial critical for adequate maneuvering excess. The fuel element worths are higher in the central core locations for the 20% elements while the peripheral element worths are about the same (with some thermal flux peaking in the FLIP peripheral elements). Pulsing comparisons of the two cores show significant differences in reactivity insertions and power peaks. (author)

  14. Characterizing the Joint Effect of Diverse Test-Statistic Correlation Structures and Effect Size on False Discovery Rates in a Multiple-Comparison Study of Many Outcome Measures

    Science.gov (United States)

    Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James

    2011-01-01

    In their 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report the results of a simulation assessing the robustness of their adaptive step-down procedure (GBS) for controlling the false discovery rate (FDR) when normally distributed test statistics are serially correlated. In this study we extend the investigation to the case of multiple comparisons involving correlated non-central t-statistics, in particular when several treatments or time periods are being compared to a control in a repeated-measures design with many dependent outcome measures. In addition, we consider several dependence structures other than serial correlation and illustrate how the FDR depends on the interaction between effect size and the type of correlation structure as indexed by Foerstner's distance metric from an identity. The relationship between the correlation matrix R of the original dependent variables and the correlation matrix of the associated t-statistics is also studied. In general, the latter depends not only on R, but also on sample size and the signed effect sizes for the multiple comparisons.
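
    The FDR control discussed above can be illustrated with the familiar Benjamini-Hochberg step-up procedure applied to equicorrelated test statistics. The simulation below is a simplified stand-in (a shared latent factor generates the correlation and ten true effects are planted), not the adaptive GBS procedure studied in the paper.

```python
import numpy as np
from scipy import stats

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean rejection mask for the BH step-up procedure at level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(11)
m, rho, n = 100, 0.5, 20
effect = np.zeros(m)
effect[:10] = 1.0                                # 10 true (standardized) effects

# Equicorrelated z-statistics via a shared latent factor, shifted by the effects.
shared = rng.standard_normal()
z = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal(m) + effect * np.sqrt(n)
pvals = 2 * stats.norm.sf(np.abs(z))

rejected = benjamini_hochberg(pvals, q=0.05)
print(f"rejections: {rejected.sum()}, false discoveries: {rejected[10:].sum()}")
```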

  15. Statistical learning techniques applied to epidemiology: a simulated case-control comparison study with logistic regression

    Directory of Open Access Journals (Sweden)

    Land Walker H

    2011-01-01

    Full Text Available Abstract Background When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR modeling representing a parametric approach. The SL technique was comprised of a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results The SL approach is capable of generating odds ratios for main effects and risk factor interactions that better capture nonlinear relationships between exposure variables and outcome in comparison with LR. Conclusions The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.

  16. Can We Use Polya’s Method to Improve Students’ Performance in the Statistics Classes?

    Directory of Open Access Journals (Sweden)

    Indika Wickramasinghe

    2015-01-01

    Full Text Available In this study, Polya’s problem-solving method is introduced in a statistics class in an effort to enhance students’ performance. Teaching the method was applied to one of the two introductory-level statistics classes taught by the same instructor, and a comparison was made between the performances in the two classes. The results indicate there was a significant improvement of the students’ performance in the class in which Polya’s method was introduced.

  17. Development of a simplified statistical methodology for nuclear fuel rod internal pressure calculation

    International Nuclear Information System (INIS)

    Kim, Kyu Tae; Kim, Oh Hwan

    1999-01-01

    A simplified statistical methodology is developed in order to both reduce the over-conservatism of deterministic methodologies employed for PWR fuel rod internal pressure (RIP) calculation and simplify the complicated calculation procedure of the widely used statistical methodology, which employs the response surface method and Monte Carlo simulation. The simplified statistical methodology employs the system moment method with a deterministic approach in determining the maximum variance of RIP. The maximum RIP variance is determined as the square sum, over all input variables considered, of each variable's maximum deviation times its RIP sensitivity factor. This approach makes the simplified statistical methodology much more efficient in the routine reload core design analysis, since it eliminates the numerous calculations required for the power history-dependent RIP variance determination. This simplified statistical methodology is shown to be more conservative in generating the RIP distribution than the widely used statistical methodology. Comparison of the significances of each input variable to RIP indicates that the fission gas release model is the most significant input variable. (author). 11 refs., 6 figs., 2 tabs
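
    The system moment step, propagating input uncertainties through sensitivity factors into a bounding RIP variance, is a first-order square-sum combination. The sensitivity factors, deviations and nominal pressure below are hypothetical and serve only to illustrate the combination rule; this is not the licensed methodology.

```python
import math

# Hypothetical input variables: (RIP sensitivity factor in MPa per unit, max deviation in units)
inputs = {
    "fuel_density":        (0.45, 1.5),
    "fill_gas_pressure":   (0.90, 0.8),
    "fission_gas_release": (1.60, 1.2),
    "clad_creep":          (0.70, 0.9),
}

# First-order (system moment) combination: sigma_RIP^2 = sum_i (S_i * dX_i)^2
variance = sum((s * dx) ** 2 for s, dx in inputs.values())
sigma = math.sqrt(variance)

nominal_rip = 12.0  # hypothetical nominal rod internal pressure, MPa
print(f"bounding RIP standard deviation: {sigma:.2f} MPa")
print(f"nominal + 2*sigma upper bound:   {nominal_rip + 2 * sigma:.2f} MPa")
```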

  18. Automated Analysis of 123I-beta-CIT SPECT Images with Statistical Probabilistic Anatomical Mapping

    International Nuclear Information System (INIS)

    Eo, Jae Seon; Lee, Hoyoung; Lee, Jae Sung; Kim, Yu Kyung; Jeon, Bumseok; Lee, Dong Soo

    2014-01-01

    Population-based statistical probabilistic anatomical maps have been used to generate probabilistic volumes of interest for analyzing perfusion and metabolic brain imaging. We investigated the feasibility of automated analysis for dopamine transporter images using this technique and evaluated striatal binding potentials in Parkinson's disease and Wilson's disease. We analyzed 2β-carbomethoxy-3β-(4-123I-iodophenyl)tropane (123I-beta-CIT) SPECT images acquired from 26 people with Parkinson's disease (M:F = 11:15, mean age = 49±12 years), 9 people with Wilson's disease (M:F = 6:3, mean age = 26±11 years) and 17 normal controls (M:F = 5:12, mean age = 39±16 years). A SPECT template was created using striatal statistical probabilistic map images. All images were spatially normalized onto the template, and probability-weighted regional counts in striatal structures were estimated. The binding potential was calculated using the ratio of specific and nonspecific binding activities at equilibrium. Voxel-based comparisons between groups were also performed using statistical parametric mapping. Qualitative assessment showed that spatial normalizations of the SPECT images were successful for all images. The striatal binding potentials of participants with Parkinson's disease and Wilson's disease were significantly lower than those of normal controls. Statistical parametric mapping analysis found statistically significant differences only in striatal regions in both disease groups compared to controls. We successfully evaluated the regional 123I-beta-CIT distribution using the SPECT template and probabilistic map data automatically. This procedure allows an objective and quantitative comparison of the binding potential, which in this case showed a significantly decreased binding potential in the striata of patients with Parkinson's disease or Wilson's disease

  19. Statistical variability comparison in MODIS and AERONET derived aerosol optical depth over Indo-Gangetic Plains using time series modeling.

    Science.gov (United States)

    Soni, Kirti; Parmar, Kulwinder Singh; Kapoor, Sangeeta; Kumar, Nishant

    2016-05-15

    Many studies of Aerosol Optical Depth (AOD) in the literature have used Moderate Resolution Imaging Spectroradiometer (MODIS) derived data, but the accuracy of satellite data in comparison to ground data derived from the AErosol RObotic NETwork (AERONET) has always been questionable. To overcome this, a comparative study of comprehensive ground-based and satellite data for the period 2001-2012 is modeled. The time series model is used for the accurate prediction of AOD, and statistical variability is compared to assess the performance of the model in both cases. Root mean square error (RMSE), mean absolute percentage error (MAPE), stationary R-squared, R-squared, maximum absolute percentage error, normalized Bayesian information criterion (NBIC) and Ljung-Box methods are used to check the applicability and validity of the developed ARIMA models, revealing significant precision in the model performance. It was found that it is possible to predict the AOD by statistical modeling using time series obtained from past data of MODIS and AERONET as input data. Moreover, the result shows that MODIS data can be formed from AERONET data by adding 0.251627 ± 0.133589 and vice versa by subtracting. From the forecast of AOD for the next four years (2013-2017) using the developed ARIMA model, it is concluded that the forecasted ground AOD has an increasing trend. Copyright © 2016 Elsevier B.V. All rights reserved.
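
    Fitting an ARIMA model to a monthly AOD series and forecasting ahead, as done in the study, can be sketched with statsmodels. The synthetic series and the (1,1,1) order below are illustrative choices, not the model actually identified in the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly AOD series (2001-2012) with a trend, a seasonal cycle and noise.
rng = np.random.default_rng(5)
idx = pd.date_range("2001-01-01", "2012-12-01", freq="MS")
t = np.arange(len(idx))
aod = 0.45 + 0.0008 * t + 0.12 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.04, len(idx))
series = pd.Series(aod, index=idx)

model = ARIMA(series, order=(1, 1, 1))       # illustrative order, not the paper's
fit = model.fit()
print(fit.summary().tables[1])               # coefficient table

forecast = fit.forecast(steps=48)            # four years ahead (2013-2016)
print("mean forecast AOD over the horizon:", round(float(forecast.mean()), 3))
```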

  20. Statistics of Local Extremes

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Bierbooms, W.; Hansen, Kurt Schaldemose

    2003-01-01

    A theoretical expression for the probability density function associated with local extremes of a stochastic process is presented. The expression is basically based on the lower four statistical moments and a bandwidth parameter. The theoretical expression is subsequently verified by comparison with simulated...

  1. Detection and comparison of Selenomonas sputigena in subgingival biofilms in chronic and aggressive periodontitis patients

    Directory of Open Access Journals (Sweden)

    Disha Nagpal

    2016-01-01

    Full Text Available Background: With the advent of DNA-based culture-independent techniques, a constantly growing number of Selenomonas phylotypes have been detected in patients with destructive periodontal diseases. However, the prevalence levels that have been determined in different studies vary considerably. Aim: The present study was undertaken to detect and compare the presence of Selenomonas sputigena in the subgingival plaque samples from generalized aggressive periodontitis (GAP), chronic generalized periodontitis, and periodontally healthy patients using conventional polymerase chain reaction (PCR) technique. Materials and Methods: A total of 90 patients were categorized as periodontally healthy individuals (Group I, n = 30), chronic generalized periodontitis (Group II, n = 30), and GAP (Group III, n = 30). The clinical parameters were recorded and subgingival plaque samples were collected. These were then subjected to conventional PCR analysis. Statistical Analysis Used: Kruskal–Wallis ANOVA test was used for multiple group comparisons followed by Mann–Whitney U-test for pairwise comparison. Results: On comparison between three groups, all the clinical parameters were found to be statistically highly significant. Comparing Groups I-II and I-III, the difference in detection was found to be statistically highly significant whereas in Groups II-III, it was statistically nonsignificant. On comparison of S. sputigena detected and undetected patients to clinical parameters in various study groups, the difference was found to be nonsignificant. Conclusion: S. sputigena was found to be significantly associated with chronic and aggressive periodontitis. Although the difference in its detection frequency in both groups was statistically nonsignificant when compared clinically, S. sputigena was more closely associated with the GAP.

  2. Statistical power as a function of Cronbach alpha of instrument questionnaire items.

    Science.gov (United States)

    Heo, Moonseong; Kim, Namhee; Faith, Myles S

    2015-10-14

    In countless clinical trials, measurements of outcomes rely on instrument questionnaire items, which, however, often suffer measurement error problems that in turn affect the statistical power of study designs. The Cronbach alpha or coefficient alpha, here denoted by C(α), can be used as a measure of internal consistency of parallel instrument items that are developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on C(α) have been lacking for various study designs. We formulate a statistical model for parallel items to derive power functions as a function of C(α) under several study designs. To this end, we assume a fixed true score variance, as opposed to the usual fixed total variance assumption. That assumption is critical and practically relevant to show that smaller measurement errors are inversely associated with higher inter-item correlations, and thus that greater C(α) is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. It is shown that C(α) is the same as a test-retest correlation of the scale scores of parallel items, which enables testing significance of C(α). Closed-form power functions and sample size determination formulas are derived in terms of C(α), for all of the aforementioned comparisons. Power functions are shown to be an increasing function of C(α), regardless of comparison of interest. The derived power functions are well validated by simulation studies that show that the magnitudes of theoretical power are virtually identical to those of the empirical power. Regardless
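
    Cronbach's alpha is computed from the item variances and the variance of the summed scale score; the link to power follows because a more reliable scale score carries less error variance. A minimal sketch with simulated item responses:

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical data: 200 respondents, 5 parallel items driven by one latent score.
rng = np.random.default_rng(9)
latent = rng.normal(0, 1, 200)
items = latent[:, None] + rng.normal(0, 0.8, (200, 5))   # smaller noise -> larger alpha

print(f"Cronbach alpha = {cronbach_alpha(items):.3f}")
```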

  3. Statistical analysis of subjective preferences for video enhancement

    Science.gov (United States)

    Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

    2010-02-01

    Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.
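
    Analyzing pairwise preferences with binary logistic regression, as proposed above, amounts to a Bradley-Terry-type model: each trial contributes a row with +1 for the first item and -1 for the second, and the fitted coefficients form a perceptual scale with standard errors and p-values. The four enhancement levels and simulated choices below are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_items = 4                                   # e.g. four video-enhancement levels
true_scale = np.array([0.0, 0.4, 1.0, 1.3])   # hidden quality used only to simulate choices

rows, outcomes = [], []
for _ in range(600):                          # 600 hypothetical pairwise preference trials
    i, j = rng.choice(n_items, size=2, replace=False)
    p_prefer_i = 1.0 / (1.0 + np.exp(-(true_scale[i] - true_scale[j])))
    y = rng.random() < p_prefer_i
    x = np.zeros(n_items)
    x[i], x[j] = 1.0, -1.0
    rows.append(x)
    outcomes.append(int(y))

# Drop the first item as the reference (its scale value is fixed at 0).
X = np.array(rows)[:, 1:]
fit = sm.Logit(np.array(outcomes), X).fit(disp=False)
print("estimated scale values (item 1 = 0):", np.round(fit.params, 2))
print("p-values for differences from item 1:", np.round(fit.pvalues, 4))
```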

  4. Comparison of untreated adolescent idiopathic scoliosis with normal controls: a review and statistical analysis of the literature.

    Science.gov (United States)

    Rushton, Paul R P; Grevitt, Michael P

    2013-04-20

    Review and statistical analysis of studies evaluating health-related quality of life (HRQOL) in adolescents with untreated adolescent idiopathic scoliosis (AIS) using Scoliosis Research Society (SRS) outcomes. To apply normative values and minimum clinical important differences for the SRS-22r to the literature. Identify whether the HRQOL of adolescents with untreated AIS differs from unaffected peers and whether any differences are clinically relevant. The effect of untreated AIS on adolescent HRQOL is uncertain. The lack of published normative values and minimum clinical important difference for the SRS-22r has so far hindered our interpretation of previous studies. The publication of this background data allows these studies to be re-examined. Using suitable inclusion criteria, a literature search identified studies examining HRQOL in untreated adolescents with AIS. Each cohort was analyzed individually. Statistically significant differences were identified by using 95% confidence intervals for the difference in SRS-22r domain mean scores between the cohorts with AIS and the published data for unaffected adolescents. If the lower bound of the confidence interval was greater than the minimum clinical important difference, the difference was considered clinically significant. Of the 21 included patient cohorts, 81% reported statistically worse pain than those unaffected. Yet in only 5% of cohorts was this difference clinically important. Of the 11 cohorts included examining patient self-image, 91% reported statistically worse scores than those unaffected. In 73% of cohorts this difference was clinically significant. Affected cohorts tended to score well in function/activity and mental health domains and differences from those unaffected rarely reached clinically significant values. Pain and self-image tend to be statistically lower among cohorts with AIS than those unaffected. The literature to date suggests that it is only self-image which consistently differs

  5. Cluster size statistic and cluster mass statistic: two novel methods for identifying changes in functional connectivity between groups or conditions.

    Science.gov (United States)

    Ing, Alex; Schwarzbauer, Christian

    2014-01-01

    Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods--the cluster size statistic (CSS) and cluster mass statistic (CMS)--are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73%N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity.

  6. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in brain CT

    International Nuclear Information System (INIS)

    Ren, Qingguo; Dewan, Sheilesh Kumar; Li, Ming; Li, Jianying; Mao, Dingbiao; Wang, Zhenglei; Hua, Yanqing

    2012-01-01

    Purpose: To compare image quality and visualization of normal structures and lesions in brain computed tomography (CT) with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP) reconstruction techniques in different X-ray tube current–time products. Materials and methods: In this IRB-approved prospective study, forty patients (nineteen men, twenty-one women; mean age 69.5 ± 11.2 years) received brain scan at different tube current–time products (300 and 200 mAs) in 64-section multi-detector CT (GE, Discovery CT750 HD). Images were reconstructed with FBP and four levels of ASIR-FBP blending. Two radiologists (please note that our hospital is renowned for its geriatric medicine department, and these two radiologists are more experienced in chronic cerebral vascular disease than in neoplastic disease, so this research did not contain cerebral tumors but as a discussion) assessed all the reconstructed images for visibility of normal structures, lesion conspicuity, image contrast and diagnostic confidence in a blinded and randomized manner. Volume CT dose index (CTDI vol ) and dose-length product (DLP) were recorded. All the data were analyzed by using SPSS 13.0 statistical analysis software. Results: There was no statistically significant difference between the image qualities at 200 mAs with 50% ASIR blending technique and 300 mAs with FBP technique (p > .05). While between the image qualities at 200 mAs with FBP and 300 mAs with FBP technique a statistically significant difference (p < .05) was found. Conclusion: ASIR provided same image quality and diagnostic ability in brain imaging with greater than 30% dose reduction compared with FBP reconstruction technique

  7. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in brain CT

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Qingguo, E-mail: renqg83@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Dewan, Sheilesh Kumar, E-mail: sheilesh_d1@hotmail.com [Department of Geriatrics, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Li, Ming, E-mail: minli77@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Li, Jianying, E-mail: Jianying.Li@med.ge.com [CT Imaging Research Center, GE Healthcare China, Beijing (China); Mao, Dingbiao, E-mail: maodingbiao74@163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China); Wang, Zhenglei, E-mail: Williswang_doc@yahoo.com.cn [Department of Radiology, Shanghai Electricity Hospital, Shanghai 200050 (China); Hua, Yanqing, E-mail: cjr.huayanqing@vip.163.com [Department of Radiology, Hua Dong Hospital of Fudan University, Shanghai 200040 (China)

    2012-10-15

    Purpose: To compare image quality and visualization of normal structures and lesions in brain computed tomography (CT) with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP) reconstruction techniques in different X-ray tube current–time products. Materials and methods: In this IRB-approved prospective study, forty patients (nineteen men, twenty-one women; mean age 69.5 ± 11.2 years) received brain scan at different tube current–time products (300 and 200 mAs) in 64-section multi-detector CT (GE, Discovery CT750 HD). Images were reconstructed with FBP and four levels of ASIR-FBP blending. Two radiologists (please note that our hospital is renowned for its geriatric medicine department, and these two radiologists are more experienced in chronic cerebral vascular disease than in neoplastic disease, so this research did not contain cerebral tumors but as a discussion) assessed all the reconstructed images for visibility of normal structures, lesion conspicuity, image contrast and diagnostic confidence in a blinded and randomized manner. Volume CT dose index (CTDI{sub vol}) and dose-length product (DLP) were recorded. All the data were analyzed by using SPSS 13.0 statistical analysis software. Results: There was no statistically significant difference between the image qualities at 200 mAs with 50% ASIR blending technique and 300 mAs with FBP technique (p > .05). While between the image qualities at 200 mAs with FBP and 300 mAs with FBP technique a statistically significant difference (p < .05) was found. Conclusion: ASIR provided same image quality and diagnostic ability in brain imaging with greater than 30% dose reduction compared with FBP reconstruction technique.

  8. Statistical characteristics of trajectories of diamagnetic unicellular organisms in a magnetic field.

    Science.gov (United States)

    Gorobets, Yu I; Gorobets, O Yu

    2015-01-01

    A statistical model is proposed in this paper to describe the orientation of trajectories of unicellular diamagnetic organisms in a magnetic field. A statistical parameter, the effective energy, is calculated on the basis of this model. The resulting effective energy is a statistical characteristic of the trajectories of diamagnetic microorganisms in a magnetic field connected with their metabolism. The statistical model is applicable when the energy of the thermal motion of the bacteria is negligible in comparison with their energy in a magnetic field and the bacteria manifest significant "active random movement", i.e. randomizing motion of a non-thermal nature, for example movement by means of flagella. The energy of this randomizing active self-motion is characterized by a new statistical parameter for biological objects, which replaces the energy of randomizing thermal motion in the calculation of the statistical distribution. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Intelligent system for statistically significant expertise knowledge on the basis of the model of self-organizing nonequilibrium dissipative system

    Directory of Open Access Journals (Sweden)

    E. A. Tatokchin

    2017-01-01

    Full Text Available The development of modern educational technologies, driven by the broad introduction of computer testing and the growth of distance education, makes a revision of methods for examining pupils necessary. This work shows the need to move to mathematical examination criteria that are free of subjectivity. The article reviews the problems arising in this task and offers approaches for solving them. The greatest attention is paid to the problem of objectively transforming the expert's rating estimates onto the scale of student estimates. The discussion concludes that the solution to this problem lies in the creation of specialized intelligent systems. The basis for constructing such an intelligent system is a mathematical model of a self-organizing nonequilibrium dissipative system, here a group of students. The article assumes that the dissipative character is provided by the constant influx of new test items from the expert, and the nonequilibrium character by the individual psychological characteristics of the students in the group. As a result, the system must self-organize into stable patterns. These patterns make it possible, relying on large amounts of data, to obtain a statistically significant assessment of a student. To justify the proposed approach, the work presents a statistical analysis of the test results of a large sample of students (> 90). Conclusions from this statistical analysis allowed an intelligent system for statistically significant examination of student performance to be developed. It is based on a data clustering algorithm (k-means) applied to three key parameters. It is shown that this approach makes it possible to obtain an objective expert evaluation and to follow its dynamics.
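
    The record above describes clustering students on three key parameters with k-means. The following sketch shows what such a clustering step could look like; the parameter names, group size and number of clusters are illustrative assumptions, not values taken from the cited study.

      import numpy as np
      from sklearn.cluster import KMeans

      # Three hypothetical per-student parameters, e.g. fraction of items answered
      # correctly, mean response time, and mean difficulty of items attempted.
      rng = np.random.default_rng(0)
      students = rng.random((120, 3))        # stand-in for a group of > 90 students

      kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(students)
      labels = kmeans.labels_                # the stable "pattern" each student falls into
      centres = kmeans.cluster_centers_      # pattern profiles an assessment scale could be anchored to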

  10. The Importance of Integrating Clinical Relevance and Statistical Significance in the Assessment of Quality of Care--Illustrated Using the Swedish Stroke Register.

    Directory of Open Access Journals (Sweden)

    Anita Lindmark

    Full Text Available When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. The study emphasizes the importance of combining clinical relevance and level of statistical

  11. The Importance of Integrating Clinical Relevance and Statistical Significance in the Assessment of Quality of Care--Illustrated Using the Swedish Stroke Register.

    Science.gov (United States)

    Lindmark, Anita; van Rompaye, Bart; Goetghebeur, Els; Glader, Eva-Lotta; Eriksson, Marie

    2016-01-01

    When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. The study emphasizes the importance of combining clinical relevance and level of statistical confidence when
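
    The benchmarking rule described above flags a hospital only when its adjusted risk exceeds a clinically relevant benchmark with a chosen level of statistical confidence. A minimal sketch of that decision rule follows; it uses a normal approximation for an unadjusted event proportion and an illustrative benchmark of 20% above the population risk, whereas the cited study used case-mix adjusted risks.

      import numpy as np
      from scipy import stats

      def flag_outlying_performance(events, n, population_risk,
                                    relative_benchmark=1.2, confidence=0.95):
          """Flag a hospital whose event risk exceeds relative_benchmark times the
          population risk at the given one-sided confidence level (no case-mix
          adjustment; normal approximation to the binomial)."""
          benchmark = relative_benchmark * population_risk
          p_hat = events / n
          se = np.sqrt(p_hat * (1 - p_hat) / n)
          z = (p_hat - benchmark) / se
          p_value = stats.norm.sf(z)          # small p: risk credibly above benchmark
          return p_value < (1 - confidence), p_value

      # Example: 70 events among 240 patients against a 25% population risk.
      print(flag_outlying_performance(70, 240, 0.25))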

  12. Comparison of four statistical and machine learning methods for crash severity prediction.

    Science.gov (United States)

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

    Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods including Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash costs-based approach for comparison of crash severity prediction methods; and investigating the effects of data clustering methods comprising K-means Clustering (KC) and Latent Class Clustering (LCC) on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States were obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset, and the correct prediction rates for each crash severity level, the overall correct prediction rate and a proposed crash costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed that NNC had the best prediction performance overall and in more severe crashes. RF and SVM had the next best performances and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. The overall correct prediction rate gave almost the exact opposite results compared to the proposed approach, showing that neglecting crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
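
    The comparison above hinges on scoring classifiers with a crash-costs-based measure instead of the plain correct-prediction rate. The sketch below illustrates one plausible form of such a measure on synthetic data; the per-severity costs, the dataset and the scikit-learn models standing in for the four methods are all assumptions made for illustration.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC

      COSTS = np.array([10_000, 80_000, 1_000_000])   # hypothetical costs per severity level

      def cost_based_accuracy(y_true, y_pred):
          # Penalize each error by the cost gap between predicted and true severity.
          gaps = np.abs(COSTS[y_true] - COSTS[y_pred])
          return 1.0 - gaps.mean() / (COSTS.max() - COSTS.min())

      X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                                 n_classes=3, random_state=0)   # stand-in crash data
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      models = {"MNL-like": LogisticRegression(max_iter=1000),
                "NNC": KNeighborsClassifier(),
                "SVM": SVC(),
                "RF": RandomForestClassifier(random_state=0)}
      for name, model in models.items():
          y_hat = model.fit(X_tr, y_tr).predict(X_te)
          print(name, (y_hat == y_te).mean(), cost_based_accuracy(y_te, y_hat))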

  13. Arboreal biomass estimation: a comparison between neural networks and statistical methods; Estimativa de biomassa arborea: uma comparacao entre metodos estatisticos e redes neurais

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Arthur C.; Barros, Paulo L.C.; Monteiro, Jose H.A.; Rocha, Brigida R.P. [Universidade Federal do Para (DEEC/UFPA), Belem, PA (Brazil). Dept. de Engenharia Eletrica e Computacao. Grupo de Pesquisa ENERBIO], e-mails: arthur@ufpa.br, jhumberto01@yahoo.com.br, brigida@ufpa.br, paulo.contente@ufra.edu.br

    2006-07-01

    The methodologies currently used in forest inventories to calculate the volume of biomass, and the resulting potential energy, are based primarily on statistical methods. More recent techniques, based on the nonlinear mapping ability of artificial neural networks, have been used successfully in several areas of technology with superior performance. This work presents a comparison between a statistical model for estimating tree volume and a model based on neural networks, which can be used to advantage in biomass energy planning.

  14. Statistical utilitarianism

    OpenAIRE

    Pivato, Marcus

    2013-01-01

    We show that, in a sufficiently large population satisfying certain statistical regularities, it is often possible to accurately estimate the utilitarian social welfare function, even if we only have very noisy data about individual utility functions and interpersonal utility comparisons. In particular, we show that it is often possible to identify an optimal or close-to-optimal utilitarian social choice using voting rules such as the Borda rule, approval voting, relative utilitarianism, or a...

  15. How to measure renal artery stenosis - a retrospective comparison of morphological measurement approaches in relation to hemodynamic significance

    International Nuclear Information System (INIS)

    Andersson, Malin; Jägervall, Karl; Eriksson, Per; Persson, Anders; Granerus, Göran; Wang, Chunliang; Smedby, Örjan

    2015-01-01

    Although it is well known that renal artery stenosis may cause renovascular hypertension, it is unclear how the degree of stenosis should best be measured in morphological images. The aim of this study was to determine which morphological measures from Computed Tomography Angiography (CTA) and Magnetic Resonance Angiography (MRA) are best in predicting whether a renal artery stenosis is hemodynamically significant or not. Forty-seven patients with hypertension and a clinical suspicion of renovascular hypertension were examined with CTA, MRA, captopril-enhanced renography (CER) and captopril test (Ctest). CTA and MRA images of the renal arteries were analyzed by two readers using interactive vessel segmentation software. The measures included minimum diameter, minimum area, diameter reduction and area reduction. In addition, two radiologists visually judged the diameter reduction without automated segmentation. The results were then compared using limits of agreement and intra-class correlation, and correlated with the results from CER combined with Ctest (which were used as standard of reference) using receiver operating characteristics (ROC) analysis. A total of 68 kidneys had all three investigations (CTA, MRA and CER + Ctest), and 11 kidneys (16.2 %) had a positive result on the CER + Ctest. The greatest area under the ROC curve (AUROC) was found for the area reduction on MRA, with a value of 0.91 (95 % confidence interval 0.82–0.99), excluding accessory renal arteries. As comparison, the AUROC for the radiologists’ visual assessments on CTA and MRA were 0.90 (0.82–0.98) and 0.91 (0.83–0.99) respectively. None of the differences were statistically significant. No significant differences were found between the morphological measures in their ability to predict hemodynamically significant stenosis, but there was a tendency for MRA to have a higher AUROC than CTA. There was no significant difference between measurements made by the radiologists and measurements made with

  16. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    NARCIS (Netherlands)

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, Fisher’s statistic is clearly more sensitive to smaller p-values than to larger ones, and a single small p-value may overrule the other p-values
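
    For reference, Fisher's combined probability test aggregates k independent p-values into X^2 = -2 * sum(log p_i), which follows a chi-square distribution with 2k degrees of freedom under the overall null. The short sketch below computes it directly and via SciPy on an illustrative set of p-values, and makes the sensitivity to a single small p-value easy to see.

      import numpy as np
      from scipy import stats

      p_values = np.array([0.001, 0.20, 0.35, 0.60, 0.80])     # illustrative p-values

      # Fisher's method: X^2 = -2 * sum(log p_i) ~ chi-square with 2k df under H0.
      chi2 = -2.0 * np.sum(np.log(p_values))
      p_fisher = stats.chi2.sf(chi2, df=2 * len(p_values))

      # Same combination via SciPy; the single small p-value dominates the result.
      stat, p_combined = stats.combine_pvalues(p_values, method="fisher")
      print(p_fisher, p_combined)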

  17. R for statistics

    CERN Document Server

    Cornillon, Pierre-Andre; Husson, Francois; Jegou, Nicolas; Josse, Julie; Kloareg, Maela; Matzner-Lober, Eric; Rouviere, Laurent

    2012-01-01

    An Overview of R; Main Concepts; Installing R; Work Session; Help; R Objects; Functions; Packages; Exercises; Preparing Data; Reading Data from File; Exporting Results; Manipulating Variables; Manipulating Individuals; Concatenating Data Tables; Cross-Tabulation; Exercises; R Graphics; Conventional Graphical Functions; Graphical Functions with lattice; Exercises; Making Programs with R; Control Flows; Predefined Functions; Creating a Function; Exercises; Statistical Methods; Introduction to the Statistical Methods; A Quick Start with R; Installing R; Opening and Closing R; The Command Prompt; Attribution, Objects, and Function; Selection; Other Rcmdr Package; Importing (or Inputting) Data; Graphs; Statistical Analysis; Hypothesis Test; Confidence Intervals for a Mean; Chi-Square Test of Independence; Comparison of Two Means; Testing Conformity of a Proportion; Comparing Several Proportions; The Power of a Test; Regression; Simple Linear Regression; Multiple Linear Regression; Partial Least Squares (PLS) Regression; Analysis of Variance and Covariance; One-Way Analysis of Variance; Multi-Way Analysis of Varian...

  18. Statistical characterization report for Single-Shell Tank 241-T-107

    International Nuclear Information System (INIS)

    Cromar, R.D.; Wilmarth, S.R.; Jensen, L.

    1994-01-01

    This report contains the results of the statistical analysis of data from three core samples obtained from single-shell tank 241-T-107 (T-107). Four specific topics are addressed. They are summarized below. Section 3.0 contains mean concentration estimates of analytes found in T-107. The estimates of "error" associated with the concentration estimates are given as 95% confidence intervals (CI) on the mean. The results given are based on three types of samples: core composite samples, core segment samples, and drainable liquid samples. Section 4.0 contains estimates of the spatial variability (variability between cores and between segments) and the analytical variability (variability between the primary and the duplicate analysis). Statistical tests were performed to test the hypothesis that the between cores and the between segments spatial variability is zero. The results of the tests are as follows. Based on the core composite data, the between cores variance is significantly different from zero for 35 out of 74 analytes; i.e., for 53% of the analytes there is no statistically significant difference between the concentration means for two cores. Based on core segment data, the between segments variance is significantly different from zero for 22 out of 24 analytes and the between cores variance is significantly different from zero for 4 out of 24 analytes; i.e., for 8% of the analytes there is no statistically significant difference between segment means and for 83% of the analytes there is no difference between the means from the three cores. Section 5.0 contains the results of the application of multiple comparison methods to the core composite data, the core segment data, and the drainable liquid data. Section 6.0 contains the results of a statistical test conducted to determine the 222-S Analytical Laboratory's ability to homogenize solid core segments.
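
    The report's test of whether the between-core variance component differs from zero is, for a single analyte, equivalent to a one-way analysis of variance across cores. A minimal sketch with made-up concentration values (not tank data) is shown below.

      import numpy as np
      from scipy import stats

      # Illustrative analyte concentrations from three cores (not actual tank data).
      core_a = np.array([1.02, 0.98, 1.10, 1.05])
      core_b = np.array([1.20, 1.15, 1.22, 1.18])
      core_c = np.array([1.00, 1.04, 0.97, 1.01])

      # One-way ANOVA: a small p-value indicates that the between-core variance
      # component is significantly different from zero for this analyte.
      f_stat, p_value = stats.f_oneway(core_a, core_b, core_c)
      print(f_stat, p_value)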

  19. Statistical determination of significant curved I-girder bridge seismic response parameters

    Science.gov (United States)

    Seo, Junwon

    2013-06-01

    Curved steel bridges are commonly used at interchanges in transportation networks and more of these structures continue to be designed and built in the United States. Though the use of these bridges continues to increase in locations that experience high seismicity, the effects of curvature and other parameters on their seismic behaviors have been neglected in current risk assessment tools. These tools can evaluate the seismic vulnerability of a transportation network using fragility curves. One critical component of fragility curve development for curved steel bridges is the completion of sensitivity analyses that help identify influential parameters related to their seismic response. In this study, an accessible inventory of existing curved steel girder bridges located primarily in the Mid-Atlantic United States (MAUS) was used to establish statistical characteristics used as inputs for a seismic sensitivity study. Critical seismic response quantities were captured using 3D nonlinear finite element models. Influential parameters from these quantities were identified using statistical tools that incorporate experimental Plackett-Burman Design (PBD), which included Pareto optimal plots and prediction profiler techniques. The findings revealed that the potential variation in the influential parameters included number of spans, radius of curvature, maximum span length, girder spacing, and cross-frame spacing. These parameters showed varying levels of influence on the critical bridge response.

  20. A Mixed Method Investigation of Social Science Graduate Students' Statistics Anxiety Conditions before and after the Introductory Statistics Course

    Science.gov (United States)

    Huang, Liuli

    2018-01-01

    Research frequently uses the quantitative approach to explore undergraduate students' anxiety regarding statistics. However, few studies of adults' statistics anxiety use the qualitative method or focus solely on graduate students. Moreover, even fewer studies focus on a comparison of adults' anxiety levels before and after an introductory…

  1. Clinical significance of intramammary arterial calcifications in diabetic women

    Directory of Open Access Journals (Sweden)

    Milošević Zorica

    2004-01-01

    Full Text Available Background. It is well known that intramammary arterial calcifications diagnosed by mammography, as a part of generalized diabetic macroangiopathy, may be an indirect sign of diabetes mellitus. Hence, the aim of this study was to determine the incidence of intramammary arterial calcifications, the patient’s age when the calcifications occur, as well as to observe the influence of diabetic polyneuropathy, type, and the duration of diabetes on the onset of calcifications, in comparison with nondiabetic women. Methods. Mammographic findings of 113 diabetic female patients (21 with type 1 diabetes and 92 with type 2), as well as of 208 nondiabetic women (the control group), were analyzed in the prospective study. The data about the type of diabetes, its duration, and polyneuropathy were obtained using a questionnaire. Statistical differences were determined by the Mann-Whitney test. Results. Intramammary arterial calcifications were identified in 33.3% of the women with type 1 diabetes, in 40.2% with type 2, and in 8.2% of the women from the control group, respectively. The differences comparing the women with type 1, as well as type 2, diabetes and the controls were statistically significant (p=0.0001). Women with intramammary arterial calcifications and type 1 diabetes were younger compared to the control group (median age 52 years versus 67 years, p=0.001), while there was no statistically significant difference in age between the women with calcifications and type 2 diabetes (61 years of age) in relation to the control group (p=0.176). The incidence of polyneuropathy in diabetic women was higher in the group with intramammary arterial calcifications (52.3%) in comparison to the group without calcifications (26.1%) (p=0.005). An association between intramammary arterial calcifications and the duration of diabetes was not found. Conclusion. The obtained results supported the theory that intramammary arterial calcifications, detected by

  2. Image sequence analysis in nuclear medicine: (1) Parametric imaging using statistical modelling

    International Nuclear Information System (INIS)

    Liehn, J.C.; Hannequin, P.; Valeyre, J.

    1989-01-01

    This is a review of parametric imaging methods in Nuclear Medicine. A parametric image is an image in which each pixel value is a function of the values of the same pixel across an image sequence. The Local Model Method fits each pixel's time-activity curve with a model whose parameter values form the parametric images. The Global Model Method models the changes between two images; it is applied to image comparison. For both methods, the different models, the identification criterion, the optimization methods and the statistical properties of the images are discussed. The analysis of one or more parametric images is performed using 1D or 2D histograms. The statistically significant parametric images (images of significant variances, amplitudes and differences) are also proposed [fr]

  3. Urban pavement surface temperature. Comparison of numerical and statistical approach

    Science.gov (United States)

    Marchetti, Mario; Khalifa, Abderrahmen; Bues, Michel; Bouilloud, Ludovic; Martin, Eric; Chancibaut, Katia

    2015-04-01

    The forecast of pavement surface temperature is very specific in the context of urban winter maintenance, where it is used to manage snow plowing and salting of roads. Such forecasts rely mainly on numerical models based on a description of the energy balance between the atmosphere, the buildings and the pavement, with a canyon configuration. Nevertheless, there is a specific need for a physical description and numerical implementation of the traffic in the energy flux balance. Traffic was originally considered as a constant. Many changes were made to a numerical model to describe as accurately as possible the traffic effects on this urban energy balance, such as tire friction, the pavement-air exchange coefficient, and the net infrared flux balance. Experiments based on infrared thermography and radiometry were then conducted to quantify the effect of traffic on urban pavement surfaces. Based on meteorological data, the corresponding pavement temperature forecasts were calculated and compared with field measurements. Results indicated good agreement between the forecasts from the numerical model based on this energy balance approach and the measurements. A complementary forecast approach based on principal component analysis (PCA) and partial least-squares regression (PLS) was also developed, using data from thermal mapping with infrared radiometry. The forecast of pavement surface temperature from air temperature was obtained for the specific case of the urban configuration, with traffic incorporated into the measurements used for the statistical analysis. A comparison between results from the numerical model based on the energy balance and from PCA/PLS was then conducted, indicating the advantages and limits of each approach.
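
    The statistical branch of the comparison above couples exploratory PCA with a PLS regression forecast. The following sketch shows that pairing on synthetic data; the predictor set, its size and the number of components are illustrative assumptions rather than the cited study's configuration.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cross_decomposition import PLSRegression

      # X: hypothetical predictors (e.g. air temperature, radiation, a traffic index);
      # y: pavement surface temperature. Synthetic stand-in data only.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 6))
      y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=200)

      scores = PCA(n_components=3).fit_transform(X)    # exploratory structure of the predictors
      pls = PLSRegression(n_components=3).fit(X, y)    # statistical forecast model
      y_hat = pls.predict(X).ravel()                   # forecast pavement temperatures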

  4. Equivalent statistics and data interpretation.

    Science.gov (United States)

    Francis, Gregory

    2017-08-01

    Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
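
    The key point of the abstract above is that, with known group sizes, a two-sample t statistic, its p-value and Cohen's d carry the same information about the data. A small sketch of those conversions (equal-variance t test assumed) is given below.

      import numpy as np
      from scipy import stats

      def equivalent_summaries(t, n1, n2):
          """Recover the two-sided p-value and Cohen's d from a two-sample t statistic
          when the group sizes are known (pooled-variance t test)."""
          df = n1 + n2 - 2
          p = 2 * stats.t.sf(abs(t), df)            # two-sided p-value
          d = t * np.sqrt(1.0 / n1 + 1.0 / n2)      # Cohen's d recovered from t
          return p, d

      print(equivalent_summaries(t=2.1, n1=30, n2=32))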

  5. Bayesian versus frequentist statistical inference for investigating a one-off cancer cluster reported to a health department

    Directory of Open Access Journals (Sweden)

    Wills Rachael A

    2009-05-01

    Full Text Available Abstract Background The problem of silent multiple comparisons is one of the most difficult statistical problems faced by scientists. It is a particular problem for investigating a one-off cancer cluster reported to a health department because any one of hundreds, or possibly thousands, of neighbourhoods, schools, or workplaces could have reported a cluster, which could have been for any one of several types of cancer or any one of several time periods. Methods This paper contrasts the frequentist approach with a Bayesian approach for dealing with silent multiple comparisons in the context of a one-off cluster reported to a health department. Two published cluster investigations were re-analysed using the Dunn-Sidak method to adjust frequentist p-values and confidence intervals for silent multiple comparisons. Bayesian methods were based on the Gamma distribution. Results Bayesian analysis with non-informative priors produced results similar to the frequentist analysis, and suggested that both clusters represented a statistical excess. In the frequentist framework, the statistical significance of both clusters was extremely sensitive to the number of silent multiple comparisons, which can only ever be a subjective "guesstimate". The Bayesian approach is also subjective: whether there is an apparent statistical excess depends on the specified prior. Conclusion In cluster investigations, the frequentist approach is just as subjective as the Bayesian approach, but the Bayesian approach is less ambitious in that it treats the analysis as a synthesis of data and personal judgements (possibly poor ones), rather than objective reality. Bayesian analysis is (arguably) a useful tool to support complicated decision-making, because it makes the uncertainty associated with silent multiple comparisons explicit.
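
    Both ingredients of the contrast above can be stated concretely: the Dunn-Sidak adjustment inflates a cluster's p-value for m silent comparisons, and a Gamma prior combined with a Poisson likelihood gives a posterior probability of an excess. The sketch below shows both; the prior parameters and the guessed number of silent comparisons are illustrative assumptions only, not the values used in the cited investigations.

      from scipy import stats

      def sidak_adjust(p, m):
          """Dunn-Sidak adjustment of a single p-value for m silent comparisons."""
          return 1.0 - (1.0 - p) ** m

      def posterior_prob_excess(observed, expected, a0=0.5, b0=0.001):
          """Posterior probability that the incidence rate ratio exceeds 1, using a
          Gamma(a0, rate=b0) prior and a Poisson likelihood (illustrative prior)."""
          a_post = a0 + observed
          b_post = b0 + expected
          return stats.gamma.sf(1.0, a_post, scale=1.0 / b_post)

      print(sidak_adjust(0.01, m=500))          # a "significant" p quickly loses significance
      print(posterior_prob_excess(12, 5.4))     # e.g. 12 observed cases vs 5.4 expected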

  6. Permutation statistical methods an integrated approach

    CERN Document Server

    Berry, Kenneth J; Johnston, Janis E

    2016-01-01

    This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...
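
    As a concrete illustration of the contrast the monograph draws, the permutation test below compares two group means using only the data at hand, with no appeal to a theoretical null distribution; the data and the number of permutations are placeholders.

      import numpy as np

      def permutation_p_value(x, y, n_perm=10_000, seed=0):
          """Two-sided permutation test for a difference in means."""
          rng = np.random.default_rng(seed)
          pooled = np.concatenate([x, y])
          observed = abs(x.mean() - y.mean())
          hits = 0
          for _ in range(n_perm):
              rng.shuffle(pooled)
              diff = abs(pooled[: len(x)].mean() - pooled[len(x):].mean())
              hits += diff >= observed
          return (hits + 1) / (n_perm + 1)

      x = np.array([5.1, 4.8, 6.0, 5.5, 5.9])
      y = np.array([4.2, 4.6, 4.9, 4.4, 5.0])
      print(permutation_p_value(x, y))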

  7. Comparison of accuracy in predicting emotional instability from MMPI data: fisherian versus contingent probability statistics

    Energy Technology Data Exchange (ETDEWEB)

    Berghausen, P.E. Jr.; Mathews, T.W.

    1987-01-01

    The security plans of nuclear power plants generally require that all personnel who are to have access to protected areas or vital islands be screened for emotional stability. In virtually all instances, the screening involves the administration of one or more psychological tests, usually including the Minnesota Multiphasic Personality Inventory (MMPI). At some plants, all employees receive a structured clinical interview after they have taken the MMPI and results have been obtained. At other plants, only those employees with "dirty" MMPIs are interviewed. This latter protocol is referred to as interviews by exception. Behaviordyne Psychological Corp. has succeeded in removing some of the uncertainty associated with interview-by-exception protocols by developing an empirically based, predictive equation. This equation permits utility companies to make informed choices regarding the risks they are assuming. A conceptual problem exists with the predictive equation, however. Like most predictive equations currently in use, it is based on Fisherian statistics, involving least-squares analyses. Consequently, Behaviordyne Psychological Corp., in conjunction with T.W. Mathews and Associates, has just developed a second predictive equation, one based on contingent probability statistics. The particular technique used is the multi-contingent analysis of probability systems (MAPS) approach. The present paper presents a comparison of the predictive accuracy of the two equations: the one derived using Fisherian techniques versus the one using contingent probability techniques.

  8. Comparison of accuracy in predicting emotional instability from MMPI data: fisherian versus contingent probability statistics

    International Nuclear Information System (INIS)

    Berghausen, P.E. Jr.; Mathews, T.W.

    1987-01-01

    The security plans of nuclear power plants generally require that all personnel who are to have access to protected areas or vital islands be screened for emotional stability. In virtually all instances, the screening involves the administration of one or more psychological tests, usually including the Minnesota Multiphasic Personality Inventory (MMPI). At some plants, all employees receive a structured clinical interview after they have taken the MMPI and results have been obtained. At other plants, only those employees with "dirty" MMPIs are interviewed. This latter protocol is referred to as interviews by exception. Behaviordyne Psychological Corp. has succeeded in removing some of the uncertainty associated with interview-by-exception protocols by developing an empirically based, predictive equation. This equation permits utility companies to make informed choices regarding the risks they are assuming. A conceptual problem exists with the predictive equation, however. Like most predictive equations currently in use, it is based on Fisherian statistics, involving least-squares analyses. Consequently, Behaviordyne Psychological Corp., in conjunction with T.W. Mathews and Associates, has just developed a second predictive equation, one based on contingent probability statistics. The particular technique used is the multi-contingent analysis of probability systems (MAPS) approach. The present paper presents a comparison of the predictive accuracy of the two equations: the one derived using Fisherian techniques versus the one using contingent probability techniques.

  9. A comparison of 99mTc-HMPAO SPET changes in dementia with Lewy bodies and Alzheimer's disease using statistical parametric mapping

    International Nuclear Information System (INIS)

    Colloby, Sean J.; Paling, Sean M.; Lobotesis, Kyriakos; Ballard, Clive; McKeith, Ian; O'Brien, John T.; Fenwick, John D.; Williams, David E.

    2002-01-01

    Differences in regional cerebral blood flow (rCBF) between subjects with Alzheimer's disease (AD), dementia with Lewy bodies (DLB) and healthy volunteers were investigated using statistical parametric mapping (SPM99). Forty-eight AD, 23 DLB and 20 age-matched control subjects participated. Technetium-99m hexamethylpropylene amine oxime (HMPAO) brain single-photon emission tomography (SPET) scans were acquired for each subject using a single-headed rotating gamma camera (IGE CamStar XR/T). The SPET images were spatially normalised and group comparison was performed by SPM99. In addition, covariate analysis was undertaken on the standardised images taking the Mini Mental State Examination (MMSE) scores as a variable. Applying a height threshold of P≤0.001 uncorrected, significant perfusion deficits in the parietal and frontal regions of the brain were observed in both AD and DLB groups compared with the control subjects. In addition, significant temporoparietal perfusion deficits were identified in the AD subjects, whereas the DLB patients had deficits in the occipital region. Comparison of dementia groups (height threshold of P≤0.01 uncorrected) yielded hypoperfusion in both the parietal [Brodmann area (BA) 7] and occipital (BA 17, 18) regions of the brain in DLB compared with AD. Abnormalities in these areas, which included visual cortex and several areas involved in higher visual processing and visuospatial function, may be important in understanding the visual hallucinations and visuospatial deficits which are characteristic of DLB. Covariate analysis indicated group differences between AD and DLB in terms of a positive correlation between cognitive test score and temporoparietal blood flow. In conclusion, we found evidence of frontal and parietal hypoperfusion in both AD and DLB, while temporal perfusion deficits were observed exclusively in AD and parieto-occipital deficits in DLB. (orig.)

  10. Notes on the MUF-D statistic

    International Nuclear Information System (INIS)

    Picard, R.R.

    1987-01-01

    Verification of an inventory or of a reported material unaccounted for (MUF) calls for the remeasurement of a sample of items by an inspector followed by comparison of the inspector's data to the facility's reported values. Such comparison is intended to protect against falsification of accounting data that could conceal material loss. In the international arena, the observed discrepancies between the inspector's data and the reported data are quantified using the D statistic. If data have been falsified by the facility, the standard deviations of the D and MUF-D statistics are inflated owing to the sampling distribution. Moreover, under certain conditions the distributions of those statistics can depart markedly from normality, complicating evaluation of an inspection plan's performance. Detection probabilities estimated using standard deviations appropriate for the no-falsification case in conjunction with assumed normality can be far too optimistic. Under very general conditions regarding the facility's and/or the inspector's measurement error procedures and the inspector's sampling regime, the variance of the MUF-D statistic can be broken into three components. The inspection's sensitivity against various falsification scenarios can be traced to one or more of these components. Obvious implications exist for the planning of effective inspections, particularly in the area of resource optimization

  11. Statistically significant dependence of the Xaa-Pro peptide bond conformation on secondary structure and amino acid sequence

    Directory of Open Access Journals (Sweden)

    Leitner Dietmar

    2005-04-01

    Full Text Available Abstract Background A reliable prediction of the Xaa-Pro peptide bond conformation would be a useful tool for many protein structure calculation methods. We have analyzed the Protein Data Bank and show that the combined use of sequential and structural information has a predictive value for the assessment of the cis versus trans peptide bond conformation of Xaa-Pro within proteins. For the analysis of the data sets different statistical methods such as the calculation of the Chou-Fasman parameters and occurrence matrices were used. Furthermore we analyzed the relationship between the relative solvent accessibility and the relative occurrence of prolines in the cis and in the trans conformation. Results One of the main results of the statistical investigations is the ranking of the secondary structure and sequence information with respect to the prediction of the Xaa-Pro peptide bond conformation. We observed a significant impact of secondary structure information on the occurrence of the Xaa-Pro peptide bond conformation, while the sequence information of amino acids neighboring proline is of little predictive value for the conformation of this bond. Conclusion In this work, we present an extensive analysis of the occurrence of the cis and trans proline conformation in proteins. Based on the data set, we derived patterns and rules for a possible prediction of the proline conformation. Upon adoption of the Chou-Fasman parameters, we are able to derive statistically relevant correlations between the secondary structure of amino acid fragments and the Xaa-Pro peptide bond conformation.

  12. A statistical method for testing epidemiological results, as applied to the Hanford worker population

    International Nuclear Information System (INIS)

    Brodsky, A.

    1979-01-01

    Some recent reports of Mancuso, Stewart and Kneale claim findings of radiation-produced cancer in the Hanford worker population. These claims are based on statistical computations that use small differences in accumulated exposures between groups dying of cancer and groups dying of other causes; actual mortality and longevity were not reported. This paper presents a statistical method for evaluation of actual mortality and longevity longitudinally over time, as applied in a primary analysis of the mortality experience of the Hanford worker population. Although available, this method was not utilized in the Mancuso-Stewart-Kneale paper. The author's preliminary longitudinal analysis shows that the gross mortality experience of persons employed at Hanford during the 1943-70 interval did not differ significantly from that of certain controls, when both employees and controls were selected from families with two or more offspring and comparisons were matched by age, sex, race and year of entry into employment. This result is consistent with findings reported by Sanders (Health Phys. vol.35, 521-538, 1978). The method utilizes an approximate chi-square (1 D.F.) statistic for testing population subgroup comparisons, as well as the cumulation of chi-squares (1 D.F.) for testing the overall result of a particular type of comparison. The method is available for computer testing of the Hanford mortality data, and could also be adapted to morbidity or other population studies. (author)

  13. Fisher statistics for analysis of diffusion tensor directional information.

    Science.gov (United States)

    Hutchinson, Elizabeth B; Rutecki, Paul A; Alexander, Andrew L; Sutula, Thomas P

    2012-04-30

    A statistical approach is presented for the quantitative analysis of diffusion tensor imaging (DTI) directional information using Fisher statistics, which were originally developed for the analysis of vectors in the field of paleomagnetism. In this framework, descriptive and inferential statistics have been formulated based on the Fisher probability density function, a spherical analogue of the normal distribution. The Fisher approach was evaluated for investigation of rat brain DTI maps to characterize tissue orientation in the corpus callosum, fornix, and hilus of the dorsal hippocampal dentate gyrus, and to compare directional properties in these regions following status epilepticus (SE) or traumatic brain injury (TBI) with values in healthy brains. Direction vectors were determined for each region of interest (ROI) for each brain sample and Fisher statistics were applied to calculate the mean direction vector and variance parameters in the corpus callosum, fornix, and dentate gyrus of normal rats and rats that experienced TBI or SE. Hypothesis testing was performed by calculation of Watson's F-statistic and associated p-value giving the likelihood that grouped observations were from the same directional distribution. In the fornix and midline corpus callosum, no directional differences were detected between groups; however, in the hilus, significant group differences were detected, demonstrating the utility of this framework for statistical comparison of tissue structural orientation. Copyright © 2012 Elsevier B.V. All rights reserved.
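
    The descriptive side of the Fisher framework reduces a set of unit direction vectors to a mean direction and a concentration parameter kappa. The sketch below implements those two summaries; it assumes the vectors have already been oriented consistently (DTI directions are axial, so sign-flipping toward a reference direction would be needed first) and uses the common large-kappa approximation kappa ≈ (n - 1) / (n - R).

      import numpy as np

      def fisher_direction_summary(vectors):
          """Mean direction and concentration kappa for an (n x 3) array of unit vectors."""
          n = len(vectors)
          resultant = vectors.sum(axis=0)
          R = np.linalg.norm(resultant)          # length of the resultant vector
          mean_direction = resultant / R
          kappa = (n - 1) / (n - R)              # large-kappa approximation
          return mean_direction, kappa

      # Illustrative, tightly clustered directions near the z-axis.
      rng = np.random.default_rng(0)
      v = rng.normal([0.0, 0.0, 1.0], 0.05, size=(50, 3))
      v /= np.linalg.norm(v, axis=1, keepdims=True)
      print(fisher_direction_summary(v))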

  14. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Full Text Available Parameter estimation with confidence intervals and statistical hypothesis testing are used in statistical analysis to draw conclusions about a population from a sample. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained when using confidence intervals and hypothesis tests. While statistical hypothesis tests only give a "yes" or "no" answer to certain questions, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and from findings that fall in the "marginally significant" or "almost significant" range (p very close to 0.05).

  15. A comparison of online versus face-to-face teaching delivery in statistics instruction for undergraduate health science students.

    Science.gov (United States)

    Lu, Fletcher; Lemonde, Manon

    2013-12-01

    The objective of this study was to assess if online teaching delivery produces comparable student test performance as the traditional face-to-face approach irrespective of academic aptitude. This study involves a quasi-experimental comparison of student performance in an undergraduate health science statistics course partitioned in two ways. The first partition involves one group of students taught with a traditional face-to-face classroom approach and the other through a completely online instructional approach. The second partition of the subjects categorized the academic aptitude of the students into groups of higher and lower academically performing based on their assignment grades during the course. Controls that were placed on the study to reduce the possibility of confounding variables were: the same instructor taught both groups covering the same subject information, using the same assessment methods and delivered over the same period of time. The results of this study indicate that online teaching delivery is as effective as a traditional face-to-face approach in terms of producing comparable student test performance but only if the student is academically higher performing. For academically lower performing students, the online delivery method produced significantly poorer student test results compared to those lower performing students taught in a traditional face-to-face environment.

  16. Comparison of 5 health care professionals’ratings of the clinical significance of drug related problems

    DEFF Research Database (Denmark)

    Villesen, Christine; Hojsted, Jette; Kjeldsen, Lene Juel

    2014-01-01

    to a mutual agreement on the level of clinical significance. However, to what degree does the panel agree? Purpose To compare the agreement between different health care professionals who have evaluated the clinical significance of DRPs. Materials and methods DRPs were identified in 30 comprehensive medicines reviews conducted by a clinical pharmacist. Two hospital pharmacists, a general practitioner and two specialists in pain management from hospital care (the Panel) evaluated each DRP considering the potential clinical outcome for the patient. The DRPs were rated either nil, low, minor, moderate or highly clinically significant. Agreement was analysed using Kappa statistics. A Kappa value of 0.8 to 1.0 indicated nearly perfect agreement between ratings of the Panel members. Results The Panel rated 45 percent of the total 162 DRPs as of moderate clinical significance. However, the overall kappa score was 0
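
    Agreement between panel members of the kind described above is typically quantified with chance-corrected kappa statistics. The sketch below computes pairwise Cohen's kappa for two raters on the study's five-level scale, both unweighted and linearly weighted so that near-misses on the ordinal scale receive partial credit; the ratings shown are invented examples, not study data, and the full panel analysis would extend this to all rater pairs or to a multi-rater kappa.

      from sklearn.metrics import cohen_kappa_score

      SCALE = ("nil", "low", "minor", "moderate", "high")
      rater_1 = ["moderate", "high", "low", "moderate", "nil", "minor", "moderate"]
      rater_2 = ["moderate", "moderate", "low", "high", "nil", "minor", "moderate"]

      kappa = cohen_kappa_score(rater_1, rater_2)                 # chance-corrected agreement
      weighted = cohen_kappa_score([SCALE.index(r) for r in rater_1],
                                   [SCALE.index(r) for r in rater_2],
                                   weights="linear")              # partial credit for near-misses
      print(kappa, weighted)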

  17. Statistical summary 1990-91

    International Nuclear Information System (INIS)

    1991-01-01

    The information contained in this statistical summary leaflet summarizes in bar charts or pie charts Nuclear Electric's performance in 1990-91 in the areas of finance, plant and plant operations, safety, commercial operations and manpower. It is intended that the information will provide a basis for comparison in future years. The leaflet also includes a summary of Nuclear Electric's environmental policy statement. (UK)

  18. Assessing compositional variability through graphical analysis and Bayesian statistical approaches: case studies on transgenic crops.

    Science.gov (United States)

    Harrigan, George G; Harrison, Jay M

    2012-01-01

    New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA) with its emphasis on visualization and graphics-based approaches and ii) Bayesian statistical methodology that provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case-studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.

  19. An analytical statistical approach to the 3D reconstruction problem

    Energy Technology Data Exchange (ETDEWEB)

    Cierniak, Robert [Czestochowa Univ. of Technology (Poland). Inst. of Computer Engineering

    2011-07-01

    The approach presented here is concerned with the reconstruction problem for 3D spiral X-ray tomography. The reconstruction problem is formulated taking into consideration the statistical properties of the signals obtained in X-ray CT. Additionally, the image processing performed in our approach follows an analytical methodology. This conception significantly improves the quality of the reconstructed images and decreases the complexity of the reconstruction problem in comparison with other approaches. Computer simulations showed that the reconstruction algorithm described here outperforms conventional analytical methods in terms of image quality. (orig.)

  20. Social indicators and other income statistics using the EUROMOD baseline: a comparison with Eurostat and National Statistics

    OpenAIRE

    Mantovani, Daniela; Sutherland, Holly

    2003-01-01

    This paper reports an exercise to validate EUROMOD output for 1998 by comparing income statistics calculated from the baseline micro-output with comparable statistics from other sources, including the European Community Household Panel. The main potential reasons for discrepancies are identified. While there are some specific national issues that arise, there are two main general points to consider in interpreting EUROMOD estimates of social indicators across EU member States: (a) the method ...

  1. Petroleum research: a statistical comparison and economic outlook

    Energy Technology Data Exchange (ETDEWEB)

    Perrodon, A

    1965-10-01

    The oil wealth of a country or of a sedimentary basin is quite variable. The cumulative quantities discovered, referred to the "useful" sedimentary surface, may vary from some 10 cu m of oil to many hundreds of thousands per sq km in the richest petroleum-bearing provinces. These results have been obtained after years of exploratory work, especially drilling, carried out over the last 10 or 15 yr; the effort varies from a few wells to more than one thousand per 10,000 km². This overall effort gives a first approximation of the amount of investment devoted to exploration. A comparison between the results obtained and the means employed reveals different criteria, such as the amount of oil or gas discovered per exploratory well, giving an idea of the productive capacity of a basin, of the cost of exploration, and of its variation as prospecting proceeds. Thus, the general evolution of oil exploration sets forth, successively, a period of expansion, a phase of maturity, and a period of decline. Such comparisons, which enable various petroliferous provinces to be situated more accurately, must later be completed by more thorough analyses of the geological factors underlying these different riches. (18 refs.)

  2. Possible future changes in South East Australian frost frequency: an inter-comparison of statistical downscaling approaches

    Science.gov (United States)

    Crimp, Steven; Jin, Huidong; Kokic, Philip; Bakar, Shuvo; Nicholls, Neville

    2018-04-01

    Anthropogenic climate change has already been shown to affect the frequency, intensity, spatial extent, duration and seasonality of extreme climate events. Understanding these changes is an important step in determining exposure, vulnerability and focus for adaptation. In an attempt to support adaptation decision-making we have examined statistical modelling techniques to improve the representation of global climate model (GCM) derived projections of minimum temperature extremes (frosts) in Australia. We examine the spatial changes in minimum temperature extreme metrics (e.g. monthly and seasonal frost frequency), for a region exhibiting the strongest station trends in Australia, and compare these changes with minimum temperature extreme metrics derived from 10 GCMs from the Coupled Model Inter-comparison Project Phase 5 (CMIP5) datasets and via statistical downscaling. We compare the observed trends with those derived from the "raw" GCM minimum temperature data and examine whether quantile matching (QM) or spatio-temporal modelling with quantile matching (spTimerQM) can be used to improve the correlation between observed and simulated extreme minimum temperatures. We demonstrate that the spTimerQM modelling approach provides correlations with observed daily minimum temperatures for the period August to November of 0.22. This represents an almost fourfold improvement over either the "raw" GCM or QM results. The spTimerQM modelling approach also improves correlations with observed monthly frost frequency statistics to 0.84, as opposed to 0.37 and 0.81 for the "raw" GCM and QM results respectively. We apply the spatio-temporal model to examine future extreme minimum temperature projections for the period 2016 to 2048. The spTimerQM modelling results suggest the persistence of current levels of frost risk out to 2030, with evidence of continuing decadal variation.
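
    The quantile matching (QM) step referred to above maps each modelled value onto the observed value at the same empirical quantile of a common reference period. A minimal empirical implementation is sketched below; the number of quantile points and the synthetic series are illustrative choices, not the cited study's setup.

      import numpy as np

      def quantile_match(model_series, obs_reference, model_reference, n_q=101):
          """Empirical quantile matching: transfer each modelled value to the
          observed value at the same empirical quantile of the reference period."""
          q = np.linspace(0.0, 1.0, n_q)
          obs_q = np.quantile(obs_reference, q)
          mod_q = np.quantile(model_reference, q)
          return np.interp(model_series, mod_q, obs_q)

      rng = np.random.default_rng(0)
      obs_ref = rng.normal(2.0, 3.0, 5000)        # observed minimum temperatures (illustrative)
      mod_ref = rng.normal(4.0, 2.0, 5000)        # biased GCM series for the same reference period
      mod_future = rng.normal(5.0, 2.0, 1000)     # future GCM series to be corrected
      corrected = quantile_match(mod_future, obs_ref, mod_ref)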

  3. Determining coding CpG islands by identifying regions significant for pattern statistics on Markov chains.

    Science.gov (United States)

    Singer, Meromit; Engström, Alexander; Schönhuth, Alexander; Pachter, Lior

    2011-09-23

    Recent experimental and computational work confirms that CpGs can be unmethylated inside coding exons, thereby showing that codons may be subjected to both genomic and epigenomic constraint. It is therefore of interest to identify coding CpG islands (CCGIs) that are regions inside exons enriched for CpGs. The difficulty in identifying such islands is that coding exons exhibit sequence biases determined by codon usage and constraints that must be taken into account. We present a method for finding CCGIs that showcases a novel approach we have developed for identifying regions of interest that are significant (with respect to a Markov chain) for the counts of any pattern. Our method begins with the exact computation of tail probabilities for the number of CpGs in all regions contained in coding exons, and then applies a greedy algorithm for selecting islands from among the regions. We show that the greedy algorithm provably optimizes a biologically motivated criterion for selecting islands while controlling the false discovery rate. We applied this approach to the human genome (hg18) and annotated CpG islands in coding exons. The statistical criterion we apply to evaluating islands reduces the number of false positives in existing annotations, while our approach to defining islands reveals significant numbers of undiscovered CCGIs in coding exons. Many of these appear to be examples of functional epigenetic specialization in coding exons.

  4. Comparison of base flows to selected streamflow statistics representative of 1930-2002 in West Virginia

    Science.gov (United States)

    Wiley, Jeffrey B.

    2012-01-01

    Base flows were compared with published streamflow statistics to assess climate variability and to determine the published statistics that can be substituted for annual and seasonal base flows of unregulated streams in West Virginia. The comparison study was done by the U.S. Geological Survey, in cooperation with the West Virginia Department of Environmental Protection, Division of Water and Waste Management. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Differences in mean annual base flows for five record sub-periods (1930-42, 1943-62, 1963-69, 1970-79, and 1980-2002) range from -14.9 to 14.6 percent when compared to the values for the period 1930-2002. Differences between mean seasonal base flows and values for the period 1930-2002 are less variable for winter and spring, -11.2 to 11.0 percent, than for summer and fall, -47.0 to 43.6 percent. Mean summer base flows (July-September) and mean monthly base flows for July, August, September, and October are approximately equal, within 7.4 percentage points of mean annual base flow. The mean of each of annual, spring, summer, fall, and winter base flows are approximately equal to the annual 50-percent (standard error of 10.3 percent), 45-percent (error of 14.6 percent), 75-percent (error of 11.8 percent), 55-percent (error of 11.2 percent), and 35-percent duration flows (error of 11.1 percent), respectively. The mean seasonal base flows for spring, summer, fall, and winter are approximately equal to the spring 50- to 55-percent (standard error of 6.8 percent), summer 45- to 50-percent (error of 6.7 percent), fall 45-percent (error of 15.2 percent), and winter 60-percent duration flows (error of 8.5 percent), respectively. Annual and seasonal base flows representative of the period 1930-2002 at unregulated streamflow-gaging stations and ungaged locations in West Virginia can be estimated using previously published

  5. A weighted generalized score statistic for comparison of predictive values of diagnostic tests.

    Science.gov (United States)

    Kosinski, Andrzej S

    2013-03-15

    Positive and negative predictive values are important measures of the performance of a medical diagnostic test. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen from the new re-formulation we present, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates the empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics, as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for the corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for the difference of predictive values. The introduced concepts have the potential to lead to the development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.
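
    The sketch below is not the WGS statistic itself; it is only a naive paired-bootstrap check of the difference in positive predictive values for two tests applied to the same patients, included to make the paired-design setting concrete. The 0/1 arrays for disease status and test results are hypothetical.

        # Naive paired-bootstrap comparison of positive predictive values (PPV)
        # for two diagnostic tests on the same patients. Not the WGS statistic.
        import numpy as np

        def ppv(test, disease):
            pos = test == 1
            return disease[pos].mean() if pos.any() else np.nan

        def bootstrap_ppv_diff(test1, test2, disease, n_boot=2000, seed=1):
            rng = np.random.default_rng(seed)
            n = len(disease)
            diffs = np.empty(n_boot)
            for b in range(n_boot):
                idx = rng.integers(0, n, size=n)   # resample patients, keeping the pairing
                diffs[b] = ppv(test1[idx], disease[idx]) - ppv(test2[idx], disease[idx])
            observed = ppv(test1, disease) - ppv(test2, disease)
            return observed, np.nanpercentile(diffs, [2.5, 97.5])

        rng = np.random.default_rng(0)
        disease = rng.integers(0, 2, size=300)                                   # hypothetical gold standard
        test1 = np.where(disease == 1, rng.random(300) < 0.85, rng.random(300) < 0.20).astype(int)
        test2 = np.where(disease == 1, rng.random(300) < 0.80, rng.random(300) < 0.25).astype(int)
        print(bootstrap_ppv_diff(test1, test2, disease))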

  6. Bilateral Comparison CIEMAT-CENTIS-DMR for radionuclide activity measurements

    International Nuclear Information System (INIS)

    Oropesa Verdecia, P.; Garcia-Torano, E.

    2004-01-01

    We present the results of a bilateral comparison of radionuclide activity measurements between the Radionuclide Metrology Department of the Center of Isotopes of Cuba (CENTIS-DMR), and the Ionising Radiation Metrology Laboratory (LMRI) of the Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) of Spain. The aim of the comparison was to establish the comparability of the measurement instruments and methods used to obtain radioactive reference materials of some gamma-emitting nuclides at CENTIS-DMR. The results revealed that there are no statistically significant differences between the data reported by both laboratories. (Author) 7 refs

  7. An improved Fuzzy Kappa statistic that accounts for spatial autocorrelation

    NARCIS (Netherlands)

    Hagen - Zanker, A.H.

    2009-01-01

    The Fuzzy Kappa statistic expresses the agreement between two categorical raster maps. The statistic goes beyond cell-by-cell comparison and gives partial credit to cells based on the categories found in the neighborhood. When matching categories are found at shorter distances the agreement is

  8. Statistical modelling of citation exchange between statistics journals.

    Science.gov (United States)

    Varin, Cristiano; Cattelan, Manuela; Firth, David

    2016-01-01

    Rankings of scholarly journals based on citation data are often met with scepticism by the scientific community. Part of the scepticism is due to disparity between the common perception of journals' prestige and their ranking based on citation counts. A more serious concern is the inappropriate use of journal rankings to evaluate the scientific influence of researchers. The paper focuses on analysis of the table of cross-citations among a selection of statistics journals. Data are collected from the Web of Science database published by Thomson Reuters. Our results suggest that modelling the exchange of citations between journals is useful to highlight the most prestigious journals, but also that journal citation data are characterized by considerable heterogeneity, which needs to be properly summarized. Inferential conclusions require care to avoid potential overinterpretation of insignificant differences between journal ratings. Comparison with published ratings of institutions from the UK's research assessment exercise shows strong correlation at aggregate level between assessed research quality and journal citation 'export scores' within the discipline of statistics.

  9. First international 26Al interlaboratory comparison - Part II

    International Nuclear Information System (INIS)

    Merchel, Silke; Bremser, Wolfram

    2005-01-01

    After finishing Part I of the first international 26Al interlaboratory comparison with accelerator mass spectrometry (AMS) laboratories [S. Merchel, W. Bremser, Nucl. Instr. and Meth. B 223-224 (2004) 393], the evaluation of Part II with radionuclide counting laboratories took place. The evaluation of the results of the seven participating laboratories on four meteorite samples shows a good overall agreement between laboratories, i.e. it does not reveal any statistically significant differences if results are compared sample-by-sample. However, a certain interlaboratory bias is observed with a more detailed statistical analysis including some multivariate approaches

  10. Statistical comparison of leaching behavior of incineration bottom ash using seawater and deionized water: Significant findings based on several leaching methods.

    Science.gov (United States)

    Yin, Ke; Dou, Xiaomin; Ren, Fei; Chan, Wei-Ping; Chang, Victor Wei-Chung

    2018-02-15

    Bottom ashes generated from municipal solid waste incineration have gained increasing popularity as alternative construction materials; however, they contain elevated levels of heavy metals, which poses a challenge to their unrestricted use. Different leaching methods have been developed to quantify the leaching potential of incineration bottom ash (IBA) and to guide its environmentally friendly application. Yet IBA applications are diverse and the in situ environment is often complicated, which challenges regulation. In this study, leaching tests were conducted using batch and column leaching methods with seawater as opposed to deionized water, to reveal the metal leaching potential of IBA subjected to a salty environment, which is commonly encountered when IBA is used in land reclamation yet is not well understood. Statistical analysis for the different leaching methods suggested disparate performance between seawater and deionized water, primarily ascribed to ionic strength. The impact of the leachant is metal-specific, depends on the leaching method, and is a function of the intrinsic characteristics of the incineration bottom ash. Leaching performances were further compared from additional perspectives, e.g., leaching approach and liquid-to-solid ratio, indicating complex leaching potentials dominated by the combined geochemistry. It is necessary to develop application-oriented leaching methods with corresponding leaching criteria to preclude discrepancies between different applications, e.g., terrestrial applications vs. land reclamation. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Application of sonoelastography: Comparison of performance between mass and non-mass lesion

    International Nuclear Information System (INIS)

    Ko, Eun Sook; Choi, Hye Young; Kim, Rock Bum; Noh, Woo-Chul

    2012-01-01

    Objectives: The purpose of this study was to determine the performance of conventional ultrasonography (US) and sonoelastography (SE) under three conditions (all lesions, masses only, and non-mass lesions only) and to compare the performance of each modality between mass and non-mass lesions. Materials and methods: A total of 364 patients with 375 lesions were evaluated with US and subsequently SE before undergoing US-guided biopsy. Two radiologists retrospectively analyzed conventional US and elasticity images by consensus. The US findings were classified as mass or non-mass lesions. With final pathology as the reference, areas under the ROC curves (Az) were calculated and compared for the two techniques for all lesions, masses, and non-mass lesions. The Az values were compared between US and SE, and between mass and non-mass lesions. Results: Among the 375 lesions, 104 (28%) were malignant and 271 (72%) were benign. Thirty-six (9.6%) of the 375 lesions were classified as non-mass lesions at US. There were statistically significant differences in performance between US and SE for all lesions (p = 0.003) and for masses (p = 0.023). However, there was no statistically significant difference in performance for non-mass lesions (p = 0.5). Comparisons of the Az values of US and SE between mass and non-mass lesions were not statistically significant (p = 0.745 and p = 0.415, respectively). Conclusion: There was no statistically significant difference in the performance of US and SE between mass and non-mass lesions.

  12. Study designs, use of statistical tests, and statistical analysis software choice in 2015: Results from two Pakistani monthly Medline indexed journals.

    Science.gov (United States)

    Shaikh, Masood Ali

    2017-09-01

    Assessment of research articles in terms of the study designs used, the statistical tests applied, and the statistical analysis programmes employed helps determine the research activity profile and trends in the country. In this descriptive study, all original articles published by the Journal of Pakistan Medical Association (JPMA) and the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) in the year 2015 were reviewed in terms of study designs used, application of statistical tests, and the use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in the year 2015. Results of this study indicate that the cross-sectional study design, bivariate inferential statistical analysis entailing comparison between two variables/groups, and the statistical software programme SPSS were the most common study design, inferential statistical analysis, and statistical analysis software programme, respectively. These results echo the previously published assessment of these two journals for the year 2014.

  13. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    OpenAIRE

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher's combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is evident that Fisher's statistic is more sensitive to smaller p-values than to larger ones, and a single small p-value may overrule the other p-values and decide the test result. This is, in some cases, viewed as a flaw. In order to overcome this flaw and improve the power of the test, the joint tail probability of a set of p-values is proposed as a ...
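
    For reference, Fisher's combined probability test itself is straightforward to compute: under the global null hypothesis, X = -2 Σ ln p_i follows a chi-squared distribution with 2k degrees of freedom for k independent p-values. The sketch below uses arbitrary example p-values and illustrates the sensitivity to a single small p-value noted above.

        # Fisher's method: X = -2 * sum(ln p_i) ~ chi-squared with 2k degrees of freedom.
        import numpy as np
        from scipy import stats

        def fisher_combined(pvals):
            pvals = np.asarray(pvals, dtype=float)
            x = -2.0 * np.log(pvals).sum()
            return x, stats.chi2.sf(x, df=2 * len(pvals))

        print(fisher_combined([0.20, 0.35, 0.50, 0.64]))    # nothing individually small
        print(fisher_combined([0.001, 0.35, 0.50, 0.64]))   # one tiny p-value drives the result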

  14. Practical statistics in pain research.

    Science.gov (United States)

    Kim, Tae Kyun

    2017-10-01

    Pain is subjective, while statistics related to pain research are objective. This review was written to help researchers involved in pain research make statistical decisions. The main issues are related to the level of the scales that are often used in pain research, the choice between parametric and nonparametric statistical methods, and problems that arise from repeated measurements. In the field of pain research, parametric statistics have sometimes been applied in an erroneous way. This is closely related to the scales of the data and to repeated measurements. The levels of scales include nominal, ordinal, interval, and ratio scales. The level of the scale affects the choice between parametric and non-parametric methods. In the field of pain research, the most frequently used pain assessment scale is the ordinal scale, which includes the visual analogue scale (VAS). There is another view, however, which considers the VAS to be an interval or ratio scale, so that the use of parametric statistics is accepted practically in some cases. Repeated measurements of the same subjects always complicate statistics. They mean that the measurements inevitably have correlations with each other, which precludes the application of one-way ANOVA, in which independence between the measurements is necessary. Repeated-measures ANOVA (RMANOVA), however, permits the comparison between the correlated measurements as long as the sphericity assumption is satisfied. In conclusion, parametric statistical methods should be used only when the assumptions of parametric statistics, such as normality and sphericity, are established.
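
    A small illustration of the parametric/nonparametric choice discussed above for paired pain scores (the VAS values here are simulated): the paired t-test assumes approximately normal differences, while the Wilcoxon signed-rank test only uses their signs and ranks.

        # Paired comparison of simulated VAS scores before and after treatment.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        vas_before = rng.normal(7.0, 1.2, size=25).clip(0, 10)
        vas_after = (vas_before - rng.normal(1.5, 1.0, size=25)).clip(0, 10)

        t_stat, p_t = stats.ttest_rel(vas_before, vas_after)   # parametric, assumes normal differences
        w_stat, p_w = stats.wilcoxon(vas_before, vas_after)    # nonparametric, ordinal-friendly
        print(f"paired t-test:        t = {t_stat:.2f}, p = {p_t:.4f}")
        print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {p_w:.4f}")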

  15. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    Science.gov (United States)

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. The choice of estimator may affect the Nakagami parameter in the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and the maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
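
    As a concrete reference point, the moment-based estimator mentioned above can be written from the envelope samples r as Ω = E[r²] and m = (E[r²])² / Var(r²); the sketch below checks it on synthetic Nakagami data. The MLE approximations compared in the study (MLE1, MLE2, MLEgw) are not reproduced here.

        # Moment-based estimator (MBE) of the Nakagami shape (m) and scale (omega) parameters.
        import numpy as np

        def nakagami_mbe(r):
            r2 = np.asarray(r, dtype=float) ** 2
            omega = r2.mean()
            m = omega**2 / r2.var()
            return m, omega

        # Synthetic check: if R ~ Nakagami(m, omega), then R^2 ~ Gamma(shape=m, scale=omega/m).
        rng = np.random.default_rng(0)
        m_true, omega_true = 1.5, 2.0
        r = np.sqrt(rng.gamma(shape=m_true, scale=omega_true / m_true, size=5000))
        print(nakagami_mbe(r))   # should be close to (1.5, 2.0)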

  16. Student Engagement Theory: A Comparison of Jesuit, Catholic, and Christian Universities

    Science.gov (United States)

    Williamson, Robin Marie

    2010-01-01

    This research study analyzed the results of the Jesuit Universities Consortium in comparison with the results of the Catholic Colleges and Universities and the Council for Christian Colleges Consortia as measured by the 2005 National Survey of Student Engagement (NSSE) in order to determine and identify any statistically significant differences…

  17. Dynamical and statistical downscaling of precipitation and temperature in a Mediterranean area

    KAUST Repository

    Pizzigalli, Claudia; Palatella, L.; Zampieri, M.; Lionello, P.; Miglietta, M.M.; Paradisi, P.

    2012-01-01

    In this paper we present and discuss a comparison between statistical and regional climate modeling techniques for downscaling GCM prediction . The comparison is carried out over the “Capitanata” region, an area of agricultural interest in south

  18. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V): A Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    Science.gov (United States)

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.

  19. Technical issues relating to the statistical parametric mapping of brain SPECT studies

    International Nuclear Information System (INIS)

    Hatton, R.L.; Cordato, N.; Hutton, B.F.; Lau, Y.H.; Evans, S.G.

    2000-01-01

    Full text: Statistical Parametric Mapping (SPM) is a software tool designed for the statistical analysis of functional neuroimages, specifically Positron Emission Tomography and functional Magnetic Resonance Imaging, and more recently SPECT. This review examines some problems associated with the analysis of SPECT. A comparison of a patient group with normal studies revealed factors that could influence results, some that commonly occur, others that require further exploration. To optimise the differences between two groups of subjects, both spatial variability and differences in global activity must be minimised. The choice and effectiveness of the co-registration method and the approach to normalisation of activity concentration can affect the optimisation. A small number of subject scans were identified as possessing truncated data resulting in edge effects that could adversely influence the analysis. Other problems included unusual areas of significance possibly related to reconstruction methods and the geometry associated with nonparallel collimators. Areas of extracerebral significance are a point of concern and may result from scatter effects or mis-registration. Difficulties in patient positioning, due to postural limitations, can lead to resolution differences. SPM has been used to assess areas of statistical significance arising from these technical factors, as opposed to areas of true clinical significance when comparing subject groups. This contributes to a better understanding of the effects of technical factors so that these may be eliminated, minimised, or incorporated in the study design. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc

  20. Is it safe to use Poisson statistics in nuclear spectrometry?

    International Nuclear Information System (INIS)

    Pomme, S.; Robouch, P.; Arana, G.; Eguskiza, M.; Maguregui, M.I.

    2000-01-01

    The boundary conditions in which Poisson statistics can be applied in nuclear spectrometry are investigated. Improved formulas for the uncertainty of nuclear counting with deadtime and pulse pileup are presented. A comparison is made between the expected statistical uncertainty for loss-free counting, fixed live-time and fixed real-time measurements. (author)
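
    The basic Poisson relations behind the abstract are easy to state: the standard deviation of a count N is √N, and a simple non-paralyzable dead-time correction for a measured rate n is n/(1 - nτ). The sketch below uses hypothetical numbers and is not the paper's improved formulas for pile-up and loss-free counting.

        # Basic Poisson counting uncertainty and a simple non-paralyzable dead-time correction.
        import math

        counts = 40000            # hypothetical net counts in a peak
        live_time = 600.0         # s
        tau = 5e-6                # s, hypothetical dead time per event

        sigma = math.sqrt(counts)
        rate_meas = counts / live_time
        rate_true = rate_meas / (1.0 - rate_meas * tau)

        print(f"count = {counts} +/- {sigma:.0f}  ({100 * sigma / counts:.2f}% relative)")
        print(f"measured rate = {rate_meas:.2f} /s, dead-time corrected = {rate_true:.2f} /s")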

  1. Selenium determination in biological materials by neutron activation analysis - statistical comparison between the use of 77mSe (t1/2 = 17.45 s) and 75Se (t1/2 = 119.8 d)

    International Nuclear Information System (INIS)

    Catharino, Marilia G.M.; Vasconcelos, Marina B.A.

    2002-01-01

    Selenium is nowadays considered to be an essential trace element in the human diet. The most extensively studied biochemical role of this element is related to its participation in the composition of glutathione peroxidase. This enzyme acts as an antioxidant for the free radicals formed in the human body. In the present work, selenium was determined by INAA in reference materials ('Human hair' IAEA-085, 'Human hair' IAEA-086, 'Dogfish Liver' DOLT-1 and 'Dogfish Muscle' DORM-1) and in toenails and a vitamin supplement, using the short-lived radioisotope 77mSe. The usual method, which utilizes the long-lived 75Se, was also employed in order to make a comparative study. A statistical test was applied for this comparison. It was verified that the average concentrations of selenium, in the reference materials and in the samples analyzed, do not differ statistically at a significance level of 0.05, which indicates the applicability of the short-lived 77mSe for INAA of the matrices studied. (author)

  2. Surveys Assessing Students' Attitudes toward Statistics: A Systematic Review of Validity and Reliability

    Science.gov (United States)

    Nolan, Meaghan M.; Beran, Tanya; Hecker, Kent G.

    2012-01-01

    Students with positive attitudes toward statistics are likely to show strong academic performance in statistics courses. Multiple surveys measuring students' attitudes toward statistics exist; however, a comparison of the validity and reliability of interpretations based on their scores is needed. A systematic review of relevant electronic…

  3. ArrayVigil: a methodology for statistical comparison of gene signatures using segregated-one-tailed (SOT) Wilcoxon's signed-rank test.

    Science.gov (United States)

    Khan, Haseeb Ahmad

    2005-01-28

    Due to their versatile diagnostic and prognostic fidelity, molecular signatures or fingerprints are anticipated to be among the most powerful tools for cancer management in the near future. Notwithstanding the experimental advancements in microarray technology, methods for analyzing either whole arrays or gene signatures have not been firmly established. Recently, an algorithm, ArraySolver, has been reported by Khan for two-group comparison of microarray gene expression data using the two-tailed Wilcoxon signed-rank test. Most molecular signatures are composed of two sets of genes (hybrid signatures) wherein up-regulation of one set and down-regulation of the other set collectively define the purpose of the gene signature. Since the direction of a selected gene's expression (positive or negative) with respect to a particular disease condition is known, application of one-tailed statistics could be a more relevant choice. A novel method, ArrayVigil, is described for comparing hybrid signatures using a segregated-one-tailed (SOT) Wilcoxon signed-rank test, and the results are compared with integrated-two-tailed (ITT) procedures (SPSS and ArraySolver). ArrayVigil resulted in lower P values than those obtained from ITT statistics when comparing real data from four signatures.
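
    The sketch below is in the spirit of the segregated one-tailed idea described above, not the published ArrayVigil code: the up- and down-regulated halves of a hypothetical hybrid signature are tested separately with one-tailed Wilcoxon signed-rank tests, pairing gene values between the two conditions.

        # Segregated one-tailed testing of a hybrid signature (illustrative, synthetic data).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        n_up, n_down = 12, 10    # genes in the up- and down-regulated halves of a hypothetical signature
        up_ctrl = rng.normal(6, 1, n_up);    up_case = up_ctrl + rng.normal(1.0, 0.5, n_up)
        down_ctrl = rng.normal(6, 1, n_down); down_case = down_ctrl - rng.normal(1.0, 0.5, n_down)

        # One-tailed tests in the expected directions, gene values paired between conditions.
        p_up = stats.wilcoxon(up_case, up_ctrl, alternative="greater").pvalue
        p_down = stats.wilcoxon(down_case, down_ctrl, alternative="less").pvalue
        print(f"up-set p = {p_up:.4f}, down-set p = {p_down:.4f}")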

  4. A comparison of {sup 99m}Tc-HMPAO SPET changes in dementia with Lewy bodies and Alzheimer's disease using statistical parametric mapping

    Energy Technology Data Exchange (ETDEWEB)

    Colloby, Sean J.; Paling, Sean M.; Lobotesis, Kyriakos; Ballard, Clive; McKeith, Ian; O' Brien, John T. [Wolfson Research Centre, Institute for Ageing and Health, Newcastle upon Tyne (United Kingdom); Fenwick, John D. [Regional Medical Physics Department, Newcastle General Hospital, Newcastle upon Tyne (United Kingdom); Williams, David E. [Regional Medical Physics Department, Sunderland Royal Hospital (United Kingdom)

    2002-05-01

    Differences in regional cerebral blood flow (rCBF) between subjects with Alzheimer's disease (AD), dementia with Lewy bodies (DLB) and healthy volunteers were investigated using statistical parametric mapping (SPM99). Forty-eight AD, 23 DLB and 20 age-matched control subjects participated. Technetium-99m hexamethylpropylene amine oxime (HMPAO) brain single-photon emission tomography (SPET) scans were acquired for each subject using a single-headed rotating gamma camera (IGE CamStar XR/T). The SPET images were spatially normalised and group comparison was performed by SPM99. In addition, covariate analysis was undertaken on the standardised images taking the Mini Mental State Examination (MMSE) scores as a variable. Applying a height threshold of P≤0.001 uncorrected, significant perfusion deficits in the parietal and frontal regions of the brain were observed in both AD and DLB groups compared with the control subjects. In addition, significant temporoparietal perfusion deficits were identified in the AD subjects, whereas the DLB patients had deficits in the occipital region. Comparison of dementia groups (height threshold of P≤0.01 uncorrected) yielded hypoperfusion in both the parietal [Brodmann area (BA) 7] and occipital (BA 17, 18) regions of the brain in DLB compared with AD. Abnormalities in these areas, which included visual cortex and several areas involved in higher visual processing and visuospatial function, may be important in understanding the visual hallucinations and visuospatial deficits which are characteristic of DLB. Covariate analysis indicated group differences between AD and DLB in terms of a positive correlation between cognitive test score and temporoparietal blood flow. In conclusion, we found evidence of frontal and parietal hypoperfusion in both AD and DLB, while temporal perfusion deficits were observed exclusively in AD and parieto-occipital deficits in DLB. (orig.)

  5. Comparison of small n statistical tests of differential expression applied to microarrays

    Directory of Open Access Journals (Sweden)

    Lee Anna Y

    2009-02-01

    Full Text Available Abstract Background DNA microarrays provide data for genome wide patterns of expression between observation classes. Microarray studies often have small sample sizes, however, due to cost constraints or specimen availability. This can lead to poor random error estimates and inaccurate statistical tests of differential expression. We compare the performance of the standard t-test, fold change, and four small n statistical test methods designed to circumvent these problems. We report results of various normalization methods for empirical microarray data and of various random error models for simulated data. Results Three Empirical Bayes methods (CyberT, BRB, and limma t-statistics) were the most effective statistical tests across simulated and both 2-colour cDNA and Affymetrix experimental data. The CyberT regularized t-statistic in particular was able to maintain expected false positive rates with simulated data showing high variances at low gene intensities, although at the cost of low true positive rates. The Local Pooled Error (LPE) test introduced a bias that lowered false positive rates below theoretically expected values and had lower power relative to the top performers. The standard two-sample t-test and fold change were also found to be sub-optimal for detecting differentially expressed genes. The generalized log transformation was shown to be beneficial in improving results with certain data sets, in particular high variance cDNA data. Conclusion Pre-processing of data influences performance and the proper combination of pre-processing and statistical testing is necessary for obtaining the best results. All three Empirical Bayes methods assessed in our study are good choices for statistical tests for small n microarray studies for both Affymetrix and cDNA data. Choice of method for a particular study will depend on software and normalization preferences.

  6. Local sequence alignments statistics: deviations from Gumbel statistics in the rare-event tail

    Directory of Open Access Journals (Sweden)

    Burghardt Bernd

    2007-07-01

    Full Text Available Abstract Background The optimal score for ungapped local alignments of infinitely long random sequences is known to follow a Gumbel extreme value distribution. Less is known about the important case, where gaps are allowed. For this case, the distribution is only known empirically in the high-probability region, which is biologically less relevant. Results We provide a method to obtain numerically the biologically relevant rare-event tail of the distribution. The method, which has been outlined in an earlier work, is based on generating the sequences with a parametrized probability distribution, which is biased with respect to the original biological one, in the framework of Metropolis Coupled Markov Chain Monte Carlo. Here, we first present the approach in detail and evaluate the convergence of the algorithm by considering a simple test case. In the earlier work, the method was just applied to one single example case. Therefore, we consider here a large set of parameters: We study the distributions for protein alignment with different substitution matrices (BLOSUM62 and PAM250) and affine gap costs with different parameter values. In the logarithmic phase (large gap costs) it was previously assumed that the Gumbel form still holds, hence the Gumbel distribution is usually used when evaluating p-values in databases. Here we show that for all cases, provided that the sequences are not too long (L > 400), a "modified" Gumbel distribution, i.e. a Gumbel distribution with an additional Gaussian factor is suitable to describe the data. We also provide a "scaling analysis" of the parameters used in the modified Gumbel distribution. Furthermore, via a comparison with BLAST parameters, we show that significance estimations change considerably when using the true distributions as presented here. Finally, we study also the distribution of the sum statistics of the k best alignments. Conclusion Our results show that the statistics of gapped and ungapped local
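
    As a minimal illustration of how Gumbel statistics are used for alignment scores (not the authors' rare-event sampling method), the sketch below fits a Gumbel distribution to a set of null scores and evaluates the survival function at an observed score; the scores are simulated placeholders rather than real alignment output.

        # Fit a Gumbel distribution to null alignment scores and use its tail as a p-value.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        null_scores = stats.gumbel_r.rvs(loc=30, scale=5, size=2000, random_state=rng)  # placeholder scores

        loc, scale = stats.gumbel_r.fit(null_scores)
        observed_score = 55.0
        p_value = stats.gumbel_r.sf(observed_score, loc=loc, scale=scale)
        print(f"fitted loc = {loc:.2f}, scale = {scale:.2f}, P(S >= {observed_score}) = {p_value:.3e}")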

  7. Statistical analysis and digital processing of the Mössbauer spectra

    International Nuclear Information System (INIS)

    Prochazka, Roman; Tucek, Jiri; Mashlan, Miroslav; Pechousek, Jiri; Tucek, Pavel; Marek, Jaroslav

    2010-01-01

    This work is focused on using the statistical methods and development of the filtration procedures for signal processing in Mössbauer spectroscopy. Statistical tools for noise filtering in the measured spectra are used in many scientific areas. The use of a pure statistical approach in accumulated Mössbauer spectra filtration is described. In Mössbauer spectroscopy, the noise can be considered as a Poisson statistical process with a Gaussian distribution for high numbers of observations. This noise is a superposition of the non-resonant photons counting with electronic noise (from γ-ray detection and discrimination units), and the velocity system quality that can be characterized by the velocity nonlinearities. The possibility of a noise-reducing process using a new design of statistical filter procedure is described. This mathematical procedure improves the signal-to-noise ratio and thus makes it easier to determine the hyperfine parameters of the given Mössbauer spectra. The filter procedure is based on a periodogram method that makes it possible to assign the statistically important components in the spectral domain. The significance level for these components is then feedback-controlled using the correlation coefficient test results. The estimation of the theoretical correlation coefficient level which corresponds to the spectrum resolution is performed. Correlation coefficient test is based on comparison of the theoretical and the experimental correlation coefficients given by the Spearman method. The correctness of this solution was analyzed by a series of statistical tests and confirmed by many spectra measured with increasing statistical quality for a given sample (absorber). The effect of this filter procedure depends on the signal-to-noise ratio and the applicability of this method has binding conditions

  8. Statistical analysis and digital processing of the Mössbauer spectra

    Science.gov (United States)

    Prochazka, Roman; Tucek, Pavel; Tucek, Jiri; Marek, Jaroslav; Mashlan, Miroslav; Pechousek, Jiri

    2010-02-01

    This work is focused on using the statistical methods and development of the filtration procedures for signal processing in Mössbauer spectroscopy. Statistical tools for noise filtering in the measured spectra are used in many scientific areas. The use of a pure statistical approach in accumulated Mössbauer spectra filtration is described. In Mössbauer spectroscopy, the noise can be considered as a Poisson statistical process with a Gaussian distribution for high numbers of observations. This noise is a superposition of the non-resonant photons counting with electronic noise (from γ-ray detection and discrimination units), and the velocity system quality that can be characterized by the velocity nonlinearities. The possibility of a noise-reducing process using a new design of statistical filter procedure is described. This mathematical procedure improves the signal-to-noise ratio and thus makes it easier to determine the hyperfine parameters of the given Mössbauer spectra. The filter procedure is based on a periodogram method that makes it possible to assign the statistically important components in the spectral domain. The significance level for these components is then feedback-controlled using the correlation coefficient test results. The estimation of the theoretical correlation coefficient level which corresponds to the spectrum resolution is performed. Correlation coefficient test is based on comparison of the theoretical and the experimental correlation coefficients given by the Spearman method. The correctness of this solution was analyzed by a series of statistical tests and confirmed by many spectra measured with increasing statistical quality for a given sample (absorber). The effect of this filter procedure depends on the signal-to-noise ratio and the applicability of this method has binding conditions.
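
    The sketch below is a greatly simplified illustration of periodogram-style filtering, not the authors' feedback-controlled procedure with the Spearman correlation test: Fourier components whose periodogram power exceeds a crude threshold are kept and the rest are zeroed before inverting the transform. The "spectrum" is synthetic.

        # Crude periodogram-threshold filtering of a synthetic two-dip absorption spectrum.
        import numpy as np

        rng = np.random.default_rng(5)
        x = np.linspace(-10, 10, 512)
        clean = 1.0 - 0.3 * np.exp(-(x - 1) ** 2 / 0.5) - 0.3 * np.exp(-(x + 1) ** 2 / 0.5)
        noisy = clean + rng.normal(0, 0.02, x.size)   # counting noise approximated as Gaussian

        spec = np.fft.rfft(noisy)
        power = np.abs(spec) ** 2
        threshold = 5.0 * np.median(power)            # crude stand-in for a significance test
        filtered = np.fft.irfft(np.where(power > threshold, spec, 0), n=noisy.size)

        print("residual std before:", np.std(noisy - clean).round(4),
              "after:", np.std(filtered - clean).round(4))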

  9. After statistics reform : Should we still teach significance testing?

    NARCIS (Netherlands)

    A. Hak (Tony)

    2014-01-01

    textabstractIn the longer term null hypothesis significance testing (NHST) will disappear because p- values are not informative and not replicable. Should we continue to teach in the future the procedures of then abolished routines (i.e., NHST)? Three arguments are discussed for not teaching NHST in

  10. Statistical comparison of spectral and biochemical measurements on an example of Norway spruce stands in the Ore Mountains, Czech Republic

    Directory of Open Access Journals (Sweden)

    Markéta Potůčková

    2016-07-01

    Full Text Available The physiological status of vegetation and changes thereto can be monitored by means of biochemical analysis of collected samples as well as by means of spectroscopic measurements, either on the leaf level, using field (or laboratory) spectroradiometers, or on the canopy level, applying hyperspectral airborne or spaceborne image data. The presented study focuses on the statistical comparison and ascertainment of relations between three datasets collected from selected Norway spruce forest stands in the Ore Mountains, Czechia. The data sets comprise (i) photosynthetic pigments (chlorophylls, carotenoids) and water content of 495 samples collected from 55 trees from three different vertical levels and the first three needle age classes, (ii) the spectral reflectance of the same samples measured with an ASD Field Spec 4 Wide-Res spectroradiometer equipped with a plant contact probe, and (iii) an airborne hyperspectral image acquired with an Apex sensor. The datasets cover two localities in the Ore Mountains that were affected differently by acid deposits in the 1970s and 1980s. A one-way analysis of variance (ANOVA), Tukey’s honest significance test, hot spot analysis and linear regression were applied either to the original measurements (the content of leaf compounds and reflectance spectra) or to derived values, i.e., selected spectral indices. The results revealed a generally low correlation between the photosynthetic pigments, water content and spectral measurement. The results of the ANOVA showed significant differences between sites (model areas) only in the case of the leaf compound dataset. Differences between the stands on various levels of significance exist in all three datasets and are explained in detail. The study also proved that the vertical gradient of the biochemical and biophysical parameters in the canopy plays a role when the optical properties of the forest stands are modelled.
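
    The sketch below mirrors the kind of comparison described above with a one-way ANOVA followed by Tukey's HSD on a hypothetical spectral index measured at three stands; the data are synthetic and the index values are only placeholders.

        # One-way ANOVA followed by Tukey's HSD on synthetic stand-level index values.
        import numpy as np
        from scipy import stats
        from statsmodels.stats.multicomp import pairwise_tukeyhsd

        rng = np.random.default_rng(11)
        stand_a = rng.normal(0.55, 0.05, 30)   # e.g., a chlorophyll-sensitive spectral index
        stand_b = rng.normal(0.50, 0.05, 30)
        stand_c = rng.normal(0.48, 0.05, 30)

        f, p = stats.f_oneway(stand_a, stand_b, stand_c)
        print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

        values = np.concatenate([stand_a, stand_b, stand_c])
        groups = ["A"] * 30 + ["B"] * 30 + ["C"] * 30
        print(pairwise_tukeyhsd(values, groups, alpha=0.05))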

  11. Advances in ranking and selection, multiple comparisons, and reliability methodology and applications

    CERN Document Server

    Balakrishnan, N; Nagaraja, HN

    2007-01-01

    S. Panchapakesan has made significant contributions to ranking and selection and has published in many other areas of statistics, including order statistics, reliability theory, stochastic inequalities, and inference. Written in his honor, the twenty invited articles in this volume reflect recent advances in these areas and form a tribute to Panchapakesan's influence and impact on these areas. Thematically organized, the chapters cover a broad range of topics from: Inference; Ranking and Selection; Multiple Comparisons and Tests; Agreement Assessment; Reliability; and Biostatistics. Featuring

  12. Consistent dynamical and statistical description of fission and comparison

    Energy Technology Data Exchange (ETDEWEB)

    Shunuan, Wang [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    A survey of research on the consistent dynamical and statistical description of fission is briefly presented. The channel theory of fission with diffusive dynamics, based on the Bohr channel theory of fission and the Fokker-Planck equation, and the Kramers-modified Bohr-Wheeler expression according to the Strutinsky method given by P. Frobrich et al. are compared and analyzed. (2 figs.).

  13. Worry, Intolerance of Uncertainty, and Statistics Anxiety

    Science.gov (United States)

    Williams, Amanda S.

    2013-01-01

    Statistics anxiety is a problem for most graduate students. This study investigates the relationship between intolerance of uncertainty, worry, and statistics anxiety. Intolerance of uncertainty was significantly related to worry, and worry was significantly related to three types of statistics anxiety. Six types of statistics anxiety were…

  14. Evaluation of significantly modified water bodies in Vojvodina by using multivariate statistical techniques

    Directory of Open Access Journals (Sweden)

    Vujović Svetlana R.

    2013-01-01

    Full Text Available This paper illustrates the utility of multivariate statistical techniques for the analysis and interpretation of water quality data sets and the identification of pollution sources/factors, with a view to obtaining better information about water quality and the design of a monitoring network for effective management of water resources. Multivariate statistical techniques, such as factor analysis (FA)/principal component analysis (PCA) and cluster analysis (CA), were applied for the evaluation of variations and for the interpretation of a water quality data set of natural water bodies obtained during the 2010 year of monitoring of 13 parameters at 33 different sites. FA/PCA attempts to explain the correlations between the observations in terms of underlying factors, which are not directly observable. Factor analysis is applied to the physico-chemical parameters of natural water bodies with the aim of classification and data summation as well as segmentation of heterogeneous data sets into smaller homogeneous subsets. Factor loadings were categorized as strong and moderate, corresponding to absolute loading values of >0.75 and 0.75-0.50, respectively. Four principal factors were obtained with eigenvalues >1, summing to more than 78% of the total variance in the water data sets, which is adequate to give good prior information regarding the data structure. Each factor that is significantly related to specific variables represents a different dimension of water quality. The first factor, F1, accounts for 28% of the total variance and represents the hydrochemical dimension of water quality. The second factor, F2, accounts for 18% of the total variance and may be taken as a factor of water eutrophication. The third factor, F3, accounts for 17% of the total variance and represents the influence of point sources of pollution on water quality. The fourth factor, F4, accounts for 13% of the total variance and may be taken as an ecological dimension of water quality. Cluster analysis (CA) is an
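
    A compact sketch of the FA/PCA step described above: standardize the measured parameters, run PCA, and keep components with eigenvalue > 1 (the Kaiser criterion used in the abstract). The data matrix here is random and merely stands in for 33 sites by 13 parameters.

        # PCA with eigenvalue > 1 retention on a placeholder sites-by-parameters matrix.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)
        X = rng.normal(size=(33, 13))                 # placeholder for measured water-quality parameters

        Xs = StandardScaler().fit_transform(X)
        pca = PCA().fit(Xs)
        eigenvalues = pca.explained_variance_
        keep = eigenvalues > 1.0
        print("eigenvalues:", np.round(eigenvalues, 2))
        print("components kept:", keep.sum(),
              "explaining", round(100 * pca.explained_variance_ratio_[keep].sum(), 1), "% of variance")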

  15. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    Science.gov (United States)

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8 and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  16. The clinical significances of the abnormal expressions of Piwil1 and Piwil2 in colonic adenoma and adenocarcinoma

    Directory of Open Access Journals (Sweden)

    Wang HL

    2015-05-01

    Full Text Available Objective: The objective of the present investigation was to study the clinical significance of the abnormal expression of Piwil1 and Piwil2 protein in colonic adenoma and adenocarcinoma. Methods: This study applied an immunohistochemical method to detect the Piwil1 and Piwil2 protein expression levels in 45 cases of tissues adjacent to carcinoma (distance to cancerous tissue above 5 cm), 41 cases of colonic adenoma, and 92 cases of colon cancer tissues. Analysis: The correlation of the two expressions and their relationship with the clinicopathological features of colon cancer were analyzed. Results: The positive expression rates of Piwil1 in tissues adjacent to carcinoma, colonic adenoma, and colon cancer were 11.1% (5/45), 53.7% (22/41), and 80.4% (74/92), respectively; the expression rates increased, and the comparisons between each two groups were statistically significant (P<0.05). In each group, the positive expression rates of Piwil2 were 24.4% (11/45 cases), 75.6% (31/41 cases), and 92.4% (85/92 cases); the expression rates increased, and the comparisons between each two groups were statistically significant (P<0.05). The correlations of Piwil1 expression with the degree of differentiation, TNM stage, and lymph node metastasis were statistically significant (P<0.05). The correlations of Piwil2 expression with the degree of differentiation, tumor node metastasis (TNM) stage, and lymph node metastasis were not statistically significant (P>0.05). In colon cancer tissue, Piwil1 and Piwil2 expressions were positively correlated (r=0.262, P<0.05). Conclusion: The results showed that the abnormal expression of Piwil1 and Piwil2 might play an important role in

  17. Characterization of microbial associations with methanotrophic archaea and sulfate-reducing bacteria through statistical comparison of nested Magneto-FISH enrichments

    Directory of Open Access Journals (Sweden)

    Elizabeth Trembath-Reichert

    2016-04-01

    Full Text Available Methane seep systems along continental margins host diverse and dynamic microbial assemblages, sustained in large part through the microbially mediated process of sulfate-coupled Anaerobic Oxidation of Methane (AOM). This methanotrophic metabolism has been linked to consortia of anaerobic methane-oxidizing archaea (ANME) and sulfate-reducing bacteria (SRB). These two groups are the focus of numerous studies; however, less is known about the wide diversity of other seep associated microorganisms. We selected a hierarchical set of FISH probes targeting a range of Deltaproteobacteria diversity. Using the Magneto-FISH enrichment technique, we then magnetically captured CARD-FISH hybridized cells and their physically associated microorganisms from a methane seep sediment incubation. DNA from nested Magneto-FISH experiments was analyzed using Illumina tag 16S rRNA gene sequencing (iTag). Enrichment success and potential bias with iTag was evaluated in the context of full-length 16S rRNA gene clone libraries, CARD-FISH, functional gene clone libraries, and iTag mock communities. We determined commonly used Earth Microbiome Project (EMP) iTAG primers introduced bias in some common methane seep microbial taxa that reduced the ability to directly compare OTU relative abundances within a sample, but comparison of relative abundances between samples (in nearly all cases) and whole community-based analyses were robust. The iTag dataset was subjected to statistical co-occurrence measures of the most abundant OTUs to determine which taxa in this dataset were most correlated across all samples. Many non-canonical microbial partnerships were statistically significant in our co-occurrence network analysis, most of which were not recovered with conventional clone library sequencing, demonstrating the utility of combining Magneto-FISH and iTag sequencing methods for hypothesis generation of associations within complex microbial communities. Network analysis pointed to

  18. Characterization of microbial associations with methanotrophic archaea and sulfate-reducing bacteria through statistical comparison of nested Magneto-FISH enrichments.

    Science.gov (United States)

    Trembath-Reichert, Elizabeth; Case, David H; Orphan, Victoria J

    2016-01-01

    Methane seep systems along continental margins host diverse and dynamic microbial assemblages, sustained in large part through the microbially mediated process of sulfate-coupled Anaerobic Oxidation of Methane (AOM). This methanotrophic metabolism has been linked to consortia of anaerobic methane-oxidizing archaea (ANME) and sulfate-reducing bacteria (SRB). These two groups are the focus of numerous studies; however, less is known about the wide diversity of other seep associated microorganisms. We selected a hierarchical set of FISH probes targeting a range of Deltaproteobacteria diversity. Using the Magneto-FISH enrichment technique, we then magnetically captured CARD-FISH hybridized cells and their physically associated microorganisms from a methane seep sediment incubation. DNA from nested Magneto-FISH experiments was analyzed using Illumina tag 16S rRNA gene sequencing (iTag). Enrichment success and potential bias with iTag was evaluated in the context of full-length 16S rRNA gene clone libraries, CARD-FISH, functional gene clone libraries, and iTag mock communities. We determined commonly used Earth Microbiome Project (EMP) iTAG primers introduced bias in some common methane seep microbial taxa that reduced the ability to directly compare OTU relative abundances within a sample, but comparison of relative abundances between samples (in nearly all cases) and whole community-based analyses were robust. The iTag dataset was subjected to statistical co-occurrence measures of the most abundant OTUs to determine which taxa in this dataset were most correlated across all samples. Many non-canonical microbial partnerships were statistically significant in our co-occurrence network analysis, most of which were not recovered with conventional clone library sequencing, demonstrating the utility of combining Magneto-FISH and iTag sequencing methods for hypothesis generation of associations within complex microbial communities. Network analysis pointed to many co

  19. Statistical methods for change-point detection in surface temperature records

    Science.gov (United States)

    Pintar, A. L.; Possolo, A.; Zhang, N. F.

    2013-09-01

    We describe several statistical methods to detect possible change-points in a time series of values of surface temperature measured at a meteorological station, and to assess the statistical significance of such changes, taking into account the natural variability of the measured values, and the autocorrelations between them. These methods serve to determine whether the record may suffer from biases unrelated to the climate signal, hence whether there may be a need for adjustments as considered by M. J. Menne and C. N. Williams (2009) "Homogenization of Temperature Series via Pairwise Comparisons", Journal of Climate 22 (7), 1700-1717. We also review methods to characterize patterns of seasonality (seasonal decomposition using monthly medians or robust local regression), and explain the role they play in the imputation of missing values, and in enabling robust decompositions of the measured values into a seasonal component, a possible climate signal, and a station-specific remainder. The methods for change-point detection that we describe include statistical process control, wavelet multi-resolution analysis, adaptive weights smoothing, and a Bayesian procedure, all of which are applicable to single station records.
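
    A much simpler illustration than the methods reviewed above: a cumulative-sum (CUSUM) excursion on a synthetic, mean-shifted series, ignoring autocorrelation and significance assessment, with the largest |CUSUM| value marking the candidate change-point.

        # Minimal CUSUM change-point illustration on synthetic monthly temperature anomalies.
        import numpy as np

        rng = np.random.default_rng(9)
        series = np.concatenate([rng.normal(0.0, 0.3, 120), rng.normal(0.5, 0.3, 120)])

        deviations = series - series.mean()
        cusum = np.cumsum(deviations)
        changepoint = int(np.argmax(np.abs(cusum)))
        print(f"candidate change-point at index {changepoint} (true shift at 120)")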

  20. Statistical criteria for characterizing irradiance time series.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Ellis, Abraham; Hansen, Clifford W.

    2010-10-01

    We propose and examine several statistical criteria for characterizing time series of solar irradiance. Time series of irradiance are used in analyses that seek to quantify the performance of photovoltaic (PV) power systems over time. Time series of irradiance are either measured or are simulated using models. Simulations of irradiance are often calibrated to or generated from statistics for observed irradiance and simulations are validated by comparing the simulation output to the observed irradiance. Criteria used in this comparison should derive from the context of the analyses in which the simulated irradiance is to be used. We examine three statistics that characterize time series and their use as criteria for comparing time series. We demonstrate these statistics using observed irradiance data recorded in August 2007 in Las Vegas, Nevada, and in June 2009 in Albuquerque, New Mexico.

  1. Dynamical and statistical downscaling of precipitation and temperature in a Mediterranean area

    KAUST Repository

    Pizzigalli, Claudia

    2012-03-28

    In this paper we present and discuss a comparison between statistical and regional climate modeling techniques for downscaling GCM predictions. The comparison is carried out over the “Capitanata” region, an area of agricultural interest in south-eastern Italy, for current (1961-1990) and future (2071-2100) climate. The statistical model is based on Canonical Correlation Analysis (CCA), associated with a data pre-filtering obtained by a Principal Component Analysis (PCA), whereas the Regional Climate Model REGCM3 was used for dynamical downscaling. Downscaling techniques were applied to estimate rainfall, maximum and minimum temperatures and the average number of consecutive wet and dry days. Both methods have comparable skill in estimating station data. They show good results for spring, the most important season for agriculture. Both the statistical and dynamical models reproduce well the statistical properties of precipitation, the crucial variable for the growth of crops.
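
    The sketch below is a rough stand-in for the statistical-downscaling chain described above (PCA pre-filtering followed by canonical correlation analysis); the predictor and predictand fields are random placeholders for GCM grid values and station observations.

        # PCA pre-filtering followed by CCA regression, on placeholder predictor/predictand fields.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(4)
        large_scale = rng.normal(size=(360, 50))    # e.g., monthly GCM grid values, flattened
        stations = rng.normal(size=(360, 8))        # e.g., station rainfall for the same months

        pcs = PCA(n_components=10).fit_transform(large_scale)   # pre-filtering step
        cca = CCA(n_components=3).fit(pcs, stations)
        stations_hat = cca.predict(pcs)
        print("canonical-prediction shape:", stations_hat.shape)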

  2. A novel complete-case analysis to determine statistical significance between treatments in an intention-to-treat population of randomized clinical trials involving missing data.

    Science.gov (United States)

    Liu, Wei; Ding, Jinhui

    2018-04-01

    The application of the principle of the intention-to-treat (ITT) to the analysis of clinical trials is challenged in the presence of missing outcome data. The consequences of stopping an assigned treatment in a withdrawn subject are unknown. It is difficult to make a single assumption about missing mechanisms for all clinical trials because there are complicated reactions in the human body to drugs due to the presence of complex biological networks, leading to data missing randomly or non-randomly. Currently there is no statistical method that can tell whether a difference between two treatments in the ITT population of a randomized clinical trial with missing data is significant at a pre-specified level. Making no assumptions about the missing mechanisms, we propose a generalized complete-case (GCC) analysis based on the data of completers. An evaluation of the impact of missing data on the ITT analysis reveals that a statistically significant GCC result implies a significant treatment effect in the ITT population at a pre-specified significance level unless, relative to the comparator, the test drug is poisonous to the non-completers as documented in their medical records. Applications of the GCC analysis are illustrated using literature data, and its properties and limits are discussed.

  3. Empirical and Statistical Evaluation of the Effectiveness of Four ...

    African Journals Online (AJOL)

    Akorede

    ABSTRACT: Data compression is the process of reducing the size of a file to effectively ... Through the statistical analysis performed using Boxplot and ANOVA and comparison made ...... Automatic Control, Electronics and Computer Science.

  4. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2014-01-01

    Thoroughly revised and reorganized, the fourth edition presents in-depth coverage of the theory and methods of the most widely used nonparametric procedures in statistical analysis and offers example applications appropriate for all areas of the social, behavioral, and life sciences. The book presents new material on the quantiles, the calculation of exact and simulated power, multiple comparisons, additional goodness-of-fit tests, methods of analysis of count data, and modern computer applications using MINITAB, SAS, and STATXACT. It includes tabular guides for simplified applications of tests and finding P values and confidence interval estimates.

  5. Achievement Goal Orientations and Adolescents' Subjective Well-Being in School: The Mediating Roles of Academic Social Comparison Directions.

    Science.gov (United States)

    Tian, Lili; Yu, Tingting; Huebner, E Scott

    2017-01-01

    The purpose of this study was to examine the multiple mediational roles of academic social comparison directions (upward academic social comparison and downward academic social comparison) on the relationships between achievement goal orientations (i.e., mastery goals, performance-approach goals, and performance-avoidance goals) and subjective well-being (SWB) in school (school satisfaction, school affect) in adolescent students in China. A total of 883 Chinese adolescent students (430 males; Mean age = 12.99) completed a multi-measure questionnaire. Structural equation modeling was used to examine the hypotheses. Results indicated that (1) mastery goal orientations and performance-approach goal orientations both showed a statistically significant, positive correlation with SWB in school whereas performance-avoidance goal orientations showed a statistically significant, negative correlation with SWB in school among adolescents; (2) upward academic social comparisons mediated the relation between the three types of achievement goal orientations (i.e., mastery goals, performance-approach goals, and performance-avoidance goals) and SWB in school; (3) downward academic social comparisons mediated the relation between mastery goal orientations and SWB in school as well as the relation between performance-avoidance goal orientations and SWB in school. The findings suggest possible important cultural differences in the antecedents of SWB in school in adolescent students in China compared to adolescent students in Western nations.

  6. Multivariate statistical assessments of greenhouse-gas-induced climatic change and comparison with results from general circulation models

    International Nuclear Information System (INIS)

    Schoenwiese, C.D.

    1990-01-01

    Based on univariate correlation and coherence analyses, including moving (time-sliding) techniques, and taking account of the physical basis of the relationships, a simple multivariate concept is presented which correlates observational climatic time series simultaneously with solar, volcanic, ENSO (El Nino/Southern Oscillation) and anthropogenic greenhouse-gas forcing. The climatic elements considered are air temperature (near the ground and in the stratosphere), sea surface temperature, sea level and precipitation, and cover at least the period 1881-1980 (stratospheric temperature only since 1960). The climate signal assessments which may hypothetically be attributed to the observed CO2 or equivalent CO2 (implying additional greenhouse gases) increase are compared with those resulting from GCM experiments. In the case of the Northern Hemisphere air temperature these comparisons are performed not only with respect to hemispheric and global means, but also with respect to the regional and seasonal patterns. Autocorrelations and phase shifts of the climate response to natural and anthropogenic forcing complicate the statistical assessments.

  7. Statistical Learning in Specific Language Impairment and Autism Spectrum Disorder: A Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Rita Obeid

    2016-08-01

    Impairments in statistical learning might be a common deficit among individuals with Specific Language Impairment (SLI) and Autism Spectrum Disorder (ASD). Using meta-analysis, we examined statistical learning in SLI (14 studies, 15 comparisons) and ASD (13 studies, 20 comparisons) to evaluate this hypothesis. Effect sizes were examined as a function of diagnosis across multiple statistical learning tasks (Serial Reaction Time, Contextual Cueing, Artificial Grammar Learning, Speech Stream, Observational Learning, Probabilistic Classification). Individuals with SLI showed deficits in statistical learning relative to age-matched controls, g = .47, 95% CI [.28, .66], p < .001. In contrast, statistical learning was intact in individuals with ASD relative to controls, g = –.13, 95% CI [–.34, .08], p = .22. Effect sizes did not vary as a function of task modality or participant age. Our findings inform debates about overlapping social-communicative difficulties in children with SLI and ASD by suggesting distinct underlying mechanisms. In line with the procedural deficit hypothesis (Ullman & Pierpont, 2005), impaired statistical learning may account for the phonological and syntactic difficulties associated with SLI. In contrast, impaired statistical learning fails to account for the social-pragmatic difficulties associated with ASD.

  8. Traditional Lecture Versus an Activity Approach for Teaching Statistics: A Comparison of Outcomes

    OpenAIRE

    Loveland, Jennifer L.

    2014-01-01

    Many educational researchers have proposed teaching statistics with less lecture and more active learning methods. However, there are only a few comparative studies that have taught one section of statistics with lectures and one section with activity-based methods; of those studies, the results are contradictory. To address the need for more research on the actual effectiveness of active learning methods in introductory statistics, this research study was undertaken. An introductory, univ...

  9. Comparison of four software packages for CT lung volumetry in healthy individuals

    Energy Technology Data Exchange (ETDEWEB)

    Nemec, Stefan F. [Harvard Medical School, Department of Radiology, Beth Israel Deaconess Medical Center, Boston, MA (United States); Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria); Molinari, Francesco [Centre Hospitalier Regional Universitaire de Lille, Department of Radiology, Lille (France); Dufresne, Valerie [CHU de Charleroi - Hopital Vesale, Pneumologie, Montigny-le-Tilleul (Belgium); Gosset, Natacha [CHU Tivoli, Service d' Imagerie Medicale, La Louviere (Belgium); Silva, Mario; Bankier, Alexander A. [Harvard Medical School, Department of Radiology, Beth Israel Deaconess Medical Center, Boston, MA (United States)

    2015-06-01

    To compare CT lung volumetry (CTLV) measurements provided by different software packages, and to provide normative data for lung densitometric measurements in healthy individuals. This retrospective study included 51 chest CTs of 17 volunteers (eight men and nine women; mean age, 30 ± 6 years), who underwent spirometrically monitored CT at total lung capacity (TLC), functional residual capacity (FRC), and mean inspiratory capacity (MIC). Volumetric differences assessed by four commercial software packages were compared with analysis of variance (ANOVA) for repeated measurements and benchmarked against the threshold for acceptable variability between spirometric measurements. Mean lung density (MLD) and parenchymal heterogeneity (MLD-SD) were also compared with ANOVA. Volumetric differences ranged from 12 to 213 ml (0.20 % to 6.45 %). Although 16/18 comparisons (among four software packages at TLC, MIC, and FRC) were statistically significant (P < 0.001 to P = 0.004), only 3/18 comparisons, one at MIC and two at FRC, exceeded the spirometry variability threshold. MLD and MLD-SD significantly increased with decreasing volumes, and were significantly larger in lower compared to upper lobes (P < 0.001). Lung volumetric differences provided by different software packages are small. These differences should not be interpreted based on statistical significance alone, but together with absolute volumetric differences. (orig.)

  10. Comparison of stability statistics for yield in barley (Hordeum vulgare ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-03-15

    Mar 15, 2010 ... statistics and yield indicated that only TOP method would be useful for simultaneously selecting for high yield and ... metric stability methods; i) they reduce the bias caused by outliers, ii) ... Biometrics, 43: 45-53. Sabaghnia N ...

  11. Sub-Poissonian statistics in order-to-chaos transition

    International Nuclear Information System (INIS)

    Kryuchkyan, Gagik Yu.; Manvelyan, Suren B.

    2003-01-01

    We study the phenomena at the overlap of quantum chaos and nonclassical statistics for a time-dependent model of a nonlinear oscillator. It is shown, in the framework of the Mandel Q parameter and the Wigner function, that the statistics of oscillatory excitation numbers changes drastically in the order-to-chaos transition. An essential improvement of the sub-Poissonian statistics, in comparison with the analogous one for the standard model of a driven anharmonic oscillator, is observed for the regular operational regime. It is shown that in the chaotic regime the system exhibits ranges of sub-Poissonian and super-Poissonian statistics which alternate with one another depending on the time interval. An unusual dependence of the variance of the oscillatory number on the external noise level is observed for the chaotic dynamics. The scaling invariance of the quantum statistics is demonstrated and its relation to dissipation and decoherence is studied.

  12. Characterizing the D2 statistic: word matches in biological sequences.

    Science.gov (United States)

    Forêt, Sylvain; Wilson, Susan R; Burden, Conrad J

    2009-01-01

    Word matches are often used in sequence comparison methods, either as a measure of sequence similarity or in the first search steps of algorithms such as BLAST or BLAT. The D2 statistic is the number of matches of words of k letters between two sequences. Recent advances have been made in the characterization of this statistic and in the approximation of its distribution. Here, these results are extended to the case of approximate word matches. We compute the exact value of the variance of the D2 statistic for the case of a uniform letter distribution, and introduce a method to provide accurate approximations of the variance in the remaining cases. This enables the distribution of D2 to be approximated for typical situations arising in biological research. We apply these results to the identification of cis-regulatory modules, and show that this method detects such sequences with a high accuracy. The ability to approximate the distribution of D2 for both exact and approximate word matches will enable the use of this statistic in a more precise manner for sequence comparison, database searches, and identification of transcription factor binding sites.
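
    As a concrete reference point, the exact-match version of the D2 statistic is simply the number of shared k-words counted with multiplicity, as in the short sketch below (illustrative sequences; the approximate-match extension and the variance approximations of the paper are not reproduced).

        # Sketch: exact-word-match D2 statistic between two sequences.
        from collections import Counter

        def d2_statistic(seq_a: str, seq_b: str, k: int) -> int:
            """Number of k-word matches between seq_a and seq_b, with multiplicity."""
            counts_a = Counter(seq_a[i:i + k] for i in range(len(seq_a) - k + 1))
            counts_b = Counter(seq_b[i:i + k] for i in range(len(seq_b) - k + 1))
            # Every occurrence of a word in A pairs with every occurrence in B.
            return sum(n_a * counts_b[word] for word, n_a in counts_a.items())

        print(d2_statistic("ACGTACGT", "TACGTTAC", k=3))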

  13. Comparison between syringe irrigation and RinsEndo in reduction of Enterococcus faecalis in experimentally infected root canal

    Directory of Open Access Journals (Sweden)

    Sharareh Mousavi Zahed

    2015-05-01

    Background and Aims: To ensure root canal treatment success, the endodontic microbiota should be efficiently reduced. Several irrigation devices have been recently introduced with the main objective of improving root canal disinfection. The purpose of this study was to evaluate the rinsing effect of the RinsEndo system in reducing Enterococcus faecalis in infected root canals, in comparison with a conventional hand syringe. Materials and Methods: 60 extracted single-canal anterior teeth were infected with Enterococcus faecalis and divided into 3 groups: RinsEndo system, conventional hand syringe, and control group. The Enterococcus faecalis colonies were counted in each group before and after rinsing. Data were analyzed using analysis of variance and the Kruskal-Wallis test. Results: The mean Enterococcus faecalis growth after rinsing was 3.50×10³ in the conventional syringe group, 2.04×10³ in the RinsEndo group, and 6.11×10³ in the control group. The reduction of Enterococcus faecalis after rinsing was statistically significant in each group (P<0.001). The reduction in the number of colonies with RinsEndo and conventional syringe rinsing was higher in comparison with the control group and this difference was significant (P<0.001). The rinsing effect of RinsEndo was also statistically significantly higher in comparison to the conventional syringe (P<0.001). Conclusion: Rinsing with the RinsEndo system was significantly more efficient in reducing Enterococcus faecalis in the root canal than hand syringe washing.

  14. Statistical analysis of thermal conductivity of nanofluid containing ...

    Indian Academy of Sciences (India)

    Thermal conductivity measurements of nanofluids were analysed via two-factor completely randomized design and comparison of data means is carried out with Duncan's multiple-range test. Statistical analysis of experimental data show that temperature and weight fraction have a reasonable impact on the thermal ...

  15. Statistical theory applications and associated computer codes

    International Nuclear Information System (INIS)

    Prince, A.

    1980-01-01

    The general format is along the same lines as that used in the O.M. Session, i.e. an introduction to the nature of the physical problems and methods of solution based on the statistical model of the nucleus. Both binary and higher multiple reactions are considered. The computer codes used in this session are a combination of optical model and statistical theory. As with the O.M. sessions, the preparation of input and analysis of output are thoroughly examined. Again, comparison with experimental data serves to demonstrate the validity of the results and possible areas for improvement. (author)

  16. Structure and statistics of turbulent flow over riblets

    Science.gov (United States)

    Henderson, R. D.; Crawford, C. H.; Karniadakis, G. E.

    1993-01-01

    In this paper we present comparisons of turbulence statistics obtained from direct numerical simulation of flow over streamwise aligned triangular riblets with experimental results. We also present visualizations of the instantaneous velocity field inside and around the riblet valleys. In light of the behavior of the statistics and flowfields inside the riblet valleys, we investigate previously reported physical mechanisms for the drag reducing effect of riblets; our results here support the hypothesis of flow anchoring by the riblet valleys and the corresponding inhibition of spanwise flow motions.

  17. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    Science.gov (United States)

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT systems grows, it becomes a challenge to determine whether different IGRT methods may be used interchangeably, and there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject) variation, intra-method (within-subject) variation, and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed a statistically significant difference in bias and inter-method agreement between CBCT and KVX in the Z-axis (both p-values < 0.01). Intra-method and overall agreement differences were noted as statistically significant for both the X- and Z-axes (all p-values < 0.01). Using pre-specified criteria, based on intra-method agreement, CBCT was deemed

  18. Statistical processing of technological and radiochemical data

    International Nuclear Information System (INIS)

    Lahodova, Zdena; Vonkova, Kateřina

    2011-01-01

    The project described in this article had two goals. The main goal was to compare technological and radiochemical data from two units of nuclear power plant. The other goal was to check the collection, organization and interpretation of routinely measured data. Monitoring of analytical and radiochemical data is a very valuable source of knowledge for some processes in the primary circuit. Exploratory analysis of one-dimensional data was performed to estimate location and variability and to find extreme values, data trends, distribution, autocorrelation etc. This process allowed for the cleaning and completion of raw data. Then multiple analyses such as multiple comparisons, multiple correlation, variance analysis, and so on were performed. Measured data was organized into a data matrix. The results and graphs such as Box plots, Mahalanobis distance, Biplot, Correlation, and Trend graphs are presented in this article as statistical analysis tools. Tables of data were replaced with graphs because graphs condense large amounts of information into easy-to-understand formats. The significant conclusion of this work is that the collection and comprehension of data is a very substantial part of statistical processing. With well-prepared and well-understood data, its accurate evaluation is possible. Cooperation between the technicians who collect data and the statistician who processes it is also very important. (author)

  19. Summary of significant solar-initiated events during STIP interval XII

    International Nuclear Information System (INIS)

    Gergely, T.E.

    1982-01-01

    A summary of the significant solar-terrestrial events of STIP Interval XII (April 10-July 1, 1981) is presented. It is shown that the first half of the interval was extremely active, with several of the largest X-ray flares, particle events, and shocks of this solar cycle taking place during April and the first half of May. However, the second half of the interval was characterized by relatively quiet conditions. A detailed examination is presented of several large events which occurred on 10, 24, and 27 April and on 8 and 16 May. It is suggested that the comparison and statistical analysis of the numerous events for which excellent observations are available could provide information on what causes a type II burst to propagate in the interplanetary medium

  20. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: an SPSS method to analyze univariate data.

    Science.gov (United States)

    Maric, Marija; de Haan, Else; Hogendoorn, Sanne M; Wolters, Lidewij H; Huizenga, Hilde M

    2015-03-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a data-analytic method to analyze univariate (i.e., one symptom) single-case data using the common package SPSS. This method can help the clinical researcher to investigate whether an intervention works as compared with a baseline period or another intervention type, and to determine whether symptom improvement is clinically significant. First, we describe the statistical method in a conceptual way and show how it can be implemented in SPSS. Simulation studies were performed to determine the number of observation points required per intervention phase. Second, to illustrate this method and its implications, we present a case study of an adolescent with anxiety disorders treated with cognitive-behavioral therapy techniques in an outpatient psychotherapy clinic, whose symptoms were regularly assessed before each session. We provide a description of the data analyses and results of this case study. Finally, we discuss the advantages and shortcomings of the proposed method. Copyright © 2014. Published by Elsevier Ltd.

  1. Performance studies of GooFit on GPUs vs RooFit on CPUs while estimating the statistical significance of a new physical signal

    Science.gov (United States)

    Di Florio, Adriano

    2017-10-01

    In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented in both the ROOT/RooFit and GooFit frameworks with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open-source data analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on nVidia GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-up performances with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service with a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood-ratio test statistic in different situations in which the Wilks theorem may or may not apply because its regularity conditions are not satisfied.

  2. The extended statistical analysis of toxicity tests using standardised effect sizes (SESs): a comparison of nine published papers.

    Science.gov (United States)

    Festing, Michael F W

    2014-01-01

    The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretations of the results as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data poses problems due to the large number of statistical tests which are involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. The authors' conclusions appear to be reached somewhat subjectively from the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than p = 0.05. However, by using standardised effect sizes (SESs), a range of graphical methods and an overall assessment of the mean absolute response can be made. The approach is an extension, not a replacement, of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare the findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.
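
    As a rough illustration of the idea (not the author's code), the sketch below computes standardised effect sizes for a treated group against a control and applies a permutation-style resampling test to the mean absolute SES; the data, the pooled-SD definition and the resampling scheme are assumptions.

        # Sketch: standardised effect sizes (SES) and a resampling test of the
        # mean absolute SES, treated vs control (synthetic biomarker data).
        import numpy as np

        rng = np.random.default_rng(1)
        n_biomarkers, n_per_group = 20, 10
        control = rng.normal(0.0, 1.0, size=(n_biomarkers, n_per_group))
        treated = rng.normal(0.2, 1.0, size=(n_biomarkers, n_per_group))

        def mean_abs_ses(a, b):
            pooled_sd = np.sqrt((a.var(axis=1, ddof=1) + b.var(axis=1, ddof=1)) / 2)
            return np.abs((b.mean(axis=1) - a.mean(axis=1)) / pooled_sd).mean()

        observed = mean_abs_ses(control, treated)

        # Null distribution: shuffle group membership within each biomarker.
        pooled = np.concatenate([control, treated], axis=1)
        null = [mean_abs_ses(p[:, :n_per_group], p[:, n_per_group:])
                for p in (rng.permuted(pooled, axis=1) for _ in range(2000))]
        p_value = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
        print(f"mean |SES| = {observed:.2f}, resampling p = {p_value:.3f}")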

  3. The extended statistical analysis of toxicity tests using standardised effect sizes (SESs): a comparison of nine published papers.

    Directory of Open Access Journals (Sweden)

    Michael F W Festing

    The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretations of the results as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data poses problems due to the large number of statistical tests which are involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. The authors' conclusions appear to be reached somewhat subjectively from the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than p = 0.05. However, by using standardised effect sizes (SESs), a range of graphical methods and an overall assessment of the mean absolute response can be made. The approach is an extension, not a replacement, of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare the findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.

  4. Comparison of safflower oil extraction kinetics under two characteristic moisture conditions: statistical analysis of non-linear model parameters

    Directory of Open Access Journals (Sweden)

    E. Baümler

    2014-06-01

    In this study the kinetics of oil extraction from partially dehulled safflower seeds under two moisture conditions (7 and 9% dry basis) was investigated. The extraction assays were performed using a stirred batch system, thermostated at 50 ºC, with n-hexane as solvent. The data obtained were fitted to a modified diffusion model in order to represent the extraction kinetics. The model took into account a washing step and a diffusive step. The fitting parameters were compared statistically for both moisture conditions. The oil yield increased with extraction time in both cases, although the oil was released at different rates. A comparison of the parameters showed that both the fraction extracted in the washing phase and the effective diffusion coefficient were moisture-dependent. The effective diffusivities were 2.81 × 10⁻¹² and 8.06 × 10⁻¹³ m² s⁻¹ for moisture contents of 7% and 9%, respectively.
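
    The two-stage fit can be illustrated with a generic washing-plus-diffusion form; the sketch below uses scipy's curve_fit on made-up yield data, and the functional form is an assumption rather than the published model.

        # Sketch: fit a generic washing + diffusion extraction model to yield data.
        import numpy as np
        from scipy.optimize import curve_fit

        t = np.array([0.5, 1, 2, 5, 10, 20, 40, 60], dtype=float)       # min (illustrative)
        y = np.array([0.18, 0.25, 0.34, 0.48, 0.61, 0.74, 0.85, 0.90])  # oil yield fraction

        def extraction(t, y_wash, y_inf, k):
            # y_wash is released in the washing step; the remainder approaches
            # y_inf exponentially with rate k (stand-in for the diffusive step).
            return y_wash + (y_inf - y_wash) * (1.0 - np.exp(-k * t))

        params, cov = curve_fit(extraction, t, y, p0=[0.1, 0.9, 0.1])
        for name, val, err in zip(["y_wash", "y_inf", "k"], params, np.sqrt(np.diag(cov))):
            print(f"{name} = {val:.3f} +/- {err:.3f}")

    Fitted parameters (with their standard errors) for the two moisture conditions could then be compared statistically, in the spirit of the study.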

  5. Comparison of U-spatial statistics and C-A fractal models for delineating anomaly patterns of porphyry-type Cu geochemical signatures in the Varzaghan district, NW Iran

    Science.gov (United States)

    Ghezelbash, Reza; Maghsoudi, Abbas

    2018-05-01

    The delineation of populations of stream sediment geochemical data is a crucial task in regional exploration surveys. In this contribution, uni-element stream sediment geochemical data of Cu, Au, Mo, and Bi have been subjected to two reliable anomaly-background separation methods, namely, the concentration-area (C-A) fractal and the U-spatial statistics methods to separate geochemical anomalies related to porphyry-type Cu mineralization in northwest Iran. The quantitative comparison of the delineated geochemical populations using the modified success-rate curves revealed the superiority of the U-spatial statistics method over the fractal model. Moreover, geochemical maps of investigated elements revealed strongly positive correlations between strong anomalies and Oligocene-Miocene intrusions in the study area. Therefore, follow-up exploration programs should focus on these areas.

  6. Effects of social comparison direction, threat, and self-esteem on affect, self-evaluation, and expected success.

    Science.gov (United States)

    Aspinwall, L G; Taylor, S E

    1993-05-01

    Two studies explored the conditions under which social comparisons are used to manage negative affect and naturalistic threats. Study 1 examined induced mood and dispositional self-esteem as determinants of affective responses to upward and downward comparisons. Consistent with a mood repair prediction, only low-self-esteem Ss in whom a negative mood had been induced reported improved mood after exposure to downward comparison information. Study 2 examined the impact of naturalistic threats on responses to comparison information. Relative to a no-comparison baseline, low-self-esteem Ss who had experienced a recent academic setback reported more favorable self-evaluations and greater expectations of future success in college after exposure to downward comparison information. These results remained significant after controlling statistically for general distress. Implications for downward comparison theory are discussed.

  7. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  8. Statistical modelling of transcript profiles of differentially regulated genes

    Directory of Open Access Journals (Sweden)

    Sergeant Martin J

    2008-07-01

    allowed 11% of the Escherichia coli features to be fitted by an exponential function, and 25% of the Rattus norvegicus features could be described by the critical exponential model, all with statistical significance. Conclusion: The statistical non-linear regression approaches presented in this study provide detailed biologically oriented descriptions of individual gene expression profiles, using biologically variable data to generate a set of defining parameters. These approaches have application to the modelling and greater interpretation of profiles obtained across a wide range of platforms, such as microarrays. Through careful choice of appropriate model forms, such statistical regression approaches allow an improved comparison of gene expression profiles, and may provide an approach for the greater understanding of common regulatory mechanisms between genes.

  9. The statistical evaluation and comparison of ADMS-Urban model for the prediction of nitrogen dioxide with air quality monitoring network.

    Science.gov (United States)

    Dėdelė, Audrius; Miškinytė, Auksė

    2015-09-01

    In many countries, road traffic is one of the main sources of air pollution associated with adverse effects on human health and the environment. Nitrogen dioxide (NO2) is considered a measure of traffic-related air pollution, with concentrations tending to be higher near highways, along busy roads, and in city centres, and exceedances are mainly observed at measurement stations located close to traffic. In order to assess the air quality in a city and the impact of air pollution on public health, air quality models are used. However, before a model can be used for these purposes, it is important to evaluate the accuracy of dispersion modelling as one of the most widely used methods. Monitoring and dispersion modelling are two components of an air quality monitoring system (AQMS), and a statistical comparison between them was made in this research. The Atmospheric Dispersion Modelling System (ADMS-Urban) was evaluated by comparing monthly modelled NO2 concentrations with data from continuous air quality monitoring stations in Kaunas city. Statistical measures of model performance were calculated for annual and monthly NO2 concentrations for each monitoring station site. The spatial analysis was made using geographic information systems (GIS). The calculated statistical parameters indicated good ADMS-Urban model performance for the prediction of NO2. The results of this study showed that the agreement between modelled values and observations was better for traffic monitoring stations than for the background and residential stations.
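
    A model-versus-monitor comparison of this kind rests on a handful of performance statistics; the sketch below computes a common set (mean bias, RMSE, correlation, fraction within a factor of two) on illustrative values, which is an assumed selection rather than a reproduction of the measures used in the study.

        # Sketch: common performance statistics for modelled vs observed NO2.
        import numpy as np

        observed = np.array([28.1, 35.4, 22.0, 41.3, 30.8, 25.5])   # ug/m3, illustrative
        modelled = np.array([25.0, 38.2, 20.5, 36.9, 33.1, 27.0])

        bias = np.mean(modelled - observed)                          # mean bias
        rmse = np.sqrt(np.mean((modelled - observed) ** 2))          # root mean square error
        r = np.corrcoef(modelled, observed)[0, 1]                    # Pearson correlation
        ratio = modelled / observed
        fac2 = np.mean((ratio > 0.5) & (ratio < 2.0))                # fraction within factor 2

        print(f"bias={bias:.2f}, RMSE={rmse:.2f}, r={r:.2f}, FAC2={fac2:.2f}")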

  10. Bayesian Information Criterion as an Alternative way of Statistical Inference

    Directory of Open Access Journals (Sweden)

    Nadejda Yu. Gubanova

    2012-05-01

    The article treats the Bayesian information criterion as an alternative to traditional methods of statistical inference based on NHST. A comparison of ANOVA and BIC results for a psychological experiment is discussed.
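
    For reference, the criterion itself is BIC = k·ln(n) - 2·ln(L̂), with k parameters and n observations. A minimal sketch comparing a one-mean and a two-mean Gaussian model on synthetic data (not the experiment analysed in the article):

        # Sketch: compare two simple Gaussian models of the same data by BIC.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        a = rng.normal(0.0, 1.0, 30)
        b = rng.normal(0.5, 1.0, 30)
        data = np.concatenate([a, b])
        n = data.size

        # Model 0: one common mean; Model 1: separate group means (common sigma).
        ll0 = stats.norm.logpdf(data, data.mean(), data.std()).sum()
        sigma1 = np.concatenate([a - a.mean(), b - b.mean()]).std()
        ll1 = (stats.norm.logpdf(a, a.mean(), sigma1).sum()
               + stats.norm.logpdf(b, b.mean(), sigma1).sum())

        bic0 = 2 * np.log(n) - 2 * ll0   # 2 parameters: mu, sigma
        bic1 = 3 * np.log(n) - 2 * ll1   # 3 parameters: mu_a, mu_b, sigma
        print(f"BIC one mean = {bic0:.1f}, BIC two means = {bic1:.1f}")   # smaller is better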

  11. Field significance of performance measures in the context of regional climate model evaluation. Part 2: precipitation

    Science.gov (United States)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2018-04-01

    A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as 'field' or 'global' significance. The block length for the local resampling tests is precisely determined to adequately account for the time series structure. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is exemplarily applied to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. The daily precipitation climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. While the downscaled precipitation distributions are statistically indistinguishable from the observed ones in most regions in summer, the biases of some distribution characteristics are significant over large areas in winter. WRF-NOAH generates appropriate stationary fine-scale climate features in the daily precipitation field over regions of complex topography in both seasons and appropriate transient fine-scale features almost everywhere in summer. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the clear additional value of dynamical downscaling over global climate simulations. The evaluation methodology has a broad spectrum of applicability as it is
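
    The 'field significance' idea can be illustrated schematically: given local p-values at the grid cells, control the proportion of falsely rejected local tests with a false-discovery-rate criterion. The sketch below uses synthetic p-values and the Benjamini-Hochberg rule; the paper's block-length determination and resampling of the local tests are not reproduced.

        # Sketch: field significance over grid cells via Benjamini-Hochberg control
        # of the false discovery rate (synthetic local p-values).
        import numpy as np

        rng = np.random.default_rng(3)
        n_cells = 1000
        p_local = rng.uniform(size=n_cells)
        p_local[:50] = rng.uniform(0, 0.005, size=50)   # a patch of genuine local signal

        alpha = 0.05
        order = np.argsort(p_local)
        ranked = p_local[order]
        passing = np.nonzero(ranked <= alpha * np.arange(1, n_cells + 1) / n_cells)[0]

        significant = np.zeros(n_cells, dtype=bool)
        if passing.size:
            significant[order[:passing[-1] + 1]] = True  # reject up to the largest passing rank

        print(f"field significant: {significant.any()}, significant cells: {significant.sum()}")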

  12. Robust statistical methods for significance evaluation and applications in cancer driver detection and biomarker discovery

    DEFF Research Database (Denmark)

    Madsen, Tobias

    2017-01-01

    In the present thesis I develop, implement and apply statistical methods for detecting genomic elements implicated in cancer development and progression. This is done in two separate bodies of work. The first uses the somatic mutation burden to distinguish cancer driver mutations from passenger m...

  13. Statistical parameter characteristics of gas-phase fluctuations for gas-liquid intermittent flow

    Energy Technology Data Exchange (ETDEWEB)

    Matsui, G.; Monji, H.; Takaguchi, M. [Univ. of Tsukuba (Japan)

    1995-09-01

    This study deals with a theoretical analysis of the general behaviour of statistical parameters of gas-phase fluctuations, and with a comparison of the statistical parameter characteristics of the measured void fraction fluctuations with those of a modified wave form of the same fluctuations. In order to investigate the details of the relation between the behaviour of the statistical parameters in real intermittent flow and analytical results obtained from information on the real flow, the distributions of statistical parameters for a general fundamental wave form of gas-phase fluctuations are discussed in detail. By modifying the real gas-phase fluctuations to a trapezoidal wave, the experimental results can be directly compared with the analytical results. The analytical results for intermittent flow show that the wave form parameter and the total amplitude of the void fraction fluctuations strongly affect the statistical parameter characteristics. The comparison with experiments using nitrogen gas-water intermittent flow suggests that the skewness and excess parameters may be better indicators of flow pattern. That is, the macroscopic nature of intermittent flow can be grasped by the skewness and the excess, and the detailed flow structure may be described by the mean and the standard deviation.

  14. Statistical parameter characteristics of gas-phase fluctuations for gas-liquid intermittent flow

    International Nuclear Information System (INIS)

    Matsui, G.; Monji, H.; Takaguchi, M.

    1995-01-01

    This study deals with a theoretical analysis of the general behaviour of statistical parameters of gas-phase fluctuations, and with a comparison of the statistical parameter characteristics of the measured void fraction fluctuations with those of a modified wave form of the same fluctuations. In order to investigate the details of the relation between the behaviour of the statistical parameters in real intermittent flow and analytical results obtained from information on the real flow, the distributions of statistical parameters for a general fundamental wave form of gas-phase fluctuations are discussed in detail. By modifying the real gas-phase fluctuations to a trapezoidal wave, the experimental results can be directly compared with the analytical results. The analytical results for intermittent flow show that the wave form parameter and the total amplitude of the void fraction fluctuations strongly affect the statistical parameter characteristics. The comparison with experiments using nitrogen gas-water intermittent flow suggests that the skewness and excess parameters may be better indicators of flow pattern. That is, the macroscopic nature of intermittent flow can be grasped by the skewness and the excess, and the detailed flow structure may be described by the mean and the standard deviation.

  15. SSD for R: A Comprehensive Statistical Package to Analyze Single-System Data

    Science.gov (United States)

    Auerbach, Charles; Schudrich, Wendy Zeitlin

    2013-01-01

    The need for statistical analysis in single-subject designs presents a challenge, as analytical methods that are applied to group comparison studies are often not appropriate in single-subject research. "SSD for R" is a robust set of statistical functions with wide applicability to single-subject research. It is a comprehensive package…

  16. An audit of the statistics and the comparison with the parameter in the population

    Science.gov (United States)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed to closely estimate the statistics for particular parameters has always been an issue. Although the sample size might have been calculated with reference to the objective of the study, it is difficult to confirm whether the statistics are close to the parameters for a particular population. All this while, a guideline that uses a p-value of less than 0.05 has been widely used as inferential evidence. Therefore, this study audited results that were analyzed from various subsamples and statistical analyses and compared the results with the parameters in three different populations. Eight types of statistical analysis and eight subsamples for each statistical analysis were analyzed. The results showed that the statistics were consistent and close to the parameters when the study sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters that involve categorical variables compared with numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.
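
    The audit described can be mimicked on synthetic data: draw subsamples of increasing fraction from a finite "population" and check how closely the sample statistic tracks the known parameter. The population, statistic and fractions below are illustrative assumptions.

        # Sketch: how closely subsample estimates track the population parameter
        # as the sampling fraction grows (illustrative population and statistic).
        import numpy as np

        rng = np.random.default_rng(4)
        population = rng.lognormal(mean=3.0, sigma=0.5, size=50_000)
        parameter = population.mean()

        for fraction in (0.05, 0.15, 0.25, 0.35):
            n = int(fraction * population.size)
            estimates = [rng.choice(population, size=n, replace=False).mean()
                         for _ in range(100)]
            rel_err = np.abs(np.array(estimates) - parameter) / parameter
            print(f"fraction={fraction:.0%}  median relative error={np.median(rel_err):.4f}")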

  17. COMPARISON OF STATISTICALLY CONTROLLED MACHINING SOLUTIONS OF TITANIUM ALLOYS USING USM

    Directory of Open Access Journals (Sweden)

    R. Singh

    2010-06-01

    The purpose of the present investigation is to compare the statistically controlled machining solutions of titanium alloys using ultrasonic machining (USM). In this study, the previously developed Taguchi model for USM of titanium and its alloys has been investigated and compared. Relationships between the material removal rate, tool wear rate, surface roughness and other controllable machining parameters (power rating, tool type, slurry concentration, slurry type, slurry temperature and slurry size) have been deduced. The results of this study suggest that at the best settings of controllable machining parameters for titanium alloys (based upon the Taguchi design), the machining solution with USM is statistically controlled, which is not observed for other settings of input parameters on USM.

  18. How to construct the statistic network? An association network of herbaceous

    Directory of Open Access Journals (Sweden)

    WenJun Zhang

    2012-06-01

    In the present study I defined a new type of network, the statistic network. The statistic network is a weighted and non-deterministic network. In the statistic network, a connection value, i.e., connection weight, represents the connection strength and connection likelihood between two nodes, and its absolute value falls in the interval (0,1]. The connection value is expressed as a statistical measure such as a correlation coefficient, association coefficient, or Jaccard coefficient, etc. In addition, all connections of the statistic network can be statistically tested for their validity. A connection is true if the connection value is statistically significant. If none of the connection values of a node are statistically significant, it is an isolated node. An isolated node has no connections to other nodes in the statistic network. Positive and negative connection values denote distinct connection types (positive or negative association or interaction). In the statistic network, two nodes with a greater connection value will show a more similar trend in the change of their states. At any time we can obtain a sample network of the statistic network. A sample network is a non-weighted and deterministic network. The statistic network, in particular the plant association network constructed from field sampling, is mostly an information network. Most of the interspecific relationships in plant communities are competition and cooperation. Therefore, in comparison to animal networks, the methodology of the statistic network is more suitable for constructing plant association networks. Some conclusions were drawn from this study: (1) in the plant association network, most connections are weak and positive interactions. The association network constructed from Spearman rank correlation has the most connections and fewer isolated taxa. From net linear correlation, linear correlation, to Spearman rank correlation, the practical number of connections and connectance in the
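
    A minimal sketch of the construction described: pairwise Spearman rank correlations between species abundance series, keeping only statistically significant connections as weighted edges. The species data are simulated and the 0.05 threshold is an assumption.

        # Sketch: build an association (statistic) network from Spearman correlations,
        # keeping only statistically significant connections (simulated abundances).
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(5)
        n_quadrats, n_species = 60, 8
        abundance = rng.poisson(lam=5, size=(n_quadrats, n_species)).astype(float)
        abundance[:, 1] += 0.8 * abundance[:, 0]          # induce one positive association

        edges = []
        for i in range(n_species):
            for j in range(i + 1, n_species):
                rho, p = spearmanr(abundance[:, i], abundance[:, j])
                if p < 0.05:                              # connection is "true" if significant
                    edges.append((i, j, round(rho, 2)))

        print("significant connections (i, j, weight):", edges)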

  19. Significance evaluation in factor graphs

    DEFF Research Database (Denmark)

    Madsen, Tobias; Hobolth, Asger; Jensen, Jens Ledet

    2017-01-01

    in genomics and the multiple-testing issues accompanying them, accurate significance evaluation is of great importance. We here address the problem of evaluating the statistical significance of observations from factor graph models. Results: Two novel numerical approximations for the evaluation of statistical significance are presented: first, a method using importance sampling; second, a saddlepoint-approximation-based method. We develop algorithms to efficiently compute the approximations and compare them to naive sampling and the normal approximation. The individual merits of the methods are analysed both from ... Conclusions: The applicability of the saddlepoint approximation and importance sampling is demonstrated on known models in the factor graph framework. Using the two methods we can substantially improve computational cost without compromising accuracy. This contribution allows analyses of large datasets

  20. Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity

    Science.gov (United States)

    Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.

    As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards proving the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.

  1. Statistical Survey of Non-Formal Education

    Directory of Open Access Journals (Sweden)

    Ondřej Nývlt

    2012-12-01

    ... focused on a programme within a regular education system. Labour market flexibility and new requirements on employees create a new domain of education called non-formal education. Is there a reliable statistical source with a good methodological definition for the Czech Republic? The Labour Force Survey (LFS) has been the basic statistical source for time comparison of non-formal education for the last ten years. Furthermore, a special Adult Education Survey (AES) in 2011 focused on the individual components of non-formal education in a detailed way. In general, the goal of the EU is to use data from both internationally comparable surveys for analyses of particular fields of lifelong learning, in such a way that annual LFS data could be enlarged by detailed information from the AES in five-year periods. This article describes the reliability of statistical data about non-formal education. This analysis is usually connected with sampling and non-sampling errors.

  2. Where did Venomous Snakes Strike? A Spatial Statistical Analysis of Snakebite Cases in Bondowoso Regency, Indonesia

    Directory of Open Access Journals (Sweden)

    Farid Rifaie

    2017-07-01

    Snakebite envenomation in Indonesia is a health burden that receives no attention from stakeholders. The high mortality and morbidity rates caused by snakebite in Indonesia are estimated from regional reports. The true burden of this issue in Indonesia needs to be revealed, even starting from a small part of the country. Medical records from a hospital in Bondowoso Regency were the data source for the snakebite cases. Three spatial statistical summaries were applied to analyze the spatial pattern of snakebite incidents. The comparison between the statistical functions and the theoretical model of random distributions shows a significant clustering pattern of the events. The pattern indicates that five subdistricts in Bondowoso have a substantially higher number of snakebite cases than other regions. This finding shows the potential application of spatial statistics to a snakebite combating strategy in this area by identifying the priority locations of snakebite cases.
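
    One simple spatial statistical summary of such a point pattern is the mean nearest-neighbour distance, tested against complete spatial randomness by Monte Carlo simulation, as sketched below on simulated coordinates; the actual summaries and data of the study are not reproduced.

        # Sketch: Monte Carlo test of clustering of point events via the mean
        # nearest-neighbour distance under complete spatial randomness (CSR).
        import numpy as np

        rng = np.random.default_rng(6)
        n_cases = 80
        cases = np.vstack([rng.normal(0.3, 0.05, (n_cases // 2, 2)),   # a clustered patch
                           rng.uniform(0, 1, (n_cases // 2, 2))])      # background cases

        def mean_nn_distance(points):
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
            np.fill_diagonal(d, np.inf)
            return d.min(axis=1).mean()

        observed = mean_nn_distance(cases)
        simulated = np.array([mean_nn_distance(rng.uniform(0, 1, (n_cases, 2)))
                              for _ in range(999)])
        # Clustering shows up as an unusually small mean distance.
        p_value = (1 + np.sum(simulated <= observed)) / (1 + simulated.size)
        print(f"observed mean NN distance = {observed:.4f}, Monte Carlo p = {p_value:.3f}")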

  3. Time-of-flight experiments using a pseudo-statistical chopper

    International Nuclear Information System (INIS)

    Aizawa, Otohiko; Kanda, Keiji

    1975-01-01

    A ''pseudo-statistical'' chopper was manufactured and used for the experiments on neutron transmission and scattering. The characteristics of the chopper and the experimental results are discussed in comparison with those in the time-of-flight technique using a conventional chopper. Which of the two methods is superior depends on the form of the time-of-flight distribution to be measured. Pseudo-statistical pulsing may be especially advantageous for scattering experiments with single or a few-line time-of-flight spectrum. (auth.)

  4. Statistical study of clone survival curves after irradiation in one or two stages. Comparison and generalization of different models

    International Nuclear Information System (INIS)

    Lachet, Bernard.

    1975-01-01

    A statistical study was carried out on 208 survival curves for chlorella subjected to γ or particle radiations. The computing programmes used were written in Fortran. The different experimental causes contributing to the variance of a survival rate are analyzed, and consequently the experiments can be planned. Each curve was fitted to four models by the weighted least squares method applied to non-linear functions. The validity of the fits obtained can be checked by the F test. It was possible to define the confidence and prediction zones around an adjusted curve by weighting of the residual variance, in spite of errors in the doses delivered; the confidence limits can then be fixed for a dose estimated from an exact or measured survival. The four models adopted were compared for the precision of their fit (by a non-parametric simultaneous comparison test) and the scattering of their adjusted parameters: Wideroe's model gives a very good fit with the experimental points in return for a scattering of its parameters, which robs them of their presumed meaning. The principal component analysis showed the statistical equivalence of the 1-hit and 2-hit target models. Division of the irradiation into two doses, the first fixed by the investigator, leads to families of curves for which the equation was established from that of any basic model expressing the dose-survival relationship in one-stage irradiation. [fr

  5. Using machine learning, neural networks and statistics to predict bankruptcy

    NARCIS (Netherlands)

    Pompe, P.P.M.; Feelders, A.J.; Feelders, A.J.

    1997-01-01

    Recent literature strongly suggests that machine learning approaches to classification outperform "classical" statistical methods. We make a comparison between the performance of linear discriminant analysis, classification trees, and neural networks in predicting corporate bankruptcy. Linear

  6. Probability, Statistics, and Stochastic Processes

    CERN Document Server

    Olofsson, Peter

    2012-01-01

    This book provides a unique and balanced approach to probability, statistics, and stochastic processes.   Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area.  The Second Edition features new coverage of analysis of variance (ANOVA), consistency and efficiency of estimators, asymptotic theory for maximum likelihood estimators, empirical distribution function and the Kolmogorov-Smirnov test, general linear models, multiple comparisons, Markov chain Monte Carlo (MCMC), Brownian motion, martingales, and

  7. Statistical evaluation of SAGE libraries: consequences for experimental design

    NARCIS (Netherlands)

    Ruijter, Jan M.; van Kampen, Antoine H. C.; Baas, Frank

    2002-01-01

    Since the introduction of serial analysis of gene expression (SAGE) as a method to quantitatively analyze the differential expression of genes, several statistical tests have been published for the pairwise comparison of SAGE libraries. Testing the difference between the number of specific tags

  8. Achievement Goal Orientations and Adolescents’ Subjective Well-Being in School: The Mediating Roles of Academic Social Comparison Directions

    Science.gov (United States)

    Tian, Lili; Yu, Tingting; Huebner, E. Scott

    2017-01-01

    The purpose of this study was to examine the multiple mediational roles of academic social comparison directions (upward academic social comparison and downward academic social comparison) on the relationships between achievement goal orientations (i.e., mastery goals, performance-approach goals, and performance-avoidance goals) and subjective well-being (SWB) in school (school satisfaction, school affect) in adolescent students in China. A total of 883 Chinese adolescent students (430 males; Mean age = 12.99) completed a multi-measure questionnaire. Structural equation modeling was used to examine the hypotheses. Results indicated that (1) mastery goal orientations and performance-approach goal orientations both showed a statistically significant, positive correlation with SWB in school whereas performance-avoidance goal orientations showed a statistically significant, negative correlation with SWB in school among adolescents; (2) upward academic social comparisons mediated the relation between the three types of achievement goal orientations (i.e., mastery goals, performance-approach goals, and performance-avoidance goals) and SWB in school; (3) downward academic social comparisons mediated the relation between mastery goal orientations and SWB in school as well as the relation between performance-avoidance goal orientations and SWB in school. The findings suggest possible important cultural differences in the antecedents of SWB in school in adolescent students in China compared to adolescent students in Western nations. PMID:28197109

  9. Head-to-head comparison of adaptive statistical and model-based iterative reconstruction algorithms for submillisievert coronary CT angiography.

    Science.gov (United States)

    Benz, Dominik C; Fuchs, Tobias A; Gräni, Christoph; Studer Bruengger, Annina A; Clerc, Olivier F; Mikulicic, Fran; Messerli, Michael; Stehli, Julia; Possner, Mathias; Pazhenkottil, Aju P; Gaemperli, Oliver; Kaufmann, Philipp A; Buechel, Ronny R

    2018-02-01

    Iterative reconstruction (IR) algorithms allow for a significant reduction in radiation dose of coronary computed tomography angiography (CCTA). We performed a head-to-head comparison of adaptive statistical IR (ASiR) and model-based IR (MBIR) algorithms to assess their impact on quantitative image parameters and diagnostic accuracy for submillisievert CCTA. CCTA datasets of 91 patients were reconstructed using filtered back projection (FBP), increasing contributions of ASiR (20, 40, 60, 80, and 100%), and MBIR. Signal and noise were measured in the aortic root to calculate signal-to-noise ratio (SNR). In a subgroup of 36 patients, diagnostic accuracy of ASiR 40%, ASiR 100%, and MBIR for diagnosis of coronary artery disease (CAD) was compared with invasive coronary angiography. Median radiation dose was 0.21 mSv for CCTA. While increasing levels of ASiR gradually reduced image noise compared with FBP (up to - 48%, P ASiR (-59% compared with ASiR 100%; P ASiR 40% and ASiR 100% resulted in substantially lower diagnostic accuracy to detect CAD as diagnosed by invasive coronary angiography compared with MBIR: sensitivity and specificity were 100 and 37%, 100 and 57%, and 100 and 74% for ASiR 40%, ASiR 100%, and MBIR, respectively. MBIR offers substantial noise reduction with increased SNR, paving the way for implementation of submillisievert CCTA protocols in clinical routine. In contrast, inferior noise reduction by ASiR negatively affects diagnostic accuracy of submillisievert CCTA for CAD detection. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017. For permissions, please email: journals.permissions@oup.com.

  10. Statistical Models of Adaptive Immune populations

    Science.gov (United States)

    Sethna, Zachary; Callan, Curtis; Walczak, Aleksandra; Mora, Thierry

    The availability of large (10^4-10^6 sequences) datasets of B or T cell populations from a single individual allows reliable fitting of complex statistical models for naïve generation, somatic selection, and hypermutation. It is crucial to utilize a probabilistic/informational approach when modeling these populations. The inferred probability distributions allow for population characterization, calculation of probability distributions of various hidden variables (e.g. number of insertions), as well as statistical properties of the distribution itself (e.g. entropy). In particular, the differences between the T cell populations of embryonic and mature mice will be examined as a case study. Comparing these populations, as well as proposed mixed populations, provides a concrete exercise in model creation, comparison, choice, and validation.

  11. Forest statistics for Southeast Texas counties - 1986

    Science.gov (United States)

    William H. McWilliams; Daniel F. Bertelson

    1986-01-01

    These tables were derived from data obtained during a 1986 inventory of 22 counties comprising the Southeast Unit of Texas (fig. 1). Grimes, Leon, Madison, and Waller counties have been added to the Southeastern Unit since the previous inventory of 1975. All comparisons of the 1975 and 1986 forest statistics made in this Bulletin account for this change. The data on...

  12. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    International Nuclear Information System (INIS)

    Nedic, Vladimir; Despotovic, Danijela; Cvetanovic, Slobodan; Despotovic, Milan; Babic, Sasa

    2014-01-01

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore, it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the proposed structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level L_eq in the given time period. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed, user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to the other statistical methods in traffic noise level prediction. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed, user-friendly software package. • The results are compared with classical statistical methods. • The results demonstrate the much better predictive capabilities of the ANN model.
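
    A minimal sketch of the kind of ANN regression described above, assuming illustrative feature names (light vehicles, heavy vehicles, average speed) and synthetic data; it is not the authors' software package.

```python
# Hedged sketch: a small feed-forward network mapping traffic-flow composition and
# average speed to an equivalent noise level L_eq. Features and data are assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(50, 500, n),   # light vehicles per hour (assumed feature)
    rng.integers(5, 100, n),    # heavy vehicles per hour (assumed feature)
    rng.uniform(30, 80, n),     # average flow speed, km/h (assumed feature)
])
# Synthetic target: noisy logarithmic dependence, a stand-in for measured L_eq [dB(A)]
leq = 40 + 10 * np.log10(X[:, 0] + 8 * X[:, 1]) + 0.05 * X[:, 2] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, leq, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out data:", ann.score(X_te, y_te))
```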

  13. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    Energy Technology Data Exchange (ETDEWEB)

    Nedic, Vladimir, E-mail: vnedic@kg.ac.rs [Faculty of Philology and Arts, University of Kragujevac, Jovana Cvijića bb, 34000 Kragujevac (Serbia); Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs [Faculty of Economics, University of Kragujevac, Djure Pucara Starog 3, 34000 Kragujevac (Serbia); Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs [Faculty of Economics, University of Niš, Trg kralja Aleksandra Ujedinitelja, 18000 Niš (Serbia); Despotovic, Milan, E-mail: mdespotovic@kg.ac.rs [Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, 34000 Kragujevac (Serbia); Babic, Sasa, E-mail: babicsf@yahoo.com [College of Applied Mechanical Engineering, Trstenik (Serbia)

    2014-11-15

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore, it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the proposed structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level L_eq in the given time period. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed, user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to the other statistical methods in traffic noise level prediction. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed, user-friendly software package. • The results are compared with classical statistical methods. • The results demonstrate the much better predictive capabilities of the ANN model.

  14. Comparison of precipitation nowcasting by extrapolation and statistical-advection methods

    Czech Academy of Sciences Publication Activity Database

    Sokol, Zbyněk; Kitzmiller, D.; Pešice, Petr; Mejsnar, Jan

    2013-01-01

    Roč. 123, 1 April (2013), s. 17-30 ISSN 0169-8095 R&D Projects: GA MŠk ME09033 Institutional support: RVO:68378289 Keywords : Precipitation forecast * Statistical models * Regression * Quantitative precipitation forecast * Extrapolation forecast Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 2.421, year: 2013 http://www.sciencedirect.com/science/article/pii/S0169809512003390

  15. Comparison of normal adult and children brain SPECT imaging using statistical parametric mapping(SPM)

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Myoung Hoon; Yoon, Seok Nam; Joh, Chul Woo; Lee, Dong Soo [Ajou University School of Medicine, Suwon (Korea, Republic of); Lee, Jae Sung [Seoul national University College of Medicine, Seoul (Korea, Republic of)

    2002-07-01

    This study compared rCBF patterns in normal adults and normal children using statistical parametric mapping (SPM). The purpose of this study was to determine distribution patterns not seen on visual analysis in both groups. Tc-99m ECD brain SPECT was performed in 12 normal adults (M:F=11:1, average age 35 years) and 6 normal control children (M:F=4:2, 10.5±3.1 y) who visited a psychiatry clinic for evaluation of ADHD. Their brain SPECT revealed normal rCBF patterns on visual analysis and they were diagnosed as clinically normal. Using the SPM method, we compared the normal adult group's SPECT images with those of the 6 normal children and measured the extent of the areas with significant hypoperfusion and hyperperfusion (p<0.001, extent threshold=16). The areas of both angular gyri, both postcentral gyri, both superior frontal gyri, and both superior parietal lobes showed significant hyperperfusion in the normal adult group compared with the normal children group. The areas of the left amygdala, brain stem, both cerebellar hemispheres, left globus pallidus, both hippocampal formations, both parahippocampal gyri, both thalami, both unci, and both lateral and medial occipitotemporal gyri showed significant hyperperfusion in the children. These results demonstrate that SPM can reveal more precise anatomical differences between groups than visual analysis alone.
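
    A toy sketch of a voxel-wise group comparison in the spirit of the analysis above (not the SPM software itself); image dimensions, group sizes, and the simulated hyperperfused region are illustrative assumptions.

```python
# Two-sample t-test at every voxel, thresholded at p < 0.001 (assumptions noted above).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(9)
shape, n_adults, n_children = (16, 16, 16), 12, 6
adults = rng.normal(50, 5, (n_adults,) + shape)
children = rng.normal(50, 5, (n_children,) + shape)
children[:, 4:8, 4:8, 4:8] += 10           # simulated region of higher perfusion in children

t, p = ttest_ind(children, adults, axis=0)
suprathreshold = (p < 0.001) & (t > 0)     # voxels where children > adults
print("significant voxels:", int(suprathreshold.sum()))
# A full SPM analysis would additionally apply a cluster extent threshold and
# correct for multiple comparisons.
```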

  16. Comparison of normal adult and children brain SPECT imaging using statistical parametric mapping(SPM)

    International Nuclear Information System (INIS)

    Lee, Myoung Hoon; Yoon, Seok Nam; Joh, Chul Woo; Lee, Dong Soo; Lee, Jae Sung

    2002-01-01

    This study compared rCBF patterns in normal adults and normal children using statistical parametric mapping (SPM). The purpose of this study was to determine distribution patterns not seen on visual analysis in both groups. Tc-99m ECD brain SPECT was performed in 12 normal adults (M:F=11:1, average age 35 years) and 6 normal control children (M:F=4:2, 10.5±3.1 y) who visited a psychiatry clinic for evaluation of ADHD. Their brain SPECT revealed normal rCBF patterns on visual analysis and they were diagnosed as clinically normal. Using the SPM method, we compared the normal adult group's SPECT images with those of the 6 normal children and measured the extent of the areas with significant hypoperfusion and hyperperfusion (p<0.001, extent threshold=16). The areas of both angular gyri, both postcentral gyri, both superior frontal gyri, and both superior parietal lobes showed significant hyperperfusion in the normal adult group compared with the normal children group. The areas of the left amygdala, brain stem, both cerebellar hemispheres, left globus pallidus, both hippocampal formations, both parahippocampal gyri, both thalami, both unci, and both lateral and medial occipitotemporal gyri showed significant hyperperfusion in the children. These results demonstrate that SPM can reveal more precise anatomical differences between groups than visual analysis alone.

  17. Comparison of pure and 'Latinized' centroidal Voronoi tessellation against various other statistical sampling methods

    International Nuclear Information System (INIS)

    Romero, Vicente J.; Burkardt, John V.; Gunzburger, Max D.; Peterson, Janet S.

    2006-01-01

    A recently developed centroidal Voronoi tessellation (CVT) sampling method is investigated here to assess its suitability for use in statistical sampling applications. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-dimensional parameter spaces. On several 2-D test problems CVT has recently been found to provide exceedingly effective and efficient point distributions for response surface generation. Additionally, for statistical function integration and estimation of response statistics associated with uniformly distributed random-variable inputs (uncorrelated), CVT has been found in initial investigations to provide superior point sets when compared against Latin hypercube and simple-random Monte Carlo methods and Halton and Hammersley quasi-random sequence methods. In this paper, the performance of all these sampling methods and a new variant ('Latinized' CVT) are further compared for non-uniform input distributions. Specifically, given uncorrelated normal inputs in a 2-D test problem, statistical sampling efficiencies are compared for resolving various statistics of response: mean, variance, and exceedance probabilities.
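
    An illustrative sketch of the kind of efficiency comparison described above, limited to simple-random Monte Carlo versus Latin hypercube sampling with uncorrelated normal inputs; CVT and 'Latinized' CVT point sets are not implemented here, and the test function is an arbitrary stand-in.

```python
# Compare the spread of the estimated mean of a 2-D test response across repeated
# samples: a smaller spread indicates a more efficient sampling scheme.
import numpy as np
from scipy.stats import norm, qmc

def response(x):                       # assumed smooth 2-D test function
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

n, reps = 100, 200
rng = np.random.default_rng(1)
lhs = qmc.LatinHypercube(d=2, seed=1)
means_mc, means_lhs = [], []
for _ in range(reps):
    x_mc = rng.standard_normal((n, 2))          # simple random sampling
    x_lhs = norm.ppf(lhs.random(n))             # LHS points mapped through the normal quantile function
    means_mc.append(response(x_mc).mean())
    means_lhs.append(response(x_lhs).mean())

print("std of estimated mean, simple MC      :", np.std(means_mc))
print("std of estimated mean, Latin hypercube:", np.std(means_lhs))
```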

  18. Analytical model of SiPM time resolution and order statistics with crosstalk

    International Nuclear Information System (INIS)

    Vinogradov, S.

    2015-01-01

    Time resolution is the most important parameter of photon detectors in a wide range of time-of-flight and time correlation applications within the areas of high energy physics, medical imaging, and others. Silicon photomultipliers (SiPM) have been initially recognized as perfect photon-number-resolving detectors; now they also provide outstanding results in the scintillator timing resolution. However, crosstalk and afterpulsing introduce false secondary non-Poissonian events, and SiPM time resolution models are experiencing significant difficulties with that. This study presents an attempt to develop an analytical model of the timing resolution of an SiPM taking into account statistics of secondary events resulting from a crosstalk. Two approaches have been utilized to derive an analytical expression for time resolution: the first one based on statistics of independent identically distributed detection event times and the second one based on order statistics of these times. The first approach is found to be more straightforward and “analytical-friendly” to model analog SiPMs. Comparisons of coincidence resolving times predicted by the model with the known experimental results from a LYSO:Ce scintillator and a Hamamatsu MPPC are presented

  19. Analytical model of SiPM time resolution and order statistics with crosstalk

    Energy Technology Data Exchange (ETDEWEB)

    Vinogradov, S., E-mail: Sergey.Vinogradov@liverpool.ac.uk [University of Liverpool and Cockcroft Institute, Sci-Tech Daresbury, Keckwick Lane, Warrington WA4 4AD (United Kingdom); P.N. Lebedev Physical Institute of the Russian Academy of Sciences, 119991 Leninskiy Prospekt 53, Moscow (Russian Federation)

    2015-07-01

    Time resolution is the most important parameter of photon detectors in a wide range of time-of-flight and time correlation applications within the areas of high energy physics, medical imaging, and others. Silicon photomultipliers (SiPM) have been initially recognized as perfect photon-number-resolving detectors; now they also provide outstanding results in the scintillator timing resolution. However, crosstalk and afterpulsing introduce false secondary non-Poissonian events, and SiPM time resolution models are experiencing significant difficulties with that. This study presents an attempt to develop an analytical model of the timing resolution of an SiPM taking into account statistics of secondary events resulting from a crosstalk. Two approaches have been utilized to derive an analytical expression for time resolution: the first one based on statistics of independent identically distributed detection event times and the second one based on order statistics of these times. The first approach is found to be more straightforward and “analytical-friendly” to model analog SiPMs. Comparisons of coincidence resolving times predicted by the model with the known experimental results from a LYSO:Ce scintillator and a Hamamatsu MPPC are presented.

  20. Expression and clinical significance of Pax6 gene in retinoblastoma

    Directory of Open Access Journals (Sweden)

    Hai-Dong Huang

    2013-07-01

    Full Text Available AIM: To discuss the expression and clinical significance of the Pax6 gene in retinoblastoma (Rb). METHODS: A total of 15 fresh Rb tissue samples were selected as the observation group and 15 normal retinal tissue samples as the control group. Western blot and reverse transcriptase polymerase chain reaction (RT-PCR) methods were used to detect Pax6 protein and Pax6 mRNA expression in the normal retinal tissues and Rb tissues. At the same time, the Western blot method was used to detect the protein-level expression of the Pax6 downstream differentiation genes MATH5 and BRN3b. After comparison between the two groups, the expression and clinical significance of the Pax6 gene in Rb were discussed. RESULTS: In the observation group, the average value of Pax6 mRNA expression was 0.99±0.03; the average value of Pax6 protein expression was 2.07±0.15; the average value of BRN3b protein expression was 0.195±0.016; and the average value of MATH5 protein expression was 0.190±0.031. All were significantly higher than in the control group, and the differences were statistically significant. CONCLUSION: Abnormal expression of the Pax6 gene is likely to accelerate the occurrence of Rb.

  1. A comparison of statistical methods for identifying out-of-date systematic reviews.

    Directory of Open Access Journals (Sweden)

    Porjai Pattanittum

    Full Text Available BACKGROUND: Systematic reviews (SRs can provide accurate and reliable evidence, typically about the effectiveness of health interventions. Evidence is dynamic, and if SRs are out-of-date this information may not be useful; it may even be harmful. This study aimed to compare five statistical methods to identify out-of-date SRs. METHODS: A retrospective cohort of SRs registered in the Cochrane Pregnancy and Childbirth Group (CPCG, published between 2008 and 2010, were considered for inclusion. For each eligible CPCG review, data were extracted and "3-years previous" meta-analyses were assessed for the need to update, given the data from the most recent 3 years. Each of the five statistical methods was used, with random effects analyses throughout the study. RESULTS: Eighty reviews were included in this study; most were in the area of induction of labour. The numbers of reviews identified as being out-of-date using the Ottawa, recursive cumulative meta-analysis (CMA, and Barrowman methods were 34, 7, and 7 respectively. No reviews were identified as being out-of-date using the simulation-based power method, or the CMA for sufficiency and stability method. The overall agreement among the three discriminating statistical methods was slight (Kappa = 0.14; 95% CI 0.05 to 0.23. The recursive cumulative meta-analysis, Ottawa, and Barrowman methods were practical according to the study criteria. CONCLUSION: Our study shows that three practical statistical methods could be applied to examine the need to update SRs.

  2. A rank-based algorithm of differential expression analysis for small cell line data with statistical control.

    Science.gov (United States)

    Li, Xiangyu; Cai, Hao; Wang, Xianlong; Ao, Lu; Guo, You; He, Jun; Gu, Yunyan; Qi, Lishuang; Guan, Qingzhou; Lin, Xu; Guo, Zheng

    2017-10-13

    To detect differentially expressed genes (DEGs) in small-scale cell line experiments, usually with only two or three technical replicates for each state, the commonly used statistical methods such as significance analysis of microarrays (SAM), limma and RankProd (RP) lack statistical power, while the fold change method lacks any statistical control. In this study, we demonstrated that the within-sample relative expression orderings (REOs) of gene pairs were highly stable among technical replicates of a cell line but often widely disrupted after certain treatments such as gene knockdown, gene transfection and drug treatment. Based on this finding, we customized the RankComp algorithm, previously designed for individualized differential expression analysis through REO comparison, to identify DEGs with certain statistical control for small-scale cell line data. In both simulated and real data, the new algorithm, named CellComp, exhibited high precision with much higher sensitivity than the original RankComp, SAM, limma and RP methods. Therefore, CellComp provides an efficient tool for analyzing small-scale cell line data. © The Author 2017. Published by Oxford University Press.
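
    A rough sketch of the relative-expression-ordering (REO) idea on made-up data; this is not the published RankComp/CellComp algorithm, and the background flip rate used in the binomial test is an assumed value.

```python
# For one gene, test whether its orderings against partner genes that are stable in
# control replicates are disrupted in treated replicates.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)
n_genes, n_rep = 200, 3
control = rng.lognormal(3, 1, (n_genes, n_rep))
treated = control * rng.lognormal(0, 0.05, (n_genes, n_rep))
treated[0] *= 4                                  # pretend gene 0 is up-regulated

def stable_orderings(expr, g):
    """Partner genes whose ordering vs. gene g agrees in every replicate."""
    greater = (expr[g] > expr).all(axis=1)
    smaller = (expr[g] < expr).all(axis=1)
    return greater, smaller

g = 0
up_c, down_c = stable_orderings(control, g)
up_t, down_t = stable_orderings(treated, g)
flips = np.sum(up_c & down_t) + np.sum(down_c & up_t)   # stable pairs whose direction reverses
stable = np.sum(up_c) + np.sum(down_c)
p = binom.sf(flips - 1, stable, 0.05)                   # assumed 5% background flip rate
print(f"gene {g}: {flips}/{stable} stable pairs reversed, p ≈ {p:.3g}")
```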

  3. Statistical lamb wave localization based on extreme value theory

    Science.gov (United States)

    Harley, Joel B.

    2018-04-01

    Guided wave localization methods based on delay-and-sum imaging, matched field processing, and other techniques have been designed and researched to create images that locate and describe structural damage. The maximum value of these images typically represents an estimated damage location. Yet, it is often unclear if this maximum value, or any other value in the image, is a statistically significant indicator of damage. Furthermore, there are currently few, if any, approaches to assess the statistical significance of guided wave localization images. As a result, we present statistical delay-and-sum and statistical matched field processing localization methods to create statistically significant images of damage. Our framework uses constant-false-alarm-rate statistics and extreme value theory to detect damage with little prior information. We demonstrate our methods with in situ guided wave data from an aluminum plate to detect two 0.75 cm diameter holes. Our results show an expected improvement in statistical significance as the number of sensors increases. With seventeen sensors, both methods successfully detect damage with statistical significance.
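
    A sketch of the statistical thresholding step only (not the full delay-and-sum imaging pipeline): fit an extreme-value distribution to the maxima of baseline, damage-free images and derive a detection threshold for a chosen false-alarm rate. All data below are synthetic placeholders.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(3)
# Maxima of many baseline localization images (here: maxima of Gaussian noise fields)
baseline_maxima = rng.normal(0, 1, (500, 64 * 64)).max(axis=1)
loc, scale = gumbel_r.fit(baseline_maxima)

false_alarm_rate = 0.01
threshold = gumbel_r.ppf(1 - false_alarm_rate, loc, scale)

test_image_max = 5.2                      # peak of a new image (assumed value)
print("threshold:", round(threshold, 2), "-> damage detected:", test_image_max > threshold)
```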

  4. Quantum mechanics as a natural generalization of classical statistical mechanics

    International Nuclear Information System (INIS)

    Xu Laizi; Qian Shangwu

    1994-01-01

    By comparison between the equations of motion of geometrical optics (GO) and those of classical statistical mechanics (CSM), it is found that there should be an analogy between GO and CSM instead of GO and classical mechanics (CM). Furthermore, by comparison between the classical limit (CL) of quantum mechanics (QM) and CSM, the authors find that the CL of QM is CSM, not CM; hence they demonstrate that QM is a natural generalization of CSM rather than of CM.

  5. Dissecting protein loops with a statistical scalpel suggests a functional implication of some structural motifs.

    Science.gov (United States)

    Regad, Leslie; Martin, Juliette; Camproux, Anne-Claude

    2011-06-20

    One of the strategies for protein function annotation is to search particular structural motifs that are known to be shared by proteins with a given function. Here, we present a systematic extraction of structural motifs of seven residues from protein loops and we explore their correspondence with functional sites. Our approach is based on the structural alphabet HMM-SA (Hidden Markov Model - Structural Alphabet), which allows simplification of protein structures into uni-dimensional sequences, and advanced pattern statistics adapted to short sequences. Structural motifs of interest are selected by looking for structural motifs significantly over-represented in SCOP superfamilies in protein loops. We discovered two types of structural motifs significantly over-represented in SCOP superfamilies: (i) ubiquitous motifs, shared by several superfamilies, and (ii) superfamily-specific motifs, over-represented in few superfamilies. A comparison of ubiquitous words with known small structural motifs shows that they contain well-described motifs such as turn, niche or nest motifs. A comparison between superfamily-specific motifs and biological annotations of Swiss-Prot reveals that some of them actually correspond to functional sites involved in the binding sites of small ligands, such as ATP/GTP, NAD(P) and SAH/SAM. Our findings show that statistical over-representation in SCOP superfamilies is linked to functional features. The detection of over-represented motifs within structures simplified by HMM-SA is therefore a promising approach for prediction of functional sites and annotation of uncharacterized proteins.
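
    A toy illustration of testing one structural word for over-representation in a single superfamily using a plain binomial test; this is not the advanced short-sequence pattern statistics used in the paper, and all counts are invented.

```python
from scipy.stats import binom

word_count_in_family = 18      # occurrences of the 7-letter word in loops of one superfamily (invented)
windows_in_family = 400        # number of 7-residue windows scanned in that superfamily (invented)
background_rate = 0.01         # word frequency across all loops (assumed)

# P(X >= observed) under the background rate
p_value = binom.sf(word_count_in_family - 1, windows_in_family, background_rate)
print(f"over-representation p-value ≈ {p_value:.2e}")
```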

  6. Dissecting protein loops with a statistical scalpel suggests a functional implication of some structural motifs

    Directory of Open Access Journals (Sweden)

    Martin Juliette

    2011-06-01

    Full Text Available Abstract Background One of the strategies for protein function annotation is to search particular structural motifs that are known to be shared by proteins with a given function. Results Here, we present a systematic extraction of structural motifs of seven residues from protein loops and we explore their correspondence with functional sites. Our approach is based on the structural alphabet HMM-SA (Hidden Markov Model - Structural Alphabet, which allows simplification of protein structures into uni-dimensional sequences, and advanced pattern statistics adapted to short sequences. Structural motifs of interest are selected by looking for structural motifs significantly over-represented in SCOP superfamilies in protein loops. We discovered two types of structural motifs significantly over-represented in SCOP superfamilies: (i ubiquitous motifs, shared by several superfamilies and (ii superfamily-specific motifs, over-represented in few superfamilies. A comparison of ubiquitous words with known small structural motifs shows that they contain well-described motifs as turn, niche or nest motifs. A comparison between superfamily-specific motifs and biological annotations of Swiss-Prot reveals that some of them actually correspond to functional sites involved in the binding sites of small ligands, such as ATP/GTP, NAD(P and SAH/SAM. Conclusions Our findings show that statistical over-representation in SCOP superfamilies is linked to functional features. The detection of over-represented motifs within structures simplified by HMM-SA is therefore a promising approach for prediction of functional sites and annotation of uncharacterized proteins.

  7. LOD significance thresholds for QTL analysis in experimental populations of diploid species

    Science.gov (United States)

    Van Ooijen JW

    1999-11-01

    Linkage analysis with molecular genetic markers is a very powerful tool in the biological research of quantitative traits. The lack of an easy way to determine which areas of the genome can be designated as statistically significant for containing a gene affecting the quantitative trait of interest hampers the important task of controlling the rate of false positives. In this paper four tables, obtained by large-scale simulations, are presented that can be used with a simple formula to obtain the false-positive rate for analyses of the standard types of experimental populations of diploid species with any genome size. A new definition of the term 'suggestive linkage' is proposed that allows a more objective comparison of results across species.
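
    A hedged illustration of the underlying idea, not the paper's tables or formula: estimate a genome-wide LOD significance threshold by simulating the null (no QTL) and recording the maximum LOD over many markers. The marker count, sample size, and 5% genome-wide error rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ind, n_markers, n_sim = 200, 200, 500
max_lods = np.empty(n_sim)
for s in range(n_sim):
    trait = rng.standard_normal(n_ind)
    markers = rng.integers(0, 2, (n_markers, n_ind)).astype(float)   # backcross-like 0/1 genotypes
    tc = trait - trait.mean()
    mc = markers - markers.mean(axis=1, keepdims=True)
    r = (mc @ tc) / (np.sqrt((mc ** 2).sum(axis=1)) * np.sqrt((tc ** 2).sum()))
    lod = -(n_ind / 2) * np.log10(1 - r ** 2)                        # single-marker regression LOD
    max_lods[s] = lod.max()

print("approx. genome-wide 5% LOD threshold:", round(np.percentile(max_lods, 95), 2))
```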

  8. A Comparison of Two Invagination Techniques for Pancreatojejunostomy after Pancreatoduodenectomy

    Directory of Open Access Journals (Sweden)

    Katarzyna Kusnierz

    2015-01-01

    Full Text Available Background. The aim of the study was to compare two invagination techniques for pancreatojejunostomy after pancreatoduodenectomy. Methods. For effective prevention of the development of pancreatic leakage, we modified the invagination technique, which we term the "serous touch." We analysed the diameter of the main pancreatic duct, the texture of the remnant pancreas, the method of the reconstruction, pancreatic external drainage, anastomotic procedure time, histopathological examination, and postoperative complications. Results. Fifty-two patients underwent pancreatoduodenectomy with pancreatojejunostomy using the "serous touch" technique (ST group) and 52 classic pancreatojejunostomy (C group). In the ST group one patient (1.9%) was diagnosed with grade B pancreatic fistula, and no patient experienced fistula grade A or C. In the C group 6 patients (11.5%) were diagnosed with fistula grade A, 1 (1.9%) patient with fistula grade B, and 1 (1.9%) patient with fistula grade C. There was a statistically significant difference in the incidence of pancreatic fistula (P<0.05) and no statistical difference in other postoperative complications or mortality between the groups. Anastomosis time was statistically shorter in the ST group. Conclusions. The "serous touch" technique appeared to be easy, safe, associated with a lower incidence of pancreatic fistula, and less time consuming in comparison with classical pancreatojejunostomy.

  9. Statistical analysis of acoustic characteristics of Tibetan Lhasa dialect speech emotion

    Directory of Open Access Journals (Sweden)

    Guo Dandan

    2016-01-01

    Full Text Available The paper makes a quantitative analysis and comparison of the continuous speech emotion of Lhasa Tibetan in the four basic emotional patterns (happy, surprise, sad, neutral) in terms of pitch, energy and time length, using experimental phonetics and linear statistical research methods. It is found that Lhasa Tibetan emotional speech is positively correlated with pitch, energy and duration, and that the pitch, energy and duration parameters of negative emotions are larger than those of positive emotions; on this basis, the acoustic feature patterns of Lhasa Tibetan speech emotion are drawn. Although Chinese and Tibetan both have tonal prosodic features, they also show significant differences in the acoustic characteristics of speech emotion.

  10. Statistical 3D damage accumulation model for ion implant simulators

    CERN Document Server

    Hernandez-Mangas, J M; Enriquez, L E; Bailon, L; Barbolla, J; Jaraiz, M

    2003-01-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.

  11. Statistical 3D damage accumulation model for ion implant simulators

    International Nuclear Information System (INIS)

    Hernandez-Mangas, J.M.; Lazaro, J.; Enriquez, L.; Bailon, L.; Barbolla, J.; Jaraiz, M.

    2003-01-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided

  12. STATISTICAL DOWNSCALING WITH TIME LAGS BASED ON CROSS CORRELATION

    Directory of Open Access Journals (Sweden)

    Aji Hamim Wigena

    2015-09-01

    Full Text Available Time lags in time series data analysis are required especially to analyze the relationship between two variables, as in statistical downscaling. The time lag is determined from a high cross correlation, which corresponds to a strong relationship between the two variables, so that it can be used in modeling for more accurate forecasts. This paper concerns statistical downscaling that takes the cross correlation between rainfall data and precipitation output of a Global Circulation Model (GCM) from the Climate Model Inter Comparison Project (CMIP5) into account. One of the conditions in statistical downscaling is that the local-scale and global-scale variables are highly correlated. Both types of variables are time series data, thus the cross-correlation function is applied to obtain the time lags. A high cross correlation determines the time lag in the GCM output that yields a stronger functional relationship between the two types of variables. Principal component regression and partial least squares regression models are used in this paper. Models with time lags predict rainfall better than models without time lags.
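
    A simple sketch of the lag-selection step on synthetic series (not CMIP5 output): compute the correlation between local rainfall and a GCM-like precipitation series at several time lags and keep the lag with the strongest correlation.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(240)                                   # e.g. 20 years of monthly data
gcm = np.sin(2 * np.pi * t / 12) + 0.3 * rng.standard_normal(t.size)
rain = np.roll(gcm, 2) + 0.3 * rng.standard_normal(t.size)   # local rainfall lags the GCM by 2 steps

def lagged_corr(x, y, lag):
    """Correlation of y(t) with x(t - lag)."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

lags = range(0, 7)
corrs = [lagged_corr(gcm, rain, k) for k in lags]
best = max(lags, key=lambda k: abs(corrs[k]))
print("best lag:", best, "correlation:", round(corrs[best], 2))
# The GCM predictor shifted by `best` would then enter the downscaling regression.
```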

  13. Primary pulmonary non-small cell carcinomas: the College of American Pathologists Interlaboratory Comparison Program confirms a significant trend toward subcategorization based upon fine-needle aspiration cytomorphology alone.

    Science.gov (United States)

    Yildiz-Aktas, Isil Z; Sturgis, Charles D; Barkan, Guliz A; Souers, Rhona J; Fraig, Mostafa M; Laucirica, Rodolfo; Khalbuss, Walid E; Moriarty, Ann T

    2014-01-01

    Context.-Subtyping of non-small cell lung carcinomas (NSCLCs) is necessary for optimal patient management with specific diagnoses triggering specific molecular tests and affecting therapy. Objective.-To assess the accuracy of the participants of the College of American Pathologists Interlaboratory Comparison Program in diagnosing and subtyping NSCLC fine-needle aspiration (FNA) slides, based on morphology alone, considering preparation and participant type and trends over time. Design.-The performance of program participants was reviewed for the 5-year period spanning 2007-2011. Lung FNA challenges with reference diagnoses of adenocarcinoma and squamous cell carcinoma (SCC) were evaluated for diagnostic concordance by using a nonlinear mixed model analysis. Results.-There were 10 493 pathologist and 6378 cytotechnologist responses with concordance rates of 97.4% and 97.9% for malignancy, respectively. Overall concordance rates for subcategorization were 54.6% for adenocarcinoma and 74.9% for SCC. For the exact reference diagnoses, pathologists performed better for adenocarcinoma and cytotechnologists performed better for SCC. Accurate subcategorization of adenocarcinomas significantly increased over time, with 31.5% of adenocarcinomas classified as NSCLC in 2007 and 25.5% of adenocarcinomas classified as NSCLC in 2011. During the study period, a statistically significant trend was confirmed toward greater accuracy of subcategorization of adenocarcinomas, suggesting that participants are cognizant of the impact that more specific cytomorphologic interpretations have in directing molecular triage and therapy.

  14. Statistical optimization of substrate, carbon and nitrogen source by ...

    African Journals Online (AJOL)

    PRECIOUS

    2009-11-16

    Nov 16, 2009 ... Full Length Research Paper. Statistical ... extraction, in chocolate and tea fermentation and in vegetable waste ... Total of 20 shake flasks media (50 mL in 250 mL Erlenmeyer), including the ..... Physiological comparison between pectinase producing mutants of ... pectinases in bioreactor. Bioprocess Eng.

  15. Nonparametric Bayesian predictive distributions for future order statistics

    Science.gov (United States)

    Richard A. Johnson; James W. Evans; David W. Green

    1999-01-01

    We derive the predictive distribution for a specified order statistic, determined from a future random sample, under a Dirichlet process prior. Two variants of the approach are treated and some limiting cases studied. A practical application to monitoring the strength of lumber is discussed including choices of prior expectation and comparisons made to a Bayesian...

  16. Statistics of resonances in a one-dimensional chain: a weak disorder limit

    International Nuclear Information System (INIS)

    Vinayak

    2012-01-01

    We study statistics of resonances in a one-dimensional disordered chain coupled to an outer world simulated by a perfect lead. We consider a limiting case for weak disorder and derive some results which are new in these studies. The main focus of this study is to describe the statistics of the scattered complex energies. We derive compact analytic statistical results for long chains. A comparison of these results has been found to be in good agreement with numerical simulations. (paper)

  17. Comparison of preoperative and postoperative radiation therapy for patients with carcinoma of head and neck

    International Nuclear Information System (INIS)

    Snow, J.B.; Gelber, R.D.; Kramer, S.; Davis, L.W.; Marcial, V.A.; Lowry, L.D.

    1981-01-01

    Three hundred and fifty-four patients with squamous cell carcinoma of the oral cavity, oropharynx, supraglottic larynx, hypopharynx or maxillary sinus have been randomized for preoperative radiation therapy and surgery versus surgery and postoperative radiation therapy plus, in the case of patients with lesions of the oral cavity and oropharynx, radical radiation therapy. Data have been analyzed on 320 patients in this interim report. In the supraglottic larynx group, local-regional control is significantly better for surgery and postoperative radiation therapy. The treatment differences in local-regional control in the oral cavity, oropharynx, and hypopharynx groups are statistically significant. No statistically significant treatment differences exist for survival in all sites or in any site; continued follow-up is necessary to make definite treatment comparisons. (authors)

  18. Time to significant pain reduction following DETP application vs placebo for acute soft tissue injuries.

    Science.gov (United States)

    Yanchick, J; Magelli, M; Bodie, J; Sjogren, J; Rovati, S

    2010-08-01

    Nonsteroidal anti-inflammatory drugs (NSAIDs) provide fast and effective acute pain relief, but systemic administration has increased risk for some adverse reactions. The diclofenac epolamine 1.3% topical patch (DETP) is a topical NSAID with demonstrated safety and efficacy in treatment of acute pain from minor soft tissue injuries. Significant pain reduction has been observed in clinical trials within several hours following DETP application, suggesting rapid pain relief; however, this has not been extensively studied for topical NSAIDs in general. This retrospective post-hoc analysis examined time to onset of significant pain reduction after DETP application compared to a placebo patch for patients with mild-to-moderate acute ankle sprain, evaluating the primary efficacy endpoint from two nearly identical studies. Data from two double-blind, randomized, parallel-group, placebo-controlled studies (N = 274) of safety and efficacy of the DETP applied once daily for 7 days for acute ankle sprain were evaluated post-hoc using statistical modeling to estimate time to onset of significant pain reduction following DETP application. Pain on active movement on a 100 mm Visual Analog Scale (VAS) recorded in patient diaries; physician- and patient-assessed tolerability; and adverse events. DETP treatment resulted in significant pain reduction within approximately 3 hours compared to placebo. Within-treatment post-hoc analysis based on a statistical model suggested significant pain reduction occurred as early as 1.27 hours for the DETP group. The study may have been limited by the retrospective nature of the analyses. In both studies, the DETP was well tolerated with few adverse events, limited primarily to application site skin reactions. The DETP is an effective treatment for acute minor soft tissue injury, providing pain relief as rapidly as 1.27 hours post-treatment. Statistical modeling may be useful in estimating time to onset of pain relief for comparison of topical

  19. TRAN-STAT: statistics for environmental studies, Number 22. Comparison of soil-sampling techniques for plutonium at Rocky Flats

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Bernhardt, D.E.; Hahn, P.B.

    1983-01-01

    A summary of a field soil sampling study conducted around the Rocky Flats, Colorado, plant in May 1977 is presented. Several different soil sampling techniques that had been used in the area were applied at four different sites. One objective was to compare the average 239-240Pu concentration values obtained by the various soil sampling techniques used. There was also interest in determining whether there are differences in the reproducibility of the various techniques and how the techniques compared with the proposed EPA technique of sampling to 1 cm depth. Statistically significant differences in average concentrations between the techniques were found. The differences could be largely related to the differences in sampling depth, the primary physical variable between the techniques. The reproducibility of the techniques was evaluated by comparing coefficients of variation. Differences between coefficients of variation were not statistically significant. Average (median) coefficients ranged from 21 to 42 percent for the five sampling techniques. A laboratory study indicated that various sample treatment and particle sizing techniques could increase the concentration of plutonium in the less than 10 micrometer size fraction by up to a factor of about 4 compared to the 2 mm size fraction.
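
    A small sketch of the two comparisons described above, on invented data: a one-way ANOVA on mean concentrations across sampling techniques, and a per-technique coefficient of variation as a measure of reproducibility.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(10)
# Invented Pu concentration replicates for three sampling techniques at one site
techniques = {
    "0-1 cm":  rng.lognormal(1.0, 0.3, 8),
    "0-5 cm":  rng.lognormal(0.5, 0.3, 8),
    "0-10 cm": rng.lognormal(0.2, 0.3, 8),
}
stat, p = f_oneway(*techniques.values())
print(f"ANOVA on mean concentrations: F = {stat:.2f}, p = {p:.3f}")
for name, x in techniques.items():
    cv = 100 * x.std(ddof=1) / x.mean()
    print(f"  {name}: CV = {cv:.0f}%")
```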

  20. Statistical Characterization of the Chandra Source Catalog

    Science.gov (United States)

    Primini, Francis A.; Houck, John C.; Davis, John E.; Nowak, Michael A.; Evans, Ian N.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G.; Grier, John D.; Hain, Roger M.; Hall, Diane M.; Harbo, Peter N.; He, Xiangqun Helen; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael S.; Van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2011-06-01

    The first release of the Chandra Source Catalog (CSC) contains ~95,000 X-ray sources in a total area of 0.75% of the entire sky, using data from ~3900 separate ACIS observations of a multitude of different types of X-ray sources. In order to maximize the scientific benefit of such a large, heterogeneous data set, careful characterization of the statistical properties of the catalog, i.e., completeness, sensitivity, false source rate, and accuracy of source properties, is required. Characterization efforts of other large Chandra catalogs, such as the ChaMP Point Source Catalog or the 2 Mega-second Deep Field Surveys, while informative, cannot serve this purpose, since the CSC analysis procedures are significantly different and the range of allowable data is much less restrictive. We describe here the characterization process for the CSC. This process includes both a comparison of real CSC results with those of other, deeper Chandra catalogs of the same targets and extensive simulations of blank-sky and point-source populations.

  1. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    Science.gov (United States)

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.

  2. The significance of Good Chair as part of children’s school and home environment in the preventive treatment of body statistics distortions

    OpenAIRE

    Mirosław Mrozkowiak; Hanna Żukowska

    2015-01-01

    Mrozkowiak Mirosław, Żukowska Hanna. Znaczenie Dobrego Krzesła, jako elementu szkolnego i domowego środowiska ucznia, w profilaktyce zaburzeń statyki postawy ciała = The significance of Good Chair as part of children’s school and home environment in the preventive treatment of body statistics distortions. Journal of Education, Health and Sport. 2015;5(7):179-215. ISSN 2391-8306. DOI 10.5281/zenodo.19832 http://ojs.ukw.edu.pl/index.php/johs/article/view/2015%3B5%287%29%3A179-215 https:...

  3. Can a significance test be genuinely Bayesian?

    OpenAIRE

    Pereira, Carlos A. de B.; Stern, Julio Michael; Wechsler, Sergio

    2008-01-01

    The Full Bayesian Significance Test, FBST, is extensively reviewed. Its test statistic, a genuine Bayesian measure of evidence, is discussed in detail. Its behavior in some problems of statistical inference like testing for independence in contingency tables is discussed.

  4. Beyond δ : Tailoring marked statistics to reveal modified gravity

    Science.gov (United States)

    Valogiannis, Georgios; Bean, Rachel

    2018-01-01

    Models that seek to explain cosmic acceleration through modifications to general relativity (GR) evade stringent Solar System constraints through a restoring, screening mechanism. Down-weighting the high-density, screened regions in favor of the low-density, unscreened ones offers the potential to enhance the amount of information carried in such modified gravity models. In this work, we assess the performance of a new "marked" transformation and perform a systematic comparison with the clipping and logarithmic transformations, in the context of ΛCDM and the symmetron and f(R) modified gravity models. Performance is measured in terms of the fractional boost in the Fisher information and the signal-to-noise ratio (SNR) for these models relative to the statistics derived from the standard density distribution. We find that all three statistics provide improved Fisher boosts over the basic density statistics. The model parameters for the marked and clipped transformation that best enhance signals and the Fisher boosts are determined. We also show that the mark is useful both as a Fourier and real-space transformation; a marked correlation function also enhances the SNR relative to the standard correlation function, and can on mildly nonlinear scales show a significant difference between the ΛCDM and the modified gravity models. Our results demonstrate how a series of simple analytical transformations could dramatically increase the predicted information extracted on deviations from GR, from large-scale surveys, and give the prospect for a much more feasible potential detection.
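
    A minimal sketch of a density-dependent mark that down-weights high-density (screened) regions; the functional form and parameter values are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
delta = rng.lognormal(0, 0.8, (128, 128)) - 1.0     # toy overdensity field, delta >= -1

def mark(delta, delta_star=4.0, p=2.0):
    """Down-weighting mark: close to 1 in voids, approaching 0 in very dense regions."""
    return ((1.0 + delta_star) / (1.0 + delta_star + delta)) ** p

marked = mark(delta) * (1.0 + delta)                # marked density field
print("variance of density field:", np.var(1.0 + delta))
print("variance of marked field :", np.var(marked))
```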

  5. Statistical theory of dynamo

    Science.gov (United States)

    Kim, E.; Newton, A. P.

    2012-04-01

    One major problem in dynamo theory is the multi-scale nature of the MHD turbulence, which requires statistical theory in terms of probability distribution functions. In this contribution, we present the statistical theory of magnetic fields in a simplified mean field α-Ω dynamo model by varying the statistical property of alpha, including marginal stability and intermittency, and then utilize observational data of solar activity to fine-tune the mean field dynamo model. Specifically, we first present a comprehensive investigation into the effect of the stochastic parameters in a simplified α-Ω dynamo model. Through considering the manifold of marginal stability (the region of parameter space where the mean growth rate is zero), we show that stochastic fluctuations are conducive to dynamo. Furthermore, by considering the cases of fluctuating alpha that are periodic and Gaussian coloured random noise with identical characteristic time-scales and fluctuating amplitudes, we show that the transition to dynamo is significantly facilitated for stochastic alpha with random noise. Furthermore, we show that probability density functions (PDFs) of the growth-rate, magnetic field and magnetic energy can provide a wealth of useful information regarding the dynamo behaviour/intermittency. Finally, the precise statistical property of the dynamo, such as temporal correlation and fluctuating amplitude, is found to be dependent on the distribution of the fluctuations of the stochastic parameters. We then use observations of solar activity to constrain parameters relating to the effect in stochastic α-Ω nonlinear dynamo models. This is achieved through performing a comprehensive statistical comparison by computing PDFs of solar activity from observations and from our simulation of the mean field dynamo model. The observational data that are used are the time history of solar activity inferred from C14 data over the past 11000 years on a long time scale and direct observations of the sun spot

  6. Comparison of Statistical Post-Processing Methods for Probabilistic Wind Speed Forecasting

    Science.gov (United States)

    Han, Keunhee; Choi, JunTae; Kim, Chansoo

    2018-02-01

    In this study, the statistical post-processing methods that include bias-corrected and probabilistic forecasts of wind speed measured in PyeongChang, which is scheduled to host the 2018 Winter Olympics, are compared and analyzed to provide more accurate weather information. The six post-processing methods used in this study are as follows: mean bias-corrected forecast, mean and variance bias-corrected forecast, decaying averaging forecast, mean absolute bias-corrected forecast, and alternative implementations of the ensemble model output statistics (EMOS) and Bayesian model averaging (BMA) models, namely the EMOS and BMA exchangeable models (which assume exchangeable ensemble members) and simplified versions of the EMOS and BMA models. Observations for wind speed were obtained from the 26 stations in PyeongChang and 51 ensemble member forecasts derived from the European Centre for Medium-Range Weather Forecasts (ECMWF Directorate, 2012) that were obtained between 1 May 2013 and 18 March 2016. Prior to applying the post-processing methods, reliability analysis was conducted by using rank histograms to identify the statistical consistency of the ensemble forecast and corresponding observations. Based on the results of our study, we found that the prediction skills of the probabilistic forecasts of the EMOS and BMA models were superior to the bias-corrected forecasts in terms of deterministic prediction, whereas in probabilistic prediction, the BMA models showed better prediction skill than EMOS. Even though the simplified version of the BMA model exhibited the best prediction skill among the six methods, the results showed that the differences in prediction skill between the versions of EMOS and BMA were negligible.
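
    A sketch of two of the simpler post-processing schemes mentioned above, applied to a synthetic ensemble-mean wind-speed forecast; the training window and the decaying-average weight of 0.05 are assumed choices.

```python
import numpy as np

rng = np.random.default_rng(7)
days = 300
obs = 5 + 2 * np.sin(np.arange(days) / 20) + rng.normal(0, 1, days)
fcst = obs + 1.5 + rng.normal(0, 1, days)            # raw forecast with a +1.5 m/s bias

# (1) mean bias correction over a training window
train = slice(0, 200)
mean_bias = np.mean(fcst[train] - obs[train])
corrected_mean = fcst - mean_bias

# (2) decaying-average bias correction, updated sequentially
w, bias, corrected_decay = 0.05, 0.0, np.empty(days)
for t in range(days):
    corrected_decay[t] = fcst[t] - bias
    bias = (1 - w) * bias + w * (fcst[t] - obs[t])    # update once the verifying observation arrives

test = slice(200, days)
print("raw MAE      :", np.mean(np.abs(fcst[test] - obs[test])))
print("mean-bias MAE:", np.mean(np.abs(corrected_mean[test] - obs[test])))
print("decaying MAE :", np.mean(np.abs(corrected_decay[test] - obs[test])))
```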

  7. Improvement of vertical velocity statistics measured by a Doppler lidar through comparison with sonic anemometer observations

    Science.gov (United States)

    Bonin, Timothy A.; Newman, Jennifer F.; Klein, Petra M.; Chilson, Phillip B.; Wharton, Sonia

    2016-12-01

    Since turbulence measurements from Doppler lidars are being increasingly used within wind energy and boundary-layer meteorology, it is important to assess and improve the accuracy of these observations. While turbulent quantities are measured by Doppler lidars in several different ways, the simplest and most frequently used statistic is vertical velocity variance (w′²) from zenith stares. However, the competing effects of signal noise and resolution volume limitations, which respectively increase and decrease w′², reduce the accuracy of these measurements. Herein, an established method that utilises the autocovariance of the signal to remove noise is evaluated and its skill in correcting for volume-averaging effects in the calculation of w′² is also assessed. Additionally, this autocovariance technique is further refined by defining the amount of lag time to use for the most accurate estimates of w′². Through comparison of observations from two Doppler lidars and sonic anemometers on a 300 m tower, the autocovariance technique is shown to generally improve estimates of w′². After the autocovariance technique is applied, values of w′² from the Doppler lidars are generally in close agreement (R² ≈ 0.95-0.98) with those calculated from sonic anemometer measurements.
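
    A sketch of the autocovariance idea for removing uncorrelated noise from a vertical-velocity variance estimate: the autocovariance at nonzero lags is extrapolated back to lag zero, where the delta-correlated noise contribution lives. The synthetic series and the choice of fit lags (2-5) are assumptions, not the paper's refined lag selection.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 6000
# Synthetic "true" w with temporal correlation (moving average), plus white instrument noise
true_w = np.convolve(rng.standard_normal(n + 200), np.ones(20) / 20, mode="same")[:n]
w = true_w + rng.normal(0, 0.3, n)

def autocov(x, max_lag):
    x = x - x.mean()
    return np.array([np.mean(x[:len(x) - k] * x[k:]) for k in range(max_lag + 1)])

acov = autocov(w, 10)
lags = np.arange(2, 6)                       # fit lags 2-5 (assumed choice)
coef = np.polyfit(lags, acov[lags], 1)       # linear extrapolation back to lag 0
var_corrected = np.polyval(coef, 0.0)

print("raw variance   :", round(np.var(w), 3))
print("noise-corrected:", round(var_corrected, 3))
print("true w variance:", round(np.var(true_w), 3))
```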

  8. A Comparison of Several Statistical Tests of Reciprocity of Self-Disclosure.

    Science.gov (United States)

    Dindia, Kathryn

    1988-01-01

    Reports the results of a study that used several statistical tests of reciprocity of self-disclosure. Finds little evidence for reciprocity of self-disclosure, and concludes that either reciprocity is an illusion, or that different or more sophisticated methods are needed to detect it. (MS)

  9. Comparison of static MRI and pseudo-dynamic MRI in temporomandibular joint disorder patients

    International Nuclear Information System (INIS)

    Lee, Jin Ho; Yun, Kyoung In; Park, In Woo; Choi, Hang Moon; Park, Moon Soo

    2006-01-01

    The purpose of this study was to evaluate the comparison of static MRI and pseudo-dynamic (cine) MRI in temporomandibular joint (TMJ) disorder patients. In this investigation, 33 patients with TMJ disorders were examined using both conventional static MRI and pseudo-dynamic MRI. Multiple spoiled gradient recalled acquisition in the steady state (SPGR) images were obtained with the mouth opened and closed. Proton density weighted images were obtained at the closed and open mouth positions in static MRI. Two oral and maxillofacial radiologists evaluated the location of the articular disk, movement of the condyle and bony change, respectively, and the posterior boundary of the articular disk was obtained. No statistically significant difference was found in the observation of articular disk position, mandibular condylar movement and the posterior boundary of the articular disk using static MRI and pseudo-dynamic MRI (P>0.05). A statistically significant difference was noted in bony changes of the condyle using static MRI and pseudo-dynamic MRI (P<0.05). This study showed that pseudo-dynamic MRI did not make a difference in diagnosing internal derangement of the TMJ in comparison with static MRI, but it may be considered a supplementary method for observing bony changes.

  10. Comparison of static MRI and pseudo-dynamic MRI in temporomandibular joint disorder patients

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jin Ho; Yun, Kyoung In [Eulji Univ. School of Medicine, Seoul (Korea, Republic of); Park, In Woo; Choi, Hang Moon; Park, Moon Soo [Kangnung National Univ. College of Dentistry, Kangnung (Korea, Republic of)

    2006-12-15

    The purpose of this study was to evaluate static MRI in comparison with pseudo-dynamic (cine) MRI in temporomandibular joint (TMJ) disorder patients. In this investigation, 33 patients with TMJ disorders were examined using both conventional static MRI and pseudo-dynamic MRI. Multiple spoiled gradient recalled acquisition in the steady state (SPGR) images were obtained with the mouth opening and closing. Proton density weighted images were obtained at the closed and open mouth positions in static MRI. Two oral and maxillofacial radiologists evaluated the location of the articular disk, condylar movement, and bony changes, and the posterior boundary of the articular disk was determined. No statistically significant difference was found between static MRI and pseudo-dynamic MRI in the observation of articular disk position, mandibular condylar movement, or the posterior boundary of the articular disk (P>0.05). A statistically significant difference was noted in bony changes of the condyle between static MRI and pseudo-dynamic MRI (P<0.05). This study showed that pseudo-dynamic MRI did not make a difference in diagnosing internal derangement of the TMJ in comparison with static MRI, but it can be considered a supplementary method for observing bony changes.

  11. Statistics for experimentalists

    CERN Document Server

    Cooper, B E

    2014-01-01

    Statistics for Experimentalists aims to provide experimental scientists with a working knowledge of statistical methods and search approaches to the analysis of data. The book first elaborates on probability and continuous probability distributions. Discussions focus on properties of continuous random variables and normal variables, independence of two random variables, central moments of a continuous distribution, prediction from a normal distribution, binomial probabilities, and multiplication of probabilities and independence. The text then examines estimation and tests of significance. Topics include estimators and estimates, expected values, minimum variance linear unbiased estimators, sufficient estimators, methods of maximum likelihood and least squares, and the test of significance method. The manuscript ponders on distribution-free tests, Poisson process and counting problems, correlation and function fitting, balanced incomplete randomized block designs and the analysis of covariance, and experiment...

  12. A comparison of test statistics for the recovery of rapid growth-based enumeration tests

    NARCIS (Netherlands)

    van den Heuvel, Edwin R.; IJzerman-Boon, Pieta C.

    This paper considers five test statistics for comparing the recovery of a rapid growth-based enumeration test with respect to the compendial microbiological method using a specific nonserial dilution experiment. The finite sample distributions of these test statistics are unknown, because they are

  13. Parts of the Whole: Hands On Statistics

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2018-01-01

    Full Text Available In this column we describe a hands-on data collection lab for an introductory statistics course. The exercise elicits issues of normality, sampling, and sample mean comparisons. Based on volcanology models of tephra dispersion, this lab leads students to question the accuracy of some assumptions made in the model, particularly regarding the normality of the dispersal of tephra of identical size in a given atmospheric layer.

  14. A comparison of linear and nonlinear statistical techniques in performance attribution.

    Science.gov (United States)

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on standard linear multifactor model and three nonlinear techniques-model selection, additive models, and neural networks-are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  15. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    Science.gov (United States)

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis discusses also several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) it offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.

  16. Operator Performance Comparison of two VDT-based Alarm Systems

    International Nuclear Information System (INIS)

    Lee, Hyun-Chul; Oh, In-Suk; Sim, Bong-Shick; Koo, In-Soo; Kim, Jeong-Taek; Lee, Ki-Young; Park, Jong-Kyun

    1998-01-01

    This study was carried out to investigate performance differences between two alarm presentation methods from the viewpoint of human factors and to identify items for improvement. The first alarm display method presents alarm lists on a VDT combined with hardwired alarm panels. The second displays alarms on plant mimic diagrams on the VDT; it also provides operator aids through which the operator can obtain detailed information on an activated alarm in the mimic diagrams, as well as alarm processing capabilities such as alarm reduction and prioritization. To compare the two display methods, a human factors experiment was performed with a plant simulator in the ITF (Integrated Test Facility), in which plant operators ran four event scenarios. During the experiment, physiological measurements, system and operator action logs, and audio/video recordings were collected. Operators' subjective opinions were also collected after the experiment. Time, error rate, and situation awareness were the major human factors criteria used for the comparison during the analysis stage of the experiment. No statistically significant differences were found in our comparison analysis. Several findings were identified, however, through the analysis of the subjective opinions. (authors)

  17. On two methods of statistical image analysis

    NARCIS (Netherlands)

    Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, K.L.

    1999-01-01

    The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition,

  18. Statistical test for the distribution of galaxies on plates

    International Nuclear Information System (INIS)

    Garcia Lambas, D.

    1985-01-01

    A statistical test for the distribution of galaxies on plates is presented. We apply the test to synthetic astronomical plates obtained by means of numerical simulation (Garcia Lambas and Sersic 1983) with three different models for the 3-dimensional distribution; comparison with an observational plate suggests the presence of filamentary structure. (author)

  19. Forecasting winds over nuclear power plants statistics

    International Nuclear Information System (INIS)

    Marais, Ch.

    1997-01-01

    In the event of an accident at a nuclear power plant, it is essential to forecast the wind velocity at the level where the efflux occurs (about 100 m). At present, meteorologists refine the wind forecast from the coarse grid of numerical weather prediction (NWP) models. The purpose of this study is to improve these forecasts by developing a statistical adaptation method that corrects the NWP forecasts using statistical comparisons between wind forecasts and observations. The Multiple Linear Regression method is used here to forecast the 100 m wind at 12- and 24-hour ranges for three Electricite de France (EDF) sites. It turns out that this approach gives better forecasts than the NWP model alone and is worthy of operational use. (author)
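    A Multiple Linear Regression adaptation of this kind amounts to regressing past 100 m wind observations on a handful of NWP predictors and then applying the fitted coefficients to new model output. The sketch below shows only the mechanics; the choice of predictors and the numbers are illustrative assumptions, not EDF's operational configuration.

```python
import numpy as np

# Illustrative training set: rows = past forecast times.
# Columns of X: NWP 100 m wind speed, NWP 10 m wind speed, NWP surface pressure anomaly.
X = np.array([[6.1, 4.0, -1.2],
              [8.3, 5.5,  0.4],
              [5.0, 3.2,  0.9],
              [9.7, 6.8, -0.3],
              [7.2, 4.9,  0.1]])
y = np.array([5.4, 7.9, 4.6, 9.1, 6.8])   # observed 100 m wind (m/s)

# Fit y ≈ b0 + b1*x1 + b2*x2 + b3*x3 by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Statistically adapted forecast for a new NWP output (leading 1.0 is the intercept term).
new_nwp = np.array([1.0, 7.5, 5.1, -0.6])
print(float(new_nwp @ coef))
```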

  20. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test, and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), with the hope of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model, and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most

  1. Statistics Anxiety among Postgraduate Students

    Science.gov (United States)

    Koh, Denise; Zawi, Mohd Khairi

    2014-01-01

    Most postgraduate programmes that have research components require students to take at least one course in research statistics. Not all postgraduate programmes are science based; a significant number of postgraduate students from the social sciences will be taking statistics courses as they try to complete their…

  2. Field significance of performance measures in the context of regional climate model evaluation. Part 1: temperature

    Science.gov (United States)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2018-04-01

    A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as "field" or "global" significance. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is exemplarily applied to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Monthly temperature climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. In winter and in most regions in summer, the downscaled distributions are statistically indistinguishable from the observed ones. A systematic cold summer bias occurs in deep river valleys due to overestimated elevations, in coastal areas due probably to enhanced sea breeze circulation, and over large lakes due to the interpolation of water temperatures. Urban areas in concave topography forms have a warm summer bias due to the strong heat islands, not reflected in the observations. WRF-NOAH generates appropriate fine-scale features in the monthly temperature field over regions of complex topography, but over spatially homogeneous areas even small biases can lead to significant deteriorations relative to the driving reanalysis. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the
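    The record states that the field significance test controls the proportion of falsely rejected local tests; a widely used way to achieve exactly that over a grid of local p-values is the Benjamini-Hochberg false discovery rate procedure. The sketch below applies it to an array of grid-cell p-values; it is a generic illustration of the approach, not necessarily the specific procedure used in the paper.

```python
import numpy as np

def fdr_field_significance(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure applied to the local p-values of all
    grid cells; returns a boolean mask of locally significant cells and whether
    any local test survives (i.e. the field is significant)."""
    p = np.asarray(p_values).ravel()
    order = np.argsort(p)
    ranks = np.arange(1, p.size + 1)
    below = p[order] <= alpha * ranks / p.size        # BH bound on the sorted p-values
    mask = np.zeros(p.size, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()                # largest rank satisfying the bound
        mask[order[:k + 1]] = True                    # reject all hypotheses up to rank k
    return mask.reshape(np.shape(p_values)), mask.any()

rng = np.random.default_rng(1)
local_p = rng.uniform(size=(20, 30))                  # stand-in for grid-cell p-values
sig_mask, field_sig = fdr_field_significance(local_p)
print(field_sig, sig_mask.sum())
```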

  3. Mapping patent classifications: portfolio and statistical analysis, and the comparison of strengths and weaknesses.

    Science.gov (United States)

    Leydesdorff, Loet; Kogler, Dieter Franz; Yan, Bowen

    2017-01-01

    The Cooperative Patent Classifications (CPC) recently developed cooperatively by the European and US Patent Offices provide a new basis for mapping patents and portfolio analysis. CPC replaces International Patent Classifications (IPC) of the World Intellectual Property Organization. In this study, we update our routines previously based on IPC for CPC and use the occasion for rethinking various parameter choices. The new maps are significantly different from the previous ones, although this may not always be obvious on visual inspection. We provide nested maps online and a routine for generating portfolio overlays on the maps; a new tool is provided for "difference maps" between patent portfolios of organizations or firms. This is illustrated by comparing the portfolios of patents granted to two competing firms-Novartis and MSD-in 2016. Furthermore, the data is organized for the purpose of statistical analysis.

  4. Irrigated Area Maps and Statistics of India Using Remote Sensing and National Statistics

    Directory of Open Access Journals (Sweden)

    Prasad S. Thenkabail

    2009-04-01

    Full Text Available The goal of this research was to compare the remote-sensing derived irrigated areas with census-derived statistics reported in the national system. India, which has nearly 30% of global annualized irrigated areas (AIAs), and is the leading irrigated area country in the World, along with China, was chosen for the study. Irrigated areas were derived for nominal year 2000 using time-series remote sensing at two spatial resolutions: (a) 10-km Advanced Very High Resolution Radiometer (AVHRR) and (b) 500-m Moderate Resolution Imaging Spectroradiometer (MODIS). These areas were compared with the Indian National Statistical Data on irrigated areas reported by the: (a) Directorate of Economics and Statistics (DES) of the Ministry of Agriculture (MOA), and (b) Ministry of Water Resources (MoWR). A state-by-state comparison of remote sensing derived irrigated areas when compared with MoWR derived irrigation potential utilized (IPU), an equivalent of AIA, provided a high degree of correlation with R2 values of: (a) 0.79 with 10-km, and (b) 0.85 with MODIS 500-m. However, the remote sensing derived irrigated area estimates for India were consistently higher than the irrigated areas reported by the national statistics. The remote sensing derived total area available for irrigation (TAAI), which does not consider intensity of irrigation, was 101 million hectares (Mha) using 10-km and 113 Mha using 500-m. The AIAs, which considers intensity of irrigation, was 132 Mha using 10-km and 146 Mha using 500-m. In contrast the IPU, an equivalent of AIAs, as reported by MoWR was 83 Mha. There are “large variations” in irrigated area statistics reported, even between two ministries (e.g., Directorate of Statistics of Ministry of Agriculture and Ministry of Water Resources) of the same national system. The causes include: (a) reluctance on part of the states to furnish irrigated area data in view of their vested interests in sharing of water, and (b) reporting of large volumes of data

  5. A statistical physics perspective on criticality in financial markets

    International Nuclear Information System (INIS)

    Bury, Thomas

    2013-01-01

    Stock markets are complex systems exhibiting collective phenomena and particular features such as synchronization, fluctuations distributed as power-laws, non-random structures and similarity to neural networks. Such specific properties suggest that markets operate at a very special point. Financial markets are believed to be critical by analogy to physical systems, but little statistically founded evidence has been given. Through a data-based methodology and comparison to simulations inspired by the statistical physics of complex systems, we show that the Dow Jones and index sets are not rigorously critical. However, financial systems are closer to criticality in the crash neighborhood. (paper)

  6. The statistical chopper in the time-of-flight technique

    International Nuclear Information System (INIS)

    Albuquerque Vieira, J. de.

    1975-12-01

    A detailed study of the 'statistical' chopper and of the method of analysis of the data obtained by this technique is made. The study includes the basic ideas behind correlation methods applied in time-of-flight techniques; comparisons with the conventional chopper through an analysis of statistical errors; the development of a FORTRAN computer programme to analyse experimental results; a presentation of related fields of work to demonstrate the potential of the method; and suggestions for future study, together with criteria for a time-of-flight experiment using the method under study.

  7. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    International Nuclear Information System (INIS)

    Edjabou, Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn; Petersen, Claus; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2015-01-01

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level (tiered approach) facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, e.g. detailed, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both, waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single

  8. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Edjabou, Maklawe Essonanawe, E-mail: vine@env.dtu.dk [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark); Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark); Petersen, Claus [Econet AS, Omøgade 8, 2.sal, 2100 Copenhagen (Denmark); Scheutz, Charlotte; Astrup, Thomas Fruergaard [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark)

    2015-02-15

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level (tiered approach) facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, e.g. detailed, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both, waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single

  9. A note on the statistical analysis of point judgment matrices

    Directory of Open Access Journals (Sweden)

    MG Kabera

    2013-06-01

    Full Text Available The Analytic Hierarchy Process is a multicriteria decision making technique developed by Saaty in the 1970s. The core of the approach is the pairwise comparison of objects according to a single criterion using a 9-point ratio scale and the estimation of weights associated with these objects based on the resultant judgment matrix. In the present paper some statistical approaches to extracting the weights of objects from a judgment matrix are reviewed and new ideas which are rooted in the traditional method of paired comparisons are introduced.
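    For context, the non-statistical baseline that such approaches are compared against is Saaty's eigenvector method: the weights are taken as the normalised principal eigenvector of the pairwise judgment matrix. A minimal sketch follows; the 3x3 judgment matrix is an illustrative example on the 9-point scale, not data from the paper.

```python
import numpy as np

def ahp_weights(judgment):
    """Weights as the normalised principal right eigenvector of a positive
    reciprocal pairwise-comparison matrix (Saaty's eigenvector method)."""
    vals, vecs = np.linalg.eig(np.asarray(judgment, dtype=float))
    principal = np.argmax(vals.real)            # Perron root of a positive matrix is real
    w = np.abs(vecs[:, principal].real)
    return w / w.sum()

# Illustrative 3x3 judgment matrix on Saaty's 1-9 scale (reciprocal by construction).
J = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_weights(J))
```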

  10. Comprehensive analysis of tornado statistics in comparison to earthquakes: intensity and temporal behaviour

    Directory of Open Access Journals (Sweden)

    L. Schielicke

    2013-01-01

    Full Text Available Tornadoes and earthquakes are characterised by a high variability in their properties concerning intensity, geometric properties and temporal behaviour. Earthquakes are known for power-law behaviour in their intensity (Gutenberg–Richter law) and temporal statistics (e.g. Omori law and interevent waiting times). The observed similarity in the high variability of these two phenomena motivated us to compare the statistical behaviour of tornadoes using seismological methods and to search for power-law behaviour. In general, the statistics of tornadoes show power-law behaviour partly coextensive with characteristic scales when the temporal resolution is high (10 to 60 min). These characteristic scales match the typical diurnal behaviour of tornadoes, which is characterised by a maximum of tornado occurrences in the late afternoon hours. Furthermore, the distributions support the observation that tornadoes cluster in time. Finally, we briefly discuss a possible similar underlying structure composed of heterogeneous, coupled, interactive threshold oscillators that could explain the observed behaviour.

  11. Arthroscopic Debridement for Primary Degenerative Osteoarthritis of the Elbow Leads to Significant Improvement in Range of Motion and Clinical Outcomes: A Systematic Review.

    Science.gov (United States)

    Sochacki, Kyle R; Jack, Robert A; Hirase, Takashi; McCulloch, Patrick C; Lintner, David M; Liberman, Shari R; Harris, Joshua D

    2017-12-01

    The purpose of this investigation was to determine whether arthroscopic debridement of primary elbow osteoarthritis results in statistically significant and clinically relevant improvement in (1) elbow range of motion and (2) clinical outcomes with (3) low complication and reoperation rates. A systematic review was registered with PROSPERO and performed using PRISMA guidelines. Databases were searched for studies that investigated the outcomes of arthroscopic debridement for the treatment of primary osteoarthritis of the elbow in adult human patients. Study methodological quality was analyzed. Studies that included post-traumatic arthritis were excluded. Elbow motion and all elbow-specific patient-reported outcome scores were eligible for analysis. Comparisons between preoperative and postoperative values from each study were made using 2-sample Z-tests (http://in-silico.net/tools/statistics/ztest). Arthroscopic debridement of primary elbow osteoarthritis results in statistically significant and clinically relevant improvement in elbow range of motion and clinical outcomes with low complication and reoperation rates. Systematic review of level IV studies. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
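    The pooled pre/post comparison described above reduces, for each outcome, to a two-sample Z-test on the summary means. A minimal sketch of that computation follows; the means, standard deviations, and sample sizes are illustrative numbers, not values from the review.

```python
import math
from scipy.stats import norm

def two_sample_z(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample Z-test for a difference in means, assuming samples large
    enough for the normal approximation to apply."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    z = (mean1 - mean2) / se
    p = 2.0 * norm.sf(abs(z))                # two-sided p-value
    return z, p

# Illustrative numbers: pooled preoperative vs postoperative elbow flexion (degrees).
print(two_sample_z(mean1=117.0, sd1=12.0, n1=150, mean2=131.0, sd2=10.0, n2=150))
```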

  12. Statistical models for expert judgement and wear prediction

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1994-01-01

    This thesis studies the statistical analysis of expert judgements and prediction of wear. The point of view adopted is that of information theory and Bayesian statistics. A general Bayesian framework for analyzing both the expert judgements and wear prediction is presented. Information theoretic interpretations are given for some averaging techniques used in the determination of consensus distributions. Further, information theoretic models are compared with a Bayesian model. The general Bayesian framework is then applied in analyzing expert judgements based on ordinal comparisons. In this context, the value of information lost in the ordinal comparison process is analyzed by applying decision theoretic concepts. As a generalization of the Bayesian framework, stochastic filtering models for wear prediction are formulated. These models utilize the information from condition monitoring measurements in updating the residual life distribution of mechanical components. Finally, the application of stochastic control models in optimizing operational strategies for inspected components is studied. Monte-Carlo simulation methods, such as the Gibbs sampler and the stochastic quasi-gradient method, are applied in the determination of posterior distributions and in the solution of stochastic optimization problems. (orig.) (57 refs., 7 figs., 1 tab.)

  13. Dosimetric and radiobiological comparison of TG-43 and Monte Carlo calculations in 192Ir breast brachytherapy applications.

    Science.gov (United States)

    Peppa, V; Pappas, E P; Karaiskos, P; Major, T; Polgár, C; Papagiannis, P

    2016-10-01

    To investigate the clinical significance of introducing model based dose calculation algorithms (MBDCAs) as an alternative to TG-43 in 192 Ir interstitial breast brachytherapy. A 57 patient cohort was used in a retrospective comparison between TG-43 based dosimetry data exported from a treatment planning system and Monte Carlo (MC) dosimetry performed using MCNP v. 6.1 with plan and anatomy information in DICOM-RT format. Comparison was performed for the target, ipsilateral lung, heart, skin, breast and ribs, using dose distributions, dose-volume histograms (DVH) and plan quality indices clinically used for plan evaluation, as well as radiobiological parameters. TG-43 overestimation of target DVH parameters is statistically significant but small (less than 2% for the target coverage indices and 4% for homogeneity indices, on average). Significant dose differences (>5%) were observed close to the skin and at relatively large distances from the implant leading to a TG-43 dose overestimation for the organs at risk. These differences correspond to low dose regions (<50% of the prescribed dose), being less than 2% of the prescribed dose. Detected dosimetric differences did not induce clinically significant differences in calculated tumor control probabilities (mean absolute difference <0.2%) and normal tissue complication probabilities. While TG-43 shows a statistically significant overestimation of most indices used for plan evaluation, differences are small and therefore not clinically significant. Improved MBDCA dosimetry could be important for re-irradiation, technique inter-comparison and/or the assessment of secondary cancer induction risk, where accurate dosimetry in the whole patient anatomy is of the essence. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  14. Statistical and Spatial Analysis of Bathymetric Data for the St. Clair River, 1971-2007

    Science.gov (United States)

    Bennion, David

    2009-01-01

    To address questions concerning ongoing geomorphic processes in the St. Clair River, selected bathymetric datasets spanning 36 years were analyzed. Comparisons of recent high-resolution datasets covering the upper river indicate a highly variable, active environment. Although statistical and spatial comparisons of the datasets show that some changes to the channel size and shape have taken place during the study period, uncertainty associated with various survey methods and interpolation processes limit the statistically certain results. The methods used to spatially compare the datasets are sensitive to small variations in position and depth that are within the range of uncertainty associated with the datasets. Characteristics of the data, such as the density of measured points and the range of values surveyed, can also influence the results of spatial comparison. With due consideration of these limitations, apparently active and ongoing areas of elevation change in the river are mapped and discussed.

  15. Comparisons of treatment means when factors do not interact in two-factorial studies

    KAUST Repository

    Wei, Jiawei; Carroll, Raymond J.; Harden, Kathryn K.; Wu, Guoyao

    2011-01-01

    Scientists in the fields of nutrition and other biological sciences often design factorial studies to test the hypotheses of interest and importance. In the case of two-factorial studies, it is widely recognized that the analysis of factor effects is generally based on treatment means when the interaction of the factors is statistically significant, and involves multiple comparisons of treatment means. However, when the two factors do not interact, a common understanding among biologists is that comparisons among treatment means cannot or should not be made. Here, we bring this misconception to the attention of researchers. Additionally, we indicate what kind of comparisons among the treatment means can be performed when there is a nonsignificant interaction between the two factors. Such information should be useful in analyzing the experimental data and drawing meaningful conclusions.

  16. Comparisons of treatment means when factors do not interact in two-factorial studies

    KAUST Repository

    Wei, Jiawei

    2011-05-06

    Scientists in the fields of nutrition and other biological sciences often design factorial studies to test the hypotheses of interest and importance. In the case of two-factorial studies, it is widely recognized that the analysis of factor effects is generally based on treatment means when the interaction of the factors is statistically significant, and involves multiple comparisons of treatment means. However, when the two factors do not interact, a common understanding among biologists is that comparisons among treatment means cannot or should not be made. Here, we bring this misconception to the attention of researchers. Additionally, we indicate what kind of comparisons among the treatment means can be performed when there is a nonsignificant interaction between the two factors. Such information should be useful in analyzing the experimental data and drawing meaningful conclusions.

  17. Some statistical properties of gene expression clustering for array data

    DEFF Research Database (Denmark)

    Abreu, G C G; Pinheiro, A; Drummond, R D

    2010-01-01

    DNA array data without a corresponding statistical error measure. We propose an easy-to-implement and simple-to-use technique that uses bootstrap re-sampling to evaluate the statistical error of the nodes provided by SOM-based clustering. Comparisons between SOM and parametric clustering are presented...... for simulated as well as for two real data sets. We also implement a bootstrap-based pre-processing procedure for SOM, that improves the false discovery ratio of differentially expressed genes. Code in Matlab is freely available, as well as some supplementary material, at the following address: https...

  18. Statistical learning of action: the role of conditional probability.

    Science.gov (United States)

    Meyer, Meredith; Baldwin, Dare

    2011-12-01

    Identification of distinct units within a continuous flow of human action is fundamental to action processing. Such segmentation may rest in part on statistical learning. In a series of four experiments, we examined what types of statistics people can use to segment a continuous stream involving many brief, goal-directed action elements. The results of Experiment 1 showed no evidence for sensitivity to conditional probability, whereas Experiment 2 displayed learning based on joint probability. In Experiment 3, we demonstrated that additional exposure to the input failed to engender sensitivity to conditional probability. However, the results of Experiment 4 showed that a subset of adults-namely, those more successful at identifying actions that had been seen more frequently than comparison sequences-were also successful at learning conditional-probability statistics. These experiments help to clarify the mechanisms subserving processing of intentional action, and they highlight important differences from, as well as similarities to, prior studies of statistical learning in other domains, including language.

  19. Comparison of Statistical Methods for Detector Testing Programs

    Energy Technology Data Exchange (ETDEWEB)

    Rennie, John Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Abhold, Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-14

    A typical goal for any detector testing program is to ascertain not only the performance of the detector systems under test, but also the confidence that systems accepted using that testing program’s acceptance criteria will exceed a minimum acceptable performance (which is usually expressed as the minimum acceptable success probability, p). A similar problem often arises in statistics, where we would like to ascertain the fraction, p, of a population of items that possess a property that may take one of two possible values. Typically, the problem is approached by drawing a fixed sample of size n, with the number of items out of n that possess the desired property, x, being termed successes. The sample mean gives an estimate of the population mean p ≈ x/n, although usually it is desirable to accompany such an estimate with a statement concerning the range within which p may fall and the confidence associated with that range. Procedures for establishing such ranges and confidence limits are described in detail by Clopper, Brown, and Agresti for two-sided symmetric confidence intervals.
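    One of the procedures referenced above, the Clopper-Pearson exact interval, can be written directly in terms of beta-distribution quantiles. The sketch below computes a two-sided interval for the success probability p after observing x successes in n trials, the setting the record describes; the acceptance-test numbers are illustrative.

```python
from scipy.stats import beta

def clopper_pearson(x, n, conf=0.95):
    """Exact two-sided confidence interval for a binomial proportion."""
    alpha = 1.0 - conf
    lower = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    upper = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lower, upper

# Illustrative acceptance test: 48 successful detections in 50 trials.
print(clopper_pearson(x=48, n=50))
```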

  20. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics

    Science.gov (United States)

    Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona

    2017-01-01

    In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in the structural

  1. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics.

    Science.gov (United States)

    Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona

    2017-01-01

    In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men). Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in the structural

  2. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics

    Directory of Open Access Journals (Sweden)

    Manuela Paechter

    2017-07-01

    Full Text Available In many social science majors, e.g., psychology, students report high levels of statistics anxiety. However, these majors are often chosen by students who are less prone to mathematics and who might have experienced difficulties and unpleasant feelings in their mathematics courses at school. The present study investigates whether statistics anxiety is a genuine form of anxiety that impairs students' achievements or whether learners mainly transfer previous experiences in mathematics and their anxiety in mathematics to statistics. The relationship between mathematics anxiety and statistics anxiety, their relationship to learning behaviors and to performance in a statistics examination were investigated in a sample of 225 undergraduate psychology students (164 women, 61 men. Data were recorded at three points in time: At the beginning of term students' mathematics anxiety, general proneness to anxiety, school grades, and demographic data were assessed; 2 weeks before the end of term, they completed questionnaires on statistics anxiety and their learning behaviors. At the end of term, examination scores were recorded. Mathematics anxiety and statistics anxiety correlated highly but the comparison of different structural equation models showed that they had genuine and even antagonistic contributions to learning behaviors and performance in the examination. Surprisingly, mathematics anxiety was positively related to performance. It might be that students realized over the course of their first term that knowledge and skills in higher secondary education mathematics are not sufficient to be successful in statistics. Part of mathematics anxiety may then have strengthened positive extrinsic effort motivation by the intention to avoid failure and may have led to higher effort for the exam preparation. However, via statistics anxiety mathematics anxiety also had a negative contribution to performance. Statistics anxiety led to higher procrastination in

  3. Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science.

    Science.gov (United States)

    Veldkamp, Coosje L S; Nuijten, Michèle B; Dominguez-Alvarez, Linda; van Assen, Marcel A L M; Wicherts, Jelte M

    2014-01-01

    Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
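    The automated consistency check described above can be illustrated for a single reported t-test: recompute the two-sided p-value implied by the reported test statistic and degrees of freedom and compare it with the reported p. The tolerance and the example values below are assumptions for illustration, not the exact rules of the procedure used in the study.

```python
from scipy.stats import t

def check_reported_t(t_value, df, reported_p, decision_alpha=0.05, tol=0.01):
    """Recompute the two-sided p-value implied by a reported t statistic and df,
    flag an inconsistency, and flag whether it flips the significance decision."""
    recomputed = 2.0 * t.sf(abs(t_value), df)
    inconsistent = abs(recomputed - reported_p) > tol
    decision_error = inconsistent and (
        (recomputed < decision_alpha) != (reported_p < decision_alpha))
    return recomputed, inconsistent, decision_error

# Reported in an article as "t(28) = 2.20, p = .04"
print(check_reported_t(t_value=2.20, df=28, reported_p=0.04))
```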

  4. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method and there was low overlap in the variable sets. Model performance was high for both approaches (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable

  5. Statistical Analysis Of Tank 19F Floor Sample Results

    International Nuclear Information System (INIS)

    Harris, S.

    2010-01-01

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
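    The UCL95% described in the report, computed as a function of the number of samples, their average, and their standard deviation, corresponds in its simplest form to the one-sided Student-t upper bound sketched below. This is a generic sketch under that assumption, not the report's exact formula, and the six concentration values are made up for illustration.

```python
import numpy as np
from scipy.stats import t

def ucl95(samples):
    """One-sided 95% upper confidence limit on the mean concentration."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    return x.mean() + t.ppf(0.95, n - 1) * x.std(ddof=1) / np.sqrt(n)

# Six illustrative scrape-sample results for one analyte (arbitrary units).
print(ucl95([1.8, 2.1, 1.6, 2.4, 1.9, 2.0]))
```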

  6. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exists in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
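    One way to implement the Monte Carlo estimation of the covariance matrix described above is to draw the systematic (shared) and random uncertainty contributions separately for the experiment and the model, form samples of the comparison error, and take their sample covariance. The sketch below does this for two quantities of interest; all uncertainty magnitudes are illustrative assumptions, not values from the paper's example.

```python
import numpy as np

rng = np.random.default_rng(3)
n_mc = 20000

# Illustrative 1-sigma uncertainties for two quantities of interest: a systematic
# (bias-like) component shared by both quantities and independent random components,
# for the experiment (e) and the model prediction (m).
sys_e, rand_e = 0.5, np.array([0.3, 0.4])
sys_m, rand_m = 0.2, np.array([0.6, 0.5])

# Each Monte Carlo draw perturbs both quantities; the systematic draw is common to both.
err_e = rng.normal(0.0, sys_e, (n_mc, 1)) + rng.normal(0.0, rand_e, (n_mc, 2))
err_m = rng.normal(0.0, sys_m, (n_mc, 1)) + rng.normal(0.0, rand_m, (n_mc, 2))
comparison_error = err_e - err_m          # samples of the validation comparison error

cov = np.cov(comparison_error, rowvar=False)   # covariance matrix defining the noise level
print(cov)
```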

  7. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exists in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  8. Comparison of likelihood testing procedures for parallel systems with covariances

    International Nuclear Information System (INIS)

    Ayman Baklizi; Isa Daud; Noor Akma Ibrahim

    1998-01-01

    In this paper we investigate and compare the behavior of the likelihood ratio, Rao's, and Wald's statistics for testing hypotheses on the parameters of the simple linear regression model based on parallel systems with covariances. These statistics are asymptotically equivalent (Barndorff-Nielsen and Cox, 1994); however, their relative performances in finite samples are generally unknown. A Monte Carlo experiment is conducted to simulate the sizes and the powers of these statistics for complete samples and in the presence of time censoring. Comparisons of the statistics are made according to the attainment of the assumed size of the test and their powers at various points in the parameter space. The results show that the likelihood ratio statistic appears to have the best performance in terms of the attainment of the assumed size of the test. Power comparisons show that the Rao statistic has some advantage over the Wald statistic in almost all of the space of alternatives, while the likelihood ratio statistic occupies either the first or the last position in terms of power. Overall, the likelihood ratio statistic appears to be more appropriate to the model under study, especially for small sample sizes.
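    As a hedged illustration of the kind of Monte Carlo size study described above (not the paper's censored parallel-systems model), the sketch below simulates the likelihood ratio, Wald, and Rao (score) statistics for testing a zero slope in a Gaussian simple linear regression; the sample size, replication count, and design are assumed for demonstration only.

```python
# Monte Carlo size study of the LR, Wald, and Rao (score) statistics for H0: slope = 0
# in a Gaussian simple linear regression, with data generated under the null.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n, n_rep, crit = 20, 5000, chi2.ppf(0.95, df=1)
x = rng.uniform(0, 1, n)
xc = x - x.mean()
sxx = np.sum(xc ** 2)

rejections = {"LR": 0, "Wald": 0, "Rao": 0}
for _ in range(n_rep):
    y = 1.0 + rng.normal(0, 1, n)               # data generated under H0 (slope = 0)
    b_hat = np.sum(xc * y) / sxx                 # OLS slope of the full model
    rss1 = np.sum((y - y.mean() - b_hat * xc) ** 2)
    rss0 = np.sum((y - y.mean()) ** 2)           # intercept-only (null) model
    lr = n * np.log(rss0 / rss1)                 # likelihood ratio statistic
    wald = b_hat ** 2 * sxx / (rss1 / n)         # Wald statistic with MLE variance
    rao = np.sum(xc * y) ** 2 / (sxx * rss0 / n) # score statistic evaluated under H0
    for name, stat in (("LR", lr), ("Wald", wald), ("Rao", rao)):
        rejections[name] += stat > crit

print({k: v / n_rep for k, v in rejections.items()})  # empirical sizes vs nominal 0.05
```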

  9. Below and above boiling point comparison of microwave irradiation and conductive heating for municipal sludge digestion under identical heating/cooling profiles.

    Science.gov (United States)

    Hosseini Koupaie, E; Eskicioglu, C

    2015-01-01

    This research provides a comprehensive comparison between microwave (MW) and conductive heating (CH) sludge pretreatments under identical heating/cooling profiles at below and above boiling point temperatures. Previous comparison studies were constrained to an uncontrolled or a single heating rate due to the lack of CH equipment capable of simulating MW under identical thermal profiles. In this research, a novel custom-built pressure-sealed vessel that could simulate MW pretreatment under identical heating/cooling profiles was used for CH pretreatment. No statistically significant difference was found between MW and CH pretreatments in terms of sludge solubilization, anaerobic biogas yield and organics biodegradation rate (p-value>0.05), while statistically significant effects of temperature and heating rate were observed (p-value<0.05). These results explain the contradictory results of previous studies in which only the final temperature (not heating/cooling rates) was controlled. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Modality-Constrained Statistical Learning of Tactile, Visual, and Auditory Sequences

    Science.gov (United States)

    Conway, Christopher M.; Christiansen, Morten H.

    2005-01-01

    The authors investigated the extent to which touch, vision, and audition mediate the processing of statistical regularities within sequential input. Few researchers have conducted rigorous comparisons across sensory modalities; in particular, the sense of touch has been virtually ignored. The current data reveal not only commonalities but also…

  11. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    Full Text Available For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles demonstrated that this method was problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CIs) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect and probabilistic undetermination). The demonstration includes a complete example.
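    A minimal sketch of the three-value logic described above is given below, assuming a simple two-group comparison of means; the minimal substantial effect (delta), the confidence level, and the data are illustrative choices, not values from the article.

```python
# Classify a mean difference against a minimal substantial effect using its CI:
# "probable presence", "probable absence", or "probabilistic undetermination".
import numpy as np
from scipy import stats

def ci_decision(group_a, group_b, delta=0.5, conf=0.95):
    diff = np.mean(group_a) - np.mean(group_b)
    se = np.sqrt(np.var(group_a, ddof=1) / len(group_a) +
                 np.var(group_b, ddof=1) / len(group_b))
    df = len(group_a) + len(group_b) - 2               # rough pooled degrees of freedom
    lo, hi = diff + np.array([-1, 1]) * stats.t.ppf(0.5 + conf / 2, df) * se
    if lo >= delta or hi <= -delta:                    # CI entirely beyond +/- delta
        return (lo, hi), "probable presence of a substantial effect"
    if -delta < lo and hi < delta:                     # CI entirely inside (-delta, delta)
        return (lo, hi), "probable absence of a substantial effect"
    return (lo, hi), "probabilistic undetermination"

rng = np.random.default_rng(2)
print(ci_decision(rng.normal(1.0, 1, 40), rng.normal(0.2, 1, 40)))
```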

  12. Statistics 101 for Radiologists.

    Science.gov (United States)

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
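    As a hedged illustration of several of the diagnostic-test quantities reviewed above, the short sketch below computes sensitivity, specificity, accuracy, and the ROC AUC (via the pairwise ranking probability) on made-up labels and scores.

```python
# Basic diagnostic-test summary statistics on illustrative data.
import numpy as np

def diagnostic_summary(y_true, scores, threshold=0.5):
    y_pred = (scores >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y_true)
    # AUC as the probability that a diseased case scores higher than a healthy one
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    auc = np.mean(pos[:, None] > neg[None, :]) + 0.5 * np.mean(pos[:, None] == neg[None, :])
    return sensitivity, specificity, accuracy, auc

y = np.array([1, 1, 1, 0, 0, 0, 1, 0])
s = np.array([0.9, 0.7, 0.4, 0.3, 0.2, 0.6, 0.8, 0.1])
print(diagnostic_summary(y, s))
```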

  13. Business Statistics: A Comparison of Student Performance in Three Learning Modes

    Science.gov (United States)

    Simmons, Gerald R.

    2014-01-01

    The purpose of this study was to compare the performance of three teaching modes and age groups of business statistics sections in terms of course exam scores. The research questions were formulated to determine the performance of the students within each teaching mode, to compare each mode in terms of exam scores, and to compare exam scores by…

  14. Right adrenal vein: comparison between adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Science.gov (United States)

    Noda, Y; Goshima, S; Nagata, S; Miyoshi, T; Kawada, H; Kawai, N; Tanahashi, Y; Matsuo, M

    2018-06-01

    To compare right adrenal vein (RAV) visualisation and contrast enhancement degree on adrenal venous phase images reconstructed using adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) techniques. This prospective study was approved by the institutional review board, and written informed consent was waived. Fifty-seven consecutive patients who underwent adrenal venous phase imaging were enrolled. The same raw data were reconstructed using ASiR 40% and MBIR. An expert and a beginner independently reviewed the computed tomography (CT) images. RAV visualisation rates, background noise, and CT attenuation of the RAV, right adrenal gland, inferior vena cava (IVC), hepatic vein, and bilateral renal veins were compared between the two reconstruction techniques. RAV visualisation rates were higher with MBIR than with ASiR (95% versus 88%, p=0.13 in expert and 93% versus 75%, p=0.002 in beginner, respectively). RAV visualisation confidence ratings with MBIR were significantly greater than with ASiR, background noise was significantly lower with MBIR than with ASiR, and differences in CT attenuation between the two reconstructions were also significant (p=0.0013 and 0.02). Reconstruction of adrenal venous phase images using MBIR significantly reduces background noise, leading to an improvement in the RAV visualisation compared with ASiR. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  15. Statistical testing and power analysis for brain-wide association study.

    Science.gov (United States)

    Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng

    2018-04-05

    The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, the multiple correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on the Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis testings using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR), it can reduce false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need of non-parametric permutation to correct for multiple comparison, thus, it can efficiently tackle large datasets with high resolution fMRI images. The utility of our method is shown in a case-control study. Our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.
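    The Gaussian-random-field machinery itself is beyond a short sketch, but the snippet below illustrates, on simulated p-values, the two standard corrections the study benchmarks against: Bonferroni control of the FWER and the Benjamini-Hochberg FDR procedure. The number of tests and the signal block are assumptions for demonstration and do not reproduce the paper's method.

```python
# Bonferroni vs Benjamini-Hochberg on simulated p-values with a small block of signals.
import numpy as np

rng = np.random.default_rng(3)
m = 10_000
p = rng.uniform(size=m)             # null p-values
p[:200] = rng.beta(0.5, 20, 200)    # a block of true signals with small p-values
alpha = 0.05

bonferroni_hits = np.sum(p < alpha / m)

# Benjamini-Hochberg step-up procedure
order = np.argsort(p)
ranked = p[order]
thresh = alpha * np.arange(1, m + 1) / m
below = np.nonzero(ranked <= thresh)[0]
bh_hits = below[-1] + 1 if below.size else 0

print(bonferroni_hits, bh_hits)     # FDR control typically declares more discoveries
```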

  16. Tungsten Ions in Plasmas: Statistical Theory of Radiative-Collisional Processes

    Directory of Open Access Journals (Sweden)

    Alexander V. Demura

    2015-05-01

    Full Text Available The statistical model for calculations of the collisional-radiative processes in plasmas with tungsten impurity was developed. The electron structure of tungsten multielectron ions is considered in terms of both the Thomas-Fermi model and the Brandt-Lundquist model of collective oscillations of atomic electron density. The excitation or ionization of atomic electrons by plasma electron impacts are represented as photo-processes under the action of flux of equivalent photons introduced by E. Fermi. The total electron impact single ionization cross-sections of ions Wk+ with respective rates have been calculated and compared with the available experimental and modeling data (e.g., CADW. Plasma radiative losses on tungsten impurity were also calculated in a wide range of electron temperatures 1 eV–20 keV. The numerical code TFATOM was developed for calculations of radiative-collisional processes involving tungsten ions. The needed computational resources for TFATOM code are orders of magnitudes less than for the other conventional numerical codes. The transition from corona to Boltzmann limit was investigated in detail. The results of statistical approach have been tested by comparison with the vast experimental and conventional code data for a set of ions Wk+. It is shown that the universal statistical model accuracy for the ionization cross-sections and radiation losses is within the data scattering of significantly more complex quantum numerical codes, using different approximations for the calculation of atomic structure and the electronic cross-sections.

  17. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  18. Statistical analysis and Monte Carlo simulation of growing self-avoiding walks on percolation

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yuxia [Department of Physics, Wuhan University, Wuhan 430072 (China); Sang Jianping [Department of Physics, Wuhan University, Wuhan 430072 (China); Department of Physics, Jianghan University, Wuhan 430056 (China); Zou Xianwu [Department of Physics, Wuhan University, Wuhan 430072 (China)]. E-mail: xwzou@whu.edu.cn; Jin Zhunzhi [Department of Physics, Wuhan University, Wuhan 430072 (China)

    2005-09-26

    The two-dimensional growing self-avoiding walk on percolation was investigated by statistical analysis and Monte Carlo simulation. We obtained the expression of the mean square displacement and effective exponent as functions of time and percolation probability by statistical analysis and made a comparison with simulations. We also derived a reduced time that scales the motion of walkers in growing self-avoiding walks on regular and percolation lattices.

  19. Basics of statistical physics

    CERN Document Server

    Müller-Kirsten, Harald J W

    2013-01-01

    Statistics links microscopic and macroscopic phenomena, and requires for this reason a large number of microscopic elements like atoms. The results are values of maximum probability or of averaging. This introduction to statistical physics concentrates on the basic principles, and attempts to explain these in simple terms supplemented by numerous examples. These basic principles include the difference between classical and quantum statistics, a priori probabilities as related to degeneracies, the vital aspect of indistinguishability as compared with distinguishability in classical physics, the differences between conserved and non-conserved elements, the different ways of counting arrangements in the three statistics (Maxwell-Boltzmann, Fermi-Dirac, Bose-Einstein), the difference between maximization of the number of arrangements of elements, and averaging in the Darwin-Fowler method. Significant applications to solids, radiation and electrons in metals are treated in separate chapters, as well as Bose-Eins...

  20. Gyrokinetic Statistical Absolute Equilibrium and Turbulence

    International Nuclear Information System (INIS)

    Zhu, Jian-Zhou; Hammett, Gregory W.

    2011-01-01

    A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence (T.-D. Lee, 'On some statistical properties of hydrodynamical and magnetohydrodynamical fields,' Q. Appl. Math. 10, 69 (1952)) is taken to study gyrokinetic plasma turbulence: A finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N + 1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.

  1. Renyi statistics in equilibrium statistical mechanics

    International Nuclear Information System (INIS)

    Parvan, A.S.; Biro, T.S.

    2010-01-01

    The Renyi statistics in the canonical and microcanonical ensembles is examined both in general and in particular for the ideal gas. In the microcanonical ensemble the Renyi statistics is equivalent to the Boltzmann-Gibbs statistics. By the exact analytical results for the ideal gas, it is shown that in the canonical ensemble, taking the thermodynamic limit, the Renyi statistics is also equivalent to the Boltzmann-Gibbs statistics. Furthermore it satisfies the requirements of the equilibrium thermodynamics, i.e. the thermodynamical potential of the statistical ensemble is a homogeneous function of first degree of its extensive variables of state. We conclude that the Renyi statistics arrives at the same thermodynamical relations, as those stemming from the Boltzmann-Gibbs statistics in this limit.

  2. Unique electron polarimeter analyzing power comparison and precision spin-based energy measurement

    International Nuclear Information System (INIS)

    Joseph Grames; Charles Sinclair; Joseph Mitchell; Eugene Chudakov; Howard Fenker; Arne Freyberger; Douglas Higinbotham; Poelker, B.; Michael Steigerwald; Michael Tiefenback; Christian Cavata; Stephanie Escoffier; Frederic Marie; Thierry Pussieux; Pascal Vernin; Samuel Danagoulian; Kahanawita Dharmawardane; Renee Fatemi; Kyungseon Joo; Markus Zeier; Viktor Gorbenko; Rakhsha Nasseripour; Brian Raue; Riad Suleiman; Benedikt Zihlmann

    2004-01-01

    Precision measurements of the relative analyzing powers of five electron beam polarimeters, based on Compton, Moller, and Mott scattering, have been performed using the CEBAF accelerator at the Thomas Jefferson National Accelerator Facility (Jefferson Laboratory). A Wien filter in the 100 keV beamline of the injector was used to vary the electron spin orientation exiting the injector. High statistical precision measurements of the scattering asymmetry as a function of the spin orientation were made with each polarimeter. Since each polarimeter receives beam with the same magnitude of polarization, these asymmetry measurements permit a high statistical precision comparison of the relative analyzing powers of the five polarimeters. This is the first time a precise comparison of the analyzing powers of Compton, Moller, and Mott scattering polarimeters has been made. Statistically significant disagreements among the values of the beam polarization calculated from the asymmetry measurements made with each polarimeter reveal either errors in the values of the analyzing power, or failure to correctly include all systematic effects. The measurements reported here represent a first step toward understanding the systematic effects of these electron polarimeters. Such studies are necessary to realize high absolute accuracy (ca. 1%) electron polarization measurements, as required for some parity violation measurements planned at Jefferson Laboratory. Finally, a comparison of the value of the spin orientation exiting the injector that provides maximum longitudinal polarization in each experimental hall leads to an independent and very precise (better than 10⁻⁴) absolute measurement of the final electron beam energy.

  3. The clinical significance of 5% change in vital capacity in patients with idiopathic pulmonary fibrosis: extended analysis of the pirfenidone trial

    Directory of Open Access Journals (Sweden)

    Nakata Koichiro

    2011-07-01

    Full Text Available Abstract Background Our phase III clinical trial of pirfenidone for patients with idiopathic pulmonary fibrosis (IPF) revealed the efficacy of pirfenidone in reducing the decline of vital capacity (VC) and increasing the progression-free survival (PFS) time. Recently, a marginal decline in forced VC (FVC) has been reported to be associated with poor outcome in IPF. We sought to evaluate the efficacy of pirfenidone from the aspect of a 5% change in VC. Methods Improvement ratings based on a 5% change in absolute VC, i.e., "improved" (VC increase ≥ 5%), "stable" (VC change within 5%), and "worsened" (VC decline ≥ 5%), were compared between the pirfenidone and placebo groups. Results In the comparison of the improvement ratings, statistically significant differences were clearly revealed at months 3, 6, 9, and 12 between the pirfenidone and placebo groups. Risk reductions by pirfenidone relative to placebo were approximately 35% over the study period. In the comparison of the PFS times, a statistically significant difference was also observed between the pirfenidone and placebo groups. The positive/negative predictive values in the placebo and pirfenidone groups were 86.1%/50.8% and 87.1%/71.7%, respectively. Further, the patients who had worsened at month 3 generally had severe baseline impairment, and their clinical outcomes including mortality were also significantly worse after 1 year. Conclusions The efficacy of pirfenidone in the Japanese phase III trial was supported by the rating of 5% decline in VC, and the VC change at month 3 may be used as a prognostic factor of IPF. Trial Registration This clinical trial was registered with the Japan Pharmaceutical Information Center (JAPIC) on September 13th, 2005 (Registration Number: JAPICCTI-050121).

  4. The statistical mechanics of financial markets

    CERN Document Server

    Voit, Johannes

    2003-01-01

    From the reviews of the first edition - "Provides an excellent introduction for physicists interested in the statistical properties of financial markets. Appropriately early in the book the basic financial terms such as shorts, limit orders, puts, calls, and other terms are clearly defined. Examples, often with graphs, augment the reader’s understanding of what may be a plethora of new terms and ideas… [This is] an excellent starting point for the physicist interested in the subject. Some of the book’s strongest features are its careful definitions, its detailed examples, and the connection it establishes to physical systems." PHYSICS TODAY "This book is excellent at illustrating the similarities of financial markets with other non-equilibrium physical systems. [...] In summary, a very good book that offers more than just qualitative comparisons of physics and finance." (www.quantnotes.com) This highly-praised introductory treatment describes parallels between statistical physics and finance - both thos...

  5. STATISTICAL CHARACTERIZATION OF THE CHANDRA SOURCE CATALOG

    International Nuclear Information System (INIS)

    Primini, Francis A.; Evans, Ian N.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G.; Grier, John D.; Hain, Roger M.; Harbo, Peter N.; He Xiangqun; Karovska, Margarita; Houck, John C.; Davis, John E.; Nowak, Michael A.; Hall, Diane M.

    2011-01-01

    The first release of the Chandra Source Catalog (CSC) contains ∼95,000 X-ray sources in a total area of 0.75% of the entire sky, using data from ∼3900 separate ACIS observations of a multitude of different types of X-ray sources. In order to maximize the scientific benefit of such a large, heterogeneous data set, careful characterization of the statistical properties of the catalog, i.e., completeness, sensitivity, false source rate, and accuracy of source properties, is required. Characterization efforts of other large Chandra catalogs, such as the ChaMP Point Source Catalog or the 2 Mega-second Deep Field Surveys, while informative, cannot serve this purpose, since the CSC analysis procedures are significantly different and the range of allowable data is much less restrictive. We describe here the characterization process for the CSC. This process includes both a comparison of real CSC results with those of other, deeper Chandra catalogs of the same targets and extensive simulations of blank-sky and point-source populations.

  6. Implementation of International Standards in Russia's Foreign Trade Statistics

    Directory of Open Access Journals (Sweden)

    Natalia E. Grigoruk

    2015-01-01

    Full Text Available The article analyzes the basic documents of international organizations in recent years, which have become the global standard for the development and improvement of statistics on the foreign economic relations of most countries, including the Russian Federation. The article describes the key features of the theory and practice of modern foreign trade statistics in Russia and abroad, with an emphasis on the methodological problems of its main part - external trade statistics. It shows their interpretation in the most recent recommendations of the UN statistical apparatus and other international organizations; it considers a range of problems associated with implementing the main international standard of foreign trade statistics - the UN document "International Merchandise Trade Statistics" - in the national statistical practices of countries, including Russia and the countries of the Customs Union. The main attention is paid to methodological issues such as the criteria for selecting the objects of statistical accounting in accordance with international standards, the quantitative and cost parameters of foreign trade statistics, statistical methods and estimates of commodity exports and imports, and the problems of comparability of data; attention is also paid to a comparison of the 2010 international standards with the key precursor documents on foreign trade statistics methodology and to the practice of introducing these standards into the foreign trade statistics of Russia and the countries of the Customs Union. The article analyzes the content of the official statistical manuals on the foreign trade of Russia and of foreign countries, covers the main methodological problems of world trade in conjunction with the major current international statistical standards - the System of National Accounts, the Manual on Statistics of International Trade in Services, and other documents; and provides specific data describing the current structure of Russian foreign trade and especially its

  7. Challenges in dental statistics: data and modelling

    OpenAIRE

    Matranga, D.; Castiglia, P.; Solinas, G.

    2013-01-01

    The aim of this work is to present the reflections and proposals derived from the first Workshop of the SISMEC STATDENT working group on statistical methods and applications in dentistry, held in Ancona (Italy) on 28th September 2011. STATDENT began as a forum of comparison and discussion for statisticians working in the field of dental research in order to suggest new and improve existing biostatistical and clinical epidemiological methods. During the meeting, we dealt with very important to...

  8. Effect of higher order nonlinearity, directionality and finite water depth on wave statistics: Comparison of field data and numerical simulations

    Science.gov (United States)

    Fernández, Leandro; Monbaliu, Jaak; Onorato, Miguel; Toffoli, Alessandro

    2014-05-01

    This research is focused on the study of the nonlinear evolution of irregular wave fields in water of arbitrary depth by comparing field measurements and numerical simulations. It is now well accepted that modulational instability, known as one of the main mechanisms for the formation of rogue waves, induces strong departures from Gaussian statistics. However, whereas non-Gaussian properties are remarkable when wave fields follow one direction of propagation over an infinite water depth, wave statistics only weakly deviate from Gaussianity when waves spread over a range of different directions. Over finite water depth, furthermore, wave instability attenuates overall and eventually vanishes for relative water depths as low as kh = 1.36 (where k is the wavenumber of the dominant waves and h the water depth). Recent experimental results, nonetheless, seem to indicate that oblique perturbations are capable of triggering and sustaining modulational instability even if kh < 1.36. In this regard, the aim of this research is to understand whether the combined effect of directionality and finite water depth has a significant effect on wave statistics, and particularly on the occurrence of extremes. For this purpose, numerical experiments have been performed solving the Euler equations of motion with the Higher Order Spectral Method (HOSM) and compared with data of short-crested wave fields for different sea states observed at Lake George (Australia). A comparative analysis of the statistical properties (i.e. the density function of the surface elevation and its statistical moments, skewness and kurtosis) between simulations and in-situ data provides a confrontation between the numerical developments and real observations in field conditions.
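    As a hedged illustration of the statistical moments compared above, the sketch below computes the skewness and kurtosis of a synthetic surface-elevation record containing a crude, assumed second-order correction and contrasts them with the Gaussian reference values (0 and 3).

```python
# Skewness and kurtosis of a synthetic surface-elevation record vs Gaussian references.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(4)
linear = rng.normal(0.0, 1.0, 200_000)
eta = linear + 0.15 * (linear ** 2 - 1.0)   # crude bound-harmonic correction (assumed)

print("skewness:", skew(eta))                   # > 0 for nonlinear seas
print("kurtosis:", kurtosis(eta, fisher=False)) # > 3 signals heavy tails / more extremes
```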

  9. Missing data imputation using statistical and machine learning methods in a real breast cancer problem.

    Science.gov (United States)

    Jerez, José M; Molina, Ignacio; García-Laencina, Pedro J; Alba, Emilio; Ribelles, Nuria; Martín, Miguel; Franco, Leonardo

    2010-10-01

    Missing data imputation is an important task in cases where it is crucial to use all available data and not discard records with missing values. This work evaluates the performance of several statistical and machine learning imputation methods that were used to predict recurrence in patients in an extensive real breast cancer data set. Imputation methods based on statistical techniques, e.g., mean, hot-deck and multiple imputation, and machine learning techniques, e.g., multi-layer perceptron (MLP), self-organisation maps (SOM) and k-nearest neighbour (KNN), were applied to data collected through the "El Álamo-I" project, and the results were then compared to those obtained from the listwise deletion (LD) imputation method. The database includes demographic, therapeutic and recurrence-survival information from 3679 women with operable invasive breast cancer diagnosed in 32 different hospitals belonging to the Spanish Breast Cancer Research Group (GEICAM). The accuracies of predictions on early cancer relapse were measured using artificial neural networks (ANNs), in which different ANNs were estimated using the data sets with imputed missing values. The imputation methods based on machine learning algorithms outperformed imputation statistical methods in the prediction of patient outcome. Friedman's test revealed a significant difference (p=0.0091) in the observed area under the ROC curve (AUC) values, and the pairwise comparison test showed that the AUCs for MLP, KNN and SOM were significantly higher (p=0.0053, p=0.0048 and p=0.0071, respectively) than the AUC from the LD-based prognosis model. The methods based on machine learning techniques were the most suited for the imputation of missing values and led to a significant enhancement of prognosis accuracy compared to imputation methods based on statistical procedures. Copyright © 2010 Elsevier B.V. All rights reserved.
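    The sketch below gives a hedged, simplified version of such an imputation comparison: mean imputation versus k-nearest-neighbour imputation on a synthetic data set with values removed at random, scored by how well each method reconstructs the missing entries (rather than by downstream ANN prognosis accuracy, as in the study).

```python
# Compare mean vs KNN imputation by reconstruction error on synthetic data.
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(5)
X_true = rng.normal(size=(500, 6))
X_true[:, 3] = 0.7 * X_true[:, 0] + 0.3 * rng.normal(size=500)  # a correlated feature

X_missing = X_true.copy()
mask = rng.uniform(size=X_true.shape) < 0.15        # ~15% of values missing at random
X_missing[mask] = np.nan

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("knn", KNNImputer(n_neighbors=5))]:
    X_hat = imputer.fit_transform(X_missing)
    rmse = np.sqrt(np.mean((X_hat[mask] - X_true[mask]) ** 2))
    print(name, round(rmse, 3))                     # KNN usually wins on correlated data
```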

  10. A statistical evaluation of asbestos air concentrations

    International Nuclear Information System (INIS)

    Lange, J.H.

    1999-01-01

    Both area and personal air samples collected during an asbestos abatement project were matched and statistically analysed. Among the many parameters studied were fibre concentrations and their variability. Mean values for area and personal samples were 0.005 and 0.024 f cm⁻³ of air, respectively. Summary values for area and personal samples suggest that exposures are low with no single exposure value exceeding the current OSHA TWA value of 0.1 f cm⁻³ of air. Within- and between-worker analysis suggests that these data are homogeneous. Comparison of within- and between-worker values suggests that the exposure source and variability for abatement are more related to the process than individual practices. This supports the importance of control measures for abatement. Study results also suggest that area and personal samples are not statistically related, that is, there is no association observed for these two sampling methods when data are analysed by correlation or regression analysis. Personal samples were statistically higher in concentration than area samples. Area sampling cannot be used as a surrogate exposure for asbestos abatement workers. (author)

  11. Corruption Significantly Increases the Capital Cost of Power Plants in Developing Contexts

    Directory of Open Access Journals (Sweden)

    Kumar Biswajit Debnath

    2018-03-01

    Full Text Available Emerging economies with rapidly growing population and energy demand, own some of the most expensive power plants in the world. We hypothesized that corruption has a relationship with the capital cost of power plants in developing countries such as Bangladesh. For this study, we analyzed the capital cost of 61 operational and planned power plants in Bangladesh. Initial comparison study revealed that the mean capital cost of a power plant in Bangladesh is twice than that of the global average. Then, the statistical analysis revealed a significant correlation between corruption and the cost of power plants, indicating that higher corruption leads to greater capital cost. The high up-front cost can be a significant burden on the economy, at present and in the future, as most are financed through international loans with extended repayment terms. There is, therefore, an urgent need for the review of the procurement and due diligence process of establishing power plants, and for the implementation of a more transparent system to mitigate adverse effects of corruption on megaprojects.

  12. A comparison of in vivo and in vitro methods for determining availability of iron from meals

    International Nuclear Information System (INIS)

    Schricker, B.R.; Miller, D.D.; Rasmussen, R.R.; Van Campen, D.

    1981-01-01

    A comparison is made between in vitro and human and rat in vivo methods for estimating food iron availability. Complex meals formulated to replicate meals used by Cook and Monsen (Am J Clin Nutr 1976;29:859) in human iron availability trials were used in the comparison. The meals were prepared by substituting pork, fish, cheese, egg, liver, or chicken for beef in two basic test meals and were evaluated for iron availability using in vitro and rat in vivo methods. When the criterion for comparison was the ability to show statistically significant differences between iron availability in the various meals, there was substantial agreement between the in vitro and human in vivo methods. There was less agreement between the human in vivo and the rat in vivo methods, and between the in vitro and the rat in vivo methods. Correlation analysis indicated significant agreement between the in vitro and human in vivo methods. Correlations between the rat in vivo and human in vivo methods were also significant, but correlations between the in vitro and rat in vivo methods were less significant and, in some cases, not significant. The comparison supports the contention that the in vitro method allows a rapid, inexpensive, and accurate estimation of nonheme iron availability in complex meals.

  13. Statistics: a Bayesian perspective

    National Research Council Canada - National Science Library

    Berry, Donald A

    1996-01-01

    ...: it is the only introductory textbook based on Bayesian ideas, it combines concepts and methods, it presents statistics as a means of integrating data into the significant process, it develops ideas...

  14. An Evaluation of the Use of Statistical Procedures in Soil Science

    Directory of Open Access Journals (Sweden)

    Laene de Fátima Tavares

    2016-01-01

    Full Text Available ABSTRACT Experimental statistical procedures used in almost all scientific papers are fundamental for clearer interpretation of the results of experiments conducted in agrarian sciences. However, incorrect use of these procedures can lead the researcher to incorrect or incomplete conclusions. Therefore, the aim of this study was to evaluate the characteristics of the experiments and quality of the use of statistical procedures in soil science in order to promote better use of statistical procedures. For that purpose, 200 articles, published between 2010 and 2014, involving only experimentation and studies by sampling in the soil areas of fertility, chemistry, physics, biology, use and management were randomly selected. A questionnaire containing 28 questions was used to assess the characteristics of the experiments, the statistical procedures used, and the quality of selection and use of these procedures. Most of the articles evaluated presented data from studies conducted under field conditions and 27 % of all papers involved studies by sampling. Most studies did not mention testing to verify normality and homoscedasticity, and most used the Tukey test for mean comparisons. Among studies with a factorial structure of the treatments, many had ignored this structure, and data were compared assuming the absence of factorial structure, or the decomposition of interaction was performed without showing or mentioning the significance of the interaction. Almost none of the papers that had split-block factorial designs considered the factorial structure, or they considered it as a split-plot design. Among the articles that performed regression analysis, only a few of them tested non-polynomial fit models, and none reported verification of the lack of fit in the regressions. The articles evaluated thus reflected poor generalization and, in some cases, wrong generalization in experimental design and selection of procedures for statistical analysis.
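    To make the checks discussed above concrete, the hedged sketch below runs a normality test on residuals, a one-way ANOVA, and a Tukey HSD mean comparison on invented treatment data; the group names and response values are illustrative only.

```python
# Normality check, one-way ANOVA, and Tukey HSD on illustrative soil-treatment data.
import numpy as np
from scipy.stats import shapiro, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(6)
groups = {"control": rng.normal(10, 1, 12),
          "lime":    rng.normal(11, 1, 12),
          "gypsum":  rng.normal(12, 1, 12)}

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])

residuals = np.concatenate([v - v.mean() for v in groups.values()])
print("Shapiro-Wilk on residuals:", shapiro(residuals).pvalue)
print("one-way ANOVA:", f_oneway(*groups.values()).pvalue)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```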

  15. [Development of an Excel spreadsheet for meta-analysis of indirect and mixed treatment comparisons].

    Science.gov (United States)

    Tobías, Aurelio; Catalá-López, Ferrán; Roqué, Marta

    2014-01-01

    Meta-analyses in clinical research usually aim to evaluate treatment efficacy and safety in direct comparison with a single comparator. Indirect comparisons, using Bucher's method, can summarize primary data when information from direct comparisons is limited or nonexistent. Mixed comparisons allow estimates from direct and indirect comparisons to be combined, increasing statistical power. There is a need for simple applications for meta-analysis of indirect and mixed comparisons, and these can easily be conducted using a Microsoft Office Excel spreadsheet. We developed a user-friendly spreadsheet for indirect and mixed comparisons aimed at clinical researchers who are interested in systematic reviews but not familiar with more advanced statistical packages. The proposed Excel spreadsheet for indirect and mixed comparisons can be of great use in clinical epidemiology to extend the knowledge provided by traditional meta-analysis when evidence from direct comparisons is limited or nonexistent.
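    A minimal sketch of Bucher's adjusted indirect comparison, which is the calculation such a spreadsheet automates, is shown below on the log odds-ratio scale; the trial estimates (A vs C and B vs C) are invented numbers.

```python
# Bucher's adjusted indirect comparison of A vs B from A-vs-C and B-vs-C estimates.
import numpy as np
from scipy.stats import norm

def bucher_indirect(log_or_ac, se_ac, log_or_bc, se_bc):
    """Indirect A vs B odds ratio, 95% CI, and p-value via the common comparator C."""
    log_or_ab = log_or_ac - log_or_bc
    se_ab = np.sqrt(se_ac ** 2 + se_bc ** 2)     # variances of independent trials add
    z = log_or_ab / se_ab
    ci = np.exp(log_or_ab + np.array([-1.96, 1.96]) * se_ab)
    p = 2 * (1 - norm.cdf(abs(z)))
    return np.exp(log_or_ab), ci, p

print(bucher_indirect(np.log(0.70), 0.12, np.log(0.85), 0.15))
```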

  16. Statistical Analysis and Evaluation of the Depth of the Ruts on Lithuanian State Significance Roads

    Directory of Open Access Journals (Sweden)

    Erinijus Getautis

    2011-04-01

    Full Text Available The aim of this work is to gather information about rut depth on national roads with flexible pavement, to determine its statistical dispersion indices, and to assess whether the ruts meet the applicable requirements. An analysis of scientific works on the appearance of ruts in asphalt and their influence on driving is presented. Dynamical models of ruts in asphalt are presented in the work as well. Experimental data on the dispersion of rut depth on the Lithuanian national highway Vilnius – Kaunas are prepared. Conclusions are formulated and presented. Article in Lithuanian

  17. Autonomic Differentiation Map: A Novel Statistical Tool for Interpretation of Heart Rate Variability

    Directory of Open Access Journals (Sweden)

    Daniela Lucini

    2018-04-01

    differentiation profiles that could provide a better understanding of autonomic differences between clinical groups and controls. The ANS differentiation map permits one to rapidly and simply synthesize the possible differences between clinical groups and controls, highlighting the ANS latent domains that have at least a medium strength of discrimination, while the significance diagram permits the identification of the single ANS proxies inside each ANS latent domain that resulted in significant comparisons according to statistical tests.

  18. Trends in violent crime: a comparison between police statistics and victimization surveys

    NARCIS (Netherlands)

    Wittebrood, Karin; Junger, Marianne

    2002-01-01

    Usually, two measures are used to describe trends in violent crime: police statistics and victimization surveys. Both are available in the Netherlands. In this contribution, we will first provide a description of the trends in violent crime. It appears that both types of statistics reflect a different

  19. Statistical Measures to Quantify Similarity between Molecular Dynamics Simulation Trajectories

    Directory of Open Access Journals (Sweden)

    Jenny Farmer

    2017-11-01

    Full Text Available Molecular dynamics simulation is commonly employed to explore protein dynamics. Despite the disparate timescales between functional mechanisms and molecular dynamics (MD) trajectories, functional differences are often inferred from differences in conformational ensembles between two proteins in structure-function studies that investigate the effect of mutations. A common measure to quantify differences in dynamics is the root mean square fluctuation (RMSF) about the average position of residues defined by Cα-atoms. Using six MD trajectories describing three native/mutant pairs of beta-lactamase, we make comparisons with additional measures that include Jensen-Shannon, modifications of Kullback-Leibler divergence, and local p-values from 1-sample Kolmogorov-Smirnov tests. These additional measures require knowing a probability density function, which we estimate by using a nonparametric maximum entropy method that quantifies rare events well. The same measures are applied to distance fluctuations between Cα-atom pairs. Results from several implementations for quantitative comparison of a pair of MD trajectories are made based on fluctuations for on-residue and residue-residue local dynamics. We conclude that there is almost always a statistically significant difference between pairs of 100 ns all-atom simulations on moderate-sized proteins as evident from extraordinarily low p-values.
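    As a hedged illustration of two of the per-residue measures mentioned above, the sketch below computes the RMSF about the mean Cα position and a two-sample Kolmogorov-Smirnov p-value comparing the fluctuation distributions of one residue in two trajectories; the random coordinate arrays stand in for real MD data.

```python
# RMSF per residue and a local KS p-value between two (synthetic) trajectories.
import numpy as np
from scipy.stats import ks_2samp

def rmsf(traj):
    """traj: (n_frames, n_residues, 3) Calpha coordinates -> per-residue RMSF."""
    disp = traj - traj.mean(axis=0, keepdims=True)
    return np.sqrt((disp ** 2).sum(axis=2).mean(axis=0))

rng = np.random.default_rng(7)
traj_native = rng.normal(0.0, 0.8, size=(1000, 50, 3))
traj_mutant = rng.normal(0.0, 1.0, size=(1000, 50, 3))

print("RMSF difference, residue 10:",
      rmsf(traj_mutant)[10] - rmsf(traj_native)[10])

# Local p-value for residue 10 from the distributions of frame-wise displacements
d_native = np.linalg.norm(traj_native[:, 10] - traj_native[:, 10].mean(axis=0), axis=1)
d_mutant = np.linalg.norm(traj_mutant[:, 10] - traj_mutant[:, 10].mean(axis=0), axis=1)
print("KS p-value, residue 10:", ks_2samp(d_native, d_mutant).pvalue)
```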

  20. Energy statistics. France. August 2001

    International Nuclear Information System (INIS)

    2001-08-01

    This document summarizes in a series of tables the statistical data relative to the production, consumption, supplies, resources, and prices of energies in France: 1 - all energies (coal, oil, gas, electric power, renewable energies): supplies, uses per sector, national production and consumption of primary energies, final consumption, general indicators (energy bill, US$ exchange rate, price indices, prices of imported crude oil, energy independence, gross domestic product, evolution between 1973 and 2000, and projections for 2020). 2 - detailed data per energy source (petroleum, natural gas, electric power, solid mineral fuels): resources, uses, and prices. An indicative comparison is made with the other countries of the European Union. (J.S.)

  1. Perceived Statistical Knowledge Level and Self-Reported Statistical Practice Among Academic Psychologists

    Directory of Open Access Journals (Sweden)

    Laura Badenes-Ribera

    2018-06-01

    Full Text Available Introduction: Publications arguing against the null hypothesis significance testing (NHST) procedure and in favor of good statistical practices have increased. The most frequently mentioned alternatives to NHST are effect size statistics (ES), confidence intervals (CIs), and meta-analyses. A recent survey conducted in Spain found that academic psychologists have poor knowledge about effect size statistics, confidence intervals, and graphic displays for meta-analyses, which might lead to a misinterpretation of the results. In addition, it also found that, although the use of ES is becoming generalized, the same thing is not true for CIs. Finally, academics with greater knowledge about ES statistics presented a profile closer to good statistical practice and research design. Our main purpose was to analyze the extension of these results to a different geographical area through a replication study.Methods: For this purpose, we elaborated an on-line survey that included the same items as the original research, and we asked academic psychologists to indicate their level of knowledge about ES, their CIs, and meta-analyses, and how they use them. The sample consisted of 159 Italian academic psychologists (54.09% women, mean age of 47.65 years). The mean number of years in the position of professor was 12.90 (SD = 10.21).Results: As in the original research, the results showed that, although the use of effect size estimates is becoming generalized, an under-reporting of CIs for ES persists. The most frequent ES statistics mentioned were Cohen's d and R²/η², which can have outliers or show non-normality or violate statistical assumptions. In addition, academics showed poor knowledge about meta-analytic displays (e.g., forest plot and funnel plot) and quality checklists for studies. Finally, academics with higher-level knowledge about ES statistics seem to have a profile closer to good statistical practices.Conclusions: Changing statistical practice is not

  2. Using statistical inference for decision making in best estimate analyses

    International Nuclear Information System (INIS)

    Sermer, P.; Weaver, K.; Hoppe, F.; Olive, C.; Quach, D.

    2008-01-01

    For broad classes of safety analysis problems, one needs to make decisions when faced with randomly varying quantities which are also subject to errors. The means for doing this involves a statistical approach which takes into account the nature of the physical problems, and the statistical constraints they impose. We describe the methodology for doing this which has been developed at Nuclear Safety Solutions, and we draw some comparisons to other methods which are commonly used in Canada and internationally. Our methodology has the advantages of being robust and accurate and compares favourably to other best estimate methods. (author)

  3. Statistical analysis of RHIC beam position monitors performance

    Science.gov (United States)

    Calaga, R.; Tomás, R.

    2004-04-01

    A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
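    A hedged sketch of the SVD idea is given below: turn-by-turn readings are stacked into a (turns x BPMs) matrix, and spatial singular vectors that are almost entirely concentrated on a single BPM flag a pickup whose signal is uncorrelated with the collective beam motion. The synthetic data and the 0.9 localization threshold are illustrative, not RHIC values.

```python
# Flag malfunctioning BPMs from the localization of spatial singular vectors.
import numpy as np

rng = np.random.default_rng(8)
n_turns, n_bpm = 2000, 60
phase = rng.uniform(0, 2 * np.pi, n_bpm)
betatron = np.sin(0.22 * 2 * np.pi * np.arange(n_turns))[:, None]   # collective mode
data = betatron * np.cos(phase) + 0.02 * rng.normal(size=(n_turns, n_bpm))
data[:, 17] = rng.normal(0.0, 0.5, n_turns)                          # a faulty, noisy BPM

U, s, Vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)
localization = np.max(np.abs(Vt), axis=1)        # peak weight of each spatial vector
suspect_modes = np.where(localization > 0.9)[0]  # vectors dominated by a single BPM
suspect_bpms = sorted({int(np.argmax(np.abs(Vt[m]))) for m in suspect_modes})
print("suspect BPMs:", suspect_bpms)             # should single out BPM 17
```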

  4. Statistical analysis of RHIC beam position monitors performance

    Directory of Open Access Journals (Sweden)

    R. Calaga

    2004-04-01

    Full Text Available A detailed statistical analysis of beam position monitors (BPM performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.

  5. Understanding Statistics - Cancer Statistics

    Science.gov (United States)

    Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

  6. Funding source and primary outcome changes in clinical trials registered on ClinicalTrials.gov are associated with the reporting of a statistically significant primary outcome: a cross-sectional study [v2; ref status: indexed, http://f1000r.es/5bj

    Directory of Open Access Journals (Sweden)

    Sreeram V Ramagopalan

    2015-04-01

    Full Text Available Background: We and others have shown a significant proportion of interventional trials registered on ClinicalTrials.gov have their primary outcomes altered after the listed study start and completion dates. The objectives of this study were to investigate whether changes made to primary outcomes are associated with the likelihood of reporting a statistically significant primary outcome on ClinicalTrials.gov. Methods: A cross-sectional analysis of all interventional clinical trials registered on ClinicalTrials.gov as of 20 November 2014 was performed. The main outcome was any change made to the initially listed primary outcome and the time of the change in relation to the trial start and end date. Findings: 13,238 completed interventional trials were registered with ClinicalTrials.gov that also had study results posted on the website. 2555 (19.3%) had one or more statistically significant primary outcomes. Statistical analysis showed that registration year, funding source and primary outcome change after trial completion were associated with reporting a statistically significant primary outcome. Conclusions: Funding source and primary outcome change after trial completion are associated with a statistically significant primary outcome report on clinicaltrials.gov.

  7. Statistical techniques to extract information during SMAP soil moisture assimilation

    Science.gov (United States)

    Kolassa, J.; Reichle, R. H.; Liu, Q.; Alemohammad, S. H.; Gentine, P.

    2017-12-01

    Statistical techniques permit the retrieval of soil moisture estimates in a model climatology while retaining the spatial and temporal signatures of the satellite observations. As a consequence, the need for bias correction prior to an assimilation of these estimates is reduced, which could result in a more effective use of the independent information provided by the satellite observations. In this study, a statistical neural network (NN) retrieval algorithm is calibrated using SMAP brightness temperature observations and modeled soil moisture estimates (similar to those used to calibrate the SMAP Level 4 DA system). Daily values of surface soil moisture are estimated using the NN and then assimilated into the NASA Catchment model. The skill of the assimilation estimates is assessed based on a comprehensive comparison to in situ measurements from the SMAP core and sparse network sites as well as the International Soil Moisture Network. The NN retrieval assimilation is found to significantly improve the model skill, particularly in areas where the model does not represent processes related to agricultural practices. Additionally, the NN method is compared to assimilation experiments using traditional bias correction techniques. The NN retrieval assimilation is found to more effectively use the independent information provided by SMAP resulting in larger model skill improvements than assimilation experiments using traditional bias correction techniques.

  8. Validation of Refractivity Profiles Retrieved from FORMOSAT-3/COSMIC Radio Occultation Soundings: Preliminary Results of Statistical Comparisons Utilizing Balloon-Borne Observations

    Directory of Open Access Journals (Sweden)

    Hiroo Hayashi

    2009-01-01

    Full Text Available The GPS radio occultation (RO soundings by the FORMOSAT-3/COSMIC (Taiwan¡¦s Formosa Satellite Misssion #3/Constellation Observing System for Meteorology, Ionosphere and Climate satellites launched in mid-April 2006 are compared with high-resolution balloon-borne (radiosonde and ozonesonde observations. This paper presents preliminary results of validation of the COSMIC RO measurements in terms of refractivity through the troposphere and lower stratosphere. With the use of COSMIC RO soundings within 2 hours and 300 km of sonde profiles, statistical comparisons between the collocated refractivity profiles are erformed for some tropical regions (Malaysia and Western Pacific islands where moisture-rich air is expected in the lower troposphere and for both northern and southern polar areas with a very dry troposphere. The results of the comparisons show good agreement between COSMIC RO and sonde refractivity rofiles throughout the troposphere (1 - 1.5% difference at most with a positive bias generally becoming larger at progressively higher altitudes in the lower stratosphere (1 - 2% difference around 25 km, and a very small standard deviation (about 0.5% or less for a few kilometers below the tropopause level. A large standard deviation of fractional differences in the lowermost troposphere, which reaches up to as much as 3.5 - 5%at 3 km, is seen in the tropics while a much smaller standard deviation (1 - 2% at most is evident throughout the polar troposphere.

  9. Comparison of the Size of ADF Aircrew and US Army Personnel

    Science.gov (United States)

    2013-09-01

    The statistical analyses were performed using version 9 of Statistica (StatSoft Inc., Tulsa, Oklahoma, USA).

  10. Incorporating big data into treatment plan evaluation: Development of statistical DVH metrics and visualization dashboards.

    Science.gov (United States)

    Mayo, Charles S; Yao, John; Eisbruch, Avraham; Balter, James M; Litzenberg, Dale W; Matuszak, Martha M; Kessler, Marc L; Weyburn, Grant; Anderson, Carlos J; Owen, Dawn; Jackson, William C; Haken, Randall Ten

    2017-01-01

    To develop statistical dose-volume histogram (DVH)-based metrics and a visualization method to quantify the comparison of treatment plans with historical experience and among different institutions. The descriptive statistical summary (ie, median, first and third quartiles, and 95% confidence intervals) of volume-normalized DVH curve sets of past experiences was visualized through the creation of statistical DVH plots. Detailed distribution parameters were calculated and stored in JavaScript Object Notation files to facilitate management, including transfer and potential multi-institutional comparisons. In the treatment plan evaluation, structure DVH curves were scored against computed statistical DVHs and weighted experience scores (WESs). Individual, clinically used, DVH-based metrics were integrated into a generalized evaluation metric (GEM) as a priority-weighted sum of normalized incomplete gamma functions. Historical treatment plans for 351 patients with head and neck cancer, 104 with prostate cancer who were treated with conventional fractionation, and 94 with liver cancer who were treated with stereotactic body radiation therapy were analyzed to demonstrate the usage of statistical DVH, WES, and GEM in a plan evaluation. A shareable dashboard plugin was created to display statistical DVHs and integrate GEM and WES scores into a clinical plan evaluation within the treatment planning system. Benchmarking with normal tissue complication probability scores was carried out to compare the behavior of GEM and WES scores. DVH curves from historical treatment plans were characterized and presented, with difficult-to-spare structures (ie, frequently compromised organs at risk) identified. Quantitative evaluations by GEM and/or WES compared favorably with the normal tissue complication probability Lyman-Kutcher-Burman model, transforming a set of discrete threshold-priority limits into a continuous model reflecting physician objectives and historical experience
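    The abstract defines the GEM only as a priority-weighted sum of normalized incomplete gamma functions of DVH metrics; the sketch below is an assumed illustration of that idea (the scaling of each metric by its limit, the shape parameter, and the priorities are all invented), not the published parameterization.

```python
# Assumed illustration of a priority-weighted sum of normalized incomplete gamma
# functions over DVH metrics; the mapping is hypothetical, not the paper's formula.
import numpy as np
from scipy.special import gammainc   # regularized (normalized) lower incomplete gamma

def gem(metric_values, limits, priorities, shape=4.0):
    """Score in [0, 1]; grows as metrics approach or exceed their threshold limits."""
    metric_values, limits = np.asarray(metric_values, float), np.asarray(limits, float)
    w = np.asarray(priorities, float) / np.sum(priorities)
    # gammainc(a, x) rises smoothly from 0 and is roughly at its midpoint near x = a,
    # so each metric is scaled so that hitting its limit lands near that midpoint.
    scores = gammainc(shape, shape * metric_values / limits)
    return float(np.sum(w * scores))

# e.g. mean dose, V20, and max cord dose for a hypothetical plan vs clinical limits
print(gem(metric_values=[18.0, 28.0, 44.0], limits=[20.0, 30.0, 45.0],
          priorities=[1.0, 2.0, 3.0]))
```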

  11. Statistical analysis and data management

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    This report provides an overview of the history of the WIPP Biology Program. The recommendations of the American Institute of Biological Sciences (AIBS) for the WIPP biology program are summarized. The data sets available for statistical analyses and problems associated with these data sets are also summarized. Biological studies base maps are presented. A statistical model is presented to evaluate any correlation between climatological data and small mammal captures. No statistically significant relationship was found between variance in small mammal captures on Dr. Gennaro's 90m x 90m grid and precipitation records from the Duval Potash Mine.

  12. Comparison of de novo assembly statistics of Cucumis sativus L.

    Science.gov (United States)

    Wojcieszek, Michał; Kuśmirek, Wiktor; Pawełkowicz, Magdalena; Pląder, Wojciech; Nowak, Robert M.

    2017-08-01

    Genome sequencing is the core of genomic research. With the development of NGS and the lowering cost of the procedure, another bottleneck remains: genome assembly. Developing the proper tool for this task is essential, as the quality of the genome has an important impact on further research. Here we present a comparison of several de Bruijn graph assemblers tested on C. sativus genomic reads. The assessment shows that the newly developed software, dnaasm, provides better results in terms of quantity and quality. The number of generated sequences is lower by 5 - 33%, with up to two-fold higher N50. A quality check showed that reliable results were generated by dnaasm. This provides a very strong basis for future genomic analysis.

  13. Confounding and Statistical Significance of Indirect Effects: Childhood Adversity, Education, Smoking, and Anxious and Depressive Symptomatology

    Directory of Open Access Journals (Sweden)

    Mashhood Ahmed Sheikh

    2017-08-01

    mediate the association between childhood adversity and ADS in adulthood. However, when education was excluded as a mediator-response confounding variable, the indirect effect of childhood adversity on ADS in adulthood was statistically significant (p < 0.05). This study shows that a careful inclusion of potential confounding variables is important when assessing mediation.

  14. Information systems development for the analysis of a company's financial state based on the expert-statistical approach

    Directory of Open Access Journals (Sweden)

    M. N. Ivliev

    2016-01-01

    Full Text Available The work is devoted to methods of analyzing a company's financial condition, including aggregated ratings. It is proposed to use a generalized solvency and liquidity indicator and a capital structure composite index. Mathematically, the generalized index is a weighted sum of characteristic variables, with weighting factors expressing the relative importance of the individual characteristics. It is proposed to select the significant features from a set of standard financial ratios calculated from enterprise balance sheets. To obtain the values of the weighting factors, one of the expert-statistical approaches, the analytic hierarchy process, is used. The method is as follows: the most important characteristic is chosen, and the experts then determine the degree of preference for the main feature on a linguistic scale. Next, a matrix of pairwise comparisons is compiled from the assigned ranks, characterizing the relative importance of the attributes. The required coefficients are determined as the elements of the priority vector, the principal eigenvector of the pairwise comparison matrix. The paper proposes a mechanism for finding the fields for rating-number analysis. In addition, it proposes a method for the statistical evaluation of the balance sheets of various companies by calculating mutual correlation matrices. Based on the mathematical methods considered for determining quantitative characteristics of the financial and economic activities of technical objects, algorithms, information support and software were developed that allow different systems of economic analysis to be implemented.
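
    As an illustration of the analytic hierarchy process step described above, the following sketch derives weighting factors from a pairwise comparison matrix via its principal eigenvector, together with Saaty's consistency check. It is a generic Python example: the 3x3 matrix, the choice of three financial ratios and the consistency threshold are hypothetical and not taken from the article.

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparison matrix for three financial
# ratios (A[i, j] = how much more important criterion i is than criterion j).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The priority (weight) vector is the principal eigenvector of A, normalised.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # index of the largest eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # weights sum to 1

# Saaty's consistency ratio as a sanity check on the expert judgements.
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58                           # 0.58 = random index for n = 3
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```

    The generalized indicator would then be the weighted sum of the (standardised) financial ratios using these weights.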

  15. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  16. The application of statistical methods to assess economic assets

    Directory of Open Access Journals (Sweden)

    D. V. Dianov

    2017-01-01

    Full Text Available The article is devoted to the consideration and valuation of machinery, equipment and special equipment, to methodological aspects of the use of standards for the assessment of buildings and structures in current prices, to the valuation of residential and specialized houses and office premises, to the assessment and reassessment of existing and inactive military assets, and to the application of statistical methods to obtain the relevant cost estimates. The objective of the article is to consider the possible application of statistical tools in the valuation of the assets composing the core group of elements of national wealth, the fixed assets. Capital tangible assets constitute the basis of the material base for the creation of new value, products and non-financial services. The gain accumulated from tangible assets of a capital nature is a part of the gross domestic product, and from its volume and share in the composition of GDP one can judge the scope of reproductive processes in the country. Based on the methodological materials of the state statistics bodies of the Russian Federation and the provisions of the theory of statistics, which describe such methods of statistical analysis as indices, averages and regression, a methodical approach is structured for the application of statistical tools to obtain value estimates of property, plant and equipment with significant accumulated depreciation. Until now, the use of statistical methodology in the practice of the economic assessment of assets has been only fragmentary. This applies both to Federal Legislation (Federal law № 135 «On valuation activities in the Russian Federation» dated 16.07.1998, as amended 05.07.2016) and to the methodological documents and regulations of valuation activities, in particular the valuation standards. A particular problem is the use of the digital database of Rosstat (Federal State Statistics Service), as for specific fixed assets the comparison should be carried

  17. Statistical analysis of seismicity and hazard estimation for Italy (mixed approach). Statistical parameters of main shocks and aftershocks in the Italian region

    International Nuclear Information System (INIS)

    Molchan, G.M.; Kronrod, T.L.; Dmitrieva, O.E.

    1995-03-01

    The catalog of earthquakes of Italy (1900-1993) is analyzed in the present work. The following problems have been considered: 1) the choice of the operating magnitude, 2) an analysis of data completeness, and 3) grouping (in time and in space). The catalog has been separated into main shocks and aftershocks. Statistical estimations of the seismicity parameters (a, b) are performed for the seismogenic zones defined by GNDT. The non-standard elements of the analysis performed are: (a) statistical estimation and comparison of seismicity parameters under the condition of arbitrary data grouping in magnitude, time and space; (b) use of a non-conventional statistical method for aftershock identification, based on the idea of optimizing the two kinds of errors in the aftershock identification process; (c) use of the aftershock zones to reveal seismically interrelated seismogenic zones. This procedure contributes to the stability of the estimation of the 'b-value'. Refs, 25 figs, tabs

  18. Radiation effects on cancer mortality among A-bomb survivors, 1950-72. Comparison of some statistical models and analysis based on the additive logit model

    Energy Technology Data Exchange (ETDEWEB)

    Otake, M [Hiroshima Univ. (Japan). Faculty of Science

    1976-12-01

    Various statistical models designed to determine the effects of radiation dose on the mortality of atomic bomb survivors in Hiroshima and Nagasaki from specific cancers were evaluated on the basis of a basic k(age) x c(dose) x 2 contingency table. Considering the applicability and fit of the different models, an analysis based on the additive logit model was applied to the mortality experience of this population during the 22-year period from 1 Oct. 1950 to 31 Dec. 1972. The advantages and disadvantages of the additive logit model are demonstrated. Leukemia mortality showed a sharp rise with increasing dose. The dose-response relationship suggests a possible curvature or a log-linear model, particularly if doses estimated to be more than 600 rad were set arbitrarily at 600 rad, since the average dose in the 200+ rad group would then change from 434 to 350 rad. In the 22-year period from 1950 to 1972, a high mortality risk due to radiation was observed in survivors with doses of 200 rad and over for all cancers except leukemia. During the latest period, from 1965 to 1972, a significant risk was noted also for stomach and breast cancers. Survivors who were 9 years old or younger at the time of the bomb and who were exposed to high doses of 200+ rad appeared to show a high mortality risk for all cancers except leukemia, although the number of observed deaths is still small. A number of interesting areas are discussed from the statistical and epidemiological standpoints: the numerical comparison of risks in various models, the general evaluation of cancer mortality by the additive logit model, the dose-response relationship, the relative risk in the high-dose group, the time period of radiation-induced cancer mortality, the difference in dose response between Hiroshima and Nagasaki, and the relative biological effectiveness of neutrons.

  19. National Statistical Commission and Indian Official Statistics*

    Indian Academy of Sciences (India)

    IAS Admin

    a good collection of official statistics of that time. With more... statistical agencies and institutions to provide details of statistical activities... ...ing several training programmes... successful completion of Indian Statistical Service examinations, the.

  20. New adaptive statistical iterative reconstruction ASiR-V: Assessment of noise performance in comparison to ASiR.

    Science.gov (United States)

    De Marco, Paolo; Origgi, Daniela

    2018-03-01

    To assess the noise characteristics of the new adaptive statistical iterative reconstruction (ASiR-V) in comparison to ASiR. A water phantom was acquired with common clinical scanning parameters at five different levels of CTDIvol. Images were reconstructed with different kernels (STD, SOFT, and BONE), different IR levels (40%, 60%, and 100%), and different slice thicknesses (ST) (0.625 and 2.5 mm), both for ASiR-V and ASiR. Noise properties were investigated and the noise power spectrum (NPS) was evaluated. ASiR-V significantly reduced noise relative to FBP: noise reduction was in the range 23%-60% for the 0.625 mm ST and 12%-64% for the 2.5 mm ST. Above 2 mGy, noise reduction for ASiR-V had no dependence on dose. Noise reduction for ASiR-V depends on ST, being greater for the STD and SOFT kernels at 2.5 mm. For the STD kernel, ASiR-V has greater noise reduction than ASiR for both STs. For the SOFT kernel, results vary according to dose and ST, while for the BONE kernel ASiR-V shows less noise reduction. The NPS for the CT Revolution has dose-dependent behavior at lower doses. The NPS for ASiR-V and ASiR is similar, showing a shift toward lower frequencies as the IR level increases for the STD and SOFT kernels. The NPS differs between ASiR-V and ASiR with the BONE kernel. The NPS for ASiR-V appears to be ST dependent, having a shift toward lower frequencies for the 2.5 mm ST. ASiR-V showed greater noise reduction than ASiR for the STD and SOFT kernels, while keeping the same NPS. For the BONE kernel, ASiR-V presents a completely different behavior, with less noise reduction and a modified NPS. The noise properties of ASiR-V are dependent on reconstruction slice thickness. The noise properties of ASiR-V suggest the need for further measurements and efforts to establish new CT protocols to optimize clinical imaging. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
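
    Since the evaluation above centres on the noise power spectrum of reconstructed water-phantom images, a brief sketch of how a 2D NPS is commonly estimated from uniform-phantom ROIs may help. This is the standard ensemble-average definition, not the authors' code; the ROI stack and pixel size below are invented.

```python
import numpy as np

def nps_2d(rois, pixel_mm):
    """Ensemble-averaged 2D noise power spectrum from a stack of square noise
    ROIs (shape: n_rois x N x N) extracted from uniform-phantom slices."""
    rois = np.asarray(rois, dtype=float)
    _, N, _ = rois.shape
    spectra = []
    for roi in rois:
        dev = roi - roi.mean()                        # remove the DC component
        spectra.append(np.abs(np.fft.fft2(dev)) ** 2)
    # Standard normalisation: (pixel area / number of pixels) * <|FFT|^2>
    return (pixel_mm ** 2 / (N * N)) * np.mean(spectra, axis=0)

# Hypothetical use: 32 ROIs of 64x64 pixels from repeated reconstructions.
rois = np.random.default_rng(3).normal(0, 10, size=(32, 64, 64))
nps = nps_2d(rois, pixel_mm=0.49)
nps_centered = np.fft.fftshift(nps)   # zero frequency at the centre for plotting
print(nps_centered.shape, round(nps_centered.mean(), 2))
```

    Comparing such spectra across kernels and IR levels shows the shift toward lower spatial frequencies that the abstract describes.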

  1. Official Statistics and Statistics Education: Bridging the Gap

    Directory of Open Access Journals (Sweden)

    Gal Iddo

    2017-03-01

    Full Text Available This article aims to challenge official statistics providers and statistics educators to ponder on how to help non-specialist adult users of statistics develop those aspects of statistical literacy that pertain to official statistics. We first document the gap in the literature in terms of the conceptual basis and educational materials needed for such an undertaking. We then review skills and competencies that may help adults to make sense of statistical information in areas of importance to society. Based on this review, we identify six elements related to official statistics about which non-specialist adult users should possess knowledge in order to be considered literate in official statistics: (1) the system of official statistics and its work principles; (2) the nature of statistics about society; (3) indicators; (4) statistical techniques and big ideas; (5) research methods and data sources; and (6) awareness and skills for citizens’ access to statistical reports. Based on this ad hoc typology, we discuss directions that official statistics providers, in cooperation with statistics educators, could take in order to (1) advance the conceptualization of skills needed to understand official statistics, and (2) expand educational activities and services, specifically by developing a collaborative digital textbook and a modular online course, to improve public capacity for understanding of official statistics.

  2. Learning Statistics at the Farmers Market? A Comparison of Academic Service Learning and Case Studies in an Introductory Statistics Course

    Science.gov (United States)

    Hiedemann, Bridget; Jones, Stacey M.

    2010-01-01

    We compare the effectiveness of academic service learning to that of case studies in an undergraduate introductory business statistics course. Students in six sections of the course were assigned either an academic service learning project (ASL) or business case studies (CS). We examine two learning outcomes: students' performance on the final…

  3. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: • The paper discusses the validation of creep rupture models derived from statistical analysis. • It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. • The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. • The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  4. Comparison of metoprolol and hydrochlorothiazide as antihypertensive agents.

    Science.gov (United States)

    Pedersen, O L

    1976-01-01

    A crossover comparison of metoprolol and hydrochlorothiazide has been performed in 20 patients with mild hypertension. Both drugs caused almost identical, statistically significant reductions in blood pressure of about 20 mm Hg systolic and 15 mm Hg diastolic. The side effects during active therapy were few and mild, but 5 patients experienced subjective symptoms during the first few days following abrupt withdrawal of metoprolol, namely general malaise, palpitations, headache, sweating and tremor. The symptoms were more pronounced in the standing position and disappeared at once on resumption of beta-blocker therapy, or gradually over 5 - 7 days when placebo tablets were given. In 11 of the 20 patients hydrochlorothiazide produced subnormal serum potassium levels and potassium supplements were given. The serum uric acid level was also significantly increased during hydrochlorothiazide treatment.

  5. A statistical evaluation of asbestos air concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Lange, J.H. [Envirosafe Training and Consultants, Pittsburgh, PA (United States)

    1999-07-01

    Both area and personal air samples collected during an asbestos abatement project were matched and statistically analysed. Among the many parameters studied were fibre concentrations and their variability. Mean values for area and personal samples were 0.005 and 0.024 f cm{sup -3} of air, respectively. Summary values for area and personal samples suggest that exposures are low with no single exposure value exceeding the current OSHA TWA value of 0.1 f cm{sup -3} of air. Within- and between-worker analysis suggests that these data are homogeneous. Comparison of within- and between-worker values suggests that the exposure source and variability for abatement are more related to the process than individual practices. This supports the importance of control measures for abatement. Study results also suggest that area and personal samples are not statistically related, that is, there is no association observed for these two sampling methods when data are analysed by correlation or regression analysis. Personal samples were statistically higher in concentration than area samples. Area sampling cannot be used as a surrogate exposure for asbestos abatement workers. (author)

  6. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows the model quality to be evaluated in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
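
    The argument that a predictive variance allows model quality to be scored through the likelihood of held-out data can be made concrete with a short sketch. A Gaussian predictive distribution is assumed here purely for illustration; the function name and the sensor readings are invented, not taken from the paper.

```python
import numpy as np

def gaussian_nll(y_obs, mu_pred, var_pred, eps=1e-9):
    """Average negative log-likelihood of held-out concentration measurements
    under a model that predicts a mean and a variance at each location."""
    var = np.maximum(var_pred, eps)                  # guard against zero variance
    return float(np.mean(0.5 * np.log(2 * np.pi * var)
                         + 0.5 * (y_obs - mu_pred) ** 2 / var))

# Hypothetical comparison of two models on the same held-out sensor readings.
y = np.array([0.12, 0.40, 0.05, 0.33])
model_a = gaussian_nll(y, mu_pred=np.array([0.10, 0.35, 0.07, 0.30]),
                       var_pred=np.array([0.01, 0.05, 0.01, 0.04]))
model_b = gaussian_nll(y, mu_pred=np.array([0.20, 0.20, 0.20, 0.20]),
                       var_pred=np.array([0.02, 0.02, 0.02, 0.02]))
print(model_a, model_b)   # lower NLL = better fit to the held-out data
```

    The same score can also drive the model-selection tasks mentioned above, such as deciding when to update or re-initialise the model.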

  7. Statistical inference an integrated approach

    CERN Document Server

    Migon, Helio S; Louzada, Francisco

    2014-01-01

    Introduction: Information; The concept of probability; Assessing subjective probabilities; An example; Linear algebra and probability; Notation; Outline of the book. Elements of Inference: Common statistical models; Likelihood-based functions; Bayes theorem; Exchangeability; Sufficiency and exponential family; Parameter elimination. Prior Distribution: Entirely subjective specification; Specification through functional forms; Conjugacy with the exponential family; Non-informative priors; Hierarchical priors. Estimation: Introduction to decision theory; Bayesian point estimation; Classical point estimation; Empirical Bayes estimation; Comparison of estimators; Interval estimation; Estimation in the Normal model. Approximating Methods: The general problem of inference; Optimization techniques; Asymptotic theory; Other analytical approximations; Numerical integration methods; Simulation methods. Hypothesis Testing: Introduction; Classical hypothesis testing; Bayesian hypothesis testing; Hypothesis testing and confidence intervals; Asymptotic tests. Prediction...

  8. CutL: an alternative to Kulldorff's scan statistics for cluster detection with a specified cut-off level.

    Science.gov (United States)

    Więckowska, Barbara; Marcinkowska, Justyna

    2017-11-06

    When searching for epidemiological clusters, an important tool can be to carry out one's own study using an incidence rate from the literature as the reference level; values exceeding this level may indicate the presence of a cluster in that location. This paper presents a method of searching for clusters that have significantly higher incidence rates than the level specified by the investigator. The proposed method uses the classic binomial exact test for one proportion and an algorithm that joins areas with potential clusters while reducing the number of multiple comparisons needed. Sensitivity and specificity are preserved by this new method, which avoids the Monte Carlo approach while still delivering results comparable to the commonly used Kulldorff's scan statistics and other similar methods of localising clusters. A further benefit of the accompanying statistical software is that it allows the results to be analysed and presented cartographically.
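
    The core of the procedure, a one-proportion exact binomial test against an incidence rate taken from the literature, might look as follows. The case count, population size and reference rate are hypothetical, and a Bonferroni-style correction would still need to be applied across the set of areas tested.

```python
from scipy.stats import binom

# Hypothetical area: 23 cases among 10,000 residents, tested against a
# literature (cut-off) incidence rate of 0.0015.
cases, population, p0 = 23, 10_000, 0.0015

# One-sided exact binomial test: P(X >= cases) under the reference rate p0.
p_value = binom.sf(cases - 1, population, p0)
print(f"p = {p_value:.4f}")   # small p suggests the area exceeds the cut-off level
```

    Areas with small p-values would then be candidates for joining into larger cluster regions by the algorithm described above.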

  9. PI-3 correlations and statistical evaluation results

    International Nuclear Information System (INIS)

    Pernica, R.; Cizek, J.

    1992-01-01

    Empirical Critical Heat Flux (CHF) correlations PI-3, which have the widest range of validity for flow conditions in both hexagonal and square rod bundle geometries, are presented and compared with published CHF correlations. They are valid for vertical water upflow through rod bundles with relatively wide and very tight rod lattices, and include axial and radial non-uniform heating. The correlations were developed with the use of more than 6000 data points obtained from 119 electrically heated rod bundles. Comprehensive results of statistical evaluations of the new correlations are presented for various data bases. Also presented is a comparison of statistical evaluations of several well-known CHF correlations on the experimental data base used. A procedure which makes it possible to directly determine the probability that CHF does not occur is described for the purpose of nuclear safety assessment. (author) 8 tabs., 32 figs., 11 refs

  10. Evidence-based orthodontics. Current statistical trends in published articles in one journal.

    Science.gov (United States)

    Law, Scott V; Chudasama, Dipak N; Rinchuse, Donald J

    2010-09-01

    To ascertain the number, type, and overall usage of statistics in American Journal of Orthodontics and Dentofacial (AJODO) articles for 2008. These data were then compared to data from three previous years: 1975, 1985, and 2003. The frequency and distribution of statistics used in the AJODO original articles for 2008 were dichotomized into those using statistics and those not using statistics. Statistical procedures were then broadly divided into descriptive statistics (mean, standard deviation, range, percentage) and inferential statistics (t-test, analysis of variance). Descriptive statistics were used to make comparisons. In 1975, 1985, 2003, and 2008, AJODO published 72, 87, 134, and 141 original articles, respectively. The percentage of original articles using statistics was 43.1% in 1975, 75.9% in 1985, 94.0% in 2003, and 92.9% in 2008; original articles using statistics stayed relatively the same from 2003 to 2008, with only a small 1.1% decrease. The percentage of articles using inferential statistical analyses was 23.7% in 1975, 74.2% in 1985, 92.9% in 2003, and 84.4% in 2008. Comparing AJODO publications in 2003 and 2008, there was an 8.5% increase in the use of descriptive articles (from 7.1% to 15.6%), and there was an 8.5% decrease in articles using inferential statistics (from 92.9% to 84.4%).

  11. A comparison of dynamical and statistical downscaling methods for regional wave climate projections along French coastlines.

    Science.gov (United States)

    Laugel, Amélie; Menendez, Melisa; Benoit, Michel; Mattarolo, Giovanni; Mendez, Fernando

    2013-04-01

    Wave climate forecasting is a major issue for numerous marine and coastal activities, such as offshore industries, flood risk assessment and wave energy resource evaluation, among others. Generally, there are two main ways to predict the impacts of climate change on the wave climate at regional scale: dynamical and statistical downscaling of a GCM (Global Climate Model). In this study, both methods have been applied to the French coast (Atlantic, English Channel and North Sea shorelines) under three climate change scenarios (A1B, A2, B1) simulated with the GCM ARPEGE-CLIMAT from Météo-France (AR4, IPCC). The aim of the work is to characterise the wave climatology of the 21st century and to compare the statistical and dynamical methods, pointing out the advantages and disadvantages of each approach. The statistical downscaling method proposed by the Environmental Hydraulics Institute of Cantabria (Spain) has been applied (Menendez et al., 2011). At a particular location, the sea-state climate (Predictand Y) is defined as a function, Y=f(X), of several atmospheric circulation patterns (Predictor X). Assuming these climate associations between predictor and predictand are stationary, the statistical approach has been used to project the future wave conditions with reference to the GCM. The statistical relations between predictor and predictand have been established over 31 years, from 1979 to 2009. The predictor is built as the 3-day-averaged squared sea level pressure gradient from the hourly CFSR database (Climate Forecast System Reanalysis, http://cfs.ncep.noaa.gov/cfsr/). The predictand has been extracted from the 31-year hindcast sea-state database ANEMOC-2 performed with the 3G spectral wave model TOMAWAC (Benoit et al., 1996), developed at EDF R&D LNHE and the Saint-Venant Laboratory for Hydraulics and forced by the CFSR 10 m wind field. Significant wave height, peak period and mean wave direction have been extracted with an hourly resolution at

  12. Statistical determination of rainfall-runoff erosivity indices for single storms in the Chinese Loess Plateau.

    Directory of Open Access Journals (Sweden)

    Mingguo Zheng

    Full Text Available Correlation analysis is popular in erosion- or earth-related studies, however, few studies compare correlations on a basis of statistical testing, which should be conducted to determine the statistical significance of the observed sample difference. This study aims to statistically determine the erosivity index of single storms, which requires comparison of a large number of dependent correlations between rainfall-runoff factors and soil loss, in the Chinese Loess Plateau. Data observed at four gauging stations and five runoff experimental plots were presented. Based on the Meng's tests, which is widely used for comparing correlations between a dependent variable and a set of independent variables, two methods were proposed. The first method removes factors that are poorly correlated with soil loss from consideration in a stepwise way, while the second method performs pairwise comparisons that are adjusted using the Bonferroni correction. Among 12 rainfall factors, I30 (the maximum 30-minute rainfall intensity) has been suggested for use as the rainfall erosivity index, although I30 is equally correlated with soil loss as factors of I20, EI10 (the product of the rainfall kinetic energy, E, and I10), EI20 and EI30 are. Runoff depth (total runoff volume normalized to drainage area) is more correlated with soil loss than all other examined rainfall-runoff factors, including I30, peak discharge and many combined factors. Moreover, sediment concentrations of major sediment-producing events are independent of all examined rainfall-runoff factors. As a result, introducing additional factors adds little to the prediction accuracy of the single factor of runoff depth. Hence, runoff depth should be the best erosivity index at scales from plots to watersheds. Our findings can facilitate predictions of soil erosion in the Loess Plateau. Our methods provide a valuable tool while determining the predictor among a number of variables in terms of correlations.
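
    For readers unfamiliar with the Meng's tests mentioned above, a generic sketch of the Meng-Rosenthal-Rubin (1992) z-test for two correlations sharing a dependent variable is given below. The input correlations and sample size are invented, and the code follows the published test rather than the authors' exact stepwise and Bonferroni procedures.

```python
import numpy as np
from scipy.stats import norm

def meng_test(r1, r2, r_x, n):
    """Meng-Rosenthal-Rubin z-test for two dependent correlations:
    r1 = corr(X1, Y), r2 = corr(X2, Y), r_x = corr(X1, X2), n = sample size."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)          # Fisher z-transforms
    r2bar = (r1 ** 2 + r2 ** 2) / 2.0
    f = min((1.0 - r_x) / (2.0 * (1.0 - r2bar)), 1.0)
    h = (1.0 - f * r2bar) / (1.0 - r2bar)
    z = (z1 - z2) * np.sqrt((n - 3) / (2.0 * (1.0 - r_x) * h))
    return z, 2.0 * norm.sf(abs(z))                  # two-sided p-value

# Hypothetical: does runoff depth predict soil loss better than I30 on 60 events?
z, p = meng_test(r1=0.55, r2=0.78, r_x=0.60, n=60)
print(round(z, 2), round(p, 4))
```

    When many such pairwise comparisons are made, the p-values would be multiplied by the number of comparisons (Bonferroni) as in the second method described.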

  13. Statistical Determination of Rainfall-Runoff Erosivity Indices for Single Storms in the Chinese Loess Plateau

    Science.gov (United States)

    Zheng, Mingguo; Chen, Xiaoan

    2015-01-01

    Correlation analysis is popular in erosion- or earth-related studies, however, few studies compare correlations on a basis of statistical testing, which should be conducted to determine the statistical significance of the observed sample difference. This study aims to statistically determine the erosivity index of single storms, which requires comparison of a large number of dependent correlations between rainfall-runoff factors and soil loss, in the Chinese Loess Plateau. Data observed at four gauging stations and five runoff experimental plots were presented. Based on the Meng’s tests, which is widely used for comparing correlations between a dependent variable and a set of independent variables, two methods were proposed. The first method removes factors that are poorly correlated with soil loss from consideration in a stepwise way, while the second method performs pairwise comparisons that are adjusted using the Bonferroni correction. Among 12 rainfall factors, I 30 (the maximum 30-minute rainfall intensity) has been suggested for use as the rainfall erosivity index, although I 30 is equally correlated with soil loss as factors of I 20, EI 10 (the product of the rainfall kinetic energy, E, and I 10), EI 20 and EI 30 are. Runoff depth (total runoff volume normalized to drainage area) is more correlated with soil loss than all other examined rainfall-runoff factors, including I 30, peak discharge and many combined factors. Moreover, sediment concentrations of major sediment-producing events are independent of all examined rainfall-runoff factors. As a result, introducing additional factors adds little to the prediction accuracy of the single factor of runoff depth. Hence, runoff depth should be the best erosivity index at scales from plots to watersheds. Our findings can facilitate predictions of soil erosion in the Loess Plateau. Our methods provide a valuable tool while determining the predictor among a number of variables in terms of correlations. PMID

  14. STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL{sub 95%}) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL{sub 95%}) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL{sub 95%} was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
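
    The UCL95 described here, computed from the number of samples, the average and the standard deviation, is presumably the usual one-sided Student-t upper confidence limit on the mean; a sketch under that assumption, with invented concentrations, follows.

```python
import numpy as np
from scipy.stats import t

def ucl95(x):
    """One-sided upper 95% confidence limit on the mean of sample x:
    mean + t_{0.95, n-1} * s / sqrt(n)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return x.mean() + t.ppf(0.95, n - 1) * x.std(ddof=1) / np.sqrt(n)

# Hypothetical analyte concentrations from six scrape samples (arbitrary units).
print(round(ucl95([1.8, 2.1, 1.6, 2.4, 1.9, 2.2]), 3))
```

    With only six samples the t quantile is large, which is why pooling compatible samples, as the report describes, tightens the limit.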

  15. Comparison of marginal bone loss between internal- and external-connection dental implants in posterior areas without periodontal or peri-implant disease.

    Science.gov (United States)

    Kim, Dae-Hyun; Kim, Hyun Ju; Kim, Sungtae; Koo, Ki-Tae; Kim, Tae-Il; Seol, Yang-Jo; Lee, Yong-Moo; Ku, Young; Rhyu, In-Chul

    2018-04-01

    The purpose of this retrospective study with 4-12 years of follow-up was to compare the marginal bone loss (MBL) between external-connection (EC) and internal-connection (IC) dental implants in posterior areas without periodontal or peri-implant disease on the adjacent teeth or implants. Additional factors influencing MBL were also evaluated. This retrospective study was performed using dental records and radiographic data obtained from patients who had undergone dental implant treatment in the posterior area from March 2006 to March 2007. All the implants that were included had follow-up periods of more than 4 years after loading and satisfied the implant success criteria, without any peri-implant or periodontal disease on the adjacent implants or teeth. They were divided into 2 groups: EC and IC. Subgroup comparisons were conducted according to splinting and the use of cement in the restorations. A statistical analysis was performed using the Mann-Whitney U test for comparisons between 2 groups and the Kruskal-Wallis test for comparisons among more than 2 groups. A total of 355 implants in 170 patients (206 EC and 149 IC) fulfilled the inclusion criteria and were analyzed in this study. The mean MBL was 0.47 mm and 0.15 mm in the EC and IC implants, respectively, which was a statistically significant difference (P<0.001). Comparisons according to splinting (MBL of single implants: 0.34 mm, MBL of splinted implants: 0.31 mm, P=0.676) and cement use (MBL of cemented implants: 0.27 mm, MBL of non-cemented implants: 0.35 mm, P=0.178) showed no statistically significant differences in MBL, regardless of the implant connection type. IC implants showed a more favorable bone response regarding MBL in posterior areas without peri-implantitis or periodontal disease.

  16. The relation between statistical power and inference in fMRI.

    Directory of Open Access Journals (Sweden)

    Henk R Cremers

    Full Text Available Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial, especially in regards to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20-30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resembles the weak diffuse scenario much more than the localized strong scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region-of-interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches.
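
    The scale of the problem for between-subjects brain-behavior correlations can be reproduced with the standard Fisher-z power approximation. The sketch below is illustrative only: the effect size, the multiple-comparison-corrected alpha and the sample sizes are assumptions rather than values from the paper.

```python
import numpy as np
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    """Approximate two-sided power to detect a Pearson correlation r with n
    subjects, using the Fisher z approximation."""
    z_r = np.arctanh(r) * np.sqrt(n - 3)
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.sf(z_crit - abs(z_r)) + norm.cdf(-z_crit - abs(z_r))

# Hypothetical: a modest brain-behavior correlation of r = 0.3, tested at a
# corrected alpha of 0.001 standing in for voxelwise correction.
for n in (25, 50, 100, 200):
    print(n, round(correlation_power(0.3, n, alpha=0.001), 3))
```

    Even without running it, the form of the calculation shows why n = 20-30 combined with stringent corrected thresholds leaves very little power for modest effects.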

  17. Assessment of climate change using methods of mathematic statistics and theory of probability

    International Nuclear Information System (INIS)

    Trajanoska, Lidija; Kaevski, Ivancho

    2004-01-01

    In simple terms, 'climate' is the average of 'weather'. The Earth's weather system is a complex machine composed of coupled sub-systems (ocean, air, land, ice and the biosphere) between which energy is exchanged. The understanding and study of climate change does not rely only on an understanding of the physics of climate change but is linked to the following question: how can we detect change in a system that is changing all the time of its own volition? What is even the meaning of 'change' in such a situation? If the concept of 'change' is transformed into the concept of 'significant and long-term change', this re-phrasing allows for a definition in mathematical terms. A significant change in a system becomes a measure of how large an observed change is in terms of the variability one would see under 'normal' conditions. An example is the analysis of yearly air temperature and precipitation, as in this paper. A large amount of data is selected as representing the 'before' case and another set of data is selected as the 'after' case, and the averages in these two cases are compared. These comparisons take the form of 'hypothesis tests', in which one tests whether the hypothesis that there has been no change can be rejected. Both parametric and nonparametric methods of mathematical statistics are used. The most indicative variables showing global change are the average, the standard deviation and the probability distribution function of the examined time series. The examined meteorological series are treated as random processes, so the methods of mathematical statistics can be applied. (Author)
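
    The 'before versus after' comparison of averages described above is, in its simplest form, a two-sample test of means. The sketch below shows a parametric (Welch t-test) and a nonparametric (Mann-Whitney) version on invented yearly temperature series, without implying that these were the exact tests used in the paper.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(0)
before = rng.normal(11.2, 0.8, size=30)   # hypothetical yearly mean temperatures
after = rng.normal(11.9, 0.8, size=30)    # hypothetical later period

t_stat, p_param = ttest_ind(after, before, equal_var=False)          # parametric (Welch)
u_stat, p_nonpar = mannwhitneyu(after, before, alternative="two-sided")  # nonparametric
print(round(p_param, 4), round(p_nonpar, 4))
```

    A small p-value under either test would count as a 'significant and long-term' change in the sense defined above, relative to the normal year-to-year variability.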

  18. Statistical mechanics of two-dimensional and geophysical flows

    International Nuclear Information System (INIS)

    Bouchet, Freddy; Venaille, Antoine

    2012-01-01

    The theoretical study of the self-organization of two-dimensional and geophysical turbulent flows is addressed based on statistical mechanics methods. This review is a self-contained presentation of classical and recent works on this subject, from the statistical mechanics basis of the theory up to applications to Jupiter’s troposphere and ocean vortices and jets. Emphasis has been placed on examples with available analytical treatment in order to favor better understanding of the physics and dynamics. After a brief presentation of the 2D Euler and quasi-geostrophic equations, the specificity of two-dimensional and geophysical turbulence is emphasized. The equilibrium microcanonical measure is built from the Liouville theorem. Important statistical mechanics concepts (large deviations and mean field approach) and thermodynamic concepts (ensemble inequivalence and negative heat capacity) are briefly explained and described. On this theoretical basis, we predict the output of the long-time evolution of complex turbulent flows as statistical equilibria. This is applied to make quantitative models of two-dimensional turbulence, the Great Red Spot and other Jovian vortices, ocean jets like the Gulf Stream, and ocean vortices. A detailed comparison between these statistical equilibria and real flow observations is provided. We also present recent results for non-equilibrium situations, for the studies of either the relaxation towards equilibrium or non-equilibrium steady states. In this last case, forces and dissipation are in a statistical balance; fluxes of conserved quantities characterize the system, and microcanonical or other equilibrium measures no longer describe the system.

  19. Comparison of Oral Manifestations of Diabetic and Non-Diabetic Uremic Patients Undergoing Hemodialysis

    Directory of Open Access Journals (Sweden)

    Seyed Javad Kia

    2014-06-01

    Full Text Available Background & Objectives: Chronic renal failure (CRF), also known as chronic kidney disease, is caused by a devastated nephron mass of the kidney and results in uremia. Hypertension, diabetes mellitus and glomerulonephritis are common etiologic factors of CRF. This condition causes miscellaneous oral manifestations, especially in diabetic patients. The aim of this study was to compare the oral manifestations of diabetic and non-diabetic uremic patients undergoing hemodialysis. Methods: A total of 95 patients undergoing hemodialysis at Razi hospital in Rasht city participated in this descriptive analytical study. Patients were divided into diabetic and non-diabetic groups. Oral cavity examinations were performed using latex gloves and a single-use mirror. Objective and subjective oral manifestations such as xerostomia, bad taste, mucosal pain, uremic odor, coated tongue, petechiae, purpura, pale oral mucosa, ulcers, dental erosion and candida infection were recorded in a questionnaire. After gathering the information, the data were analyzed with SPSS 15 software using the t-test and the chi-square test. Results: About 60% of the patients (57 persons) were men and 40% (38 persons) were women. The mean age of the patients was 48 years (range 20-76 years). The most common subjective oral manifestation in both groups was xerostomia, and the most common objective oral manifestations were pale oral mucosa, uremic odor and coated tongue, respectively. The DMFT index in the diabetic group was significantly higher (17.3±7.63) than in the non-diabetic patients (12.4±8.26). There was no statistically significant correlation between the time of dialysis, the number of dialysis appointments during the week, and the objective and subjective oral manifestations in the two groups. Conclusion: Although the present study has shown an increase in oral manifestations in diabetic patients undergoing hemodialysis relative to the non-diabetic group, this increase was not statistically significant. On the other hand

  20. Review of the Statistical Techniques in Medical Sciences | Okeh ...

    African Journals Online (AJOL)

    ... medical researcher in selecting the appropriate statistical techniques. Of course, all statistical techniques have certain underlying assumptions, which must be checked before the technique is applied. Keywords: Variable, Prospective Studies, Retrospective Studies, Statistical significance. Bio-Research Vol. 6 (1) 2008: pp.

  1. QQ-plots for assessing distributions of biomarker measurements and generating defensible summary statistics

    Science.gov (United States)

    One of the main uses of biomarker measurements is to compare different populations to each other and to assess risk in comparison to established parameters. This is most often done using summary statistics such as central tendency, variance components, confidence intervals, excee...

  2. Comparison of different statistical modelling approaches for deriving spatial air temperature patterns in an urban environment

    Science.gov (United States)

    Straub, Annette; Beck, Christoph; Breitner, Susanne; Cyrys, Josef; Geruschkat, Uta; Jacobeit, Jucundus; Kühlbach, Benjamin; Kusch, Thomas; Richter, Katja; Schneider, Alexandra; Umminger, Robin; Wolf, Kathrin

    2017-04-01

    Frequently, spatial variations of air temperature of considerable magnitude occur within urban areas. They correspond to varying land use/land cover characteristics and vary with season, time of day and synoptic conditions. These temperature differences have an impact on human health and comfort, directly by inducing thermal stress as well as indirectly by affecting air quality. Therefore, knowledge of the spatial patterns of air temperature in cities and the factors causing them is of great importance, e.g. for urban planners. A multitude of studies have shown statistical modelling to be a suitable tool for generating spatial air temperature patterns. This contribution presents a comparison of different statistical modelling approaches for deriving spatial air temperature patterns in the urban environment of Augsburg, Southern Germany. In Augsburg there exists a measurement network for air temperature and humidity, currently comprising 48 stations in the city and its rural surroundings (jointly operated by the Institute of Epidemiology II, Helmholtz Zentrum München, German Research Center for Environmental Health, and the Institute of Geography, University of Augsburg). Using different datasets for land surface characteristics (Open Street Map, Urban Atlas), area percentages of different types of land cover were calculated for quadratic buffer zones of different sizes (25, 50, 100, 250, 500 m) around the stations, as well as for source regions of advective air flow, and used as predictors together with additional variables such as sky view factor, ground level and distance from the city centre. Multiple Linear Regression and Random Forest models for different situations, taking into account season, time of day and weather condition, were applied utilizing selected subsets of these predictors in order to model spatial distributions of mean hourly and daily air temperature deviations from a rural reference station. Furthermore, the different model setups were
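
    A minimal sketch of the two model families being compared, multiple linear regression and random forest, fitted to hypothetical buffer-percentage predictors of station temperature deviations, is given below. The predictor set, station count, response and cross-validation scheme are invented for illustration and do not reproduce the study's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_stations = 48
# Hypothetical predictors: % built-up and % vegetation in a 250 m buffer,
# sky view factor, and distance from the city centre (km).
X = np.column_stack([
    rng.uniform(0, 100, n_stations),
    rng.uniform(0, 100, n_stations),
    rng.uniform(0.3, 1.0, n_stations),
    rng.uniform(0, 10, n_stations),
])
# Hypothetical predictand: nightly temperature deviation from the rural reference (K).
y = 0.03 * X[:, 0] - 0.02 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 0.3, n_stations)

for name, model in [("MLR", LinearRegression()),
                    ("RF", RandomForestRegressor(n_estimators=200, random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(name, round(r2, 2))
```

    Cross-validated skill scores of this kind are one straightforward way to compare such model setups across seasons, times of day and weather situations.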

  3. Student Performance in an Introductory Business Statistics Course: Does Delivery Mode Matter?

    Science.gov (United States)

    Haughton, Jonathan; Kelly, Alison

    2015-01-01

    Approximately 600 undergraduates completed an introductory business statistics course in 2013 in one of two learning environments at Suffolk University, a mid-sized private university in Boston, Massachusetts. The comparison group completed the course in a traditional classroom-based environment, whereas the treatment group completed the course in…

  4. Spectral and cross-spectral analysis of uneven time series with the smoothed Lomb-Scargle periodogram and Monte Carlo evaluation of statistical significance

    Science.gov (United States)

    Pardo-Igúzquiza, Eulogio; Rodríguez-Tovar, Francisco J.

    2012-12-01

    Many spectral analysis techniques have been designed assuming sequences taken with a constant sampling interval. However, there are empirical time series in the geosciences (sediment cores, fossil abundance data, isotope analysis, …) that do not follow regular sampling because of missing data, gapped data, random sampling or incomplete sequences, among other reasons. In general, interpolating an uneven series in order to obtain a succession with a constant sampling interval alters the spectral content of the series. In such cases it is preferable to follow an approach that works with the uneven data directly, avoiding the need for an explicit interpolation step. The Lomb-Scargle periodogram is a popular choice in such circumstances, as there are programs available in the public domain for its computation. One new computer program for spectral analysis improves the standard Lomb-Scargle periodogram approach in two ways: (1) It explicitly adjusts the statistical significance to any bias introduced by variance reduction smoothing, and (2) it uses a permutation test to evaluate confidence levels, which is better suited than parametric methods when neighbouring frequencies are highly correlated. Another novel program for cross-spectral analysis offers the advantage of estimating the Lomb-Scargle cross-periodogram of two uneven time series defined on the same interval, and it evaluates the confidence levels of the estimated cross-spectra by a non-parametric computer-intensive permutation test. Thus, the cross-spectrum, the squared coherence spectrum, the phase spectrum, and the Monte Carlo statistical significance of the cross-spectrum and the squared-coherence spectrum can be obtained. Both of the programs are written in ANSI Fortran 77, in view of its simplicity and compatibility. The program code is in the public domain, provided on the website of the journal (http://www.iamg.org/index.php/publisher/articleview/frmArticleID/112/). Different examples (with simulated and
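
    A rough sketch of the combination described, a Lomb-Scargle periodogram on an unevenly sampled series with a permutation-based confidence level for its peak, is given below using SciPy rather than the Fortran 77 programs discussed; the synthetic series, frequency grid and number of permutations are arbitrary.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 100, 120))            # uneven sampling times
y = np.sin(2 * np.pi * t / 23.0) + rng.normal(0, 0.5, t.size)
y = y - y.mean()

freqs = np.linspace(0.01, 1.0, 500) * 2 * np.pi  # angular frequencies
pgram = lombscargle(t, y, freqs)

# Permutation test: shuffle the values over the fixed sampling times and
# record the maximum periodogram ordinate to build a null distribution.
n_perm = 500
max_null = np.array([lombscargle(t, rng.permutation(y), freqs).max()
                     for _ in range(n_perm)])
conf95 = np.quantile(max_null, 0.95)
peak_freq = freqs[pgram.argmax()] / (2 * np.pi)  # back to cycles per time unit
print(f"peak at period ~ {1/peak_freq:.1f}, significant: {pgram.max() > conf95}")
```

    Because the permutations preserve the uneven sampling times, the resulting confidence level accounts for the correlation between neighbouring frequencies that parametric thresholds tend to ignore.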

  5. Silicon Photomultipliers: Dark Current and its Statistical Spread

    Directory of Open Access Journals (Sweden)

    Roberto PAGANO

    2012-03-01

    Full Text Available The aim of this paper is to investigate, on a statistical basis at the wafer level, the relationship between the dark currents of the single pixels and that of the whole Silicon Photomultiplier array. To our knowledge, this is the first time that such a comparison has been made, and it is crucial for bringing this new technology up to semiconductor manufacturing standards. In particular, emission microscopy measurements and current measurements allowed us to conclude that optical trenches strongly improve the device performance.

  6. The role of social comparison in social judgments of dental appearance: An experimental study.

    Science.gov (United States)

    Al-Kharboush, Ghada H; Asimakopoulou, Koula; AlJabaa, AlJazi H; Newton, J Tim

    2017-06-01

    The objective of this study was to examine the influence of social comparison on social judgments of dental malalignment in a sample of females. In a repeated-measures design, N=218 female participants, of which N=128 were orthodontic patients (mean age 31.4) and N=90 controls (mean age 26.1), rated their satisfaction with their facial appearance after viewing stereotypically beautiful images of faces (experimental condition) or houses (neutral condition). After 4-6 weeks participants returned to view an image of a female with severe crowding and were asked to make judgments of social competence (SC), intellectual ability (IA), psychological adjustment (PA) and attractiveness (A). The comparison of social judgments between high comparers (High SocComp) and low comparers (Low SocComp) was not statistically significant: SC (t(204)=0.30, p=0.76), IA (t(204)=0.14, p=0.89), PA (t(204)=0.004, p=0.996), A (t(204)=1.26, p=0.209). However, dentally induced social judgments (DISJ) differed between the clinical sample and the non-clinical sample: SC (t(204)=0.784, p=0.434), IA (t(204)=2.15, p=0.033), PA (t(204)=-0.003, p=0.997), A (t(204)=1.58, p=0.116). Social comparison has little impact on DISJ. However, there are differences in DISJs between individuals who seek treatment for their malocclusion and the non-clinical population; the reason for this is unclear but does not appear to be the result of adoption of societal standards of beauty, and instead suggests that individual ranking of important 'beauty areas' may play a role. This paper uses social comparison theory to investigate the basis of judgments in regards to dental appearance. The findings of this research may help to identify individuals who are more susceptible to societal pressures towards non-ideal dentitions. This will help clinicians become more aware of the patient's comparison orientation, which seems to have an impact on satisfaction with treatment outcomes. This study may form the

  7. Are the correct herbal claims by Hildegard von Bingen only lucky strikes? A new statistical approach.

    Science.gov (United States)

    Uehleke, Bernhard; Hopfenmueller, Werner; Stange, Rainer; Saller, Reinhard

    2012-01-01

    Ancient and medieval herbal books are often believed to describe the same claims still in use today. Medieval herbal books, however, provide long lists of claims for each herb, most of which are not approved today, while the herb's modern use is often missing. So the hypothesis arises that a medieval author could have randomly hit on 'correct' claims among his many 'wrong' ones. We developed a statistical procedure based on a simple probability model. We applied our procedure to the herbal books of Hildegard von Bingen (1098-1179) as an example of its usefulness. Claim attributions for a certain herb were classified as 'correct' if approximately the same as indicated in actual monographs. The number of 'correct' claim attributions was significantly higher than it could have been by pure chance, even though the vast majority of Hildegard von Bingen's claims were not 'correct'. The hypothesis that Hildegard would have achieved her 'correct' claims purely by chance can be clearly rejected. The finding that medical claims provided by a medieval author are significantly related to modern herbal use supports the importance of traditional medicinal systems as an empirical source. However, since many traditional claims are not in accordance with modern applications, they should be used carefully and analyzed in a systematic, statistics-based manner. Our statistical approach can be used for further systematic comparison of herbal claims of traditional sources as well as in the fields of ethnobotany and ethnopharmacology. Copyright © 2012 S. Karger AG, Basel.

  8. Comparison between statistical and optimization methods in accessing unmixing of spectrally similar materials

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-11-01

    Full Text Available This paper reports on results from ordinary least squares and ridge regression as statistical methods, which are compared to numerical optimization methods such as the stochastic method for global optimization, simulated annealing, particle swarm...

  9. Statistical and Machine Learning forecasting methods: Concerns and ways forward

    Science.gov (United States)

    Makridakis, Spyros; Assimakopoulos, Vassilios

    2018-01-01

    Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions. PMID:29584784

  10. Statistical and Machine Learning forecasting methods: Concerns and ways forward.

    Science.gov (United States)

    Makridakis, Spyros; Spiliotis, Evangelos; Assimakopoulos, Vassilios

    2018-01-01

    Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions.
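
    The horizon-by-horizon accuracy comparison described above can be sketched with one of the error measures commonly used in the M-competitions, sMAPE. The forecast arrays below are synthetic placeholders, not M3 series, and the two "methods" are simulated; the sketch only shows how accuracy would be tabulated per horizon.

```python
# Hedged sketch: compare two forecasting methods by symmetric MAPE (sMAPE)
# over increasing forecasting horizons, in the spirit of the comparison above.
import numpy as np

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(forecast - actual) /
                           (np.abs(actual) + np.abs(forecast)))

rng = np.random.default_rng(0)
horizons = 18                                             # M3 monthly series use 18-step horizons
actuals = 100 + np.cumsum(rng.normal(0, 2, horizons))     # placeholder hold-out values
stat_forecast = actuals + rng.normal(0, 3, horizons)      # placeholder "statistical" forecast
ml_forecast = actuals + rng.normal(1, 4, horizons)        # placeholder "ML" forecast

for h in (1, 6, 12, 18):
    print(f"h={h:2d}  sMAPE(stat)={smape(actuals[:h], stat_forecast[:h]):5.2f}  "
          f"sMAPE(ML)={smape(actuals[:h], ml_forecast[:h]):5.2f}")
```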

  11. Statistical operation of nuclear power plants

    International Nuclear Information System (INIS)

    Gauzit, Maurice; Wilmart, Yves

    1976-01-01

    A comparison of the statistical operating results of nuclear power stations as reported in the literature shows that the values given for availability and the load factor often differ considerably from each other. This may be due to different definitions of these terms or even to poor translation from one language into another. A critical analysis of these terms is proposed, as well as the choice of a parameter from which a quantitative idea of the actual quality of operation can be obtained. The second section gives, on a homogeneous basis and from the results supplied by 83 nuclear power stations now in operation, a statistical analysis of their operating results: in particular, the two light water lines during 1975, as well as the evolution with the age of the units and the starting conditions of the units during their first two operating years. The test values thus obtained are also compared with those taken a priori as hypotheses in some economic studies [fr]

  12. Open Access!: Review of Online Statistics: An Interactive Multimedia Course of Study by David Lane

    Directory of Open Access Journals (Sweden)

    Samuel L. Tunstall

    2016-01-01

    Full Text Available David M. Lane (project leader. Online Statistics Education: An Interactive Multimedia Course of Study (http://onlinestatbook.com/ Also: David M. Lane (primary author and editor, with David Scott, Mikki Hebl, Rudy Guerra, Dan Osherson, and Heidi Zimmer. Introduction to Statistics. Online edition (http://onlinestatbook.com/Online_Statistics_Education.pdf, 694 pp. It is rare that students receive high-quality textbooks for free, but David Lane's Online Statistics: An Interactive Multimedia Course of Study permits precisely that. This review gives an overview of the many features in Lane's online textbook, including the Java Applets, the textbook itself, and the resources available for instructors. A discussion of uses of the site, as well as a comparison of the text to alternative online statistics textbooks, is included.

  13. A fast method for the unit scheduling problem with significant renewable power generation

    International Nuclear Information System (INIS)

    Osório, G.J.; Lujano-Rojas, J.M.; Matias, J.C.O.; Catalão, J.P.S.

    2015-01-01

    Highlights: • A model for the scheduling of power systems with significant renewable power generation is provided. • A new methodology that takes information from the analysis of each scenario separately is proposed. • Based on a probabilistic analysis, unit scheduling and the corresponding economic dispatch are estimated. • A comparison with other methodologies is in favour of the proposed approach. - Abstract: Optimal operation of power systems with high integration of renewable power sources has become difficult as a consequence of the random nature of some sources such as wind energy and photovoltaic energy. Nowadays, this problem is solved using the Monte Carlo Simulation (MCS) approach, which allows important statistical characteristics of wind and solar power production to be considered, such as the correlation between consecutive observations, the diurnal profile of the forecasted power production, and the forecasting error. However, the MCS method requires the analysis of a representative number of trials, which is a computationally intensive task that grows considerably with the number of scenarios considered. In this paper, a model for the scheduling of power systems with significant renewable power generation is proposed, based on a scenario generation/reduction method that establishes a proportional relationship between the number of scenarios and the computational time required to analyse them. The methodology takes information from the analysis of each scenario separately to determine the probabilistic behaviour of each generator at each hour of the scheduling problem. Then, for a given significance level, the units to be committed are selected and the load dispatch is determined. The proposed technique is illustrated through a case study, and a comparison with a stochastic programming approach was carried out, concluding that the proposed methodology can provide an acceptable solution in a reduced computational time

  14. User manual for Blossom statistical package for R

    Science.gov (United States)

    Talbert, Marian; Cade, Brian S.

    2005-01-01

    Blossom is an R package with functions for making statistical comparisons with distance-function based permutation tests developed by P.W. Mielke, Jr. and colleagues at Colorado State University (Mielke and Berry, 2001) and for testing parameters estimated in linear models with permutation procedures developed by B. S. Cade and colleagues at the Fort Collins Science Center, U.S. Geological Survey. This manual is intended to provide identical documentation of the statistical methods and interpretations as the manual by Cade and Richards (2005) does for the original Fortran program, but with changes made with respect to command inputs and outputs to reflect the new implementation as a package for R (R Development Core Team, 2012). This implementation in R has allowed for numerous improvements not supported by the Cade and Richards (2005) Fortran implementation, including use of categorical predictor variables in most routines.
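
    Blossom itself is an R package built around distance-function permutation tests; the generic permutation logic for testing a regression parameter can nevertheless be illustrated language-agnostically. The sketch below is a plain permutation test of a slope coefficient on invented data, not Blossom's own routines or its distance-function machinery.

```python
# Hedged sketch: permutation test of a regression slope, illustrating the kind of
# permutation procedure for linear-model parameters described above.
import numpy as np

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.polyfit(x, y, 1)[0]

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 40)
y = 0.4 * x + rng.normal(0, 1.5, 40)     # placeholder data with a real effect

observed = slope(x, y)
n_perm = 5000
perm_slopes = np.array([slope(x, rng.permutation(y)) for _ in range(n_perm)])
p_value = (np.sum(np.abs(perm_slopes) >= abs(observed)) + 1) / (n_perm + 1)
print(f"observed slope = {observed:.3f}, permutation p-value = {p_value:.4f}")
```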

  15. Damage detection of engine bladed-disks using multivariate statistical analysis

    Science.gov (United States)

    Fang, X.; Tang, J.

    2006-03-01

    The timely detection of damage in aero-engine bladed-disks is an extremely important and challenging research topic. Bladed-disks have high modal density and, particularly, their vibration responses are subject to significant uncertainties due to manufacturing tolerance (blade-to-blade difference or mistuning), operating condition change and sensor noise. In this study, we present a new methodology for the on-line damage detection of engine bladed-disks using their vibratory responses during spin-up or spin-down operations, which can be measured by the blade-tip-timing sensing technique. We apply a principal component analysis (PCA)-based approach for data compression, feature extraction, and denoising. The non-model-based damage detection is achieved by analyzing the change between response features of the healthy structure and of the damaged one. We facilitate this comparison by incorporating Hotelling's T2 statistic, which yields a damage declaration with a given confidence level. The effectiveness of the method is demonstrated by case studies.
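
    The feature-space comparison described above, compressing responses with PCA and flagging damage when Hotelling's T2 exceeds a confidence limit, can be sketched as follows. The data, the number of retained components, and the confidence level are illustrative assumptions, not the authors' bladed-disk features.

```python
# Hedged sketch: PCA feature extraction on baseline (healthy) vibration features,
# then a Hotelling's T^2 control limit used to declare damage in new observations.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(2)
healthy = rng.normal(0, 1, (200, 12))     # placeholder baseline feature vectors
test = rng.normal(0.8, 1, (5, 12))        # placeholder possibly-damaged responses

# PCA fitted on the healthy data
mean = healthy.mean(axis=0)
Xc = healthy - mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                               # retained principal components (assumption)
scores_var = (s[:k] ** 2) / (len(healthy) - 1)      # variance of each retained score

def t2(x):
    z = (x - mean) @ Vt[:k].T                       # project onto retained components
    return np.sum(z ** 2 / scores_var)

# F-distribution based control limit at a 99% confidence level
n, alpha = len(healthy), 0.01
limit = k * (n - 1) * (n + 1) / (n * (n - k)) * f_dist.ppf(1 - alpha, k, n - k)

for x in test:
    print(f"T2 = {t2(x):7.2f}  damage declared: {t2(x) > limit}")
```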

  16. Statistical thermodynamics

    International Nuclear Information System (INIS)

    Lim, Gyeong Hui

    2008-03-01

    This book consists of 15 chapters, covering the basic concepts and meaning of statistical thermodynamics, Maxwell-Boltzmann statistics, ensembles, thermodynamic functions and fluctuations, statistical dynamics of independent-particle systems, ideal molecular systems, chemical equilibrium and chemical reaction rates in ideal gas mixtures, classical statistical thermodynamics, the ideal lattice model, lattice statistics and non-ideal lattice models, imperfect gas theory of liquids, the theory of solutions, the statistical thermodynamics of interfaces, the statistical thermodynamics of macromolecular systems, and quantum statistics

  17. Bilateral Comparison CIEMAT-CENTIS-DMR for radionuclide activity measurements; Comparacion Bilateral CIEMAT-CENTIS-DMR de la Medida de Actividad de Radionucleidos

    Energy Technology Data Exchange (ETDEWEB)

    Oropesa Verdecia, P.; Garcia-Torano, E.

    2004-07-01

    We present the results of a bilateral comparison of radionuclide activity measurements between the Radionuclide Metrology Department of the Center of Isotopes of Cuba (CENTIS-DMR), and the Ionising Radiation Metrology Laboratory (LMRI) of the Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) of Spain. The aim of the comparison was to establish the comparability of the measurement instruments and methods used to obtain radioactive reference materials of some gamma-emitting nuclides at CENTIS-DMR. The results revealed that there are no statistically significant differences between the data reported by both laboratories. (Author) 7 refs.
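
    A standard way to judge whether two laboratories' activity values differ significantly is a zeta (or E_n) score built from the reported values and their uncertainties. The sketch below uses placeholder numbers, not the CENTIS-DMR/CIEMAT results, and assumes standard (k = 1) uncertainties.

```python
# Hedged sketch: zeta-score test for compatibility of two laboratories' activity
# measurements of the same radioactive reference material.
import numpy as np
from scipy.stats import norm

a_lab1, u_lab1 = 1.052e5, 0.8e3   # placeholder activity (Bq) and standard uncertainty, lab 1
a_lab2, u_lab2 = 1.060e5, 1.1e3   # placeholder activity (Bq) and standard uncertainty, lab 2

zeta = (a_lab1 - a_lab2) / np.hypot(u_lab1, u_lab2)
p_value = 2 * norm.sf(abs(zeta))
print(f"zeta = {zeta:.2f}, two-sided p = {p_value:.3f}")
# |zeta| <= 2 is the usual rule of thumb for "no statistically significant difference".
```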

  18. Statistical comparisons of Savannah River anemometer data applied to quality control of instrument networks

    International Nuclear Information System (INIS)

    Porch, W.M.; Dickerson, M.H.

    1976-08-01

    Continuous monitoring of extensive meteorological instrument arrays is a requirement in the study of important mesoscale atmospheric phenomena. The phenomena include pollution transport prediction from continuous area sources or one-time releases of toxic materials, and wind energy prospecting in areas of topographic enhancement of the wind. Quality control techniques that can be applied to these data to determine whether the instruments are operating within their prescribed tolerances were investigated. Savannah River Plant data were analyzed with both independent and comparative statistical techniques. The independent techniques calculate the mean, standard deviation, moments about the mean, kurtosis, skewness, probability density distribution, cumulative probability and power spectra. The comparative techniques include covariance, cross-spectral analysis and two-dimensional probability density. At present the calculating and plotting routines for these statistical techniques do not reside in a single code, so it is difficult to ascribe memory size and computation time to each of them accurately. However, given the flexibility of a data system which includes simple, fast-running statistics at the instrument end of the data network (ASF) and more sophisticated techniques at the computational end (ACF), a proper balance will be attained. These techniques are described in detail and preliminary results are presented

  19. Response statistics of rotating shaft with non-linear elastic restoring forces by path integration

    Science.gov (United States)

    Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael

    2017-07-01

    Extreme statistics of random vibrations are studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as non-linear elastic; a comparison with a linearized restoring force is made to see the effect of the force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is not generally the case for the non-linear system, except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied: the fast Fourier transform (FFT) is used to simulate the dynamic system's additive noise, which significantly reduces computational time compared to classical PI. Excitation is modelled as Gaussian white noise; however, white noise with any distribution can be implemented with the same PI technique. Multidirectional Markov noise can also be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimated joint probability density function (PDF) as the initial input. The symmetry of the dynamic system was utilized to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of using PI rather than MC is that PI offers high accuracy in the probability distribution tail, which is of critical importance for, e.g., extreme value statistics, system reliability, and first passage probability.

  20. RAId_aPS: MS/MS analysis with multiple scoring functions and spectrum-specific statistics.

    Science.gov (United States)

    Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo

    2010-11-16

    Statistically meaningful comparison/combination of peptide identification results from various search methods is impeded by the lack of a universal statistical standard. Providing an E-value calibration protocol, we demonstrated earlier the feasibility of translating either the score or heuristic E-value reported by any method into the textbook-defined E-value, which may serve as the universal statistical standard. This protocol, although robust, may lose spectrum-specific statistics and might require a new calibration when changes in experimental setup occur. To mitigate these issues, we developed a new MS/MS search tool, RAId_aPS, that is able to provide spectrum-specific E-values for additive scoring functions. Given a selection of scoring functions out of RAId score, K-score, Hyperscore and XCorr, RAId_aPS generates the corresponding score histograms of all possible peptides using dynamic programming. Using these score histograms to assign E-values enables a calibration-free protocol for accurate significance assignment for each scoring function. RAId_aPS features four different modes: (i) compute the total number of possible peptides for a given molecular mass range, (ii) generate the score histogram given an MS/MS spectrum and a scoring function, (iii) reassign E-values for a list of candidate peptides given an MS/MS spectrum and the scoring functions chosen, and (iv) perform database searches using selected scoring functions. In modes (iii) and (iv), RAId_aPS is also capable of combining results from different scoring functions using spectrum-specific statistics. The web link is http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/raid_aps/index.html. Relevant binaries for Linux, Windows, and Mac OS X are available from the same page.
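
    The core idea, building an exact score histogram over all candidates and reading significance off its tail, can be illustrated for a toy additive score by repeatedly convolving a per-position score distribution. This is a conceptual sketch with invented numbers, not RAId_aPS's dynamic programming over real peptide databases.

```python
# Hedged sketch: exact null score histogram for an additive integer score, obtained
# by repeated convolution, then an E-value read from the histogram's upper tail.
import numpy as np

# Toy per-residue score distribution (integer scores 0..3 with these probabilities)
per_residue = np.array([0.55, 0.25, 0.15, 0.05])
length = 12                        # hypothetical peptide length
db_size = 5_000_000                # hypothetical number of candidate peptides

hist = np.array([1.0])
for _ in range(length):            # dynamic-programming style convolution
    hist = np.convolve(hist, per_residue)

observed_score = 24                # hypothetical score of the best-matching peptide
p_tail = hist[observed_score:].sum()    # P(score >= observed) for one random peptide
e_value = db_size * p_tail              # expected number of random peptides scoring as well
print(f"tail probability = {p_tail:.3e}, E-value = {e_value:.3f}")
```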

  1. Bias in iterative reconstruction of low-statistics PET data: benefits of a resolution model

    Energy Technology Data Exchange (ETDEWEB)

    Walker, M D; Asselin, M-C; Julyan, P J; Feldmann, M; Matthews, J C [School of Cancer and Enabling Sciences, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Talbot, P S [Mental Health and Neurodegeneration Research Group, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Jones, T, E-mail: matthew.walker@manchester.ac.uk [Academic Department of Radiation Oncology, Christie Hospital, University of Manchester, Manchester M20 4BX (United Kingdom)

    2011-02-21

    Iterative image reconstruction methods such as ordered-subset expectation maximization (OSEM) are widely used in PET. Reconstructions via OSEM are however reported to be biased for low-count data. We investigated this and considered the impact for dynamic PET. Patient listmode data were acquired in [11C]DASB and [15O]H2O scans on the HRRT brain PET scanner. These data were subsampled to create many independent, low-count replicates. The data were reconstructed and the images from low-count data were compared to the high-count originals (from the same reconstruction method). This comparison enabled low-statistics bias to be calculated for the given reconstruction, as a function of the noise-equivalent counts (NEC). Two iterative reconstruction methods were tested, one with and one without an image-based resolution model (RM). Significant bias was observed when reconstructing data of low statistical quality, for both subsampled human and simulated data. For human data, this bias was substantially reduced by including a RM. For [11C]DASB the low-statistics bias in the caudate head at 1.7 M NEC (approx. 30 s) was -5.5% and -13% with and without RM, respectively. We predicted biases in the binding potential of -4% and -10%. For quantification of cerebral blood flow for the whole-brain grey- or white-matter, using [15O]H2O and the PET autoradiographic method, a low-statistics bias of <2.5% and <4% was predicted for reconstruction with and without the RM. The use of a resolution model reduces low-statistics bias and can hence be beneficial for quantitative dynamic PET.
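
    The bias estimate described above, many low-count replicates of the same data reconstructed and compared against the high-count original, reduces to a simple percentage-difference calculation per region and NEC level. The replicate values below are simulated placeholders, not HRRT data.

```python
# Hedged sketch: low-statistics bias of a regional PET value, estimated from
# independent low-count replicates relative to the high-count reconstruction.
import numpy as np

rng = np.random.default_rng(3)
high_count_value = 2.40                        # placeholder ROI value, full statistics
replicates = rng.normal(2.27, 0.15, 40)        # placeholder ROI values, low-count replicates

bias_pct = 100.0 * (replicates.mean() - high_count_value) / high_count_value
sem_pct = 100.0 * replicates.std(ddof=1) / np.sqrt(len(replicates)) / high_count_value
print(f"low-statistics bias = {bias_pct:.1f}% +/- {sem_pct:.1f}%")
```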

  2. [Statistics for statistics?--Thoughts about psychological tools].

    Science.gov (United States)

    Berger, Uwe; Stöbel-Richter, Yve

    2007-12-01

    Statistical methods take a prominent place in psychologists' educational programs. Known to be difficult to understand and hard to learn, these contents are feared by students. Those who do not aspire to a research career at a university quickly forget the drilled content. Furthermore, because at first glance it does not seem applicable to work with patients and other target groups, the methodological education as a whole has often been questioned. For many psychological practitioners, statistical education makes sense only as a way of commanding respect from other professions, namely physicians. For their own work, statistics is rarely taken seriously as a professional tool. The reason seems clear: statistics deals with numbers, while psychotherapy deals with subjects. So, is statistics an end in itself? With this article, we try to answer the question of whether and how statistical methods are represented within psychotherapeutic and psychological research. For this purpose, we analyzed 46 original articles from a complete volume of the journal Psychotherapy, Psychosomatics, Psychological Medicine (PPmP). Within the volume, 28 different analysis methods were applied, of which 89 per cent were directly based on statistics. Being able to write and critically read original articles, as the backbone of research, presupposes a high degree of statistical education. To ignore statistics means to ignore research and, not least, to leave one's own professional work open to arbitrariness.

  3. Real-world comparison of probe vehicle emissions and fuel consumption using diesel and 5% biodiesel (B5) blend

    Energy Technology Data Exchange (ETDEWEB)

    Ropkins, Karl; Quinn, Robert; Tate, James; Bell, Margaret [Institute for Transport Studies, University of Leeds, Leeds, LS2 9JT (United Kingdom); Beebe, Joe [National Center for Vehicle Emissions Control and Safety, Colorado State University, Colorado 80523-1584 (United States); Li, Hu; Daham, Basil; Andrews, Gordon [Energy and Resources Research Institute, University of Leeds, Leeds, LS2 9JT (United Kingdom)

    2007-04-15

    An instrumented EURO I Ford Mondeo was used to perform a real-world comparison of vehicle exhaust (carbon dioxide, carbon monoxide, hydrocarbons and oxides of nitrogen) emissions and fuel consumption for diesel and 5% biodiesel in diesel blend (B5) fuels. Data were collected on multiple replicates of three standardised on-road journeys: (1) a simple urban route; (2) a combined urban/inter-urban route; and, (3) an urban route subject to significant traffic management. At the total journey measurement level, data collected here indicate that replacing diesel with a B5 substitute could result in significant increases in both NOx emissions (8-13%) and fuel consumption (7-8%). However, statistical analysis of probe vehicle data demonstrated the limitations of comparisons based on such total journey measurements, i.e., methods analogous to those used in conventional dynamometer/drive cycle fuel comparison studies. Here, methods based on the comparison of speed/acceleration emissions and fuel consumption maps are presented. Significant variations across the speed/acceleration surface indicated that direct emission and fuel consumption impacts were highly dependent on the journey/drive cycle employed. The emission and fuel consumption maps were used both as descriptive tools to characterise impacts and predictive tools to estimate journey-specific emission and fuel consumption effects. (author)
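
    The speed/acceleration mapping approach mentioned above amounts to binning instantaneous emission rates on a two-dimensional grid and then looking the map up for a new drive cycle. The sketch below uses synthetic second-by-second data and arbitrary bin edges, not the probe-vehicle measurements from this study.

```python
# Hedged sketch: build a speed/acceleration emission map from second-by-second probe
# data, then use it to predict total emissions for another journey.
import numpy as np
from scipy.stats import binned_statistic_2d

rng = np.random.default_rng(4)
speed = rng.uniform(0, 30, 3000)                 # m/s, placeholder probe-vehicle data
accel = rng.normal(0, 0.6, 3000)                 # m/s^2
nox_rate = (0.02 + 0.004 * speed + 0.03 * np.clip(accel, 0, None)
            + rng.normal(0, 0.005, 3000))        # g/s, synthetic emission rate

speed_bins = np.linspace(0, 30, 7)
accel_bins = np.linspace(-2, 2, 9)
emap, _, _, _ = binned_statistic_2d(speed, accel, nox_rate,
                                    statistic="mean", bins=[speed_bins, accel_bins])

# Predict a new journey by looking up each second in the map
new_speed = rng.uniform(0, 30, 600)
new_accel = rng.normal(0, 0.6, 600)
i = np.clip(np.digitize(new_speed, speed_bins) - 1, 0, emap.shape[0] - 1)
j = np.clip(np.digitize(new_accel, accel_bins) - 1, 0, emap.shape[1] - 1)
predicted_total = np.nansum(emap[i, j])          # grams over the journey
print(f"predicted journey NOx: {predicted_total:.1f} g")
```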

  4. Real-world comparison of probe vehicle emissions and fuel consumption using diesel and 5% biodiesel (B5) blend

    International Nuclear Information System (INIS)

    Ropkins, Karl; Quinn, Robert; Tate, James; Bell, Margaret; Beebe, Joe; Li, Hu; Daham, Basil; Andrews, Gordon

    2007-01-01

    An instrumented EURO I Ford Mondeo was used to perform a real-world comparison of vehicle exhaust (carbon dioxide, carbon monoxide, hydrocarbons and oxides of nitrogen) emissions and fuel consumption for diesel and 5% biodiesel in diesel blend (B5) fuels. Data were collected on multiple replicates of three standardised on-road journeys: (1) a simple urban route; (2) a combined urban/inter-urban route; and, (3) an urban route subject to significant traffic management. At the total journey measurement level, data collected here indicate that replacing diesel with a B5 substitute could result in significant increases in both NO x emissions (8-13%) and fuel consumption (7-8%). However, statistical analysis of probe vehicle data demonstrated the limitations of comparisons based on such total journey measurements, i.e., methods analogous to those used in conventional dynamometer/drive cycle fuel comparison studies. Here, methods based on the comparison of speed/acceleration emissions and fuel consumption maps are presented. Significant variations across the speed/acceleration surface indicated that direct emission and fuel consumption impacts were highly dependent on the journey/drive cycle employed. The emission and fuel consumption maps were used both as descriptive tools to characterise impacts and predictive tools to estimate journey-specific emission and fuel consumption effects. (author)

  5. Poverty Assessment in the Philippines and Indonesia: A Methodological Comparison

    OpenAIRE

    David, Isidoro P.; Asra, Abuzar; Virola, Romulo A.

    1997-01-01

    Existing official poverty statistics cannot be directly utilized for cross-country comparison. This paper illustrates why. It presents an assessment of poverty measurement in the Philippines and Indonesia by examining the methodologies used and the disparity in their respective poverty statistics. More comparable poverty estimates for these two countries are provided.

  6. Modeling polymer-induced interactions between two grafted surfaces: comparison between interfacial statistical associating fluid theory and self-consistent field theory.

    Science.gov (United States)

    Jain, Shekhar; Ginzburg, Valeriy V; Jog, Prasanna; Weinhold, Jeffrey; Srivastava, Rakesh; Chapman, Walter G

    2009-07-28

    The interaction between two polymer grafted surfaces is important in many applications, such as nanocomposites, colloid stabilization, and polymer alloys. In our previous work [Jain et al., J. Chem. Phys. 128, 154910 (2008)], we showed that interfacial statistical associating fluid density theory (iSAFT) successfully calculates the structure of grafted polymer chains in the absence/presence of a free polymer. In the current work, we have applied this density functional theory to calculate the force of interaction between two such grafted monolayers in implicit good solvent conditions. In particular, we have considered the case where the segment sizes of the free (σ_f) and grafted (σ_g) polymers are different. The interactions between the two monolayers in the absence of the free polymer are always repulsive. However, in the presence of the free polymer, the force either can be purely repulsive or can have an attractive minimum depending upon the relative chain lengths of the free (N_f) and grafted (N_g) polymers. The attractive minimum is observed only when the ratio α = N_f/N_g is greater than a critical value. We find that these critical values of α satisfy the following scaling relation: ρ_g √(N_g) β³ ∝ α^(−λ), where β = σ_f/σ_g and λ is the scaling exponent. For β = 1, i.e., the same segment sizes of the free and grafted polymers, this scaling relation is in agreement with those from previous theoretical studies using self-consistent field theory (SCFT). Detailed comparisons between iSAFT and SCFT are made for the structures of the monolayers and their forces of interaction. These comparisons lead to interesting implications for the modeling of nanocomposite thermodynamics.

  7. An Efficient Graph-based Method for Long-term Land-use Change Statistics

    Directory of Open Access Journals (Sweden)

    Yipeng Zhang

    2015-12-01

    Full Text Available Statistical analysis of land-use change plays an important role in sustainable land management and has received increasing attention from scholars and administrative departments. However, the statistical process involving spatial overlay analysis remains difficult and needs improvement to deal with massive land-use data. In this paper, we introduce a spatio-temporal flow network model to reveal the hidden relational information among spatio-temporal entities. Based on graph theory, the constant condition of saturated multi-commodity flow is derived. A new method based on a network partition technique for the spatio-temporal flow network is proposed to optimize the transition statistics process. The effectiveness and efficiency of the proposed method are verified through experiments using land-use data from Hunan from 2009 to 2014. In the comparison among three different land-use change statistical methods, the proposed method exhibits remarkable superiority in efficiency.

  8. On a curvature-statistics theorem

    International Nuclear Information System (INIS)

    Calixto, M; Aldaya, V

    2008-01-01

    The spin-statistics theorem in quantum field theory relates the spin of a particle to the statistics obeyed by that particle. Here we investigate an interesting correspondence or connection between curvature (κ = ±1) and quantum statistics (Fermi-Dirac and Bose-Einstein, respectively). The interrelation between both concepts is established through vacuum coherent configurations of zero modes in quantum field theory on the compact O(3) and noncompact O(2; 1) (spatial) isometry subgroups of de Sitter and Anti de Sitter spaces, respectively. The high-frequency limit is retrieved as a (zero-curvature) group contraction to the Newton-Hooke (harmonic oscillator) group. We also make some comments on the physical significance of the vacuum energy density and the cosmological constant problem.

  9. On a curvature-statistics theorem

    Energy Technology Data Exchange (ETDEWEB)

    Calixto, M [Departamento de Matematica Aplicada y Estadistica, Universidad Politecnica de Cartagena, Paseo Alfonso XIII 56, 30203 Cartagena (Spain); Aldaya, V [Instituto de Astrofisica de Andalucia, Apartado Postal 3004, 18080 Granada (Spain)], E-mail: Manuel.Calixto@upct.es

    2008-08-15

    The spin-statistics theorem in quantum field theory relates the spin of a particle to the statistics obeyed by that particle. Here we investigate an interesting correspondence or connection between curvature (κ = ±1) and quantum statistics (Fermi-Dirac and Bose-Einstein, respectively). The interrelation between both concepts is established through vacuum coherent configurations of zero modes in quantum field theory on the compact O(3) and noncompact O(2; 1) (spatial) isometry subgroups of de Sitter and Anti de Sitter spaces, respectively. The high-frequency limit is retrieved as a (zero-curvature) group contraction to the Newton-Hooke (harmonic oscillator) group. We also make some comments on the physical significance of the vacuum energy density and the cosmological constant problem.

  10. Statistical Analysis of the Polarimetric Cloud Analysis and Seeding Test (POLCAST) Field Projects

    Science.gov (United States)

    Ekness, Jamie Lynn

    The North Dakota farming industry brings in more than $4.1 billion annually in cash receipts. Unfortunately, agriculture sales vary significantly from year to year, which is due in large part to weather events such as hail storms and droughts. One method to mitigate drought is to use hygroscopic seeding to increase the precipitation efficiency of clouds. The North Dakota Atmospheric Research Board (NDARB) sponsored the Polarimetric Cloud Analysis and Seeding Test (POLCAST) research project to determine the effectiveness of hygroscopic seeding in North Dakota. The POLCAST field projects obtained airborne and radar observations while conducting randomized cloud seeding. The Thunderstorm Identification Tracking and Nowcasting (TITAN) program is used to analyze radar data (33 usable cases) in determining differences in the duration of the storm, rain rate and total rain amount between seeded and non-seeded clouds. The single ratio of seeded to non-seeded cases is 1.56 (0.28 mm/0.18 mm), a 56% increase in the average hourly rainfall during the first 60 minutes after target selection. A seeding effect is indicated with the lifetime of the storms increasing by 41% between seeded and non-seeded clouds for the first 60 minutes past the seeding decision. A double ratio statistic, a comparison of radar-derived rain amount of the last 40 minutes of a case (seed/non-seed) to that of the first 20 minutes (seed/non-seed), is used to account for the natural variability of the cloud system and gives a double ratio of 1.85. The Mann-Whitney test on the double ratio of seeded to non-seeded cases (33 cases) gives a significance (p-value) of 0.063. Bootstrapping analysis of the POLCAST set indicates that 50 cases would provide statistically significant results based on the Mann-Whitney test of the double ratio. All the statistical analyses conducted on the POLCAST data set show that hygroscopic seeding in North Dakota does increase precipitation. While an additional POLCAST field
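
    The single-ratio and double-ratio constructions and the Mann-Whitney test quoted above can be sketched as follows. The rainfall arrays are invented stand-ins for the 33 TITAN-derived cases, and the split into seeded and non-seeded groups is arbitrary; the sketch only shows how the statistics would be formed.

```python
# Hedged sketch: single ratio, double ratio, and a Mann-Whitney test comparing
# seeded and non-seeded cases, mirroring the kind of analysis described above.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
# Placeholder radar-derived rain amounts (mm): first 20 min and last 40 min per case
seed_first20, seed_last40 = rng.gamma(2, 0.05, 17), rng.gamma(2, 0.12, 17)
ctrl_first20, ctrl_last40 = rng.gamma(2, 0.05, 16), rng.gamma(2, 0.07, 16)

single_ratio = seed_last40.mean() / ctrl_last40.mean()
double_ratio = (seed_last40.mean() / ctrl_last40.mean()) / \
               (seed_first20.mean() / ctrl_first20.mean())

# Per-case late/early ratios compared between groups with a Mann-Whitney test
seed_case_ratio = seed_last40 / seed_first20
ctrl_case_ratio = ctrl_last40 / ctrl_first20
stat, p = mannwhitneyu(seed_case_ratio, ctrl_case_ratio, alternative="greater")
print(f"single ratio {single_ratio:.2f}, double ratio {double_ratio:.2f}, p = {p:.3f}")
```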

  11. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment. With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the

  12. Rule-based statistical data mining agents for an e-commerce application

    Science.gov (United States)

    Qin, Yi; Zhang, Yan-Qing; King, K. N.; Sunderraman, Rajshekhar

    2003-03-01

    Intelligent data mining techniques have useful e-Business applications. Because an e-Commerce application is related to multiple domains such as statistical analysis, market competition, price comparison, profit improvement and personal preferences, this paper presents a hybrid knowledge-based e-Commerce system fusing intelligent techniques, statistical data mining, and personal information to enhance the QoS (Quality of Service) of e-Commerce. A Web-based e-Commerce application software system, eDVD Web Shopping Center, is successfully implemented using Java servlets and an Oracle8i database server. Simulation results have shown that the hybrid intelligent e-Commerce system is able to make smart decisions for different customers.

  13. Detection of significant protein coevolution.

    Science.gov (United States)

    Ochoa, David; Juan, David; Valencia, Alfonso; Pazos, Florencio

    2015-07-01

    The evolution of proteins cannot be fully understood without taking into account the coevolutionary linkages entangling them. From a practical point of view, coevolution between protein families has been used as a way of detecting protein interactions and functional relationships from genomic information. The most common approach to inferring protein coevolution involves the quantification of phylogenetic tree similarity using a family of methodologies termed mirrortree. In spite of their success, a fundamental problem of these approaches is the lack of an adequate statistical framework to assess the significance of a given coevolutionary score (tree similarity). As a consequence, a number of ad hoc filters and arbitrary thresholds are required in an attempt to obtain a final set of confident coevolutionary signals. In this work, we developed a method for associating confidence estimators (P values) to the tree-similarity scores, using a null model specifically designed for the tree comparison problem. We show how this approach largely improves the quality and coverage (number of pairs that can be evaluated) of the detected coevolution in all the stages of the mirrortree workflow, independently of the starting genomic information. This not only leads to a better understanding of protein coevolution and its biological implications, but also to obtain a highly reliable and comprehensive network of predicted interactions, as well as information on the substructure of macromolecular complexes using only genomic information. The software and datasets used in this work are freely available at: http://csbg.cnb.csic.es/pMT/. pazos@cnb.csic.es Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
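
    The mirrortree idea, scoring coevolution as the similarity of two families' inter-species distance matrices and then asking how extreme that score is under a null model, can be sketched with a simple correlation plus a label-permutation null. The authors' actual null model is constructed more carefully than this, so treat the sketch as an illustration of the concept with synthetic distance matrices.

```python
# Hedged sketch: mirrortree-style tree-similarity score (correlation of inter-species
# distance matrices) with a permutation-based p-value.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
n = 20                                            # number of shared species
base = rng.uniform(0.1, 1.0, (n, n)); base = (base + base.T) / 2; np.fill_diagonal(base, 0)
noise = rng.normal(0, 0.1, (n, n)); noise = (noise + noise.T) / 2; np.fill_diagonal(noise, 0)
dist_a, dist_b = base, np.clip(base + noise, 0, None)   # placeholder distance matrices

iu = np.triu_indices(n, k=1)
observed, _ = pearsonr(dist_a[iu], dist_b[iu])

n_perm, count = 2000, 0
for _ in range(n_perm):
    perm = rng.permutation(n)                     # relabel species in one family
    r, _ = pearsonr(dist_a[iu], dist_b[perm][:, perm][iu])
    count += r >= observed
p_value = (count + 1) / (n_perm + 1)
print(f"tree similarity r = {observed:.2f}, permutation p = {p_value:.4f}")
```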

  14. A quantitative comparison of corrective and perfective maintenance

    Science.gov (United States)

    Henry, Joel; Cain, James

    1994-01-01

    This paper presents a quantitative comparison of corrective and perfective software maintenance activities. The comparison utilizes basic data collected throughout the maintenance process. The data collected are extensive and allow the impact of both types of maintenance to be quantitatively evaluated and compared. Basic statistical techniques test relationships between and among process and product data. The results show interesting similarities and important differences in both process and product characteristics.

  15. Statistical Analysis of Radio Propagation Channel in Ruins Environment

    Directory of Open Access Journals (Sweden)

    Jiao He

    2015-01-01

    Full Text Available The cellphone based localization system for search and rescue in complex high density ruins has attracted a great interest in recent years, where the radio channel characteristics are critical for design and development of such a system. This paper presents a spatial smoothing estimation via rotational invariance technique (SS-ESPRIT for radio channel characterization of high density ruins. The radio propagations at three typical mobile communication bands (0.9, 1.8, and 2 GHz are investigated in two different scenarios. Channel parameters, such as arrival time, delays, and complex amplitudes, are statistically analyzed. Furthermore, a channel simulator is built based on these statistics. By comparison analysis of average excess delay and delay spread, the validation results show a good agreement between the measurements and channel modeling results.

  16. Calculating statistical distributions from operator relations: The statistical distributions of various intermediate statistics

    International Nuclear Information System (INIS)

    Dai, Wu-Sheng; Xie, Mi

    2013-01-01

    In this paper, we give a general discussion on the calculation of the statistical distribution from a given operator relation of creation, annihilation, and number operators. Our result shows that as long as the relation between the number operator and the creation and annihilation operators can be expressed as a†b = Λ(N) or N = Λ⁻¹(a†b), where N, a†, and b denote the number, creation, and annihilation operators, i.e., N is a function of the quadratic product of the creation and annihilation operators, the corresponding statistical distribution is the Gentile distribution, a statistical distribution in which the maximum occupation number is an arbitrary integer. As examples, we discuss the statistical distributions corresponding to various operator relations. In particular, besides the Bose–Einstein and Fermi–Dirac cases, we discuss the statistical distributions for various schemes of intermediate statistics, especially various q-deformation schemes. Our result shows that the statistical distributions corresponding to various q-deformation schemes are various Gentile distributions with different maximum occupation numbers which are determined by the deformation parameter q. This result shows that the results given in much of the literature on the q-deformation distribution are inaccurate or incomplete. -- Highlights: ► A general discussion on calculating the statistical distribution from relations of the creation, annihilation, and number operators. ► A systematic study of the statistical distributions corresponding to various q-deformation schemes. ► Arguing that many results on q-deformation distributions in the literature are inaccurate or incomplete
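
    A compact way to see the interpolation described above is the textbook mean occupation number for Gentile statistics with maximum occupation q, which reduces to Fermi-Dirac at q = 1 and approaches Bose-Einstein as q grows. The formula in the sketch is the standard expression stated from general knowledge, not taken from this paper, and the numerical value of x is an arbitrary illustration.

```python
# Hedged sketch: mean occupation number for Gentile statistics with maximum
# occupation q; q = 1 recovers Fermi-Dirac, large q approaches Bose-Einstein.
import numpy as np

def gentile_occupation(x, q):
    """Mean occupation for x = (energy - mu) / kT under Gentile statistics."""
    return 1.0 / (np.exp(x) - 1.0) - (q + 1.0) / (np.exp((q + 1.0) * x) - 1.0)

x = 0.7   # arbitrary value of (energy - mu) / kT for illustration
print("Fermi-Dirac      :", 1.0 / (np.exp(x) + 1.0))
print("Gentile, q = 1   :", gentile_occupation(x, 1))
print("Gentile, q = 100 :", gentile_occupation(x, 100))
print("Bose-Einstein    :", 1.0 / (np.exp(x) - 1.0))
```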

  17. A power comparison of generalized additive models and the spatial scan statistic in a case-control setting

    Directory of Open Access Journals (Sweden)

    Ozonoff Al

    2010-07-01

    Full Text Available Abstract Background A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. Results This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing logodds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. Conclusions The GAM

  18. A power comparison of generalized additive models and the spatial scan statistic in a case-control setting.

    Science.gov (United States)

    Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F

    2010-07-19

    A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing logodds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. The GAM permutation testing methods provide a regression

  19. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    Directory of Open Access Journals (Sweden)

    Hamid Reza Marateb

    2014-01-01

    Full Text Available Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are reviewed using several medical examples. We present two ordinal-variable clustering examples, ordinal variables being among the more challenging to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: By using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables.

  20. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    Science.gov (United States)

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are reviewed using several medical examples. We present two ordinal-variable clustering examples, ordinal variables being among the more challenging to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: By using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565

  1. Using Artificial Neural Networks in Educational Research: Some Comparisons with Linear Statistical Models.

    Science.gov (United States)

    Everson, Howard T.; And Others

    This paper explores the feasibility of neural computing methods such as artificial neural networks (ANNs) and abductory induction mechanisms (AIM) for use in educational measurement. ANN and AIM methods are contrasted with more traditional statistical techniques, such as multiple regression and discriminant function analyses, for making…

  2. Statistical energy as a tool for binning-free, multivariate goodness-of-fit tests, two-sample comparison and unfolding

    International Nuclear Information System (INIS)

    Aslan, B.; Zech, G.

    2005-01-01

    We introduce the novel concept of statistical energy as a statistical tool. We define the statistical energy of statistical distributions in a way analogous to that of electric charge distributions. Charges of opposite sign are in a state of minimum energy if they are equally distributed. This property is used to check whether two samples belong to the same parent distribution, to define goodness-of-fit tests, and to unfold distributions distorted by measurement. The approach is binning-free and especially powerful in multidimensional applications
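
    The energy analogy can be made concrete with a two-sample energy statistic: opposite "charges" placed on the two samples have minimal energy when the samples come from the same distribution, so a large statistic signals a difference. The sketch below uses the Euclidean-distance form with a permutation p-value; the authors' own formulation uses a different distance weighting, so this is an illustration of the concept rather than their exact test.

```python
# Hedged sketch: two-sample energy statistic with a permutation p-value,
# illustrating the "statistical energy" idea for binning-free sample comparison.
import numpy as np

def energy_statistic(x, y):
    """Energy distance between two multivariate samples (Euclidean-distance form)."""
    def mean_dist(a, b):
        return np.mean(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1))
    return 2 * mean_dist(x, y) - mean_dist(x, x) - mean_dist(y, y)

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, (150, 2))          # sample 1
y = rng.normal(0.3, 1.0, (120, 2))          # sample 2, slightly shifted

observed = energy_statistic(x, y)
pooled = np.vstack([x, y])
n_perm, count = 500, 0
for _ in range(n_perm):
    perm = rng.permutation(len(pooled))
    count += energy_statistic(pooled[perm[:len(x)]], pooled[perm[len(x):]]) >= observed
print(f"energy statistic = {observed:.4f}, permutation p = {(count + 1) / (n_perm + 1):.4f}")
```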

  3. An improved mixing model providing joint statistics of scalar and scalar dissipation

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Daniel W. [Department of Energy Resources Engineering, Stanford University, Stanford, CA (United States); Jenny, Patrick [Institute of Fluid Dynamics, ETH Zurich (Switzerland)

    2008-11-15

    For the calculation of nonpremixed turbulent flames with thin reaction zones the joint probability density function (PDF) of the mixture fraction and its dissipation rate plays an important role. The corresponding PDF transport equation involves a mixing model for the closure of the molecular mixing term. Here, the parameterized scalar profile (PSP) mixing model is extended to provide the required joint statistics. Model predictions are validated using direct numerical simulation (DNS) data of a passive scalar mixing in a statistically homogeneous turbulent flow. Comparisons between the DNS and the model predictions are provided, which involve different initial scalar-field lengthscales. (author)

  4. Statistical and theoretical research

    International Nuclear Information System (INIS)

    Anon.

    1983-01-01

    Significant accomplishments include the creation of field designs to detect population impacts, new census procedures for small mammals, and methods for designing studies to determine where and how much of a contaminant is present over certain landscapes. A book describing these statistical methods is currently being written and will apply to a variety of environmental contaminants, including radionuclides. PNL scientists have also devised an analytical method for predicting the success of field experiments on wild populations. Two highlights of current research are the discoveries that populations of free-roaming horse herds can double in four years and that grizzly bear populations may be substantially smaller than once thought. As stray horses become a public nuisance at DOE and other large Federal sites, it is important to determine their number. Similar statistical theory can be readily applied to other situations where wild animals are a problem of concern to other government agencies. Another book, on statistical aspects of radionuclide studies, is written specifically for researchers in radioecology

  5. Indirect treatment comparison of bevacizumab + interferon-α-2a vs tyrosine kinase inhibitors in first-line metastatic renal cell carcinoma therapy

    Directory of Open Access Journals (Sweden)

    Gerald HJ Mickisch

    2011-01-01

    +IFN. Sensitivity analyses taking into account real-life influence of patient compliance on clinical outcomes were performed. Results: The indirect efficacy comparison resulted in a statistically nonsignificant PFS difference of BEV+IFN vs SUN (HR: 1.06; 95% CI: 0.78–1.45; P = 0.73) and of BEV+IFN vs PAZ (range based on different connector trials; HR: 0.74–1.03; P = 0.34–0.92). Simulating real-life patient compliance and its effectiveness impact showed an increased tendency towards BEV+IFN without reaching statistical significance. Conclusions: There is no statistically significant PFS difference between BEV+IFN and TKIs in first-line mRCC. These findings imply that additional treatment decision criteria such as tolerability and therapy sequencing need to be considered to guide treatment decisions. Keywords: indirect treatment comparison, progression-free survival, renal cell carcinoma, bevacizumab, sunitinib, pazopanib
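
    Adjusted indirect comparisons of the kind reported above are commonly done with the Bucher method: two hazard ratios that share a common comparator arm are combined on the log scale, with their variances added. The sketch below shows that calculation with placeholder hazard ratios and confidence intervals, not the trial values used in this paper.

```python
# Hedged sketch: Bucher adjusted indirect comparison of two treatments (A and C)
# that were each compared against the same comparator B in separate trials.
import numpy as np
from scipy.stats import norm

def bucher(hr_ab, ci_ab, hr_cb, ci_cb):
    """Indirect HR of A vs C from A-vs-B and C-vs-B hazard ratios with 95% CIs."""
    log_hr = np.log(hr_ab) - np.log(hr_cb)
    se_ab = (np.log(ci_ab[1]) - np.log(ci_ab[0])) / (2 * 1.96)
    se_cb = (np.log(ci_cb[1]) - np.log(ci_cb[0])) / (2 * 1.96)
    se = np.hypot(se_ab, se_cb)
    hr = np.exp(log_hr)
    ci = (np.exp(log_hr - 1.96 * se), np.exp(log_hr + 1.96 * se))
    p = 2 * norm.sf(abs(log_hr) / se)
    return hr, ci, p

# Placeholder inputs: PFS hazard ratios versus a shared comparator arm
hr, ci, p = bucher(hr_ab=0.63, ci_ab=(0.52, 0.75), hr_cb=0.59, ci_cb=(0.48, 0.72))
print(f"indirect HR = {hr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), p = {p:.2f}")
```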

  6. Comparison of Vital Statistics Definitions of Suicide against a Coroner Reference Standard: A Population-Based Linkage Study.

    Science.gov (United States)

    Gatov, Evgenia; Kurdyak, Paul; Sinyor, Mark; Holder, Laura; Schaffer, Ayal

    2018-03-01

    We sought to determine the utility of health administrative databases for population-based suicide surveillance, as these data are generally more accessible and more integrated with other data sources compared to coroners' records. In this retrospective validation study, we identified all coroner-confirmed suicides between 2003 and 2012 in Ontario residents aged 21 and over and linked this information to Statistics Canada's vital statistics data set. We examined the overlap between the underlying cause of death field and secondary causes of death using ICD-9 and ICD-10 codes for deliberate self-harm (i.e., suicide) and examined the sociodemographic and clinical characteristics of misclassified records. Among 10,153 linked deaths, there was a very high degree of overlap between records coded as deliberate self-harm in the vital statistics data set and coroner-confirmed suicides using both ICD-9 and ICD-10 definitions (96.88% and 96.84% sensitivity, respectively). This alignment steadily increased throughout the study period (from 95.9% to 98.8%). Other vital statistics diagnoses in primary fields included uncategorised signs and symptoms. Vital statistics records that were misclassified did not differ from valid records in terms of sociodemographic characteristics but were more likely to have had an unspecified place of injury on the death certificate ( P statistics and coroner classification of suicide deaths suggests that health administrative data can reliably be used to identify suicide deaths.

  7. Whither Statistics Education Research?

    Science.gov (United States)

    Watson, Jane

    2016-01-01

    This year marks the 25th anniversary of the publication of a "National Statement on Mathematics for Australian Schools", which was the first curriculum statement this country had including "Chance and Data" as a significant component. It is hence an opportune time to survey the history of the related statistics education…

  8. A statistical comparison of cirrus particle size distributions measured using the 2-D stereo probe during the TC4, SPARTICUS, and MACPEX flight campaigns with historical cirrus datasets

    Science.gov (United States)

    Schwartz, M. Christian

    2017-08-01

    This paper addresses two straightforward questions. First, how similar are the statistics of cirrus particle size distribution (PSD) datasets collected using the Two-Dimensional Stereo (2D-S) probe to cirrus PSD datasets collected using older Particle Measuring Systems (PMS) 2-D Cloud (2DC) and 2-D Precipitation (2DP) probes? Second, how similar are the datasets when shatter-correcting post-processing is applied to the 2DC datasets? To answer these questions, a database of measured and parameterized cirrus PSDs - constructed from measurements taken during the Small Particles in Cirrus (SPARTICUS); Mid-latitude Airborne Cirrus Properties Experiment (MACPEX); and Tropical Composition, Cloud, and Climate Coupling (TC4) flight campaigns - is used.Bulk cloud quantities are computed from the 2D-S database in three ways: first, directly from the 2D-S data; second, by applying the 2D-S data to ice PSD parameterizations developed using sets of cirrus measurements collected using the older PMS probes; and third, by applying the 2D-S data to a similar parameterization developed using the 2D-S data themselves. This is done so that measurements of the same cloud volumes by parameterized versions of the 2DC and 2D-S can be compared with one another. It is thereby seen - given the same cloud field and given the same assumptions concerning ice crystal cross-sectional area, density, and radar cross section - that the parameterized 2D-S and the parameterized 2DC predict similar distributions of inferred shortwave extinction coefficient, ice water content, and 94 GHz radar reflectivity. However, the parameterization of the 2DC based on uncorrected data predicts a statistically significantly higher number of total ice crystals and a larger ratio of small ice crystals to large ice crystals than does the parameterized 2D-S. The 2DC parameterization based on shatter-corrected data also predicts statistically different numbers of ice crystals than does the parameterized 2D-S, but the

  9. A statistical comparison of cirrus particle size distributions measured using the 2-D stereo probe during the TC4, SPARTICUS, and MACPEX flight campaigns with historical cirrus datasets

    Directory of Open Access Journals (Sweden)

    M. C. Schwartz

    2017-08-01

    Full Text Available This paper addresses two straightforward questions. First, how similar are the statistics of cirrus particle size distribution (PSD datasets collected using the Two-Dimensional Stereo (2D-S probe to cirrus PSD datasets collected using older Particle Measuring Systems (PMS 2-D Cloud (2DC and 2-D Precipitation (2DP probes? Second, how similar are the datasets when shatter-correcting post-processing is applied to the 2DC datasets? To answer these questions, a database of measured and parameterized cirrus PSDs – constructed from measurements taken during the Small Particles in Cirrus (SPARTICUS; Mid-latitude Airborne Cirrus Properties Experiment (MACPEX; and Tropical Composition, Cloud, and Climate Coupling (TC4 flight campaigns – is used.Bulk cloud quantities are computed from the 2D-S database in three ways: first, directly from the 2D-S data; second, by applying the 2D-S data to ice PSD parameterizations developed using sets of cirrus measurements collected using the older PMS probes; and third, by applying the 2D-S data to a similar parameterization developed using the 2D-S data themselves. This is done so that measurements of the same cloud volumes by parameterized versions of the 2DC and 2D-S can be compared with one another. It is thereby seen – given the same cloud field and given the same assumptions concerning ice crystal cross-sectional area, density, and radar cross section – that the parameterized 2D-S and the parameterized 2DC predict similar distributions of inferred shortwave extinction coefficient, ice water content, and 94 GHz radar reflectivity. However, the parameterization of the 2DC based on uncorrected data predicts a statistically significantly higher number of total ice crystals and a larger ratio of small ice crystals to large ice crystals than does the parameterized 2D-S. The 2DC parameterization based on shatter-corrected data also predicts statistically different numbers of ice crystals than does the

  10. Statistical problems in nuclear regulation: introduction and overview

    International Nuclear Information System (INIS)

    Moore, R.H.; Easterling, R.G.

    1978-01-01

    The U.S. Nuclear Regulatory Commission (NRC) was organized formally in January 1975. The Commission's responsibilities can be categorized into four broad areas involving the licensing and use of nuclear materials and facilities: protecting public health and safety; protecting the environment; safeguarding nuclear materials and facilities; and assuring conformity with antitrust laws. A large variety of statistical problems are related to these basic responsibilities. They arise from the data-based nature of many of the issues to be resolved in making regulatory decisions. Hence, they are reflected in interactions among the NRC staff and licensees, vendors, and the public. This paper identifies and outlines some of these problems, providing a spectrum for comparison with the other presentations in this session. These problems are linked by the need for clear and objective treatment of data; their articulation and solution will benefit from insights and contributions from an informed statistical community

  11. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the
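
    As a concrete instance of one procedure the book covers, the sketch below tests a hypothesis about a binomial proportion using the normal approximation; the counts and the null proportion are made-up illustrative numbers, not examples taken from the book.

```python
from math import sqrt

from scipy.stats import norm

# Test H0: p = 0.5 (two-sided) using the normal approximation to the binomial.
# n, successes, and p0 are illustrative values only.
n, successes, p0 = 200, 118, 0.5
p_hat = successes / n
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
p_value = 2 * norm.sf(abs(z))
print(f"p_hat = {p_hat:.3f}, z = {z:.2f}, two-sided p-value = {p_value:.3f}")
```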

  12. Modeling of asphalt-rubber rotational viscosity by statistical analysis and neural networks

    Directory of Open Access Journals (Sweden)

    Luciano Pivoto Specht

    2007-03-01

    Full Text Available It is of great importance to know binders' viscosity in order to perform the handling, mixing, and application processes and the compaction of asphalt mixes in highway surfacing. This paper presents the results of viscosity measurements of asphalt-rubber binders prepared in the laboratory. The binders were prepared varying the rubber content, rubber particle size, and the duration and temperature of mixing, all following a statistical design plan. Statistical analysis and artificial neural networks were used to create mathematical models for predicting the binders' viscosity. Comparison between the experimental data and results simulated with the generated models showed better performance for the neural-network models than for the statistical models. The results indicated that rubber content and mixing duration have the greatest influence on the observed viscosity over the considered range of parameter variation.
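
    A minimal sketch of the kind of comparison reported above: fit a linear (statistical) model and a small neural network to predict viscosity from rubber content, particle size, mixing duration, and temperature, then compare their fit on held-out data. The data are synthetic and the variable ranges, units, and coefficients are assumptions, not the paper's experimental design.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the designed experiment: rubber content (%), particle size (mm),
# mixing duration (min), mixing temperature (deg C) -> rotational viscosity (cP).
X = rng.uniform([5, 0.2, 30, 150], [25, 2.0, 120, 210], size=(200, 4))
y = 800 + 120 * X[:, 0] + 300 * X[:, 1] + 2.5 * X[:, 2] - 3.0 * X[:, 3] + rng.normal(0, 80, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
# In a real comparison the target would also be standardised and hyper-parameters tuned.
neural = make_pipeline(StandardScaler(),
                       MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                    random_state=0)).fit(X_tr, y_tr)

print("linear model R^2:", round(r2_score(y_te, linear.predict(X_te)), 3))
print("neural net   R^2:", round(r2_score(y_te, neural.predict(X_te)), 3))
```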

  13. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
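
    The abstract does not list the eight estimators compared, so the sketch below simply contrasts the log-determinant of the plain sample covariance with that of one common regularised estimator (Ledoit-Wolf shrinkage) when the dimension is close to the sample size; the data are simulated and the estimator choice is an assumption, not the paper's method.

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, LedoitWolf

rng = np.random.default_rng(1)
n, p = 60, 40                                   # sample size close to the dimension
true_cov = np.diag(np.linspace(1.0, 3.0, p))    # simple diagonal "truth" for illustration
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)


def logdet(cov):
    """Log-determinant via the numerically stable slogdet."""
    return np.linalg.slogdet(cov)[1]


# The raw sample covariance is typically badly biased when p is close to n.
print("true log-determinant       :", round(logdet(true_cov), 2))
print("sample covariance estimate :", round(logdet(EmpiricalCovariance().fit(X).covariance_), 2))
print("Ledoit-Wolf estimate       :", round(logdet(LedoitWolf().fit(X).covariance_), 2))
```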

  14. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix

    KAUST Repository

    Hu, Zongliang

    2017-09-27

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  15. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    Science.gov (United States)

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.

  16. Clinical usefulness of normal data bases comparisons for the SPECT diagnosis of Alzheimer's disease

    International Nuclear Information System (INIS)

    Darcourt, J.; Koulibaly, P.M.; Migneco, O.; Dygai, I.; Robert, P.H.; Nobili, F.; Ebmeir, K.

    2002-01-01

    AUCs for the 4 readers were 0.81 for CUTS alone, 0.85 for CUTS+SPM, and 0.85 for CUTS+NGam. However, none of the 12 AUC comparisons was statistically significant. Conclusion: expert readers do not significantly increase their diagnostic performance in AD with the help of voxel-by-voxel normal database comparison.
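
    The abstract does not say how the AUC comparisons were tested, so the sketch below shows one generic way to assess whether a difference between two readers' AUCs is significant: a paired bootstrap over subjects. The labels and scores are simulated placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Hypothetical data: true disease status plus two diagnostic scores per subject
# (e.g. visual reading alone vs. reading aided by a normal-database comparison).
y = rng.integers(0, 2, size=120)
score_alone = y + rng.normal(0, 1.0, size=120)
score_aided = y + rng.normal(0, 0.9, size=120)

observed_diff = roc_auc_score(y, score_aided) - roc_auc_score(y, score_alone)

# Paired bootstrap over subjects for the AUC difference.
diffs = []
for _ in range(2000):
    idx = rng.integers(0, len(y), size=len(y))
    if y[idx].min() == y[idx].max():            # resample must contain both classes
        continue
    diffs.append(roc_auc_score(y[idx], score_aided[idx]) -
                 roc_auc_score(y[idx], score_alone[idx]))

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference = {observed_diff:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```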

  17. New scanning technique using Adaptive Statistical Iterative Reconstruction (ASIR) significantly reduced the radiation dose of cardiac CT.

    Science.gov (United States)

    Tumur, Odgerel; Soon, Kean; Brown, Fraser; Mykytowycz, Marcus

    2013-06-01

    The aims of our study were to evaluate the effect of application of Adaptive Statistical Iterative Reconstruction (ASIR) algorithm on the radiation dose of coronary computed tomography angiography (CCTA) and its effects on image quality of CCTA and to evaluate the effects of various patient and CT scanning factors on the radiation dose of CCTA. This was a retrospective study that included 347 consecutive patients who underwent CCTA at a tertiary university teaching hospital between 1 July 2009 and 20 September 2011. Analysis was performed comparing patient demographics, scan characteristics, radiation dose and image quality in two groups of patients in whom conventional Filtered Back Projection (FBP) or ASIR was used for image reconstruction. There were 238 patients in the FBP group and 109 patients in the ASIR group. There was no difference between the groups in the use of prospective gating, scan length or tube voltage. In ASIR group, significantly lower tube current was used compared with FBP group, 550 mA (450-600) vs. 650 mA (500-711.25) (median (interquartile range)), respectively, P ASIR group compared with FBP group, 4.29 mSv (2.84-6.02) vs. 5.84 mSv (3.88-8.39) (median (interquartile range)), respectively, P ASIR was associated with increased image noise compared with FBP (39.93 ± 10.22 vs. 37.63 ± 18.79 (mean ± standard deviation), respectively, P ASIR reduces the radiation dose of CCTA without affecting the image quality. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.
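
    The abstract reports doses as medians with interquartile ranges, so a nonparametric two-group test is a natural fit; the test choice and the simulated dose distributions below are assumptions for illustration, not the study's actual analysis or data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)

# Hypothetical effective-dose samples (mSv) for the two reconstruction groups;
# group sizes mirror the abstract, the distributions themselves are made up.
dose_fbp  = rng.lognormal(mean=np.log(5.8), sigma=0.4, size=238)
dose_asir = rng.lognormal(mean=np.log(4.3), sigma=0.4, size=109)


def median_iqr(x):
    """Format a sample as median (Q1-Q3)."""
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return f"{q2:.2f} ({q1:.2f}-{q3:.2f})"


stat, p = mannwhitneyu(dose_asir, dose_fbp, alternative="two-sided")
print("FBP  dose, mSv:", median_iqr(dose_fbp))
print("ASIR dose, mSv:", median_iqr(dose_asir))
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.2g}")
```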

  18. [Clinical research IV. Relevancy of the statistical test chosen].

    Science.gov (United States)

    Talavera, Juan O; Rivas-Ruiz, Rodolfo

    2011-01-01

    When we look at the difference between two therapies, or at the association of a risk factor or prognostic indicator with its outcome, we need to evaluate the accuracy of the result. This assessment is based on a judgment that uses information about the study design and the statistical management of the information. This paper specifically addresses the relevance of the statistical test selected. Statistical tests are chosen mainly on the basis of two characteristics: the objective of the study and the type of variables. The objectives can be divided into three groups of tests: a) those in which you want to show differences between groups, or within a group before and after a maneuver; b) those that seek to show the relationship (correlation) between variables; and c) those that aim to predict an outcome. The types of variables are divided into two: quantitative (continuous and discontinuous) and qualitative (ordinal and dichotomous). For example, if we seek to demonstrate differences in age (a quantitative variable) among patients with systemic lupus erythematosus (SLE) with and without neurological disease (two groups), the appropriate test is the "Student t test for independent samples." But if the comparison is about the frequency of females (a binomial variable), then the appropriate statistical test is the χ² test.
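
    The two examples in the abstract translate directly into code: an independent-samples Student t test for the age comparison and a χ² test for the frequency of females. The data below are invented for illustration; only the choice of test mirrors the text.

```python
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

rng = np.random.default_rng(4)

# Quantitative variable, two groups: age in SLE patients with / without neurological disease.
age_with_nd    = rng.normal(38, 10, size=60)
age_without_nd = rng.normal(42, 10, size=80)
t, p_t = ttest_ind(age_with_nd, age_without_nd)
print(f"Student t test: t = {t:.2f}, p = {p_t:.3f}")

# Dichotomous variable, two groups: 2x2 table of sex (female, male) by group.
table = np.array([[48, 12],    # with neurological disease
                  [60, 20]])   # without neurological disease
chi2, p_c, dof, _ = chi2_contingency(table)
print(f"chi-squared test: chi2 = {chi2:.2f}, dof = {dof}, p = {p_c:.3f}")
```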

  19. Identification and comparison of structural factors of innovation capability in ESCO with desirable status

    Directory of Open Access Journals (Sweden)

    Fatemeh Jalali

    2014-12-01

    Full Text Available The present study describes the identification and comparison of structural factors of innovation capability in Esfahan Steel Company (ESCO). Innovation is a crucial factor in the growth, success, and survival of organizations. Since innovation is not possible for organizations without an adequate level of innovation capability, and since the need for steel products and for imports from developed countries has greatly increased, this study investigates the factors involved, which may help increase production and reduce the need for imports. Evaluating ESCO's innovation capability factors against their desired status in the industry can help the company develop innovation strategies and achieve organizational goals. Statistical analysis methods and mean-comparison tests were employed, examining the structure of innovation capability through a standard questionnaire. The findings suggest that the existing innovation capability of ESCO differs significantly from the desired situation.
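
    The abstract names a mean-comparison test against a desired status without specifying it; one plausible reading is a one-sample t test of questionnaire scores against a target benchmark, sketched below with invented Likert-scale ratings and an assumed benchmark of 4.0.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(5)

# Hypothetical 1-5 Likert ratings of one structural innovation-capability factor.
ratings = rng.normal(3.3, 0.7, size=85).clip(1, 5)

# Compare the observed mean against an assumed "desired status" benchmark.
t, p = ttest_1samp(ratings, popmean=4.0)
print(f"mean = {ratings.mean():.2f}, t = {t:.2f}, p = {p:.3g}")
```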

  20. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....
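
    For reference, the kappa statistic mentioned above measures agreement between observed and predicted presence/absence beyond chance; a minimal computation with made-up data is sketched below.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(6)

# Hypothetical presence/absence observations and thresholded model predictions
# that agree on roughly 80% of sites.
observed  = rng.integers(0, 2, size=300)
predicted = np.where(rng.random(300) < 0.8, observed, 1 - observed)

print(f"Cohen's kappa = {cohen_kappa_score(observed, predicted):.2f}")
```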