WorldWideScience

Sample records for statistically significant increased

  1. Increasing the statistical significance of entanglement detection in experiments.

    Science.gov (United States)

    Jungnitsch, Bastian; Niekamp, Sönke; Kleinmann, Matthias; Gühne, Otfried; Lu, He; Gao, Wei-Bo; Chen, Yu-Ao; Chen, Zeng-Bing; Pan, Jian-Wei

    2010-05-28

    Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. Experimentally, we observe this phenomenon in a four-photon experiment, testing the Mermin and Ardehali inequalities for different levels of noise. Furthermore, we provide a way to develop entanglement tests with high statistical significance.

  2. Increasing the statistical significance of entanglement detection in experiments

    Energy Technology Data Exchange (ETDEWEB)

    Jungnitsch, Bastian; Niekamp, Soenke; Kleinmann, Matthias; Guehne, Otfried [Institut fuer Quantenoptik und Quanteninformation, Innsbruck (Austria); Lu, He; Gao, Wei-Bo; Chen, Zeng-Bing [Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei (China); Chen, Yu-Ao; Pan, Jian-Wei [Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei (China); Physikalisches Institut, Universitaet Heidelberg (Germany)

    2010-07-01

    Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. We show this to be the case for an error model in which the variance of an observable is interpreted as its error and for the standard error model in photonic experiments. Specifically, we demonstrate that the Mermin inequality yields a Bell test which is statistically more significant than the Ardehali inequality in the case of a photonic four-qubit state that is close to a GHZ state. Experimentally, we observe this phenomenon in a four-photon experiment, testing the above inequalities for different levels of noise.
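    The trade-off described above can be made concrete with a toy calculation: if a test's significance is measured in standard deviations, i.e. the violation divided by its standard error, an inequality with a smaller violation but a much smaller error can be the statistically stronger test. The numbers below are purely illustrative, not the paper's data:

```python
# Toy comparison of two hypothetical entanglement tests:
# (violation of the inequality, standard error of the violation).
tests = {
    "inequality A (large violation, noisy)": (0.40, 0.10),
    "inequality B (small violation, precise)": (0.25, 0.04),
}

# Significance measured in standard deviations: violation / standard error.
significance = {name: v / se for name, (v, se) in tests.items()}
best = max(significance, key=significance.get)
print(best)  # the smaller but more precisely measured violation wins
```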

  3. Statistical significance of cis-regulatory modules

    Directory of Open Access Journals (Sweden)

    Smith Andrew D

    2007-01-01

    Full Text Available Abstract Background It is becoming increasingly important for researchers to be able to scan through large genomic regions for transcription factor binding sites or clusters of binding sites forming cis-regulatory modules. Correspondingly, there has been a push to develop algorithms for the rapid detection and assessment of cis-regulatory modules. While various algorithms for this purpose have been introduced, most are not well suited for rapid, genome scale scanning. Results We introduce methods designed for the detection and statistical evaluation of cis-regulatory modules, modeled as either clusters of individual binding sites or as combinations of sites with constrained organization. In order to determine the statistical significance of module sites, we first need a method to determine the statistical significance of single transcription factor binding site matches. We introduce a straightforward method of estimating the statistical significance of single site matches using a database of known promoters to produce data structures that can be used to estimate p-values for binding site matches. We next introduce a technique to calculate the statistical significance of the arrangement of binding sites within a module using a max-gap model. If the module scanned for has defined organizational parameters, the probability of the module is corrected to account for organizational constraints. The statistical significance of single site matches and the architecture of sites within the module can be combined to provide an overall estimation of statistical significance of cis-regulatory module sites. Conclusion The methods introduced in this paper allow for the detection and statistical evaluation of single transcription factor binding sites and cis-regulatory modules. The features described are implemented in the Search Tool for Occurrences of Regulatory Motifs (STORM) and MODSTORM software.
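    The single-site p-value estimation described above can be sketched as an empirical tail probability over a background score distribution. The scorer and the random "promoter database" below are toy stand-ins, not the STORM implementation:

```python
import bisect
import random

random.seed(6)

def site_score(seq):
    """Toy stand-in for a PWM match score: positions agreeing with 'ACGTACGT'."""
    return sum(a == b for a, b in zip(seq, "ACGTACGT"))

# Background distribution: scores of every 8-mer in a random 'promoter database'.
promoters = "".join(random.choice("ACGT") for _ in range(50000))
background = sorted(site_score(promoters[i:i + 8]) for i in range(len(promoters) - 7))

def p_value(score):
    """Empirical P(background score >= observed), via binary search on sorted scores."""
    idx = bisect.bisect_left(background, score)
    return (len(background) - idx) / len(background)

print(p_value(site_score("ACGTACGT")))  # a perfect match is rare in the background
```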

  4. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Science.gov (United States)

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  5. Swiss solar power statistics 2007 - Significant expansion

    International Nuclear Information System (INIS)

    Hostettler, T.

    2008-01-01

    This article presents and discusses the 2007 statistics for solar power in Switzerland. A significant number of new installations is noted, as are the high production figures from newer installations. The basics behind the compilation of the Swiss solar power statistics are briefly reviewed and an overview for the period 1989 to 2007 is presented which includes figures on the number of photovoltaic plants in service and installed peak power. Typical production figures in kilowatt-hours (kWh) per installed kilowatt-peak power (kWp) are presented and discussed for installations of various sizes. Increased production after inverter replacement in older installations is noted. Finally, the general political situation in Switzerland as far as solar power is concerned is briefly discussed, as are international developments.

  6. The thresholds for statistical and clinical significance

    DEFF Research Database (Denmark)

    Jakobsen, Janus Christian; Gluud, Christian; Winkel, Per

    2014-01-01

    BACKGROUND: Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore ... of the probability that a given trial result is compatible with a 'null' effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance ...

  7. The insignificance of statistical significance testing

    Science.gov (United States)

    Johnson, Douglas H.

    1999-01-01

    Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.

  8. Significance levels for studies with correlated test statistics.

    Science.gov (United States)

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
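    A minimal sketch of the permutation-based global test the abstract discusses: the sample units are permuted to simulate the null distribution of the largest-magnitude statistic. The data and the shared-factor correlation structure are illustrative only, and the conditioning step proposed by the authors is not implemented here:

```python
import numpy as np

rng = np.random.default_rng(0)

# 40 sample units, 200 outcome variables; a shared per-unit factor makes the
# per-variable test statistics correlated, as in the setting discussed above.
n, m = 40, 200
x = rng.normal(size=(n, m)) + rng.normal(size=(n, 1))
labels = np.repeat([0, 1], n // 2)  # two groups; the global null is true here

def max_abs_t(data, lab):
    """Largest-magnitude two-sample t statistic across all m variables."""
    a, b = data[lab == 0], data[lab == 1]
    t = (a.mean(0) - b.mean(0)) / np.sqrt(
        a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    return np.abs(t).max()

observed = max_abs_t(x, labels)

# Permute the sample units to simulate the null distribution of the max statistic.
perm = np.array([max_abs_t(x, rng.permutation(labels)) for _ in range(500)])
p_global = (1 + (perm >= observed).sum()) / (1 + len(perm))
print(p_global)
```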

  9. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

    This article raises concerns about the advantages of using statistical significance tests in research assessments, as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice ... We argue that applying statistical significance tests and mechanically adhering to their results is highly problematic and detrimental to critical thinking. We claim that the use of such tests does not provide any advantages in relation to deciding whether differences between citation indicators ...

  10. Statistical significance of epidemiological data. Seminar: Evaluation of epidemiological studies

    International Nuclear Information System (INIS)

    Weber, K.H.

    1993-01-01

    In stochastic damages, the numbers of events, e.g. the persons who are affected by or have died of cancer, and thus the relative frequencies (incidence or mortality) are binomially distributed random variables. Their statistical fluctuations can be characterized by confidence intervals. For epidemiologic questions, especially for the analysis of stochastic damages in the low dose range, the following issues are interesting: - Is a sample (a group of persons) with a definite observed damage frequency part of the whole population? - Is an observed frequency difference between two groups of persons random or statistically significant? - Is an observed increase or decrease of the frequencies with increasing dose random or statistically significant, and how large is the regression coefficient (= risk coefficient) in this case? These problems can be solved by statistical tests. So-called distribution-free tests and tests which are not bound to the supposition of a normal distribution are of particular interest, such as: - χ²-independence test (test in contingency tables); - Fisher-Yates test; - trend test according to Cochran; - rank correlation test given by Spearman. These tests are explained in terms of selected epidemiologic data, e.g. of leukaemia clusters, of the cancer mortality of the Japanese A-bomb survivors, especially in the low dose range, as well as on the sample of the cancer mortality in the high background area in Yangjiang (China). (orig.)
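    The tests listed above are available in standard statistical libraries. A hedged sketch with hypothetical epidemiologic counts follows; scipy's `spearmanr` stands in for a generic rank-correlation trend test, and the dose-group data are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 contingency table: cancer cases vs. non-cases
# in an exposed and an unexposed group of 500 persons each.
table = np.array([[12, 488],
                  [ 5, 495]])

chi2, p_chi2, dof, _ = stats.chi2_contingency(table)  # chi2-independence test
_, p_fisher = stats.fisher_exact(table)               # exact test for small counts

# Rank-correlation trend test: does incidence rise monotonically with dose group?
dose = [0, 1, 2, 3]
cases = [4, 6, 9, 14]
rho, p_trend = stats.spearmanr(dose, cases)

print(p_chi2, p_fisher, rho)
```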

  11. Statistically significant relational data mining

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  12. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

  13. Health significance and statistical uncertainty. The value of P-value.

    Science.gov (United States)

    Consonni, Dario; Bertazzi, Pier Alberto

    2017-10-27

    The P-value is widely used as a summary statistic of scientific results. Unfortunately, there is a widespread tendency to dichotomize its value into "P<0.05" ("statistically significant") and "P>0.05" ("statistically not significant"), with the former implying a "positive" result and the latter a "negative" one. To show the unsuitability of such an approach when evaluating the effects of environmental and occupational risk factors, we provide examples of distorted use of the P-value and of the negative consequences for science and public health of such a black-and-white vision. The rigid interpretation of the P-value as a dichotomy favors the confusion between health relevance and statistical significance, discourages thoughtful thinking, and distracts attention from what really matters, the health significance. A much better way to express and communicate scientific results involves reporting effect estimates (e.g., risks, risk ratios or risk differences) and their confidence intervals (CI), which summarize and convey both health significance and statistical uncertainty. Unfortunately, many researchers do not usually consider the whole interval of the CI but only examine whether it includes the null value, therefore degrading this procedure to the same P-value dichotomy (statistical significance or not). When reporting the statistical results of scientific research, present effect estimates with their confidence intervals and do not qualify the P-value as "significant" or "not significant".
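    The recommended reporting style, an effect estimate with its confidence interval rather than a significance verdict, can be sketched with a standard large-sample Wald interval for a risk ratio. The cohort counts below are hypothetical:

```python
import math

# Hypothetical cohort: 30/1000 cases among the exposed, 15/1000 among the unexposed.
a, n1 = 30, 1000  # exposed: cases, total
b, n0 = 15, 1000  # unexposed: cases, total

rr = (a / n1) / (b / n0)  # risk ratio

# Standard large-sample (Wald) 95% CI on the log scale.
se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

# Report the estimate and interval instead of a significant/not-significant verdict.
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```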

  14. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper PMID:25878958

  15. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Full Text Available Abstract Background In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
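    The Monte Carlo Z-score evaluated above can be sketched as follows: score the observed pair, rescore against shuffled versions of one sequence, and standardize. A toy identity score stands in for the Smith-Waterman score, and the shuffling scheme is illustrative:

```python
import random

random.seed(3)

def score(a, b):
    """Toy ungapped identity score, standing in for a Smith-Waterman score."""
    return sum(x == y for x, y in zip(a, b))

def z_score(a, b, shuffles=200):
    """Monte Carlo Z-score: observed score vs. scores against shuffled sequences."""
    s_obs = score(a, b)
    null = [score(a, "".join(random.sample(b, len(b)))) for _ in range(shuffles)]
    mu = sum(null) / len(null)
    sd = (sum((s - mu) ** 2 for s in null) / (len(null) - 1)) ** 0.5
    return (s_obs - mu) / sd

seq = "ACDEFGHIKL" * 5
print(z_score(seq, seq))  # identical sequences land far out in the null tail
```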

  16. Publication of statistically significant research findings in prosthodontics & implant dentistry in the context of other dental specialties.

    Science.gov (United States)

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-10-01

    To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with similar studies in dentistry using random-effects meta-analyses. Of the 2323 included studies, 71% reported statistically significant results, with the significant results ranging from 47% to 86%. Multivariable modeling identified geographical area and involvement of a statistician as predictors of statistically significant results. Compared to interventional studies, the odds that in vitro and observational studies would report statistically significant results were increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared to various other study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower compared to other dental specialties, but this difference did not reach statistical significance (P>0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Test for the statistical significance of differences between ROC curves

    International Nuclear Information System (INIS)

    Metz, C.E.; Kronman, H.B.

    1979-01-01

    A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions
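    The two-degree-of-freedom Chi-square statistic described above is a quadratic form in the difference between the two fitted parameter vectors; for independent curves, the covariance of the difference is the sum of the per-curve covariances. The binormal parameter values and covariance matrices below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical binormal ROC parameters (a, b) fitted to two reading conditions,
# with covariance matrices of the maximum likelihood estimates.
p1 = np.array([1.20, 0.90])
cov1 = np.array([[0.020, 0.004],
                 [0.004, 0.015]])
p2 = np.array([0.95, 1.05])
cov2 = np.array([[0.025, 0.005],
                 [0.005, 0.018]])

# Quadratic form in the parameter differences; the curves are assumed to be
# estimated from independent data, so the covariances add.
d = p1 - p2
chi2 = d @ np.linalg.inv(cov1 + cov2) @ d
p = stats.chi2.sf(chi2, df=2)  # approximate, as the abstract cautions
print(chi2, p)
```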

  18. Examining reproducibility in psychology : A hybrid method for combining a statistically significant original study and a replication

    NARCIS (Netherlands)

    Van Aert, R.C.M.; Van Assen, M.A.L.M.

    2018-01-01

    The unrealistically high rate of positive results within psychology has increased the attention to replication research. However, researchers who conduct a replication and want to statistically combine the results of their replication with a statistically significant original study encounter

  19. On detection and assessment of statistical significance of Genomic Islands

    Directory of Open Access Journals (Sweden)

    Chaudhuri Probal

    2008-04-01

    Full Text Available Abstract Background Many of the available methods for detecting Genomic Islands (GIs in prokaryotic genomes use markers such as transposons, proximal tRNAs, flanking repeats etc., or they use other supervised techniques requiring training datasets. Most of these methods are primarily based on the biases in GC content or codon and amino acid usage of the islands. However, these methods either do not use any formal statistical test of significance or use statistical tests for which the critical values and the P-values are not adequately justified. We propose a method, which is unsupervised in nature and uses Monte-Carlo statistical tests based on randomly selected segments of a chromosome. Such tests are supported by precise statistical distribution theory, and consequently, the resulting P-values are quite reliable for making the decision. Results Our algorithm (named Design-Island, an acronym for Detection of Statistically Significant Genomic Island) runs in two phases. Some 'putative GIs' are identified in the first phase, and those are refined into smaller segments containing horizontally acquired genes in the refinement phase. This method is applied to the Salmonella typhi CT18 genome leading to the discovery of several new pathogenicity, antibiotic resistance and metabolic islands that were missed by earlier methods. Many of these islands contain mobile genetic elements like phage-mediated genes, transposons, integrase and IS elements confirming their horizontal acquisition. Conclusion The proposed method is based on statistical tests supported by precise distribution theory and reliable P-values along with a technique for visualizing statistically significant islands. The performance of our method is better than many other well known methods in terms of their sensitivity and accuracy, and in terms of specificity, it is comparable to other methods.
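    The core Monte Carlo idea, comparing a candidate window against randomly selected segments of the same chromosome, can be sketched with GC content as the only feature. The toy chromosome and the p-value construction are illustrative, not the Design-Island algorithm:

```python
import random

random.seed(5)

def gc(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

# Toy chromosome: AT-rich background with a GC-rich insert standing in for a
# horizontally acquired segment.
background = "".join(random.choice("ATATGC") for _ in range(20000))
island = "".join(random.choice("GGCCGA") for _ in range(2000))
chromosome = background[:10000] + island + background[10000:]

window = chromosome[10000:12000]  # the segment under test
obs = gc(window)

# Monte Carlo null: GC content of randomly selected segments of the same length.
null = []
for _ in range(500):
    start = random.randrange(len(chromosome) - 2000)
    null.append(gc(chromosome[start:start + 2000]))

p = (1 + sum(g >= obs for g in null)) / (1 + len(null))
print(obs, p)
```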

  20. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    Science.gov (United States)

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  21. A tutorial on hunting statistical significance by chasing N

    Directory of Open Access Journals (Sweden)

    Denes Szucs

    2016-09-01

    Full Text Available There is increasing concern about the replicability of studies in psychology and cognitive neuroscience. Hidden data dredging (also called p-hacking) is a major contributor to this crisis because it substantially increases Type I error, resulting in a much larger proportion of false positive findings than the usually expected 5%. In order to build better intuition to avoid, detect and criticise some typical problems, here I systematically illustrate the large impact of some easy-to-implement, and therefore perhaps frequent, data dredging techniques on boosting false positive findings. I illustrate several forms of two special cases of data dredging. First, researchers may violate the data collection stopping rules of null hypothesis significance testing by repeatedly checking for statistical significance with various numbers of participants. Second, researchers may group participants post-hoc along potential but unplanned independent grouping variables. The first approach 'hacks' the number of participants in studies; the second approach 'hacks' the number of variables in the analysis. I demonstrate the high amount of false positive findings generated by these techniques with data from true null distributions. I also illustrate that it is extremely easy to introduce strong bias into data by very mild selection and re-testing. Similar, usually undocumented data dredging steps can easily lead to having 20-50% or more false positives.
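    The first dredging technique, repeatedly checking for significance while data accumulate, is easy to simulate: under a true null the nominal 5% error rate inflates substantially. This sketch uses an illustrative peek schedule, not the tutorial's actual simulations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def peeks_until_significant(max_n=100, start_n=10, step=5, alpha=0.05):
    """One 'study' under a true null: test repeatedly as participants accumulate."""
    x = rng.normal(size=max_n)  # one-sample data with true mean 0
    for n in range(start_n, max_n + 1, step):
        if stats.ttest_1samp(x[:n], 0).pvalue < alpha:
            return True  # stop collecting and declare 'significance'
    return False

false_pos = sum(peeks_until_significant() for _ in range(1000)) / 1000
print(false_pos)  # well above the nominal 5%
```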

  22. Statistical Significance for Hierarchical Clustering

    Science.gov (United States)

    Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.

    2017-01-01

    Summary Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990
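    A much-simplified sketch of Monte Carlo significance testing for clustering (one cut only, without the sequential tree-wise procedure and family-wise error control proposed above): an observed cluster-strength index is compared against the same index computed on unimodal Gaussian data fitted to the sample. All parameters are illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(4)

def cluster_index(data):
    """Within-cluster over total sum of squares for a two-cluster Ward cut."""
    lab = fcluster(linkage(data, method="ward"), t=2, criterion="maxclust")
    wss = sum(((data[lab == k] - data[lab == k].mean(0)) ** 2).sum() for k in (1, 2))
    tss = ((data - data.mean(0)) ** 2).sum()
    return wss / tss  # small values indicate strong two-cluster structure

# Sample with two genuine clusters.
x = np.vstack([rng.normal(0.0, 1.0, size=(30, 5)),
               rng.normal(4.0, 1.0, size=(30, 5))])
obs = cluster_index(x)

# Monte Carlo null: the same index on unimodal Gaussian data fitted to the sample.
null = [cluster_index(rng.multivariate_normal(x.mean(0), np.cov(x.T), size=60))
        for _ in range(100)]
p = (1 + sum(v <= obs for v in null)) / (1 + len(null))
print(obs, p)
```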

  23. Statistical significance of trends in monthly heavy precipitation over the US

    KAUST Repository

    Mahajan, Salil

    2011-05-11

    Trends in monthly heavy precipitation, defined by a return period of one year, are assessed for statistical significance in observations and Global Climate Model (GCM) simulations over the contiguous United States using Monte Carlo non-parametric and parametric bootstrapping techniques. The results from the two Monte Carlo approaches are found to be similar to each other, and also to the traditional non-parametric Kendall's τ test, implying the robustness of the approach. Two different observational data-sets are employed to test for trends in monthly heavy precipitation and are found to exhibit consistent results. Both data-sets demonstrate upward trends, one of which is found to be statistically significant at the 95% confidence level. Upward trends similar to observations are observed in some climate model simulations of the twentieth century, but their statistical significance is marginal. For projections of the twenty-first century, a statistically significant upward trend is observed in most of the climate models analyzed. The change in the simulated precipitation variance appears to be more important in the twenty-first century projections than changes in the mean precipitation. Stochastic fluctuations of the climate system are found to dominate monthly heavy precipitation, as some GCM simulations show a downward trend even in the twenty-first century projections when the greenhouse gas forcings are strong. © 2011 Springer-Verlag.
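    A hedged sketch of the Monte Carlo approach: the observed trend statistic is compared against a null distribution generated by resampling the series (a permutation variant; the paper's parametric and non-parametric bootstraps differ in detail), alongside the traditional Kendall's τ check. The synthetic series is illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic 'monthly heavy precipitation' series with an imposed upward trend.
t = np.arange(120)
y = 50 + 0.05 * t + rng.gamma(shape=2.0, scale=5.0, size=t.size)

slope_obs = stats.linregress(t, y).slope

# Monte Carlo null: shuffle the series so any apparent trend is due to chance.
boot = np.array([stats.linregress(t, rng.permutation(y)).slope for _ in range(999)])
p_mc = (1 + (boot >= slope_obs).sum()) / (1 + len(boot))

# Traditional non-parametric check: Kendall's tau test for a monotone trend.
tau, p_kendall = stats.kendalltau(t, y)
print(p_mc, p_kendall)
```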

  24. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance.

    Science.gov (United States)

    Kramer, Karen L; Veile, Amanda; Otárola-Castillo, Erik

    2016-01-01

    Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children's growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children's monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children's growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children's growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children's growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance.

  5. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance.

    Directory of Open Access Journals (Sweden)

    Karen L Kramer

    Full Text Available Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children's growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children's monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children's growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children's growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children's growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance.

  6. Reporting effect sizes as a supplement to statistical significance ...

    African Journals Online (AJOL)

    The purpose of the article is to review the statistical significance reporting practices in reading instruction studies and to provide guidelines for when to calculate and report effect sizes in educational research. A review of six readily accessible (online) and accredited journals publishing research on reading instruction ...

  7. Your Chi-Square Test Is Statistically Significant: Now What?

    Science.gov (United States)

    Sharpe, Donald

    2015-01-01

    Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…

  8. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

    Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST), or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant, and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.
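
    The contrast at the heart of this study, two similar results of which only one reaches significance, is easy to reproduce. Below is a minimal sketch in Python (the means, standard deviations, and sample sizes are fabricated for illustration and are not taken from the surveyed studies):

```python
import math

def ci95(mean, sd, n):
    """Approximate 95% confidence interval for a mean (normal critical value)."""
    half = 1.96 * sd / math.sqrt(n)
    return mean - half, mean + half

# Two fictitious studies estimating the same effect:
ci_a = ci95(4.0, 10.0, 40)  # CI excludes 0: statistically significant
ci_b = ci95(3.5, 10.0, 30)  # CI includes 0: non-significant
print(ci_a, ci_b)
```

    Judged by significance status alone, the studies appear to conflict; judged by their heavily overlapping intervals, they are consistent, which is the interpretation the CI-only respondents tended to reach.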

  9. Statistical significance versus clinical relevance.

    Science.gov (United States)

    van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G

    2017-04-01

    In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
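
    The repeated-sampling interpretation above can be checked numerically. In the minimal Python sketch below (the two-sample z-test, sample size, and trial count are illustrative assumptions, not from the article), the null hypothesis is true in every trial, so P < 0.05 should occur in roughly 5% of repeated studies:

```python
import math
import random

def two_sample_z_pvalue(x, y, sigma=1.0):
    """Two-sided P-value for a two-sample z-test with known sigma."""
    nx, ny = len(x), len(y)
    z = (sum(x) / nx - sum(y) / ny) / (sigma * math.sqrt(1 / nx + 1 / ny))
    # Two-sided tail probability of the standard normal: P(|Z| >= |z|).
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
trials, n, alpha = 4000, 30, 0.05
rejections = 0
for _ in range(trials):
    # The null hypothesis is true: both samples share the same mean.
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]
    if two_sample_z_pvalue(x, y) < alpha:
        rejections += 1

rate = rejections / trials
print(round(rate, 3))  # close to alpha = 0.05
```

    The rejection rate tracks alpha, not the truth of any particular hypothesis: exactly the point the ASA statement makes about what a P-value does and does not tell us.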

  10. Turking Statistics: Student-Generated Surveys Increase Student Engagement and Performance

    Science.gov (United States)

    Whitley, Cameron T.; Dietz, Thomas

    2018-01-01

    Thirty years ago, Hubert M. Blalock Jr. published an article in "Teaching Sociology" about the importance of teaching statistics. We honor Blalock's legacy by assessing how using Amazon Mechanical Turk (MTurk) in statistics classes can enhance student learning and increase statistical literacy among social science graduate students. In…

  11. Statistical Significance and Effect Size: Two Sides of a Coin.

    Science.gov (United States)

    Fan, Xitao

    This paper suggests that statistical significance testing and effect size are two sides of the same coin; they complement each other, but do not substitute for one another. Good research practice requires that both should be taken into consideration to make sound quantitative decisions. A Monte Carlo simulation experiment was conducted, and a…

  12. Significant Statistics: Viewed with a Contextual Lens

    Science.gov (United States)

    Tait-McCutcheon, Sandi

    2010-01-01

    This paper examines the pedagogical and organisational changes three lead teachers made to their statistics teaching and learning programs. The lead teachers posed the research question: What would the effect of contextually integrating statistical investigations and literacies into other curriculum areas be on student achievement? By finding the…

  13. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  14. Statistical vs. Economic Significance in Economics and Econometrics: Further comments on McCloskey & Ziliak

    DEFF Research Database (Denmark)

    Engsted, Tom

    I comment on the controversy between McCloskey & Ziliak and Hoover & Siegler on statistical versus economic significance, in the March 2008 issue of the Journal of Economic Methodology. I argue that while McCloskey & Ziliak are right in emphasizing 'real error', i.e. non-sampling error that cannot...... be eliminated through specification testing, they fail to acknowledge those areas in economics, e.g. rational expectations macroeconomics and asset pricing, where researchers clearly distinguish between statistical and economic significance and where statistical testing plays a relatively minor role in model...

  15. Distinguishing between statistical significance and practical/clinical meaningfulness using statistical inference.

    Science.gov (United States)

    Wilkinson, Michael

    2014-03-01

    Decisions about support for predictions of theories in light of data are made using statistical inference. The dominant approach in sport and exercise science is the Neyman-Pearson (N-P) significance-testing approach. When applied correctly it provides a reliable procedure for making dichotomous decisions for accepting or rejecting zero-effect null hypotheses with known and controlled long-run error rates. Type I and type II error rates must be specified in advance and the latter controlled by conducting an a priori sample size calculation. The N-P approach does not provide the probability of hypotheses or indicate the strength of support for hypotheses in light of data, yet many scientists believe it does. Outcomes of analyses allow conclusions only about the existence of non-zero effects, and provide no information about the likely size of true effects or their practical/clinical value. Bayesian inference can show how much support data provide for different hypotheses, and how personal convictions should be altered in light of data, but the approach is complicated by formulating probability distributions about prior subjective estimates of population effects. A pragmatic solution is magnitude-based inference, which allows scientists to estimate the true magnitude of population effects and how likely they are to exceed an effect magnitude of practical/clinical importance, thereby integrating elements of subjective Bayesian-style thinking. While this approach is gaining acceptance, progress might be hastened if scientists appreciate the shortcomings of traditional N-P null hypothesis significance testing.

  16. Codon Deviation Coefficient: a novel measure for estimating codon usage bias and its statistical significance

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2012-03-01

    Full Text Available Abstract Background Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB. Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results Here we propose a novel measure--Codon Deviation Coefficient (CDC--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts the bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.

  17. Statistics Refresher for Molecular Imaging Technologists, Part 2: Accuracy of Interpretation, Significance, and Variance.

    Science.gov (United States)

    Farrell, Mary Beth

    2018-06-01

    This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic is, the higher is the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability ( P ) value. Calculation of P value is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around a mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs. Confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being
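
    The chance-corrected agreement the article describes can be illustrated with a short sketch in Python (the 2 x 2 interreader table below is hypothetical, not data from the article):

```python
def cohen_kappa(table):
    """Cohen's kappa for a square agreement table.

    table[i][j] = number of cases reader A rated category i
    and reader B rated category j.
    """
    total = sum(sum(row) for row in table)
    # Observed agreement: proportion of cases on the diagonal.
    p_o = sum(table[i][i] for i in range(len(table))) / total
    # Agreement expected by chance, from the marginal totals.
    p_e = sum(
        (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
        for i in range(len(table))
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical interreader table: rows = reader A, columns = reader B.
table = [[45, 5],
         [10, 40]]
kappa = cohen_kappa(table)
print(round(kappa, 3))  # 0.7
```

    Here observed agreement is 0.85 while chance agreement is 0.50, giving κ = 0.7; quoting the raw 85% alone would overstate the agreement, which is why the article prefers the κ-statistic.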

  18. Systematic reviews of anesthesiologic interventions reported as statistically significant

    DEFF Research Database (Denmark)

    Imberger, Georgina; Gluud, Christian; Boylan, John

    2015-01-01

    statistically significant meta-analyses of anesthesiologic interventions, we used TSA to estimate power and imprecision in the context of sparse data and repeated updates. METHODS: We conducted a search to identify all systematic reviews with meta-analyses that investigated an intervention that may......: From 11,870 titles, we found 682 systematic reviews that investigated anesthesiologic interventions. In the 50 sampled meta-analyses, the median number of trials included was 8 (interquartile range [IQR], 5-14), the median number of participants was 964 (IQR, 523-1736), and the median number...

  19. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
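
    A minimal Python sketch of the proposed procedure, using the Euclidean distance between normalized histograms as the test statistic and a pooled-resampling bootstrap (the samples and bin edges are invented for illustration; the paper's cloud-object data are not reproduced here):

```python
import math
import random

def histogram(values, edges):
    """Normalized histogram (relative frequencies) over fixed bin edges."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(counts)):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    n = len(values)
    return [c / n for c in counts]

def euclidean(h1, h2):
    """Euclidean distance between two histograms (one of the three statistics)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def bootstrap_pvalue(sample1, sample2, edges, n_boot=2000, seed=7):
    """Significance level for the null that both samples share one distribution."""
    rng = random.Random(seed)
    observed = euclidean(histogram(sample1, edges), histogram(sample2, edges))
    pooled = sample1 + sample2
    n1 = len(sample1)
    exceed = 0
    for _ in range(n_boot):
        # Resample both groups from the pooled data, as the null implies.
        b = [rng.choice(pooled) for _ in range(len(pooled))]
        if euclidean(histogram(b[:n1], edges), histogram(b[n1:], edges)) >= observed:
            exceed += 1
    return exceed / n_boot

rng = random.Random(0)
edges = [0, 1, 2, 3, 4, 5]
uniform_a = [rng.uniform(0, 5) for _ in range(200)]
uniform_b = [rng.uniform(0, 5) for _ in range(200)]
skewed = [2.2 * math.sqrt(rng.uniform(0, 5)) for _ in range(200)]

p_same = bootstrap_pvalue(uniform_a, uniform_b, edges)
p_diff = bootstrap_pvalue(uniform_a, skewed, edges)
print(p_same, p_diff)  # p_diff should be near zero
```

    As in the paper, no distributional form is assumed: the null reference distribution of the distance statistic is built entirely by resampling, and swapping in a different distance statistic only requires replacing `euclidean`.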

  20. P-Value, a true test of statistical significance? a cautionary note ...

    African Journals Online (AJOL)

    While it's not the intention of the founders of significance testing and hypothesis testing to have the two ideas intertwined as if they are complementary, the inconvenient marriage of the two practices into one coherent, convenient, incontrovertible and misinterpreted practice has dotted our standard statistics textbooks and ...

  1. Codon Deviation Coefficient: A novel measure for estimating codon usage bias and its statistical significance

    KAUST Repository

    Zhang, Zhang

    2012-03-22

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis.Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts the bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance.Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. 2012 Zhang et al; licensee BioMed Central Ltd.

  2. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…

  3. Measuring individual significant change on the Beck Depression Inventory-II through IRT-based statistics.

    NARCIS (Netherlands)

    Brouwer, D.; Meijer, R.R.; Zevalkink, D.J.

    2013-01-01

    Several researchers have emphasized that item response theory (IRT)-based methods should be preferred over classical approaches in measuring change for individual patients. In the present study we discuss and evaluate the use of IRT-based statistics to measure statistically significant individual

  4. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    Science.gov (United States)

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  5. Thresholds for statistical and clinical significance in systematic reviews with meta-analytic methods

    DEFF Research Database (Denmark)

    Jakobsen, Janus Christian; Wetterslev, Jorn; Winkel, Per

    2014-01-01

    BACKGROUND: Thresholds for statistical significance when assessing meta-analysis results are being insufficiently demonstrated by traditional 95% confidence intervals and P-values. Assessment of intervention effects in systematic reviews with meta-analysis deserves greater rigour. METHODS......: Methodologies for assessing statistical and clinical significance of intervention effects in systematic reviews were considered. Balancing simplicity and comprehensiveness, an operational procedure was developed, based mainly on The Cochrane Collaboration methodology and the Grading of Recommendations...... Assessment, Development, and Evaluation (GRADE) guidelines. RESULTS: We propose an eight-step procedure for better validation of meta-analytic results in systematic reviews (1) Obtain the 95% confidence intervals and the P-values from both fixed-effect and random-effects meta-analyses and report the most...
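
    Step (1) of the proposed procedure, obtaining both fixed-effect and random-effects estimates with their confidence intervals, can be sketched in Python using inverse-variance weighting and the DerSimonian-Laird estimate of between-trial variance (the effect sizes and standard errors below are invented):

```python
import math

def fixed_effect(effects, ses):
    """Inverse-variance fixed-effect pooled estimate and its 95% CI."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

def random_effects(effects, ses):
    """DerSimonian-Laird random-effects pooled estimate and its 95% CI."""
    w = [1 / se ** 2 for se in ses]
    fixed, _ = fixed_effect(effects, ses)
    # Cochran's Q and the between-trial variance tau^2.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1 / (se ** 2 + tau2) for se in ses]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical trials: log odds ratios with their standard errors.
effects = [-0.5, -0.2, -0.8, 0.1, -0.4]
ses = [0.25, 0.30, 0.20, 0.35, 0.28]
fe, fe_ci = fixed_effect(effects, ses)
re, re_ci = random_effects(effects, ses)
print(round(fe, 3), round(re, 3))
```

    The random-effects interval is never narrower than the fixed-effect one, since the between-trial variance τ² ≥ 0 only adds uncertainty; reporting the more conservative of the two is in the spirit of the eight-step procedure.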

  6. Gas revenue increasingly significant

    International Nuclear Information System (INIS)

    Megill, R.E.

    1991-01-01

    This paper briefly describes the wellhead prices of natural gas compared to crude oil over the past 70 years. Although natural gas prices have never reached price parity with crude oil, the relative value of a gas BTU has been increasing. It is one of the reasons that the total amount of money coming from natural gas wells is becoming more significant. From 1920 to 1955 the revenue at the wellhead for natural gas was only about 10% of the money received by producers. Most of the money needed for exploration, development, and production came from crude oil. At present, however, over 40% of the money from the upstream portion of the petroleum industry is from natural gas. As a result, in a few short years natural gas may become 50% of the money revenues generated from wellhead production facilities

  7. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry.

    Science.gov (United States)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo

    2018-06-05

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  8. Intensive inpatient treatment for bulimia nervosa: Statistical and clinical significance of symptom changes.

    Science.gov (United States)

    Diedrich, Alice; Schlegl, Sandra; Greetfeld, Martin; Fumi, Markus; Voderholzer, Ulrich

    2018-03-01

    This study examines the statistical and clinical significance of symptom changes during an intensive inpatient treatment program with a strong psychotherapeutic focus for individuals with severe bulimia nervosa. 295 consecutively admitted bulimic patients were administered the Structured Interview for Anorexic and Bulimic Syndromes-Self-Rating (SIAB-S), the Eating Disorder Inventory-2 (EDI-2), the Brief Symptom Inventory (BSI), and the Beck Depression Inventory-II (BDI-II) at treatment intake and discharge. Results indicated statistically significant symptom reductions with large effect sizes regarding severity of binge eating and compensatory behavior (SIAB-S), overall eating disorder symptom severity (EDI-2), overall psychopathology (BSI), and depressive symptom severity (BDI-II) even when controlling for antidepressant medication. The majority of patients showed either reliable (EDI-2: 33.7%, BSI: 34.8%, BDI-II: 18.1%) or even clinically significant symptom changes (EDI-2: 43.2%, BSI: 33.9%, BDI-II: 56.9%). Patients with clinically significant improvement were less distressed at intake and less likely to suffer from a comorbid borderline personality disorder when compared with those who did not improve to a clinically significant extent. Findings indicate that intensive psychotherapeutic inpatient treatment may be effective in about 75% of severely affected bulimic patients. For the remaining non-responding patients, inpatient treatment might be improved through an even stronger focus on the reduction of comorbid borderline personality traits.

  9. Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.

    Science.gov (United States)

    Deegear, James

    This paper summarizes the literature regarding statistical significant testing with an emphasis on recent literature in various discipline and literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…

  10. Cloud-based solution to identify statistically significant MS peaks differentiating sample categories.

    Science.gov (United States)

    Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B

    2013-03-23

    Mass spectrometry (MS) has evolved to become the primary high throughput tool for proteomics based biomarker discovery. Until now, multiple challenges in protein MS data analysis remain: large-scale and complex data set management; MS peak identification, indexing; and high dimensional peak differential analysis with the concurrent statistical tests based false discovery rate (FDR). "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets to identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution, which provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. Presented web application supplies large scale MS data online uploading and analysis with a simple user interface. This bioinformatic tool will facilitate the discovery of the potential protein biomarkers using MS.

  11. Application of Statistical Increase in Industrial Quality

    International Nuclear Information System (INIS)

    Akhmad-Fauzy

    2000-01-01

    Application of statistical methods in the industrial field is relatively new compared with agriculture and biology. Statistical methods applied in the industrial field focus on industrial system control and are useful for maintaining economical control of product quality produced on a large scale. Application of statistical methods in the industrial field has increased rapidly. This fact is supported by the release of the ISO 9000 quality system in 1987 as an international quality standard, which has been adopted by more than 100 countries. (author)

  12. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    Science.gov (United States)

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
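
    The a priori power analysis the authors call for can be sketched with the usual normal-approximation formula for a two-sided, two-sample comparison of means (Python; the choices of α, power, and effect size are illustrative):

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """n per group for a two-sided, two-sample comparison of means.

    effect_size is Cohen's d (standardized mean difference).
    Normal approximation: n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)

# Detecting a medium effect (d = 0.5) at alpha = 0.05 with 80% power:
n = sample_size_per_group(0.5)
print(n)  # 63
```

    This reproduces the familiar figure of roughly 63 participants per group for a medium effect; exact t-based calculations give a slightly larger n. Running such a calculation before data collection is what controls the Type II error rate the authors found so often neglected.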

  13. Statistical significant changes in ground thermal conditions of alpine Austria during the last decade

    Science.gov (United States)

    Kellerer-Pirklbauer, Andreas

    2016-04-01

    Longer data series (e.g. >10 a) of ground temperatures in alpine regions are helpful to improve the understanding regarding the effects of present climate change on distribution and thermal characteristics of seasonal frost- and permafrost-affected areas. Beginning in 2004 - and more intensively since 2006 - a permafrost and seasonal frost monitoring network was established in Central and Eastern Austria by the University of Graz. This network consists of c.60 ground temperature (surface and near-surface) monitoring sites which are located at 1922-3002 m a.s.l., at latitude 46°55'-47°22'N and at longitude 12°44'-14°41'E. These data allow conclusions about general ground thermal conditions, potential permafrost occurrence, trend during the observation period, and regional pattern of changes. Calculations and analyses of several different temperature-related parameters were accomplished. At an annual scale a region-wide statistical significant warming during the observation period was revealed by e.g. an increase in mean annual temperature values (mean, maximum) or the significant lowering of the surface frost number (F+). At a seasonal scale no significant trend of any temperature-related parameter was in most cases revealed for spring (MAM) and autumn (SON). Winter (DJF) shows only a weak warming. In contrast, the summer (JJA) season reveals in general a significant warming as confirmed by several different temperature-related parameters such as e.g. mean seasonal temperature, number of thawing degree days, number of freezing degree days, or days without night frost. On a monthly basis August shows the statistically most robust and strongest warming of all months, although regional differences occur. Despite the fact that the general ground temperature warming during the last decade is confirmed by the field data in the study region, complications in trend analyses arise by temperature anomalies (e.g. warm winter 2006/07) or substantial variations in the winter

  15. Breast cancer-associated metastasis is significantly increased in a model of autoimmune arthritis

    Science.gov (United States)

    Das Roy, Lopamudra; Pathangey, Latha B; Tinder, Teresa L; Schettini, Jorge L; Gruber, Helen E; Mukherjee, Pinku

    2009-01-01

Introduction Sites of chronic inflammation are often associated with the establishment and growth of various malignancies including breast cancer. A common inflammatory condition in humans is autoimmune arthritis (AA), which causes inflammation and deformity of the joints. Other systemic effects associated with arthritis include increased cellular infiltration and inflammation of the lungs. Several studies have reported statistically significant risk ratios between AA and breast cancer. Despite this knowledge, available for a decade, it has never been asked whether the site of chronic inflammation linked to AA creates a milieu that attracts tumor cells to home and grow in the inflamed bones and lungs, which are frequent sites of breast cancer metastasis. Methods To determine if chronic inflammation induced by autoimmune arthritis contributes to increased breast cancer-associated metastasis, we generated mammary gland tumors in SKG mice that were genetically prone to develop AA. Two breast cancer cell lines, one highly metastatic (4T1) and the other non-metastatic (TUBO), were used to generate the tumors in the mammary fat pad. Lung and bone metastasis and the associated inflammatory milieu were evaluated in the arthritic versus the non-arthritic mice. Results We report a three-fold increase in lung metastasis and a significant increase in the incidence of bone metastasis in the pro-arthritic and arthritic mice compared to non-arthritic control mice. We also report that the metastatic breast cancer cells augment the severity of arthritis, resulting in a vicious cycle that increases both bone destruction and metastasis. Enhanced neutrophilic and granulocytic infiltration in the lungs and bone of the pro-arthritic and arthritic mice, and a subsequent increase in circulating levels of proinflammatory cytokines such as macrophage colony stimulating factor (M-CSF), interleukin-17 (IL-17), interleukin-6 (IL-6), vascular endothelial growth factor (VEGF), and tumor necrosis factor-alpha (TNF-alpha), may contribute

  16. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement for NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis…

  17. Statistical determination of significant curved I-girder bridge seismic response parameters

    Science.gov (United States)

    Seo, Junwon

    2013-06-01

Curved steel bridges are commonly used at interchanges in transportation networks, and more of these structures continue to be designed and built in the United States. Though the use of these bridges continues to increase in locations that experience high seismicity, the effects of curvature and other parameters on their seismic behavior have been neglected in current risk assessment tools. These tools can evaluate the seismic vulnerability of a transportation network using fragility curves. One critical component of fragility curve development for curved steel bridges is the completion of sensitivity analyses that help identify influential parameters related to their seismic response. In this study, an accessible inventory of existing curved steel girder bridges located primarily in the Mid-Atlantic United States (MAUS) was used to establish statistical characteristics used as inputs for a seismic sensitivity study. Critical seismic response quantities were captured using 3D nonlinear finite element models. Influential parameters for these quantities were identified using statistical tools that incorporate an experimental Plackett-Burman design (PBD), including Pareto optimal plots and prediction profiler techniques. The findings revealed that the influential parameters included the number of spans, radius of curvature, maximum span length, girder spacing, and cross-frame spacing; these parameters showed varying levels of influence on the critical bridge response.
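The screening logic behind a Plackett-Burman-style sensitivity study can be illustrated with a small two-level design; the factor names and the response surrogate below are hypothetical, not the study's finite element models:

```python
import itertools
import numpy as np

# Two-level full-factorial screening of three hypothetical bridge parameters
# (coded -1/+1): radius of curvature, span length, girder spacing.
levels = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

def response(x):
    # Synthetic seismic-response surrogate: span length dominates by construction.
    r, span, spacing = x
    return 1.0 * r + 5.0 * span + 0.5 * spacing

y = np.array([response(run) for run in levels])

# Main effect of each factor: mean response at +1 minus mean response at -1.
effects = {name: y[levels[:, j] > 0].mean() - y[levels[:, j] < 0].mean()
           for j, name in enumerate(["radius", "span_length", "girder_spacing"])}

# Pareto-style ranking by absolute effect size, as in a Pareto plot of effects
ranked = sorted(effects, key=lambda k: abs(effects[k]), reverse=True)
print(ranked)
```

A Plackett-Burman design plays the same role with far fewer runs than a full factorial when many factors are screened; the effect computation is identical.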

  18. Statistical significance estimation of a signal within the GooFit framework on GPUs

    Directory of Open Access Journals (Sweden)

    Cristella Leonardo

    2017-01-01

Full Text Available In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented in both the ROOT/RooFit and GooFit frameworks, with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open data analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on nVidia GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-ups with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service with a RooFit/PROOF-Lite process using multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which Wilks' theorem may or may not apply because its regularity conditions are not satisfied.
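The toy Monte Carlo significance estimate can be sketched in a few lines (CPU-only, no GooFit; the chi-square draw here simply stands in for the simulated null distribution of the test statistic, i.e. it assumes Wilks' theorem holds for illustration, whereas real toys need no such assumption):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy Monte Carlo estimate of significance: simulate the test statistic
# q = 2 * Delta(lnL) under the null hypothesis, then compare the observed value.
toys = rng.chisquare(df=1, size=200_000)   # null-hypothesis pseudo-experiments
q_observed = 9.0                           # illustrative observed test statistic

# p-value = fraction of null toys at least as extreme as the observation
p_value = np.mean(toys >= q_observed)
print(p_value)
```

When regularity conditions fail, the empirical toy distribution replaces the chi-square reference, which is exactly the regime the abstract describes exploring.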

  19. Is statistical significance clinically important?--A guide to judge the clinical relevance of study findings

    NARCIS (Netherlands)

    Sierevelt, Inger N.; van Oldenrijk, Jakob; Poolman, Rudolf W.

    2007-01-01

    In this paper we describe several issues that influence the reporting of statistical significance in relation to clinical importance, since misinterpretation of p values is a common issue in orthopaedic literature. Orthopaedic research is tormented by the risks of false-positive (type I error) and

  20. Increased Statistical Efficiency in a Lognormal Mean Model

    Directory of Open Access Journals (Sweden)

    Grant H. Skrepnek

    2014-01-01

Full Text Available Within the context of clinical and other scientific research, a substantial need exists for an accurate determination of the point estimate in a lognormal mean model, given that highly skewed data are often present. As such, logarithmic transformations are often advocated to achieve the assumptions of parametric statistical inference. Despite this, existing approaches that utilize only a sample’s mean and variance may not necessarily yield the most efficient estimator. The current investigation developed and tested an improved efficient point estimator for a lognormal mean by capturing more complete information via the sample’s coefficient of variation. Results of an empirical simulation study across varying sample sizes and population standard deviations indicated relative improvements in efficiency of up to 129.47 percent compared to the usual maximum likelihood estimator and up to 21.33 absolute percentage points above the efficient estimator presented by Shen and colleagues (2006). The relative efficiency of the proposed estimator increased particularly as a function of decreasing sample size and increasing population standard deviation.
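For reference, the baseline estimators under discussion (the naive arithmetic mean and the usual ML-based back-transformed estimator) can be sketched as follows; the article's improved CV-based estimator is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)

mu, sigma = 1.0, 1.0
true_mean = np.exp(mu + sigma**2 / 2)   # lognormal population mean

x = rng.lognormal(mu, sigma, size=100_000)
logs = np.log(x)
m, s2 = logs.mean(), logs.var(ddof=1)

naive = x.mean()               # arithmetic mean of the raw (skewed) data
ml_based = np.exp(m + s2 / 2)  # usual ML-based estimator on the log scale

print(true_mean, naive, ml_based)
```

With heavy right skew and small n, the sample mean is noisy; estimators built on the log-scale moments (and, per the article, on the coefficient of variation) can be markedly more efficient.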

  1. Statistical significance of theoretical predictions: A new dimension in nuclear structure theories (I)

    International Nuclear Information System (INIS)

    DUDEK, J; SZPAK, B; FORNAL, B; PORQUET, M-G

    2011-01-01

    In this and the follow-up article we briefly discuss what we believe represents one of the most serious problems in contemporary nuclear structure: the question of statistical significance of parametrizations of nuclear microscopic Hamiltonians and the implied predictive power of the underlying theories. In the present Part I, we introduce the main lines of reasoning of the so-called Inverse Problem Theory, an important sub-field in the contemporary Applied Mathematics, here illustrated on the example of the Nuclear Mean-Field Approach.

  2. Statistical Significance of the Contribution of Variables to the PCA Solution: An Alternative Permutation Strategy

    Science.gov (United States)

    Linting, Marielle; van Os, Bart Jan; Meulman, Jacqueline J.

    2011-01-01

    In this paper, the statistical significance of the contribution of variables to the principal components in principal components analysis (PCA) is assessed nonparametrically by the use of permutation tests. We compare a new strategy to a strategy used in previous research consisting of permuting the columns (variables) of a data matrix…
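The column-permutation strategy can be sketched on synthetic data as follows; the package's actual procedure differs in details (number of permutations, handling of multiple components), so this is only an illustration of the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: variables 0-2 share a latent component; variable 3 is noise.
n = 300
latent = rng.normal(size=n)
X = np.column_stack([latent + 0.3 * rng.normal(size=n) for _ in range(3)]
                    + [rng.normal(size=n)])
X = (X - X.mean(axis=0)) / X.std(axis=0)     # PCA on the correlation matrix

def loading(M, var, comp=0):
    """Absolute loading of one variable on a principal component."""
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    return abs(vt[comp, var])

obs = loading(X, var=0)

# Permutation strategy: permute only the column under test, keep the other
# variables intact, and recompute that variable's loading each time.
perms = []
for _ in range(200):
    Xp = X.copy()
    Xp[:, 0] = rng.permutation(X[:, 0])
    perms.append(loading(Xp, var=0))

p_value = (1 + sum(p >= obs for p in perms)) / (1 + len(perms))
print(obs, p_value)
```

Permuting a single column breaks only that variable's association with the component structure, which is what distinguishes this strategy from permuting all columns at once.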

  3. A Note on Comparing the Power of Test Statistics at Low Significance Levels.

    Science.gov (United States)

    Morris, Nathan; Elston, Robert

    2011-01-01

It is an obvious fact that the power of a test statistic depends upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
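The alpha-dependence of such comparisons can be checked directly with noncentral chi-square power calculations; a sketch, with an arbitrary noncentrality value:

```python
from scipy.stats import chi2, ncx2

def power(alpha, df, noncentrality):
    """Power of a chi-square test with the given degrees of freedom."""
    critical = chi2.ppf(1 - alpha, df)           # rejection threshold under H0
    return ncx2.sf(critical, df, noncentrality)  # P(reject) under H1

# Same signal strength tested with 1 vs 4 degrees of freedom, at a
# traditional alpha and at a genome-wide alpha:
for alpha in (0.05, 5e-8):
    print(alpha, power(alpha, df=1, noncentrality=30),
          power(alpha, df=4, noncentrality=30))
```

Comparing the two printed ratios shows how the relative cost of the extra degrees of freedom shrinks as alpha decreases, which is the note's central point.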

  4. ClusterSignificance: A bioconductor package facilitating statistical analysis of class cluster separations in dimensionality reduced data

    DEFF Research Database (Denmark)

    Serviss, Jason T.; Gådin, Jesper R.; Eriksson, Per

    2017-01-01

…, e.g. genes in a specific pathway, alone can separate samples into these established classes. Despite this, the evaluation of class separations is often subjective and performed via visualization. Here we present the ClusterSignificance package: a set of tools designed to assess the statistical significance of class separations downstream of dimensionality reduction algorithms. In addition, we demonstrate the design and utility of the ClusterSignificance package and utilize it to determine the importance of long non-coding RNA expression in the identity of multiple hematological malignancies.

  5. Statistical significance versus clinical importance: trials on exercise therapy for chronic low back pain as example.

    NARCIS (Netherlands)

    van Tulder, M.W.; Malmivaara, A.; Hayden, J.; Koes, B.

    2007-01-01

STUDY DESIGN. Critical appraisal of the literature. OBJECTIVES. The objective of this study was to assess whether results of back pain trials are statistically significant and clinically important. SUMMARY OF BACKGROUND DATA. There seems to be a discrepancy between conclusions reported by authors and

  6. Indirectional statistics and the significance of an asymmetry discovered by Birch

    International Nuclear Information System (INIS)

    Kendall, D.G.; Young, G.A.

    1984-01-01

Birch (1982, Nature, 298, 451) reported an apparent 'statistical asymmetry of the Universe'. The authors here develop 'indirectional analysis' as a technique for investigating statistical effects of this kind and conclude that the reported effect (whatever its origin may be) is strongly supported by the observations. The estimated pole of the asymmetry is at RA 13h 30m, Dec. -37°. The angular error in its estimation is unlikely to exceed 20-30°. (author)

  7. Increased skills usage statistically mediates symptom reduction in self-guided internet-delivered cognitive-behavioural therapy for depression and anxiety: a randomised controlled trial.

    Science.gov (United States)

    Terides, Matthew D; Dear, Blake F; Fogliati, Vincent J; Gandy, Milena; Karin, Eyal; Jones, Michael P; Titov, Nickolai

    2018-01-01

    Cognitive-behavioural therapy (CBT) is an effective treatment for clinical and subclinical symptoms of depression and general anxiety, and increases life satisfaction. Patients' usage of CBT skills is a core aspect of treatment but there is insufficient empirical evidence suggesting that skills usage behaviours are a mechanism of clinical change. This study investigated if an internet-delivered CBT (iCBT) intervention increased the frequency of CBT skills usage behaviours and if this statistically mediated reductions in symptoms and increased life satisfaction. A two-group randomised controlled trial was conducted comparing internet-delivered CBT (n = 65) with a waitlist control group (n = 75). Participants were individuals experiencing clinically significant symptoms of depression or general anxiety. Mixed-linear models analyses revealed that the treatment group reported a significantly higher frequency of skills usage, lower symptoms, and higher life satisfaction by the end of treatment compared with the control group. Results from bootstrapping mediation analyses revealed that the increased skills usage behaviours statistically mediated symptom reductions and increased life satisfaction. Although skills usage and symptom outcomes were assessed concurrently, these findings support the notion that iCBT increases the frequency of skills usage behaviours and suggest that this may be an important mechanism of change.
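The bootstrapped mediation analysis described can be sketched as a product-of-coefficients test on synthetic data; the variable names and effect sizes below are illustrative, not the trial's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trial data: treatment -> skills usage -> symptom reduction.
n = 200
treatment = rng.integers(0, 2, n).astype(float)
skills = 1.5 * treatment + rng.normal(size=n)   # path a: treatment raises skills use
symptoms = -1.0 * skills + rng.normal(size=n)   # path b: more skills, fewer symptoms

def indirect_effect(t, m, y):
    """Product-of-coefficients a*b from two OLS fits."""
    a = np.polyfit(t, m, 1)[0]                       # slope of m on t
    Xd = np.column_stack([np.ones_like(t), t, m])    # y on t and m
    b = np.linalg.lstsq(Xd, y, rcond=None)[0][2]     # partial effect of m on y
    return a * b

obs = indirect_effect(treatment, skills, symptoms)

# Percentile bootstrap confidence interval for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(treatment[idx], skills[idx], symptoms[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(obs, (lo, hi))
```

A bootstrap interval that excludes zero is the usual criterion for a statistically significant indirect (mediated) effect.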

  8. Assessing Statistically Significant Heavy-Metal Concentrations in Abandoned Mine Areas via Hot Spot Analysis of Portable XRF Data.

    Science.gov (United States)

    Kim, Sung-Min; Choi, Yosoon

    2017-06-18

To develop appropriate measures to prevent soil contamination in abandoned mining areas, an understanding of the spatial variation of the potentially toxic trace elements (PTEs) in the soil is necessary. For the purpose of effective soil sampling, this study uses hot spot analysis, which calculates a z-score based on the Getis-Ord Gi* statistic to identify a statistically significant hot spot sample. To constitute a statistically significant hot spot, a feature with a high value should also be surrounded by other features with high values. Using relatively cost- and time-effective portable X-ray fluorescence (PXRF) analysis, sufficient input data are acquired from the Busan abandoned mine and used for hot spot analysis. To calibrate the PXRF data, which have a relatively low accuracy, the PXRF analysis data are transformed using the inductively coupled plasma atomic emission spectrometry (ICP-AES) data. The transformed PXRF data of the Busan abandoned mine are classified into four groups according to their normalized content and z-scores: high content with a high z-score (HH), high content with a low z-score (HL), low content with a high z-score (LH), and low content with a low z-score (LL). The HL and LH cases may be due to measurement errors. Additional or complementary surveys are required for the areas surrounding these suspect samples or for significant hot spot areas. The soil sampling is conducted according to a four-phase procedure in which the hot spot analysis and proposed group classification method are employed to support the development of a sampling plan for the following phase. Overall, 30, 50, 80, and 100 samples are investigated and analyzed in phases 1-4, respectively. The method implemented in this case study may be utilized in the field for the assessment of statistically significant soil contamination and the identification of areas for which an additional survey is required.
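A minimal sketch of the Gi* z-score computation on a toy grid (binary distance-band weights; a real analysis would use calibrated concentrations and a carefully chosen spatial weights scheme):

```python
import numpy as np

def getis_ord_gi_star(values, coords, band):
    """z-scores from the Getis-Ord Gi* statistic with a fixed distance band.
    Each point's own value is included in its neighbourhood (the * variant)."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = (d <= band).astype(float)          # binary weights, incl. self (d = 0)
    xbar, s = x.mean(), x.std()            # global mean and population std
    z = np.empty(n)
    for i in range(n):
        wi = w[i]
        sw, sw2 = wi.sum(), (wi**2).sum()
        num = wi @ x - xbar * sw
        den = s * np.sqrt((n * sw2 - sw**2) / (n - 1))
        z[i] = num / den
    return z

# Toy 5x5 grid with a cluster of high concentrations in one corner:
coords = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
values = np.where((coords[:, 0] <= 1) & (coords[:, 1] <= 1), 100.0, 10.0)
z = getis_ord_gi_star(values, coords, band=1.0)
print(z[0], z[-1])   # inside the cluster vs. the far corner
```

A large positive z-score flags a statistically significant hot spot: a high value surrounded by other high values, as described above.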

  9. Assessing Statistically Significant Heavy-Metal Concentrations in Abandoned Mine Areas via Hot Spot Analysis of Portable XRF Data

    Directory of Open Access Journals (Sweden)

    Sung-Min Kim

    2017-06-01

Full Text Available To develop appropriate measures to prevent soil contamination in abandoned mining areas, an understanding of the spatial variation of the potentially toxic trace elements (PTEs) in the soil is necessary. For the purpose of effective soil sampling, this study uses hot spot analysis, which calculates a z-score based on the Getis-Ord Gi* statistic to identify a statistically significant hot spot sample. To constitute a statistically significant hot spot, a feature with a high value should also be surrounded by other features with high values. Using relatively cost- and time-effective portable X-ray fluorescence (PXRF) analysis, sufficient input data are acquired from the Busan abandoned mine and used for hot spot analysis. To calibrate the PXRF data, which have a relatively low accuracy, the PXRF analysis data are transformed using the inductively coupled plasma atomic emission spectrometry (ICP-AES) data. The transformed PXRF data of the Busan abandoned mine are classified into four groups according to their normalized content and z-scores: high content with a high z-score (HH), high content with a low z-score (HL), low content with a high z-score (LH), and low content with a low z-score (LL). The HL and LH cases may be due to measurement errors. Additional or complementary surveys are required for the areas surrounding these suspect samples or for significant hot spot areas. The soil sampling is conducted according to a four-phase procedure in which the hot spot analysis and proposed group classification method are employed to support the development of a sampling plan for the following phase. Overall, 30, 50, 80, and 100 samples are investigated and analyzed in phases 1-4, respectively. The method implemented in this case study may be utilized in the field for the assessment of statistically significant soil contamination and the identification of areas for which an additional survey is required.

  10. Statistically significant faunal differences among Middle Ordovician age, Chickamauga Group bryozoan bioherms, central Alabama

    Energy Technology Data Exchange (ETDEWEB)

    Crow, C.J.

    1985-01-01

    Middle Ordovician age Chickamauga Group carbonates crop out along the Birmingham and Murphrees Valley anticlines in central Alabama. The macrofossil contents on exposed surfaces of seven bioherms have been counted to determine their various paleontologic characteristics. Twelve groups of organisms are present in these bioherms. Dominant organisms include bryozoans, algae, brachiopods, sponges, pelmatozoans, stromatoporoids and corals. Minor accessory fauna include predators, scavengers and grazers such as gastropods, ostracods, trilobites, cephalopods and pelecypods. Vertical and horizontal niche zonation has been detected for some of the bioherm dwelling fauna. No one bioherm of those studied exhibits all 12 groups of organisms; rather, individual bioherms display various subsets of the total diversity. Statistical treatment (G-test) of the diversity data indicates a lack of statistical homogeneity of the bioherms, both within and between localities. Between-locality population heterogeneity can be ascribed to differences in biologic responses to such gross environmental factors as water depth and clarity, and energy levels. At any one locality, gross aspects of the paleoenvironments are assumed to have been more uniform. Significant differences among bioherms at any one locality may have resulted from patchy distribution of species populations, differential preservation and other factors.
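The G-test used for the homogeneity comparison is straightforward to compute; a minimal sketch with illustrative counts (not the bioherm data):

```python
import math

def g_statistic(observed, expected):
    """Log-likelihood ratio (G-test) statistic: G = 2 * sum O * ln(O/E)."""
    return 2.0 * sum(o * math.log(o / e)
                     for o, e in zip(observed, expected) if o > 0)

# Two faunal-group counts against a homogeneous (equal) expectation:
g = g_statistic([10, 20], [15, 15])
print(round(g, 3))   # compare against a chi-square with 1 degree of freedom
```

For large counts G is approximately chi-square distributed with the usual contingency-table degrees of freedom, so it is read off the same tables as Pearson's chi-square.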

  11. Relationship between increasing concentrations of two carcinogens and statistical image descriptors of foci morphology in the cell transformation assay.

    Science.gov (United States)

    Callegaro, Giulia; Corvi, Raffaella; Salovaara, Susan; Urani, Chiara; Stefanini, Federico M

    2017-06-01

Cell Transformation Assays (CTAs) have long been proposed for the identification of chemical carcinogenicity potential. The endpoint of these in vitro assays is represented by the phenotypic alterations in cultured cells, which are characterized by the change from the non-transformed to the transformed phenotype. Despite the wide fields of application and the numerous advantages of CTAs, their use in regulatory toxicology has been limited, in part due to concerns about the subjective nature of visual scoring, i.e. the step in which transformed colonies or foci are evaluated through morphological features. An objective evaluation of morphological features has previously been obtained through automated digital processing of foci images to extract the values of three statistical image descriptors. In this study a further potential of the CTA using BALB/c 3T3 cells is addressed by analysing the effect of increasing concentrations of two known carcinogens with different modes of action, benzo[a]pyrene and NiCl2, on foci morphology. The main result of our quantitative evaluation shows that the concentration of the considered carcinogens has an effect on foci morphology that is statistically significant for the mean of two of the three selected descriptors. Statistical significance also corresponds to visual relevance. The statistical analysis of concentration-related variations in foci morphology made it possible to quantify morphological changes that can be visually appreciated but not precisely determined. It therefore has the potential of providing new quantitative parameters in CTAs, and of exploiting all the information encoded in foci. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Detecting Statistically Significant Communities of Triangle Motifs in Undirected Networks

    Science.gov (United States)

    2016-04-26

Systems, Statistics & Management Science, University of Alabama, USA. DISTRIBUTION A: Distribution approved for public release. Contents include a summary and an application to real networks, among them the 2012 FBS college football schedule network.

  13. Conducting tests for statistically significant differences using forest inventory data

    Science.gov (United States)

    James A. Westfall; Scott A. Pugh; John W. Coulston

    2013-01-01

    Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...

  14. Macro-indicators of citation impacts of six prolific countries: InCites data and the statistical significance of trends.

    Directory of Open Access Journals (Sweden)

    Lutz Bornmann

Full Text Available Using the InCites tool of Thomson Reuters, this study compares normalized citation impact values calculated for China, Japan, France, Germany, United States, and the UK throughout the time period from 1981 to 2010. InCites offers a unique opportunity to study the normalized citation impacts of countries using (i) a long publication window (1981 to 2010), (ii) a differentiation in (broad or more narrow) subject areas, and (iii) allowing for the use of statistical procedures in order to obtain an insightful investigation of national citation trends across the years. Using four broad categories, our results show significantly increasing trends in citation impact values for France, the UK, and especially Germany across the last thirty years in all areas. The citation impact of papers from China is still at a relatively low level (mostly below the world average), but the country follows an increasing trend line. The USA exhibits a stable pattern of high citation impact values across the years. With small impact differences between the publication years, the US trend is increasing in engineering and technology but decreasing in medical and health sciences as well as in agricultural sciences. Similar to the USA, Japan follows increasing as well as decreasing trends in different subject areas, but the variability across the years is small. In most of the years, papers from Japan perform below or approximately at the world average in each subject area.
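Testing whether a national citation-impact series has a statistically significant trend can be sketched with a simple linear regression; the series below is illustrative, not InCites data:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)

# Hypothetical normalized citation-impact series, 1981-2010, with an
# upward drift plus year-to-year noise.
years = np.arange(1981, 2011)
impact = 0.8 + 0.01 * (years - 1981) + rng.normal(0, 0.03, len(years))

res = linregress(years, impact)
print(res.slope, res.pvalue)   # positive slope with a small p-value
```

A small p-value on the slope is one common criterion for a "significantly increasing trend"; nonparametric alternatives such as the Mann-Kendall test are also used for this purpose.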

  15. The distribution of P-values in medical research articles suggested selective reporting associated with statistical significance.

    Science.gov (United States)

    Perneger, Thomas V; Combescure, Christophe

    2017-07-01

Published P-values provide a window into the global enterprise of medical research. The aim of this study was to use the distribution of published P-values to estimate the relative frequencies of null and alternative hypotheses and to seek irregularities suggestive of publication bias. This cross-sectional study included P-values published in 120 medical research articles in 2016 (30 each from the BMJ, JAMA, Lancet, and New England Journal of Medicine). The observed distribution of P-values was compared with expected distributions under the null hypothesis (i.e., uniform between 0 and 1) and the alternative hypothesis (strictly decreasing from 0 to 1). P-values were categorized according to conventional levels of statistical significance and in one-percent intervals. Among 4,158 recorded P-values, 26.1% were highly significant. Irregularities in the distribution included an excess of P-values equal to 1 and about twice as many P-values less than 0.05 as more than 0.05. The latter finding was seen in both randomized trials and observational studies, and in most types of analyses, excepting heterogeneity tests and interaction tests. Under plausible assumptions, we estimate that about half of the tested hypotheses were null and the other half were alternative. This analysis suggests that statistical tests published in medical journals are not a random sample of null and alternative hypotheses but that selective reporting is prevalent. In particular, significant results are about twice as likely to be reported as nonsignificant results. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Statistical lamb wave localization based on extreme value theory

    Science.gov (United States)

    Harley, Joel B.

    2018-04-01

Guided wave localization methods based on delay-and-sum imaging, matched field processing, and other techniques have been designed and researched to create images that locate and describe structural damage. The maximum value of such an image typically represents an estimated damage location. Yet it is often unclear whether this maximum value, or any other value in the image, is a statistically significant indicator of damage. Furthermore, there are currently few, if any, approaches to assess the statistical significance of guided wave localization images. As a result, we present statistical delay-and-sum and statistical matched field processing localization methods to create statistically significant images of damage. Our framework uses constant false alarm rate statistics and extreme value theory to detect damage with little prior information. We demonstrate our methods with in situ guided wave data from an aluminum plate to detect two 0.75 cm diameter holes. Our results show an expected improvement in statistical significance as the number of sensors increases. With seventeen sensors, both methods successfully detect damage with statistical significance.
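The underlying delay-and-sum imaging step can be sketched on synthetic pulses (arbitrary units; the statistical thresholding contributed by the paper is not reproduced here):

```python
import numpy as np

# Delay-and-sum localization sketch: each sensor records a pulse whose arrival
# time encodes the distance to a scatterer; the image sums each trace at the
# travel time predicted for every candidate pixel.
c = 1.0                                    # wave speed (arbitrary units)
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([6.0, 4.0])              # true (hypothetical) damage location

t = np.linspace(0, 30, 3000)
def trace(sensor):
    arrival = np.linalg.norm(sensor - source) / c
    return np.exp(-((t - arrival) ** 2) / 0.05)   # Gaussian pulse at arrival time

signals = [trace(s) for s in sensors]

# Sum every trace at the predicted delay for each pixel of a candidate grid
grid = np.linspace(0, 10, 101)
image = np.zeros((len(grid), len(grid)))
for ix, x in enumerate(grid):
    for iy, y in enumerate(grid):
        for s, sig in zip(sensors, signals):
            delay = np.linalg.norm(s - np.array([x, y])) / c
            image[ix, iy] += np.interp(delay, t, sig)

ix, iy = np.unravel_index(image.argmax(), image.shape)
print(grid[ix], grid[iy])   # image maximum near the true source
```

The paper's contribution is deciding when such a maximum is statistically significant rather than a noise artifact, via constant false alarm rate statistics and extreme value theory.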

  17. Increasing Statistical Literacy by Exploiting Lexical Ambiguity of Technical Terms

    Directory of Open Access Journals (Sweden)

    Jennifer Kaplan

    2018-01-01

    Full Text Available Instructional inattention to language poses a barrier for students in entry-level science courses, in part because students may perceive a subject as difficult solely based on the lack of understanding of the vocabulary. In addition, the technical use of terms that have different everyday meanings may cause students to misinterpret statements made by instructors, leading to an incomplete or incorrect understanding of the domain. Terms that have different technical and everyday meanings are said to have lexical ambiguity and statistics, as a discipline, has many lexically ambiguous terms. This paper presents a cyclic process for designing activities to address lexical ambiguity in statistics. In addition, it describes three short activities aimed to have high impact on student learning associated with two different lexically ambiguous words or word pairs in statistics. Preliminary student-level data are used to assess the efficacy of the activities, and future directions for development of activities and research about lexical ambiguity in statistics in particular and STEM in general are discussed.

  18. Corruption Significantly Increases the Capital Cost of Power Plants in Developing Contexts

    Directory of Open Access Journals (Sweden)

    Kumar Biswajit Debnath

    2018-03-01

    Full Text Available Emerging economies, with rapidly growing populations and energy demand, own some of the most expensive power plants in the world. We hypothesized that corruption has a relationship with the capital cost of power plants in developing countries such as Bangladesh. For this study, we analyzed the capital cost of 61 operational and planned power plants in Bangladesh. An initial comparison revealed that the mean capital cost of a power plant in Bangladesh is twice the global average. Statistical analysis then revealed a significant correlation between corruption and the cost of power plants, indicating that higher corruption leads to greater capital cost. The high up-front cost can be a significant burden on the economy, at present and in the future, as most plants are financed through international loans with extended repayment terms. There is, therefore, an urgent need for a review of the procurement and due diligence process for establishing power plants, and for the implementation of a more transparent system to mitigate the adverse effects of corruption on megaprojects.

  19. Sixteen-Day Bedrest Significantly Increases Plasma Colloid Osmotic Pressure

    Science.gov (United States)

    Hargens, Alan R.; Hsieh, S. T.; Murthy, G.; Ballard, R. E.; Convertino, V. A.; Wade, Charles E. (Technical Monitor)

    1994-01-01

    Upon exposure to microgravity, astronauts lose up to 10% of their total plasma volume, which may contribute to orthostatic intolerance after space flight. Because plasma colloid osmotic pressure (COP) is a primary factor maintaining plasma volume, our objective was to measure time course changes in COP during microgravity simulated by 6 deg. head-down tilt (HDT). Seven healthy male subjects (30-55 years of age) were placed in HDT for 16 days. For the purpose of another study, three of the seven subjects were chosen to exercise on a cycle ergometer on day 16. Blood samples were drawn immediately before bedrest, on day 14 of bedrest, 18-24 hours following exercise while all subjects were still in HDT, and 1 hour following bedrest termination. Plasma COP was measured in all 20 microliter EDTA-treated samples using an osmometer fitted with a PM 30 membrane. Data were analyzed with paired and unpaired t-tests. Plasma COP on day 14 of bedrest (29.9 +/- 0.69 mmHg) was significantly higher (p less than 0.005) than the control, pre-bedrest value (23.1 +/- 0.76 mmHg). At one hour of upright recovery after HDT, plasma COP remained significantly elevated (exercise: 26.9 +/- 0.87 mmHg; no exercise: 26.3 +/- 0.85 mmHg). Additionally, exercise had no significant effect on plasma COP 18-24 hours following exercise (exercise: 27.8 +/- 1.09 mmHg; no exercise: 27.1 +/- 0.78 mmHg). Our results demonstrate that plasma COP increases significantly with microgravity simulated by HDT. However, preliminary results indicate exercise during HDT does not significantly affect plasma COP.

  20. Sigsearch: a new term for post hoc unplanned search for statistically significant relationships with the intent to create publishable findings.

    Science.gov (United States)

    Hashim, Muhammad Jawad

    2010-09-01

    Post-hoc secondary data analysis with no prespecified hypotheses has been discouraged by textbook authors and journal editors alike. Unfortunately no single term describes this phenomenon succinctly. I would like to coin the term "sigsearch" to define this practice and bring it within the teaching lexicon of statistics courses. Sigsearch would include any unplanned, post-hoc search for statistical significance using multiple comparisons of subgroups. It would also include data analysis with outcomes other than the prespecified primary outcome measure of a study as well as secondary data analyses of earlier research.

  1. Intelligent system for statistically significant expertise knowledge on the basis of the model of self-organizing nonequilibrium dissipative system

    Directory of Open Access Journals (Sweden)

    E. A. Tatokchin

    2017-01-01

    Full Text Available The development of modern educational technologies, driven by the broad introduction of computer testing and the growth of distance education, makes a revision of methods for examining students necessary. The work shows the need for a transition to mathematical criteria for the examination of knowledge that are free of subjectivity. The article reviews the problems arising in this task and offers approaches to their solution. The greatest attention is paid to the problem of objectively transforming the rating estimates of the expert onto the scale of estimates of the student. The discussion concludes that the solution to this problem lies in the creation of specialized intelligent systems. The basis for constructing the intelligent system is a mathematical model of a self-organizing nonequilibrium dissipative system, which here is a group of students. The article assumes that dissipation is provided by the constant influx of new test items from the expert, and nonequilibrium by the individual psychological characteristics of the students in the group. As a result, the system must self-organize into stable patterns. These patterns make it possible, relying on large amounts of data, to obtain a statistically significant assessment of a student. To justify the proposed approach, the work presents a statistical analysis of the results of testing a large sample of students (>90). Conclusions from this statistical analysis allowed the development of an intelligent system for statistically significant examination of student performance. It is based on a data clustering algorithm (k-means) applied to three key parameters. It is shown that this approach makes it possible to track dynamics and to produce an objective expert evaluation.
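
    The k-means clustering step mentioned at the end of the abstract can be illustrated with a minimal Lloyd's-algorithm sketch in plain Python. The data here are hypothetical: the three coordinates merely stand in for the paper's three key parameters.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm; points are equal-length tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centers[j])))
            clusters[nearest].append(p)
        # Recompute centers; keep the old center if a cluster is empty.
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated groups of hypothetical three-parameter student scores.
rng = random.Random(1)
group_a = [tuple(rng.gauss(0.0, 0.1) for _ in range(3)) for _ in range(20)]
group_b = [tuple(rng.gauss(5.0, 0.1) for _ in range(3)) for _ in range(20)]
centers, clusters = kmeans(group_a + group_b, k=2)
```

    With well-separated groups the two final centers settle near the group means, which is the stable-pattern behavior the abstract appeals to.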

  2. The Importance of Integrating Clinical Relevance and Statistical Significance in the Assessment of Quality of Care--Illustrated Using the Swedish Stroke Register.

    Directory of Open Access Journals (Sweden)

    Anita Lindmark

    Full Text Available When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations, while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. The study emphasizes the importance of combining clinical relevance and level of statistical confidence when assessing quality of care.

  3. The Importance of Integrating Clinical Relevance and Statistical Significance in the Assessment of Quality of Care--Illustrated Using the Swedish Stroke Register.

    Science.gov (United States)

    Lindmark, Anita; van Rompaye, Bart; Goetghebeur, Els; Glader, Eva-Lotta; Eriksson, Marie

    2016-01-01

    When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. The study emphasizes the importance of combining clinical relevance and level of statistical confidence when assessing quality of care.

  4. Estimates of statistical significance for comparison of individual positions in multiple sequence alignments

    Directory of Open Access Journals (Sweden)

    Sadreyev Ruslan I

    2004-08-01

    Full Text Available Abstract Background Profile-based analysis of multiple sequence alignments (MSA) allows for accurate comparison of protein families. Here, we address the problems of detecting statistically confident dissimilarities between (1) an MSA position and a set of predicted residue frequencies, and (2) two MSA positions. These problems are important for (i) evaluation and optimization of methods predicting residue occurrence at protein positions; (ii) detection of potentially misaligned regions in automatically produced alignments and their further refinement; and (iii) detection of sites that determine functional or structural specificity in two related families. Results For problems (1) and (2), we propose analytical estimates of the P-value and apply them to the detection of significant positional dissimilarities in various experimental situations. (a) We compare structure-based predictions of residue propensities at a protein position to the actual residue frequencies in the MSA of homologs. (b) We evaluate our method by its ability to detect erroneous position matches produced by an automatic sequence aligner. (c) We compare MSA positions that correspond to residues aligned by automatic structure aligners. (d) We compare MSA positions that are aligned by high-quality manual superposition of structures. Detected dissimilarities reveal shortcomings of the automatic methods for residue frequency prediction and alignment construction. For the high-quality structural alignments, the dissimilarities suggest sites of potential functional or structural importance. Conclusion The proposed computational method is of significant potential value for the analysis of protein families.

  5. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    NARCIS (Netherlands)

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is obvious that Fisher’s statistic is more sensitive to smaller p-values than to larger ones, and a single small p-value may overrule the other p-values.
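
    Fisher's method itself is short enough to state in code. The sketch below (plain Python, stdlib only) computes the combined statistic X = -2 Σ ln p_i and its p-value from the chi-square survival function, which has a closed form for the even degrees of freedom 2k. The example p-values are hypothetical and chosen to show the sensitivity to a single small p-value that the abstract describes.

```python
import math

def fisher_combined(pvalues):
    """Fisher's combined probability test: X = -2*sum(ln p_i) is
    chi-square distributed with 2k degrees of freedom under the
    global null hypothesis. Returns (statistic, combined p-value)."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # For even df = 2k: P(X > x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        total += term
    return x, math.exp(-x / 2.0) * total

# One tiny p-value dominates four unremarkable ones...
_, p_dominated = fisher_combined([1e-6, 0.5, 0.5, 0.5, 0.5])
# ...while five moderate p-values combine to a non-significant result.
_, p_moderate = fisher_combined([0.5] * 5)
```

    Here `p_dominated` comes out far below 0.05 even though four of the five inputs are unremarkable, which is exactly the dominance of small p-values the abstract criticizes.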

  6. Changing world extreme temperature statistics

    Science.gov (United States)

    Finkel, J. M.; Katz, J. I.

    2018-04-01

    We use the Global Historical Climatology Network--daily database to calculate a nonparametric statistic that describes the rate at which all-time daily high and low temperature records have been set in nine geographic regions (continents or major portions of continents) during periods mostly from the mid-20th Century to the present. This statistic was defined in our earlier work on temperature records in the 48 contiguous United States. In contrast to this earlier work, we find that in every region except North America all-time high records were set at a rate significantly (at least $3\\sigma$) higher than in the null hypothesis of a stationary climate. Except in Antarctica, all-time low records were set at a rate significantly lower than in the null hypothesis. In Europe, North Africa and North Asia the rate of setting new all-time highs increased suddenly in the 1990's, suggesting a change in regional climate regime; in most other regions there was a steadier increase.

  7. Significance in the increase of women psychiatrists in Korea.

    Science.gov (United States)

    Kim, Ha Kyoung; Kim, Soo In

    2008-01-01

    The number of female doctors has increased in Korea; 18.9% (13,083) of the total medical doctors registered (69,097) were women in 2006, compared to 13.6% (2,216) in 1975. The proportion of female doctors will jump further by 2010, considering that nearly 40% of current medical students are women. This trend has had a strong influence on the field of psychiatry; the percentage of women psychiatrists rose from 1.6% (6) to 18% (453) between 1975 and 2006, and women now comprise 39% (206) of all residents. This is not only a reflection of the social phenomenon of the increase in professional women but is also attributable to some specific characteristics of psychiatry; psychiatric practice may come more naturally to women. While the clinical activities of women psychiatrists are expanding, there are few women leaders, and far fewer women are involved in academic activities in this field as yet. Though there is less sexual discrimination in the field of psychiatry, women psychiatrists still have considerable difficulty balancing work and family matters. Many women psychiatrists also report having felt implicit discrimination in their careers. In this study, we identify the characteristics of women psychiatrists and explore the significance of the increase in women psychiatrists in Korea and the situation in which they find themselves.

  8. Detection and significance of serum inflammatory factors and oxidative stress indicators in diabetic retinopathy

    Institute of Scientific and Technical Information of China (English)

    Wei Gao; Jing Wang; Chao Zhang; Ping Qin

    2017-01-01

    Objective: To determine the serum inflammatory cytokines and oxidative stress parameters of diabetic retinopathy (DR) patients and to explore their possible role in DR. Methods: 116 type 2 diabetic patients were selected from June 2015 to June 2016 in our hospital as research subjects, divided into a diabetes without retinopathy group (NDR, n = 63) and a diabetes with retinopathy group (DR, n = 53). In addition, 60 healthy individuals examined at our hospital's medical center during the same period were selected as the normal control group (NC). The VEGF, IL-6, TNF-α, MDA and SOD levels of the three groups were measured. Results: The IL-6, TNF-α, VEGF and malondialdehyde (MDA) levels increased gradually across the NC, NDR and DR groups, and each difference was statistically significant (P<0.05). The superoxide dismutase (SOD) levels decreased gradually across the NC, NDR and DR groups, and the difference was statistically significant (P<0.05). Conclusions: DR patients express high levels of IL-6, TNF-α and VEGF, and significant oxidative stress exists in DR, which shows that inflammation and the oxidative stress state play an important role in the development of DR.

  9. THE SMALL BUT SIGNIFICANT AND NONTRANSITORY INCREASE IN PRICES (SSNIP) TEST

    Directory of Open Access Journals (Sweden)

    Liviana Niminet

    2008-12-01

    Full Text Available The Small but Significant and Nontransitory Increase in Price (SSNIP) test was designed to define the relevant market in terms of product, geographical area and time. This test, also called the "hypothetical monopolist test", is the subject of much research, both economic and legal, as it deals with economic concepts as well as legal aspects.

  10. Increased frequency of retinopathy of prematurity over the last decade and significant regional differences.

    Science.gov (United States)

    Holmström, Gerd; Tornqvist, Kristina; Al-Hawasi, Abbas; Nilsson, Åsa; Wallin, Agneta; Hellström, Ann

    2018-03-01

    Retinopathy of prematurity (ROP) causes childhood blindness globally in prematurely born infants. Although increased levels of oxygen supply lead to increased survival and reduced frequency of cerebral palsy, increased incidence of ROP is reported. With the help of a Swedish register for ROP, SWEDROP, national and regional incidences of ROP and frequencies of treatment were evaluated from 2008 to 2015 (n = 5734), as well as before and after targets of provided oxygen changed from 85-89% to 91-95% in 2014. Retinopathy of prematurity (ROP) was found in 31.9% (1829/5734) of all infants with a gestational age (GA) of <31 weeks at birth and 5.7% of the infants (329/5734) had been treated for ROP. Analyses of the national data revealed an increased incidence of ROP during the 8-year study period (p = 0.003), but there was no significant increase in the frequency of treatment. There were significant differences between the seven health regions of Sweden, regarding both incidence of ROP and frequency of treatment (p < 0.001). Comparison of regional data before and after the new oxygen targets revealed a significant increase in treated ROP in one region [OR: 2.24 (CI: 1.11-4.49), p = 0.024] and a borderline increase in one other [OR: 3.08 (CI: 0.99-9.60), p = 0.052]. The Swedish national ROP register revealed an increased incidence of ROP during an 8-year period and significant regional differences regarding the incidence of ROP and frequency of treatment. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  11. Statistically significant dependence of the Xaa-Pro peptide bond conformation on secondary structure and amino acid sequence

    Directory of Open Access Journals (Sweden)

    Leitner Dietmar

    2005-04-01

    Full Text Available Abstract Background A reliable prediction of the Xaa-Pro peptide bond conformation would be a useful tool for many protein structure calculation methods. We have analyzed the Protein Data Bank and show that the combined use of sequential and structural information has a predictive value for the assessment of the cis versus trans peptide bond conformation of Xaa-Pro within proteins. For the analysis of the data sets different statistical methods such as the calculation of the Chou-Fasman parameters and occurrence matrices were used. Furthermore we analyzed the relationship between the relative solvent accessibility and the relative occurrence of prolines in the cis and in the trans conformation. Results One of the main results of the statistical investigations is the ranking of the secondary structure and sequence information with respect to the prediction of the Xaa-Pro peptide bond conformation. We observed a significant impact of secondary structure information on the occurrence of the Xaa-Pro peptide bond conformation, while the sequence information of amino acids neighboring proline is of little predictive value for the conformation of this bond. Conclusion In this work, we present an extensive analysis of the occurrence of the cis and trans proline conformation in proteins. Based on the data set, we derived patterns and rules for a possible prediction of the proline conformation. Upon adoption of the Chou-Fasman parameters, we are able to derive statistically relevant correlations between the secondary structure of amino acid fragments and the Xaa-Pro peptide bond conformation.

  12. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Full Text Available Parameters estimated with confidence intervals and tests of statistical hypotheses are used in statistical analysis to draw conclusions about a population from a sample extracted from it. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained when using confidence intervals and hypothesis tests. While tests of statistical hypotheses give only a "yes" or "no" answer to certain questions, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and from findings that are "marginally significant" or "almost significant" (p very close to 0.05).
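
    The contrast the abstract draws between a bare yes/no test and a confidence interval can be made concrete with a small sketch (plain Python, normal approximation with a hardcoded 1.96 critical value; the numbers are hypothetical): the same observed effect is "almost significant" with a wide interval in a small sample, and clearly significant with a narrow interval in a larger one.

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test_and_ci(mean_diff, sd, n, z_crit=1.96):
    """Two-sided one-sample z-test and 95% CI for a mean difference.
    Illustrative only; z_crit = 1.96 assumes a normal approximation."""
    se = sd / math.sqrt(n)
    z = mean_diff / se
    p = 2.0 * (1.0 - normal_cdf(abs(z)))
    return p, (mean_diff - z_crit * se, mean_diff + z_crit * se)

# Same observed effect: a small sample gives p just above 0.05 and a
# wide interval straddling zero ("almost significant")...
p_small, ci_small = z_test_and_ci(mean_diff=0.5, sd=1.0, n=15)
# ...while a larger sample gives a tiny p and a narrow interval
# excluding zero.
p_large, ci_large = z_test_and_ci(mean_diff=0.5, sd=1.0, n=100)
```

    The interval conveys what the binary test verdict hides: with n = 15 the data are compatible with effects from roughly zero to twice the observed difference.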

  13. Evaluation of Significance of Diffusely Increased Bilateral Renal Uptake on Bone Scan

    International Nuclear Information System (INIS)

    Sung, Mi Sook; Yang, Woo Jin; Byun, Jae Young; Park, Jung Mi; Shinn, Kyung Sub; Bahk, Yong Whee

    1990-01-01

    Unexpected renal abnormalities can be detected on bone scans using 99m Tc-MDP. The purpose of this study was to evaluate the diagnostic significance of diffusely increased bilateral renal uptake on bone scan. 1,500 bone scans were reviewed and 43 scans showing diffusely increased bilateral renal uptake were selected for analysis. Laboratory findings for renal and liver function tests, including routine urinalysis, were reviewed in the 43 patients. 26 of 43 cases showed abnormalities in urinalysis and renal function studies. 20 of 43 cases showed abnormal liver function studies, and 3 of these cases were later diagnosed as hepatorenal syndrome. 13 of those 20 cases had liver cirrhosis with or without hepatoma. 12 of 43 cases showed abnormalities in both renal and liver function studies. 2 of 43 cases showed diffusely increased bilateral renal uptake after chemotherapy for cancer but not on previous scans before chemotherapy. 2 of 43 cases showed hypercalcaemia, and 8 of 43 cases had multifocal bone uptake due to metastasis or benign bone lesions; the latter showed no hypercalcaemia at all. There was no significant correlation between increased renal uptake and MDP uptake in soft tissue other than the kidneys. This study raises the possibility that impaired liver and/or renal function may result in a diffuse increase in bilateral renal uptake of MDP by an unknown mechanism. Further study of this correlation is needed.

  14. Evaluation of Significance of Diffusely Increased Bilateral Renal Uptake on Bone Scan

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Mi Sook; Yang, Woo Jin; Byun, Jae Young; Park, Jung Mi; Shinn, Kyung Sub; Bahk, Yong Whee [Catholic University College of Medicine, Seoul (Korea, Republic of)

    1990-03-15

    Unexpected renal abnormalities can be detected on bone scans using {sup 99m}Tc-MDP. The purpose of this study was to evaluate the diagnostic significance of diffusely increased bilateral renal uptake on bone scan. 1,500 bone scans were reviewed and 43 scans showing diffusely increased bilateral renal uptake were selected for analysis. Laboratory findings for renal and liver function tests, including routine urinalysis, were reviewed in the 43 patients. 26 of 43 cases showed abnormalities in urinalysis and renal function studies. 20 of 43 cases showed abnormal liver function studies, and 3 of these cases were later diagnosed as hepatorenal syndrome. 13 of those 20 cases had liver cirrhosis with or without hepatoma. 12 of 43 cases showed abnormalities in both renal and liver function studies. 2 of 43 cases showed diffusely increased bilateral renal uptake after chemotherapy for cancer but not on previous scans before chemotherapy. 2 of 43 cases showed hypercalcaemia, and 8 of 43 cases had multifocal bone uptake due to metastasis or benign bone lesions; the latter showed no hypercalcaemia at all. There was no significant correlation between increased renal uptake and MDP uptake in soft tissue other than the kidneys. This study raises the possibility that impaired liver and/or renal function may result in a diffuse increase in bilateral renal uptake of MDP by an unknown mechanism. Further study of this correlation is needed.

  15. St. John's wort significantly increased the systemic exposure and toxicity of methotrexate in rats

    International Nuclear Information System (INIS)

    Yang, Shih-Ying; Juang, Shin-Hun; Tsai, Shang-Yuan; Chao, Pei-Dawn Lee; Hou, Yu-Chi

    2012-01-01

    St. John's wort (SJW, Hypericum perforatum) is a popular nutraceutical for treating depression. Methotrexate (MTX) is an immunosuppressant with a narrow therapeutic window. This study investigated the effect of SJW on MTX pharmacokinetics in rats. Rats were orally given MTX alone or coadministered with 300 or 150 mg/kg of SJW, or 25 mg/kg of diclofenac. Blood was withdrawn at specific time points and serum MTX concentrations were assayed by a specific monoclonal fluorescence polarization immunoassay method. The results showed that 300 mg/kg of SJW significantly increased the AUC(0-t) and Cmax of MTX by 163% and 60%, respectively, and 150 mg/kg of SJW significantly increased the AUC(0-t) of MTX by 55%. In addition, diclofenac enhanced the Cmax of MTX by 110%. The mortality of rats treated with SJW was higher than that of controls. In conclusion, coadministration of SJW significantly increased the systemic exposure and toxicity of MTX; the combined use of MTX with SJW calls for caution. -- Highlights: ► St. John's wort significantly increased the AUC(0-t) and Cmax of methotrexate. ► Coadministration of St. John's wort increased the exposure and toxicity of methotrexate. ► The combined use of methotrexate with St. John's wort calls for caution.

  16. Statistical Power in Plant Pathology Research.

    Science.gov (United States)

    Gent, David H; Esker, Paul D; Kriss, Alissa B

    2018-01-01

    In null hypothesis testing, failure to reject a null hypothesis may have two potential interpretations. One interpretation is that the treatments being evaluated do not have a significant effect, and a correct conclusion was reached in the analysis. Alternatively, a treatment effect may have existed but the conclusion of the study was that there was none. This is termed a Type II error, which is most likely to occur when studies lack sufficient statistical power to detect a treatment effect. In basic terms, the power of a study is the ability to identify a true effect through a statistical test. The power of a statistical test is 1 - (the probability of Type II errors), and depends on the size of treatment effect (termed the effect size), variance, sample size, and significance criterion (the probability of a Type I error, α). Low statistical power is prevalent in scientific literature in general, including plant pathology. However, power is rarely reported, creating uncertainty in the interpretation of nonsignificant results and potentially underestimating small, yet biologically significant relationships. The appropriate level of power for a study depends on the impact of Type I versus Type II errors and no single level of power is acceptable for all purposes. Nonetheless, by convention 0.8 is often considered an acceptable threshold and studies with power less than 0.5 generally should not be conducted if the results are to be conclusive. The emphasis on power analysis should be in the planning stages of an experiment. Commonly employed strategies to increase power include increasing sample sizes, selecting a less stringent threshold probability for Type I errors, increasing the hypothesized or detectable effect size, including as few treatment groups as possible, reducing measurement variability, and including relevant covariates in analyses. Power analysis will lead to more efficient use of resources and more precisely structured hypotheses, and may even
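
    The determinants of power listed above (effect size, sample size, and the Type I error criterion) can be combined in a standard normal-approximation power formula. The sketch below is a generic illustration, not taken from the article; the example reproduces the familiar rule of thumb that about 64 subjects per group give roughly 0.8 power to detect a medium effect (d = 0.5) at α = 0.05.

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_quantile(p):
    """Standard normal quantile by bisection (sufficient for a sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized effect size d (normal approximation to the t-test)."""
    z_crit = z_quantile(1.0 - alpha / 2.0)
    ncp = d * math.sqrt(n_per_group / 2.0)  # noncentrality parameter
    return (1.0 - normal_cdf(z_crit - ncp)) + normal_cdf(-z_crit - ncp)

medium_effect_power = power_two_sample(d=0.5, n_per_group=64)   # ~0.8
small_study_power = power_two_sample(d=0.2, n_per_group=20)     # well below 0.5
```

    The second call illustrates the text's warning: a small study of a small effect has so little power that a nonsignificant result is nearly uninterpretable.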

  17. Inverse Statistics in the Foreign Exchange Market

    OpenAIRE

    Jensen, M. H.; Johansen, A.; Petroni, F.; Simonsen, I.

    2004-01-01

    We investigate intra-day foreign exchange (FX) time series using the inverse statistic analysis developed in [1,2]. Specifically, we study the time-averaged distributions of waiting times needed to obtain a certain increase (decrease) $\rho$ in the price of an investment. The analysis is performed for the Deutsche Mark (DM) against the US$ for the full year of 1998, but similar results are obtained for the Japanese Yen against the US$. With high statistical significance, the presence of "reson...

  18. THE SIGNIFICANCE OF INTERCULTURAL COMPETENCE IN CROSS-CULTURAL COMMUNICATION

    Directory of Open Access Journals (Sweden)

    Jadranka Zlomislić

    2016-12-01

    Full Text Available The aim of this study is to explore the influence of education and additional factors on students' awareness of intercultural differences. For the purposes of this research, an assessment was carried out of their role in promoting cultural awareness and in facing the cross-cultural challenges posed by unfamiliar cross-cultural contexts. Cultural education is presumed to be a key factor in achieving a significant increase in cultural sensitivity and cultural awareness, in order to ensure successful cross-cultural communication and increase the mobility of students and working professionals. For this study, it was assumed that the cultural awareness of students increases due to the courses they take and their overall study experience. A special questionnaire was developed for the purposes of this research, and the obtained results were statistically analyzed with the help of descriptive statistics, the non-parametric chi-square test, and the Mann-Whitney test. The research has shown that intercultural competence has a statistically significant positive effect on the readiness of students to participate in study and work programs abroad. Thus, it is mandatory that foreign language competence as well as intercultural competence be a priority of the curriculum if we are to increase the number of highly educated experts who will be capable of competing successfully as students or professionals in all fields and all cultural areas. If we recognize that globalization has made the world a global village, we all need intercultural competence to live in it successfully.

  19. Perceived Statistical Knowledge Level and Self-Reported Statistical Practice Among Academic Psychologists

    Directory of Open Access Journals (Sweden)

    Laura Badenes-Ribera

    2018-06-01

    Full Text Available Introduction: Publications arguing against the null hypothesis significance testing (NHST) procedure and in favor of good statistical practices have increased. The most frequently mentioned alternatives to NHST are effect size statistics (ES), confidence intervals (CIs), and meta-analyses. A recent survey conducted in Spain found that academic psychologists have poor knowledge about effect size statistics, confidence intervals, and graphic displays for meta-analyses, which might lead to a misinterpretation of the results. In addition, it also found that, although the use of ES is becoming generalized, the same thing is not true for CIs. Finally, academics with greater knowledge about ES statistics presented a profile closer to good statistical practice and research design. Our main purpose was to analyze the extension of these results to a different geographical area through a replication study. Methods: For this purpose, we elaborated an on-line survey that included the same items as the original research, and we asked academic psychologists to indicate their level of knowledge about ES, their CIs, and meta-analyses, and how they use them. The sample consisted of 159 Italian academic psychologists (54.09% women, mean age of 47.65 years). The mean number of years in the position of professor was 12.90 (SD = 10.21). Results: As in the original research, the results showed that, although the use of effect size estimates is becoming generalized, an under-reporting of CIs for ES persists. The most frequent ES statistics mentioned were Cohen's d and R2/η2, which can have outliers or show non-normality or violate statistical assumptions. In addition, academics showed poor knowledge about meta-analytic displays (e.g., forest plot and funnel plot) and quality checklists for studies. Finally, academics with higher-level knowledge about ES statistics seem to have a profile closer to good statistical practices. Conclusions: Changing statistical practice is not

  20. Control cards as a statistical quality control resource

    Directory of Open Access Journals (Sweden)

    Aleksandar Živan Drenovac

    2013-02-01

    Full Text Available This paper shows that the application of statistical methods can contribute significantly to increasing the quality of products and services, as well as to raising an institution's rating. The determination of optimal values, i.e. of warning and control limits, is based on the statistical analysis of samples. Control cards (control charts) are a very reliable instrument, simple to use and efficient for process control, by which a process is kept within set limits. Thus, control cards can be applied to quality control of weapons and military equipment production processes and the maintenance of technical systems, as well as to setting standards and raising the quality level of many other activities.
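As a sketch of the idea, the warning and limit values of a Shewhart-type control chart are the center line and the 3-sigma control limits estimated from a sample; the measurements below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical sample measurements from a monitored process (e.g., a dimension in mm)
samples = np.array([10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0, 9.9, 10.3, 10.0])

center = samples.mean()
sigma = samples.std(ddof=1)

# Classic Shewhart 3-sigma control limits
ucl = center + 3 * sigma   # upper control limit
lcl = center - 3 * sigma   # lower control limit

# Points outside the limits signal that the process has left its "set borders"
out_of_control = samples[(samples > ucl) | (samples < lcl)]
print(f"CL={center:.3f}, UCL={ucl:.3f}, LCL={lcl:.3f}, violations={len(out_of_control)}")
```

In practice the limits are estimated from an in-control reference sample and then applied to future observations.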

  1. Determining coding CpG islands by identifying regions significant for pattern statistics on Markov chains.

    Science.gov (United States)

    Singer, Meromit; Engström, Alexander; Schönhuth, Alexander; Pachter, Lior

    2011-09-23

    Recent experimental and computational work confirms that CpGs can be unmethylated inside coding exons, thereby showing that codons may be subjected to both genomic and epigenomic constraint. It is therefore of interest to identify coding CpG islands (CCGIs) that are regions inside exons enriched for CpGs. The difficulty in identifying such islands is that coding exons exhibit sequence biases determined by codon usage and constraints that must be taken into account. We present a method for finding CCGIs that showcases a novel approach we have developed for identifying regions of interest that are significant (with respect to a Markov chain) for the counts of any pattern. Our method begins with the exact computation of tail probabilities for the number of CpGs in all regions contained in coding exons, and then applies a greedy algorithm for selecting islands from among the regions. We show that the greedy algorithm provably optimizes a biologically motivated criterion for selecting islands while controlling the false discovery rate. We applied this approach to the human genome (hg18) and annotated CpG islands in coding exons. The statistical criterion we apply to evaluating islands reduces the number of false positives in existing annotations, while our approach to defining islands reveals significant numbers of undiscovered CCGIs in coding exons. Many of these appear to be examples of functional epigenetic specialization in coding exons.

  2. Continuous background light significantly increases flashing-light enhancement of photosynthesis and growth of microalgae.

    Science.gov (United States)

    Abu-Ghosh, Said; Fixler, Dror; Dubinsky, Zvy; Iluz, David

    2015-01-01

    Under specific conditions, flashing light enhances the photosynthesis rate in comparison to continuous illumination. Here we show that a combination of flashing light and continuous background light with the same integrated photon dose as continuous or flashing light alone can be used to significantly enhance photosynthesis and increase microalgae growth. To test this hypothesis, the green microalga Dunaliella salina was exposed to three different light regimes: continuous light, flashing light, and concomitant application of both. Algal growth was compared under three different integrated light quantities: low, intermediate, and moderately high. Under the combined light regime, there was a substantial increase in all algal growth parameters, with an enhanced photosynthesis rate, within 3 days. Our strategy demonstrates a hitherto undescribed significant increase in photosynthesis and algal growth rates, which is beyond the increase by flashing light alone. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Confounding and Statistical Significance of Indirect Effects: Childhood Adversity, Education, Smoking, and Anxious and Depressive Symptomatology

    Directory of Open Access Journals (Sweden)

    Mashhood Ahmed Sheikh

    2017-08-01

    Full Text Available The life course perspective, the risky families model, and stress-and-coping models provide the rationale for assessing the role of smoking as a mediator in the association between childhood adversity and anxious and depressive symptomatology (ADS) in adulthood. However, no previous study has assessed the independent mediating role of smoking in the association between childhood adversity and ADS in adulthood. Moreover, the importance of mediator-response confounding variables has rarely been demonstrated empirically in social and psychiatric epidemiology. The aim of this paper was to (i) assess the mediating role of smoking in adulthood in the association between childhood adversity and ADS in adulthood, and (ii) assess the change in estimates due to different mediator-response confounding factors (education, alcohol intake, and social support). The present analysis used data collected from 1994 to 2008 within the framework of the Tromsø Study (N = 4,530), a representative prospective cohort study of men and women. Seven childhood adversities (low mother's education, low father's education, low financial conditions, exposure to passive smoke, psychological abuse, physical abuse, and substance abuse distress) were used to create a childhood adversity score. Smoking status was measured at a mean age of 54.7 years (Tromsø IV), and ADS in adulthood was measured at a mean age of 61.7 years (Tromsø V). Mediation analysis was used to assess the indirect effect and the proportion of mediated effect (%) of childhood adversity on ADS in adulthood via smoking in adulthood. The test-retest reliability of smoking was good (Kappa: 0.67, 95% CI: 0.63; 0.71) in this sample. Childhood adversity was associated with a 10% increased risk of smoking in adulthood (Relative risk: 1.10, 95% CI: 1.03; 1.18), and both childhood adversity and smoking in adulthood were associated with greater levels of ADS in adulthood (p < 0.001). Smoking in adulthood did not significantly
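The indirect effect estimated in such a mediation analysis is commonly the product of the exposure-to-mediator coefficient (path a) and the mediator-to-outcome coefficient adjusted for exposure (path b). A minimal sketch on simulated continuous data; the study's actual variables are partly categorical and its models more elaborate.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
# Simulated stand-ins: adversity score x, mediator m (smoking), outcome y (ADS)
x = rng.normal(size=n)
m = 0.3 * x + rng.normal(size=n)            # true path a = 0.3
y = 0.5 * x + 0.4 * m + rng.normal(size=n)  # true paths c' = 0.5, b = 0.4

# Path a: regress m on x; paths c' and b: regress y on x and m jointly
a = np.linalg.lstsq(np.c_[np.ones(n), x], m, rcond=None)[0][1]
coef = np.linalg.lstsq(np.c_[np.ones(n), x, m], y, rcond=None)[0]
c_prime, b = coef[1], coef[2]

indirect = a * b                        # indirect (mediated) effect
total = c_prime + indirect              # total effect
proportion_mediated = indirect / total  # the "% mediated" reported in such studies
print(f"indirect = {indirect:.3f}, proportion mediated = {proportion_mediated:.1%}")
```

With the coefficients above, the true indirect effect is 0.3 × 0.4 = 0.12 and the true proportion mediated is 0.12 / 0.62 ≈ 19%.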

  4. Chemometric and Statistical Analyses of ToF-SIMS Spectra of Increasingly Complex Biological Samples

    Energy Technology Data Exchange (ETDEWEB)

    Berman, E S; Wu, L; Fortson, S L; Nelson, D O; Kulp, K S; Wu, K J

    2007-10-24

    Characterizing and classifying molecular variation within biological samples is critical for determining fundamental mechanisms of biological processes that will lead to new insights including improved disease understanding. Towards these ends, time-of-flight secondary ion mass spectrometry (ToF-SIMS) was used to examine increasingly complex samples of biological relevance, including monosaccharide isomers, pure proteins, complex protein mixtures, and mouse embryo tissues. The complex mass spectral data sets produced were analyzed using five common statistical and chemometric multivariate analysis techniques: principal component analysis (PCA), linear discriminant analysis (LDA), partial least squares discriminant analysis (PLSDA), soft independent modeling of class analogy (SIMCA), and decision tree analysis by recursive partitioning. PCA was found to be a valuable first step in multivariate analysis, providing insight both into the relative groupings of samples and into the molecular basis for those groupings. For the monosaccharides, pure proteins and protein mixture samples, all of LDA, PLSDA, and SIMCA were found to produce excellent classification given a sufficient number of compound variables calculated. For the mouse embryo tissues, however, SIMCA did not produce as accurate a classification. The decision tree analysis was found to be the least successful for all the data sets, providing neither as accurate a classification nor chemical insight for any of the tested samples. Based on these results we conclude that as the complexity of the sample increases, so must the sophistication of the multivariate technique used to classify the samples. PCA is a preferred first step for understanding ToF-SIMS data that can be followed by either LDA or PLSDA for effective classification analysis. This study demonstrates the strength of ToF-SIMS combined with multivariate statistical and chemometric techniques to classify increasingly complex biological samples
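PCA, the recommended first step above, can be carried out with a plain SVD of the mean-centered data matrix; the "spectra" below are synthetic stand-ins, not ToF-SIMS data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for mass spectra: 20 samples x 50 peak intensities,
# with a second class shifted along a few channels
X = rng.normal(size=(20, 50))
X[10:, :5] += 3.0  # second class has elevated intensity in the first 5 channels

# PCA via SVD of the mean-centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                      # sample coordinates on the principal components
explained = S**2 / np.sum(S**2)     # fraction of variance per component

# The loading vector Vt[0] indicates which channels drive the grouping,
# giving the "molecular basis" insight described above
print(f"PC1 explains {explained[0]:.1%} of the variance")
```

Classification methods such as LDA or PLS-DA would then typically be fit on the leading score columns rather than on the raw channels.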

  5. Statistical significance approximation in local trend analysis of high-throughput time-series data using the theory of Markov chains.

    Science.gov (United States)

    Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu

    2015-09-21

    Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its applications to high-throughput time series data analysis, e.g., data from next-generation sequencing-based studies. By extending the theories for the tail probability of the range of sum of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps) in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large scale all-versus-all comparisons possible. We also propose a hybrid approach by integrating the approximation and permutations to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data and found interesting organism co-occurrence dynamic patterns. The software tool is integrated into the eLSA software package that now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.

  6. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    OpenAIRE

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is clear that Fisher’s statistic is more sensitive to smaller p-values than to larger ones, so a single small p-value may overrule the other p-values and decide the test result. This is, in some cases, viewed as a flaw. In order to overcome this flaw and improve the power of the test, the joint tail probability of a set of p-values is proposed as a ...
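Fisher's method itself is simple to state: under the null, the statistic −2 Σ ln p_i follows a chi-square distribution with 2k degrees of freedom. The sketch below, with made-up p-values, also reproduces the flaw discussed above: one extreme p-value among otherwise null results yields a smaller combined p-value than four moderately significant results.

```python
import math
from scipy.stats import chi2

def fisher_combined(pvalues):
    """Fisher's method: -2 * sum(ln p_i) ~ chi-square with 2k d.o.f. under H0."""
    k = len(pvalues)
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    return stat, chi2.sf(stat, df=2 * k)

# One tiny p-value dominates the combined result ...
stat1, p1 = fisher_combined([1e-6, 0.9, 0.9, 0.9])
# ... beating four consistently moderate p-values
stat2, p2 = fisher_combined([0.04, 0.04, 0.04, 0.04])
print(f"dominated by one small p: {p1:.4g};  four moderate p's: {p2:.4g}")
```

SciPy also provides `scipy.stats.combine_pvalues`, which implements Fisher's method along with several alternatives.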

  7. Increase in Utilization of Afterhours Medical Imaging: A Study of Three Canadian Academic Centers.

    Science.gov (United States)

    Chaudhry, Shivani; Dhalla, Irfan; Lebovic, Gerald; Rogalla, Patrik; Dowdell, Timothy

    2015-11-01

    The objectives of our study were to assess trends in afterhours medical imaging utilization for emergency department (ED) and inpatient (IP) patient populations from 2006-2013, including analysis by modality and specialty and with adjustment for patient volume. For this retrospective study, we reviewed the number of CT, MRI, and ultrasound studies performed for ED and IP patients during the afterhours time period (5 pm to 8 am on weekdays and 24 hours on weekends and statutory holidays) from 2006-2013 at three different Canadian academic hospitals. We used the Jonckheere-Terpstra (JT) test to determine the statistical significance of imaging and patient volume trends. A regression model was used to examine whether there was an increasing trend over time in the volume of imaging tests per 1000 patients. For all three sites from 2006-2013 during the afterhours time period, there was a statistically significant increasing trend in total medical imaging volume, which also held true when the volumes were assessed by modality and by specialty, and there was a statistically significant increasing trend in ED and IP patient volume. When medical imaging volumes were adjusted for patient volumes, there was a statistically significant increasing trend in imaging performed per patient. Afterhours medical imaging volumes demonstrated a statistically significant increasing trend at all three sites from 2006-2013 when assessed by total volume, modality, and specialty. During the same time period and at all three sites, ED and IP patient volumes also demonstrated a statistically significant increasing trend, with more medical imaging, however, being performed per patient. Copyright © 2015 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
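The Jonckheere-Terpstra test used here targets an ordered alternative (volumes rising year over year). A minimal sketch with invented yearly counts, using a permutation null rather than the exact or asymptotic distribution:

```python
import numpy as np

def jonckheere_terpstra(groups):
    """JT statistic for an ordered alternative: over all ordered group pairs,
    count pairs (x in earlier group, y in later group) with x < y (+0.5 per tie)."""
    stat = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            x = np.asarray(groups[i])[:, None]
            y = np.asarray(groups[j])[None, :]
            stat += np.sum(x < y) + 0.5 * np.sum(x == y)
    return stat

# Hypothetical afterhours imaging counts for three consecutive years
years = [[110, 120, 115], [130, 125, 140], [150, 145, 160]]
jt = jonckheere_terpstra(years)

# Permutation null: shuffle all values across years and recompute
rng = np.random.default_rng(1)
pooled = np.concatenate([np.asarray(g, dtype=float) for g in years])
sizes = [len(g) for g in years]
n_perm, count = 2000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    parts = np.split(pooled, np.cumsum(sizes)[:-1])
    if jonckheere_terpstra(parts) >= jt:
        count += 1
p_value = (count + 1) / (n_perm + 1)
print(f"JT = {jt}, permutation p ≈ {p_value:.4f}")
```

With these counts every later-year value exceeds every earlier-year value, so JT attains its maximum of 27 and the permutation p-value is small.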

  8. Baseline Statistics of Linked Statistical Data

    NARCIS (Netherlands)

    Scharnhorst, Andrea; Meroño-Peñuela, Albert; Guéret, Christophe

    2014-01-01

    We are surrounded by an ever-increasing ocean of information; everybody will agree to that. We build sophisticated strategies to govern this information: designing data models, developing infrastructures for data sharing, building tools for data analysis. Statistical datasets curated by National

  9. IGESS: a statistical approach to integrating individual-level genotype data and summary statistics in genome-wide association studies.

    Science.gov (United States)

    Dai, Mingwei; Ming, Jingsi; Cai, Mingxuan; Liu, Jin; Yang, Can; Wan, Xiang; Xu, Zongben

    2017-09-15

    Results from genome-wide association studies (GWAS) suggest that a complex phenotype is often affected by many variants with small effects, known as 'polygenicity'. Tens of thousands of samples are often required to ensure statistical power of identifying these variants with small effects. However, it is often the case that a research group can only get approval for the access to individual-level genotype data with a limited sample size (e.g., a few hundred or a few thousand). Meanwhile, summary statistics generated using single-variant-based analysis are becoming publicly available. The sample sizes associated with the summary statistics datasets are usually quite large. How to make the most efficient use of existing abundant data resources largely remains an open question. In this study, we propose a statistical approach, IGESS, to increasing statistical power of identifying risk variants and improving accuracy of risk prediction by integrating individual-level genotype data and summary statistics. An efficient algorithm based on variational inference is developed to handle the genome-wide analysis. Through comprehensive simulation studies, we demonstrated the advantages of IGESS over the methods which take either individual-level data or summary statistics data as input. We applied IGESS to perform integrative analysis of Crohn's disease from WTCCC and summary statistics from other studies. IGESS was able to significantly increase the statistical power of identifying risk variants and improve the risk prediction accuracy from 63.2% (±0.4%) to 69.4% (±0.1%) using about 240 000 variants. The IGESS software is available at https://github.com/daviddaigithub/IGESS . zbxu@xjtu.edu.cn or xwan@comp.hkbu.edu.hk or eeyang@hkbu.edu.hk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  10. New scanning technique using Adaptive Statistical Iterative Reconstruction (ASIR) significantly reduced the radiation dose of cardiac CT.

    Science.gov (United States)

    Tumur, Odgerel; Soon, Kean; Brown, Fraser; Mykytowycz, Marcus

    2013-06-01

    The aims of our study were to evaluate the effect of application of the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm on the radiation dose of coronary computed tomography angiography (CCTA) and its effects on the image quality of CCTA, and to evaluate the effects of various patient and CT scanning factors on the radiation dose of CCTA. This was a retrospective study that included 347 consecutive patients who underwent CCTA at a tertiary university teaching hospital between 1 July 2009 and 20 September 2011. Analysis was performed comparing patient demographics, scan characteristics, radiation dose and image quality in two groups of patients in whom conventional Filtered Back Projection (FBP) or ASIR was used for image reconstruction. There were 238 patients in the FBP group and 109 patients in the ASIR group. There was no difference between the groups in the use of prospective gating, scan length or tube voltage. In the ASIR group, a significantly lower tube current was used compared with the FBP group, 550 mA (450-600) vs. 650 mA (500-711.25) (median (interquartile range)), respectively. The radiation dose was significantly lower in the ASIR group compared with the FBP group, 4.29 mSv (2.84-6.02) vs. 5.84 mSv (3.88-8.39) (median (interquartile range)), respectively. ASIR was associated with increased image noise compared with FBP, 39.93 ± 10.22 vs. 37.63 ± 18.79 (mean ± standard deviation), respectively. ASIR reduces the radiation dose of CCTA without affecting the image quality. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.

  11. Significant increase of surface ozone at a rural site, north of eastern China

    Directory of Open Access Journals (Sweden)

    Z. Ma

    2016-03-01

    Full Text Available Ozone pollution in eastern China has become one of the top environmental issues. Quantifying the temporal trend of surface ozone helps to assess the impacts of the anthropogenic precursor reductions and the likely effects of emission control strategies implemented. In this paper, ozone data collected at the Shangdianzi (SDZ) regional atmospheric background station from 2003 to 2015 are presented and analyzed to obtain the variation in the trend of surface ozone in the most polluted region of China, north of eastern China or the North China Plain. A modified Kolmogorov–Zurbenko (KZ) filter method was performed on the maximum daily average 8 h (MDA8) concentrations of ozone to separate the contributions of different factors from the variation of surface ozone and remove the influence of meteorological fluctuations on surface ozone. Results reveal that the short-term, seasonal and long-term components of ozone account for 36.4, 57.6 and 2.2 % of the total variance, respectively. The long-term trend indicates that the MDA8 has undergone a significant increase in the period of 2003–2015, with an average rate of 1.13 ± 0.01 ppb year−1 (R2 = 0.92). It is found that meteorological factors did not significantly influence the long-term variation of ozone and the increase may be completely attributed to changes in emissions. Furthermore, there is no significant correlation between the long-term O3 and NO2 trends. This study suggests that emission changes in VOCs might have played a more important role in the observed increase of surface ozone at SDZ.
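The KZ(m, k) filter at the heart of this decomposition is simply an m-point centered moving average iterated k times; short-term, seasonal and long-term components are obtained by filtering at different window lengths and differencing. A sketch on synthetic data (not the SDZ record):

```python
import numpy as np

def kz_filter(x, window, iterations):
    """Kolmogorov-Zurbenko filter: an m-point centered moving average iterated k times.
    np.convolve(mode='same') zero-pads, so values near the series edges are biased;
    a full implementation handles the boundaries explicitly."""
    kernel = np.ones(window) / window
    for _ in range(iterations):
        x = np.convolve(x, kernel, mode="same")
    return x

# Synthetic daily "MDA8 ozone": linear trend + annual cycle + noise, 5 years
rng = np.random.default_rng(2)
t = np.arange(5 * 365)
series = 40 + 0.01 * t + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 3, t.size)

# KZ(15,5) suppresses the short-term (synoptic) component;
# KZ(365,3) isolates the long-term trend
baseline = kz_filter(series, 15, 5)
long_term = kz_filter(series, 365, 3)
```

The short-term component is then `series - baseline` and the seasonal component `baseline - long_term`, mirroring the variance decomposition quoted in the abstract.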

  12. After statistics reform : Should we still teach significance testing?

    NARCIS (Netherlands)

    A. Hak (Tony)

    2014-01-01

    In the longer term null hypothesis significance testing (NHST) will disappear because p-values are not informative and not replicable. Should we, in the future, continue to teach the procedures of these then-abolished routines (i.e., NHST)? Three arguments are discussed for not teaching NHST in

  13. Worry, Intolerance of Uncertainty, and Statistics Anxiety

    Science.gov (United States)

    Williams, Amanda S.

    2013-01-01

    Statistics anxiety is a problem for most graduate students. This study investigates the relationship between intolerance of uncertainty, worry, and statistics anxiety. Intolerance of uncertainty was significantly related to worry, and worry was significantly related to three types of statistics anxiety. Six types of statistics anxiety were…

  14. Reducing statistics anxiety and enhancing statistics learning achievement: effectiveness of a one-minute strategy.

    Science.gov (United States)

    Chiou, Chei-Chang; Wang, Yu-Min; Lee, Li-Tze

    2014-08-01

    Statistical knowledge is widely used in academia; however, statistics teachers struggle with the issue of how to reduce students' statistics anxiety and enhance students' statistics learning. This study assesses the effectiveness of a "one-minute paper strategy" in reducing students' statistics-related anxiety and in improving students' statistics-related achievement. Participants were 77 undergraduates from two classes enrolled in applied statistics courses. An experiment was implemented according to a pretest/posttest comparison group design. The quasi-experimental design showed that the one-minute paper strategy significantly reduced students' statistics anxiety and improved students' statistics learning achievement. The strategy was a better instructional tool than the textbook exercise for reducing students' statistics anxiety and improving students' statistics achievement.

  15. Evaluation of significantly modified water bodies in Vojvodina by using multivariate statistical techniques

    Directory of Open Access Journals (Sweden)

    Vujović Svetlana R.

    2013-01-01

    Full Text Available This paper illustrates the utility of multivariate statistical techniques for the analysis and interpretation of water quality data sets and the identification of pollution sources/factors, with a view to getting better information about water quality and the design of a monitoring network for effective management of water resources. Multivariate statistical techniques, such as factor analysis (FA)/principal component analysis (PCA) and cluster analysis (CA), were applied for the evaluation of variations and for the interpretation of a water quality data set of the natural water bodies obtained during the 2010 monitoring year of 13 parameters at 33 different sites. FA/PCA attempts to explain the correlations between the observations in terms of the underlying factors, which are not directly observable. Factor analysis is applied to physico-chemical parameters of natural water bodies with the aim of classification and data summation, as well as segmentation of heterogeneous data sets into smaller homogeneous subsets. Factor loadings were categorized as strong and moderate, corresponding to absolute loading values of >0.75 and 0.75-0.50, respectively. Four principal factors were obtained with eigenvalues >1, summing to more than 78 % of the total variance in the water data sets, which is adequate to give good prior information regarding data structure. Each factor that is significantly related to specific variables represents a different dimension of water quality. The first factor, F1, accounts for 28 % of the total variance and represents the hydrochemical dimension of water quality. The second factor, F2, accounts for 18 % of the total variance and may be taken as a factor of water eutrophication. The third factor, F3, accounts for 17 % of the total variance and represents the influence of point sources of pollution on water quality. The fourth factor, F4, accounts for 13 % of the total variance and may be taken as an ecological dimension of water quality. Cluster analysis (CA) is an

  16. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    Science.gov (United States)

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier

  17. Effects of quantum coherence on work statistics

    Science.gov (United States)

    Xu, Bao-Ming; Zou, Jian; Guo, Li-Sha; Kong, Xiang-Mu

    2018-05-01

    In the conventional two-point measurement scheme of quantum thermodynamics, quantum coherence is destroyed by the first measurement. But as we know the coherence really plays an important role in the quantum thermodynamics process, and how to describe the work statistics for a quantum coherent process is still an open question. In this paper, we use the full counting statistics method to investigate the effects of quantum coherence on work statistics. First, we give a general discussion and show that for a quantum coherent process, work statistics is very different from that of the two-point measurement scheme, specifically the average work is increased or decreased and the work fluctuation can be decreased by quantum coherence, which strongly depends on the relative phase, the energy level structure, and the external protocol. Then, we concretely consider a quenched one-dimensional transverse Ising model and show that quantum coherence has a more significant influence on work statistics in the ferromagnetism regime compared with that in the paramagnetism regime, so that due to the presence of quantum coherence the work statistics can exhibit the critical phenomenon even at high temperature.

  18. A novel complete-case analysis to determine statistical significance between treatments in an intention-to-treat population of randomized clinical trials involving missing data.

    Science.gov (United States)

    Liu, Wei; Ding, Jinhui

    2018-04-01

    The application of the principle of the intention-to-treat (ITT) to the analysis of clinical trials is challenged in the presence of missing outcome data. The consequences of stopping an assigned treatment in a withdrawn subject are unknown. It is difficult to make a single assumption about missing mechanisms for all clinical trials because there are complicated reactions in the human body to drugs due to the presence of complex biological networks, leading to data missing randomly or non-randomly. Currently there is no statistical method that can tell whether a difference between two treatments in the ITT population of a randomized clinical trial with missing data is significant at a pre-specified level. Making no assumptions about the missing mechanisms, we propose a generalized complete-case (GCC) analysis based on the data of completers. An evaluation of the impact of missing data on the ITT analysis reveals that a statistically significant GCC result implies a significant treatment effect in the ITT population at a pre-specified significance level unless, relative to the comparator, the test drug is poisonous to the non-completers as documented in their medical records. Applications of the GCC analysis are illustrated using literature data, and its properties and limits are discussed.

  19. Use of statistical procedures in Brazilian and international dental journals.

    Science.gov (United States)

    Ambrosano, Gláucia Maria Bovi; Reis, André Figueiredo; Giannini, Marcelo; Pereira, Antônio Carlos

    2004-01-01

    A descriptive survey was performed in order to assess the statistical content and quality of Brazilian and international dental journals, and compare their evolution throughout the last decades. The authors identified the reporting and accuracy of statistical techniques in 1000 papers published from 1970 to 2000 in seven dental journals: three Brazilian (Brazilian Dental Journal, Revista de Odontologia da Universidade de Sao Paulo and Revista de Odontologia da UNESP) and four international journals (Journal of the American Dental Association, Journal of Dental Research, Caries Research and Journal of Periodontology). Papers were divided into two time periods: from 1970 to 1989, and from 1990 to 2000. A slight increase in the number of articles that presented some form of statistical technique was noticed for Brazilian journals (from 61.0 to 66.7%), whereas for international journals, a significant increase was observed (65.8 to 92.6%). In addition, a decrease in the number of statistical errors was verified. The most commonly used statistical tests as well as the most frequent errors found in dental journals were assessed. Hopefully, this investigation will encourage dental educators to better plan the teaching of biostatistics, and to improve the statistical quality of submitted manuscripts.

  20. REANALYSIS OF F-STATISTIC GRAVITATIONAL-WAVE SEARCHES WITH THE HIGHER CRITICISM STATISTIC

    International Nuclear Information System (INIS)

    Bennett, M. F.; Melatos, A.; Delaigle, A.; Hall, P.

    2013-01-01

    We propose a new method of gravitational-wave detection using a modified form of higher criticism, a statistical technique introduced by Donoho and Jin. Higher criticism is designed to detect a group of sparse, weak sources, none of which are strong enough to be reliably estimated or detected individually. We apply higher criticism as a second-pass method to synthetic F-statistic and C-statistic data for a monochromatic periodic source in a binary system and quantify the improvement relative to the first-pass methods. We find that higher criticism on C-statistic data is ∼6% more sensitive than the C-statistic alone under optimal conditions (i.e., binary orbit known exactly), and the relative advantage increases as the error in the orbital parameters increases. Higher criticism is robust even when the source is not monochromatic (e.g., phase-wandering in an accreting system). Applying higher criticism to a phase-wandering source over multiple time intervals gives a ≳30% increase in detectability with few assumptions about the frequency evolution. By contrast, in all-sky searches for unknown periodic sources, which are dominated by the brightest source, second-pass higher criticism does not provide any benefits over a first-pass search.
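    The core of the method is easy to state concretely. Below is a minimal pure-Python sketch of the Donoho–Jin higher criticism statistic, using the conventional restriction to the smallest half of the ordered p-values; the function name and details are illustrative, not the authors' implementation:

```python
import math

def higher_criticism(pvalues):
    """Donoho-Jin higher criticism statistic for a collection of p-values.

    HC = max over the smallest half of the ordered p-values p_(i) of
         sqrt(N) * (i/N - p_(i)) / sqrt(p_(i) * (1 - p_(i))).
    A large HC signals a group of sparse, weak effects even when no single
    p-value is individually convincing.
    """
    p = sorted(pvalues)
    n = len(p)
    hc = float("-inf")
    for i, pi in enumerate(p, start=1):
        if i > n // 2:        # conventional restriction to the smallest half
            break
        if pi <= 0.0 or pi >= 1.0:
            continue          # avoid division by zero at the extremes
        z = math.sqrt(n) * (i / n - pi) / math.sqrt(pi * (1.0 - pi))
        hc = max(hc, z)
    return hc
```

    In a second-pass search, the p-values would come from the per-template F-statistic or C-statistic values of the first pass, and the observed HC would be compared against its null distribution obtained from noise-only data.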

  1. St. John's wort significantly increased the systemic exposure and toxicity of methotrexate in rats

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Shih-Ying [Graduate Institute of Pharmaceutical Chemistry, China Medical University, Taichung, Taiwan (China); Juang, Shin-Hun [Graduate Institute of Pharmaceutical Chemistry, China Medical University, Taichung, Taiwan (China); Department of Medical Research, China Medical University Hospital, Taichung, Taiwan (China); Tsai, Shang-Yuan; Chao, Pei-Dawn Lee [School of Pharmacy, China Medical University, Taichung, Taiwan (China); Hou, Yu-Chi, E-mail: hou5133@gmail.com [School of Pharmacy, China Medical University, Taichung, Taiwan (China); Department of Medical Research, China Medical University Hospital, Taichung, Taiwan (China)

    2012-08-15

    St. John's wort (SJW, Hypericum perforatum) is one of the popular nutraceuticals for treating depression. Methotrexate (MTX) is an immunosuppressant with a narrow therapeutic window. This study investigated the effect of SJW on MTX pharmacokinetics in rats. Rats were orally given MTX alone, or coadministered with 300 or 150 mg/kg of SJW, or with 25 mg/kg of diclofenac. Blood was withdrawn at specific time points and serum MTX concentrations were assayed by a specific monoclonal fluorescence polarization immunoassay method. The results showed that 300 mg/kg of SJW significantly increased the AUC{sub 0−t} and C{sub max} of MTX by 163% and 60%, respectively, and 150 mg/kg of SJW significantly increased the AUC{sub 0−t} of MTX by 55%. In addition, diclofenac enhanced the C{sub max} of MTX by 110%. The mortality of rats treated with SJW was higher than that of controls. In conclusion, coadministration of SJW significantly increased the systemic exposure and toxicity of MTX. The combined use of MTX with SJW should therefore be approached with caution. -- Highlights: ► St. John's wort significantly increased the AUC{sub 0−t} and C{sub max} of methotrexate. ► Coadministration of St. John's wort increased the exposure and toxicity of methotrexate. ► The combined use of methotrexate with St. John's wort should be approached with caution.

  2. New scanning technique using Adaptive Statistical Iterative Reconstruction (ASIR) significantly reduced the radiation dose of cardiac CT

    International Nuclear Information System (INIS)

    Tumur, Odgerel; Soon, Kean; Brown, Fraser; Mykytowycz, Marcus

    2013-01-01

    The aims of our study were to evaluate the effect of application of the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm on the radiation dose of coronary computed tomography angiography (CCTA) and its effects on image quality of CCTA, and to evaluate the effects of various patient and CT scanning factors on the radiation dose of CCTA. This was a retrospective study that included 347 consecutive patients who underwent CCTA at a tertiary university teaching hospital between 1 July 2009 and 20 September 2011. Analysis was performed comparing patient demographics, scan characteristics, radiation dose and image quality in two groups of patients in whom conventional Filtered Back Projection (FBP) or ASIR was used for image reconstruction. There were 238 patients in the FBP group and 109 patients in the ASIR group. There was no difference between the groups in the use of prospective gating, scan length or tube voltage. In the ASIR group, a significantly lower tube current was used compared with the FBP group: 550 mA (450–600) vs. 650 mA (500–711.25) (median (interquartile range)), respectively, P<0.001. There was a 27% effective radiation dose reduction in the ASIR group compared with the FBP group: 4.29 mSv (2.84–6.02) vs. 5.84 mSv (3.88–8.39) (median (interquartile range)), respectively, P<0.001. Although ASIR was associated with increased image noise compared with FBP (39.93±10.22 vs. 37.63±18.79 (mean ± standard deviation), respectively, P<0.001), it did not affect the signal intensity, signal-to-noise ratio, contrast-to-noise ratio or the diagnostic quality of CCTA. Application of ASIR reduces the radiation dose of CCTA without affecting image quality.

  3. Statistics Clinic

    Science.gov (United States)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  4. Robust statistics and geochemical data analysis

    International Nuclear Information System (INIS)

    Di, Z.

    1987-01-01

    Advantages of robust procedures over ordinary least-squares procedures in geochemical data analysis are demonstrated using NURE data from the Hot Springs Quadrangle, South Dakota, USA. Robust principal components analysis with 5% multivariate trimming successfully guarded the analysis against perturbations by outliers and increased the number of interpretable factors. Regression with SINE estimates significantly increased the goodness-of-fit of the regression and improved the correspondence of delineated anomalies with known uranium prospects. Because of the ubiquitous existence of outliers in geochemical data, robust statistical procedures are suggested as routine procedures to replace ordinary least-squares procedures.

  5. Evolutionary Statistical Procedures

    CERN Document Server

    Baragona, Roberto; Poli, Irene

    2011-01-01

    This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions a...

  6. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: an SPSS method to analyze univariate data.

    Science.gov (United States)

    Maric, Marija; de Haan, Else; Hogendoorn, Sanne M; Wolters, Lidewij H; Huizenga, Hilde M

    2015-03-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty of applying existing statistical procedures. In this article, we describe a data-analytic method to analyze univariate (i.e., one symptom) single-case data using the common package SPSS. This method can help the clinical researcher to investigate whether an intervention works as compared with a baseline period or another intervention type, and to determine whether symptom improvement is clinically significant. First, we describe the statistical method in a conceptual way and show how it can be implemented in SPSS. Simulation studies were performed to determine the number of observation points required per intervention phase. Second, to illustrate this method and its implications, we present a case study of an adolescent with anxiety disorders treated with cognitive-behavioral therapy techniques in an outpatient psychotherapy clinic, whose symptoms were regularly assessed before each session. We provide a description of the data analyses and results of this case study. Finally, we discuss the advantages and shortcomings of the proposed method. Copyright © 2014. Published by Elsevier Ltd.
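    The abstract does not reproduce the SPSS procedure itself. As a rough stand-alone illustration of the underlying idea (comparing a baseline phase against an intervention phase in one client's repeated symptom scores), a simple permutation test can be sketched in plain Python; note that a plain permutation test ignores the autocorrelation that single-case data often exhibit, which dedicated single-case methods are designed to address:

```python
import random
from statistics import mean

def phase_permutation_test(baseline, intervention, n_perm=2000, seed=1):
    """Permutation test for a mean shift between a baseline (A) phase and an
    intervention (B) phase of one client's symptom scores.

    Returns the observed mean difference (A - B) and a two-sided p-value
    estimated by randomly reassigning scores to phases.
    """
    rng = random.Random(seed)
    observed = mean(baseline) - mean(intervention)
    pooled = list(baseline) + list(intervention)
    n_a = len(baseline)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = mean(pooled[:n_a]) - mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            hits += 1
    # add-one correction keeps the estimate away from an impossible p = 0
    return observed, (hits + 1) / (n_perm + 1)
```

    For example, baseline anxiety ratings [7, 8, 7, 9, 8] against post-intervention ratings [3, 2, 4, 3, 2] yield an observed drop of 5 points with a small p-value, after which clinical significance would still need to be judged separately.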

  7. Performance studies of GooFit on GPUs vs RooFit on CPUs while estimating the statistical significance of a new physical signal

    Science.gov (United States)

    Di Florio, Adriano

    2017-10-01

    In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented both in ROOT/RooFit and in the GooFit framework, with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open data-analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on NVIDIA GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-up performances with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service with a RooFit/PROOF-Lite process using multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood-ratio test statistic in situations in which Wilks' theorem may or may not apply, depending on whether its regularity conditions are satisfied.
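    The toy Monte Carlo logic is framework-independent: generate pseudo-experiments under the null hypothesis, fit each one, and count how often the test statistic exceeds the observed value. A hedged stdlib-Python sketch follows (GooFit/RooFit do the heavy per-toy fitting on GPU/CPU; here the null-statistic generator is abstracted into a callable, and the function name is illustrative):

```python
import random

def toy_mc_pvalue(q_obs, draw_null_q, n_toys=20000, seed=42):
    """Brute-force toy Monte Carlo p-value: the fraction of pseudo-experiments
    generated under the null hypothesis whose test statistic is at least as
    extreme as the observed one. This is the fallback when the regularity
    conditions of Wilks' theorem fail and no asymptotic chi-square law for
    the likelihood-ratio statistic is available."""
    rng = random.Random(seed)
    exceed = sum(draw_null_q(rng) >= q_obs for _ in range(n_toys))
    # add-one smoothing avoids reporting p = 0 from a finite toy ensemble
    return (exceed + 1) / (n_toys + 1)
```

    As a sanity check, if the null statistic really did follow a one-degree-of-freedom chi-square law (the Wilks regime), drawing the square of a standard Gaussian as the null statistic reproduces p ≈ 0.05 at an observed value of 3.84.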

  8. Research design and statistical methods in Indian medical journals: a retrospective survey.

    Science.gov (United States)

    Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S; Mayya, Shreemathi S

    2015-01-01

    Good quality medical research generally requires not only an expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles which have been published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten (10) leading Indian medical journals were selected based on impact factors, and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were: study design types and their frequencies, error/defects proportion in study design, statistical analyses, and implementation of the CONSORT checklist in RCT (randomized clinical trials). From 2003 to 2013: the proportion of erroneous statistical analyses did not decrease (χ2=0.592, Φ=0.027, p=0.4418), 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ2=26.96, Φ=0.16, p<0.001), and the proportion of errors/defects in study design decreased significantly (χ2=16.783, Φ=0.12, p<0.001). The proportion of randomized clinical trial designs has remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. Major decreases in error proportions were observed in both results presentation (χ2=24.477, Φ=0.17, p<0.001) and interpretation. Overall, research seems to have made no major progress regarding the use of correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are quite rarely published and have a high proportion of ...

  9. Robust statistical methods for significance evaluation and applications in cancer driver detection and biomarker discovery

    DEFF Research Database (Denmark)

    Madsen, Tobias

    2017-01-01

    In the present thesis I develop, implement and apply statistical methods for detecting genomic elements implicated in cancer development and progression. This is done in two separate bodies of work. The first uses the somatic mutation burden to distinguish cancer driver mutations from passenger m...

  10. Multiparametric statistics

    CERN Document Server

    Serdobolskii, Vadim Ivanovich

    2007-01-01

    This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size or even much larger. In this sense, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach, in which the sample size increases along with the number of unknown parameters. This theory opens a way to the solution of central problems of multivariate statistics, which up until now have not been solved. Traditional statistical methods based on the idea of infinite sampling often break down in the solution of real problems and, depending on the data, can be inefficient, unstable and even not applicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope that they will find a satisfactory solution. The mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...

  11. Lies, damn lies and statistics

    International Nuclear Information System (INIS)

    Jones, M.D.

    2001-01-01

    Statistics are widely employed within archaeological research. This is becoming increasingly so as user-friendly statistical packages make increasingly sophisticated analyses available to non-statisticians. However, all statistical techniques are based on underlying assumptions of which the end user may be unaware. If statistical analyses are applied in ignorance of the underlying assumptions, there is the potential for highly erroneous inferences to be drawn. This does happen within archaeology, and here it is illustrated with the example of 'date pooling', a technique that has been widely misused in archaeological research. This misuse may have given rise to an inevitable and predictable misinterpretation of New Zealand's archaeological record. (author). 10 refs., 6 figs., 1 tab

  12. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies

    Science.gov (United States)

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-01-01

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes into account, in a mathematically well-controlled manner, correlation structures within the set of SNPs under investigation. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008–2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI on WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. PMID:27892471
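    The two-step structure can be sketched compactly. In the illustrative stand-in below (plain Python, hypothetical function names), a class-centroid difference score replaces COMBI's trained SVM weights for the screening step, and a large-sample z-test with Bonferroni correction over the k retained features replaces the paper's permutation-calibrated thresholding:

```python
import math
import random

def z_test_p(xs, ys):
    """Two-sided two-sample z-test p-value (large-sample approximation)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return math.erfc(abs(z) / math.sqrt(2))

def screen_then_test(X, y, k, alpha=0.05):
    """Screen-then-test in the spirit of COMBI's two-step algorithm:
    rank features by a classifier-style score, keep the top k, and test
    only those, correcting for k tests instead of for the whole genome.
    (A class-centroid difference stands in for the trained SVM weights.)
    Returns a list of (feature_index, p_value, significant) tuples."""
    cases = [row for row, lab in zip(X, y) if lab == 1]
    ctrls = [row for row, lab in zip(X, y) if lab == 0]
    m = len(X[0])
    score = [abs(sum(r[j] for r in cases) / len(cases)
                 - sum(r[j] for r in ctrls) / len(ctrls)) for j in range(m)]
    top = sorted(range(m), key=lambda j: score[j], reverse=True)[:k]
    results = []
    for j in top:
        p = z_test_p([r[j] for r in cases], [r[j] for r in ctrls])
        results.append((j, p, p < alpha / k))   # Bonferroni over k only
    return results
```

    The design point this illustrates is the source of the power gain: the multiple-testing penalty is paid only over the k screened candidates rather than over every position in the genome, at the cost of having to control the selection step rigorously, which is where COMBI's mathematical analysis comes in.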

  13. Significance evaluation in factor graphs

    DEFF Research Database (Denmark)

    Madsen, Tobias; Hobolth, Asger; Jensen, Jens Ledet

    2017-01-01

    ... in genomics and the multiple-testing issues accompanying them, accurate significance evaluation is of great importance. We here address the problem of evaluating statistical significance of observations from factor graph models. Results: Two novel numerical approximations for evaluation of statistical significance are presented: first, a method using importance sampling; second, a saddlepoint-approximation-based method. We develop algorithms to efficiently compute the approximations and compare them to naive sampling and the normal approximation. The individual merits of the methods are analysed both from ... Conclusions: The applicability of the saddlepoint approximation and importance sampling is demonstrated on known models in the factor graph framework. Using the two methods we can substantially improve computational cost without compromising accuracy. This contribution allows analyses of large datasets ...

  14. Optimizing refiner operation with statistical modelling

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, G [Noranda Research Centre, Pointe Claire, PQ (Canada)

    1997-02-01

    The impact of refining conditions on the energy efficiency of the process and on the handsheet quality of a chemi-mechanical pulp was studied as part of a series of pilot scale refining trials. Statistical models of refiner performance were constructed from these results, and non-linear optimization of process conditions was conducted. Optimization results indicated that increasing the ratio of specific energy applied in the first stage led to a reduction of some 15 per cent in the total energy requirement. The strategy can also be used to obtain significant increases in pulp quality for a given energy input. 20 refs., 6 tabs.

  15. Cluster size statistic and cluster mass statistic: two novel methods for identifying changes in functional connectivity between groups or conditions.

    Science.gov (United States)

    Ing, Alex; Schwarzbauer, Christian

    2014-01-01

    Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods--the cluster size statistic (CSS) and cluster mass statistic (CMS)--are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73% N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity.
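    The connectome version operates on a graph of connections, but the cluster-forming step is easy to illustrate in one dimension. A minimal sketch (illustrative names; the real CSS/CMS procedures work on connectivity matrices and calibrate against permutation null distributions of the maximum cluster statistic):

```python
def suprathreshold_clusters(stats, threshold):
    """Split a 1-D sequence of test statistics into contiguous clusters of
    values exceeding the cluster-forming threshold, returning (size, mass)
    per cluster: CSS-style inference uses the size, CMS-style the mass
    (the sum of suprathreshold statistics)."""
    out, size, mass = [], 0, 0.0
    for s in stats:
        if s > threshold:
            size += 1
            mass += s
        elif size:
            out.append((size, mass))
            size, mass = 0, 0.0
    if size:
        out.append((size, mass))
    return out
```

    Under a permutation scheme, group labels are shuffled, the maximum cluster size or mass is recorded for each permutation, and observed clusters exceeding (say) the 95th percentile of that maximum-statistic null distribution are declared significant with family-wise error control.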

  16. A novel statistic for genome-wide interaction analysis.

    Directory of Open Access Journals (Sweden)

    Xuesen Wu

    2010-09-01

    Although great progress in genome-wide association studies (GWAS) has been made, the significant SNP associations identified by GWAS account for only a few percent of the genetic variance, leading many to question where and how we can find the missing heritability. There is increasing interest in genome-wide interaction analysis as a possible source of finding heritability unexplained by current GWAS. However, the existing statistics for testing interaction have low power for genome-wide interaction analysis. To meet the challenges raised by genome-wide interaction analysis, we have developed a novel statistic for testing interaction between two loci (either linked or unlinked). The null distribution and the type I error rates of the new statistic for testing interaction are validated using simulations. Extensive power studies show that the developed statistic has much higher power to detect interaction than classical logistic regression. The results identified 44 and 211 pairs of SNPs showing significant evidence of interactions with FDR<0.001 and 0.001... The new statistic is able to search for significant interactions between SNPs across the genome. Real data analysis showed that the results of genome-wide interaction analysis can be replicated in two independent studies.
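    The abstract does not give the form of the novel statistic, but the flavour of a two-locus interaction test can be shown with a classical baseline: the case-only chi-square test (a well-known alternative with its own assumptions, not the statistic proposed above):

```python
import math

def case_only_interaction(a, b, c, d):
    """Case-only test of gene-gene interaction between two binary loci.
    The 2x2 table [a b; c d] cross-classifies carrier status at locus 1
    (rows) and locus 2 (columns) among cases only; if the two loci are
    independent in the source population, association among cases is
    evidence of interaction on the multiplicative scale. Returns the
    1-df chi-square statistic and its p-value."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2.0))  # chi-square(1 df) survival function
    return chi2, p
```

    The case-only design gains power by not needing controls for the interaction contrast, at the price of a strong population-level independence assumption; the low power of classical logistic-regression interaction terms is what motivates statistics like the one in this paper.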

  17. Increased oxidative stress in patients with familial Mediterranean ...

    African Journals Online (AJOL)

    0.05) comparing to HC group. However, there were no statistically significant differences between the groups in terms of antioxidant vitamin levels. Conclusions: Our study demonstrated increased oxidative stress in patients with FMF during AP.

  18. Low statistical power in biomedical science: a review of three human research domains

    Science.gov (United States)

    Dumas-Mallet, Estelle; Button, Katherine S.; Boraud, Thomas; Gonon, Francois

    2017-01-01

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation. PMID:28386409
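    The power figures quoted here follow directly from the normal approximation. A small sketch that computes the power of a two-sided two-sample z-test from a standardized effect size (a simplification; the review spans many designs, and exact t-based calculations differ slightly at small n):

```python
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized effect size d with n_per_group subjects per group.
    Power = Phi(ncp - z_crit) + Phi(-ncp - z_crit),
    with non-centrality ncp = d * sqrt(n_per_group / 2)."""
    norm = NormalDist()
    z_crit = norm.inv_cdf(1.0 - alpha / 2.0)
    ncp = d * (n_per_group / 2.0) ** 0.5
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)
```

    For example, a medium effect (d = 0.5) with 64 subjects per group gives roughly the conventional 80% power, while a small effect (d = 0.2) studied with 30 subjects per group lands in the 11–20% band reported above.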

  19. ANALYSIS OF THE INCIDENCE OF PROSTATE CANCER IN THE ROSTOV REGION FOR THE YEARS 2001–2016: SPATIOTEMPORAL STATISTICS

    Directory of Open Access Journals (Sweden)

    O. E. Arhipova

    2017-01-01

    Introduction. Oncological diseases are a serious medico-social problem of modern society. The article presents an analysis of prostate cancer morbidity with consideration of regional differences in health level. Objective. To conduct a spatial-temporal analysis of prostate cancer incidence in the Rostov region; to identify areas with a statistically significant increase in the incidence of prostate cancer; and to identify regional differences (environmental determinism) in the development of cancer in the Southern Federal District. Materials and methods. We analysed the incidence of prostate cancer in the Rostov region for the period 2001–2016. The analysis was performed using spatio-temporal statistics tools in the ArcGIS 10 software. Results. Areas and cities of the Rostov region with a statistically significant increase in prostate cancer incidence were identified. It was shown that regions and cities of the Rostov region with a low level of medical-ecological safety had a statistically significant increase in prostate cancer incidence. Conclusions. The results can serve as a basis for a directed analysis of factors causing increased cancer risk and for developing, on this basis, strategies for monitoring and prevention of cancer diseases in the Rostov region.

  20. Introducing extra NADPH consumption ability significantly increases the photosynthetic efficiency and biomass production of cyanobacteria.

    Science.gov (United States)

    Zhou, Jie; Zhang, Fuliang; Meng, Hengkai; Zhang, Yanping; Li, Yin

    2016-11-01

    Increasing photosynthetic efficiency is crucial to increasing biomass production to meet the growing demands for food and energy. Previous theoretical arithmetic analysis suggests that the light reactions and dark reactions are imperfectly coupled due to shortage of ATP supply, or accumulation of NADPH. Here we hypothesized that solely increasing NADPH consumption might improve the coupling of light reactions and dark reactions, thereby increasing the photosynthetic efficiency and biomass production. To test this hypothesis, an NADPH consumption pathway was constructed in cyanobacterium Synechocystis sp. PCC 6803. The resulting extra NADPH-consuming mutant grew much faster and achieved a higher biomass concentration. Analyses of photosynthesis characteristics showed the activities of photosystem II and photosystem I and the light saturation point of the NADPH-consuming mutant all significantly increased. Thus, we demonstrated that introducing extra NADPH consumption ability is a promising strategy to increase photosynthetic efficiency and to enable utilization of high-intensity lights. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  1. Epilepsy and occupational accidents in Brazil: a national statistics study.

    Science.gov (United States)

    Lunardi, Mariana dos Santos; Soliman, Lucas Alexandre Pedrollo; Pauli, Carla; Lin, Katia

    2011-01-01

    Epilepsy may restrict the patient's daily life. It causes lower quality of life and increased risk for work-related accidents (WRA). The aim of this study is to analyze the implantation of the Epidemiologic and Technical Security System Nexus (ETSSN) and WRA patterns among patients with epilepsy. Data regarding WRA, between 1999 and 2008, on the historical database of WRA Infolog Statistical Yearbook from Brazilian Ministry of Social Security were reviewed. There was a significant increase of reported cases during the ten year period, mainly after the establishment of the ETSSN. The increased granted benefits evidenced the epidemiologic association between epilepsy and WRA. ETSSN possibly raised the registration of occupational accidents and granted benefits. However, the real number of WRA may remain underestimated due to informal economy and house workers' accidents which are usually not included in the official statistics in Brazil.

  2. The clinical significance of serum SCC-Ag combined with CD105 in patients with cervical cancer during the early stage diagnosis

    Directory of Open Access Journals (Sweden)

    Ru-Chan Ma

    2016-09-01

    Objective: To investigate the clinical significance of serum SCC-Ag combined with CD105 in the early diagnosis of cervical cancer, so as to provide new ideas for the early diagnosis and clinical treatment of cervical cancer. Methods: A total of 74 cervical cancer patients were selected as the cervical cancer group, and 52 patients with uterine fibroids were selected as the normal cervix group. Serum samples were collected in the early morning under fasting conditions, and SCC-Ag and CD105 were measured by ELISA. SCC-Ag and CD105 levels of the two groups were compared by t-test, SCC-Ag and CD105 were compared across TNM stages and between patients with and without lymph node metastasis, and correlation was assessed by Pearson correlation analysis. Results: Compared with the normal cervix group, SCC-Ag and CD105 in the cervical cancer group were increased, and the difference was statistically significant. Compared with TNM stage Ⅰ, SCC-Ag and CD105 were increased in stage Ⅱ and further increased in stages Ⅲ and Ⅳ; the differences were statistically significant. Compared with stage Ⅱ, SCC-Ag and CD105 were increased in stages Ⅲ and Ⅳ, and the difference was statistically significant. Among surgically treated cervical cancer patients, SCC-Ag and CD105 were increased in those with lymph node metastasis compared with those without, and the difference was statistically significant. Pearson correlation analysis showed that SCC-Ag and CD105 were positively correlated. Conclusion: SCC-Ag and CD105 are markedly elevated in patients with cervical cancer. Serum SCC-Ag combined with CD105 has important clinical value in the early diagnosis of cervical cancer, in particular for guiding staging and the assessment of lymph node metastasis, and is worthy of clinical reference.

  3. Some challenges with statistical inference in adaptive designs.

    Science.gov (United States)

    Hung, H M James; Wang, Sue-Jane; Yang, Peiling

    2014-01-01

    Adaptive designs have attracted a great deal of attention in clinical trial communities. The literature contains many statistical methods to deal with the added statistical uncertainties concerning the adaptations. Increasingly encountered in regulatory applications are adaptive statistical information designs that allow modification of sample size or related statistical information, and adaptive selection designs that allow selection of doses or patient populations during the course of a clinical trial. For adaptive statistical information designs, a few statistical testing methods are mathematically equivalent, as a number of articles have stipulated, but arguably there are large differences in their practical ramifications. We pinpoint some undesirable features of these methods in this work. For adaptive selection designs, selection based on biomarker data for testing the correlated clinical endpoints may increase statistical uncertainty in terms of type I error probability, and most importantly the increased statistical uncertainty may be impossible to assess.

  4. Incorporating an Interactive Statistics Workshop into an Introductory Biology Course-Based Undergraduate Research Experience (CURE) Enhances Students' Statistical Reasoning and Quantitative Literacy Skills.

    Science.gov (United States)

    Olimpo, Jeffrey T; Pevey, Ryan S; McCabe, Thomas M

    2018-01-01

    Course-based undergraduate research experiences (CUREs) provide an avenue for student participation in authentic scientific opportunities. Within the context of such coursework, students are often expected to collect, analyze, and evaluate data obtained from their own investigations. Yet, limited research has been conducted that examines mechanisms for supporting students in these endeavors. In this article, we discuss the development and evaluation of an interactive statistics workshop that was expressly designed to provide students with an open platform for graduate teaching assistant (GTA)-mentored data processing, statistical testing, and synthesis of their own research findings. Mixed methods analyses of pre/post-intervention survey data indicated a statistically significant increase in students' reasoning and quantitative literacy abilities in the domain, as well as enhancement of student self-reported confidence in and knowledge of the application of various statistical metrics to real-world contexts. Collectively, these data reify an important role for scaffolded instruction in statistics in preparing emergent scientists to be data-savvy researchers in a globally expansive STEM workforce.

  5. Hydrologic effects of large southwestern USA wildfires significantly increase regional water supply: fact or fiction?

    Science.gov (United States)

    Wine, M. L.; Cadol, D.

    2016-08-01

    In recent years climate change and historic fire suppression have increased the frequency of large wildfires in the southwestern USA, motivating study of the hydrological consequences of these wildfires at point and watershed scales, typically over short periods of time. These studies have revealed that reduced soil infiltration capacity and reduced transpiration due to tree canopy combustion increase streamflow at the watershed scale. However, the degree to which these local increases in runoff propagate to larger scales—relevant to urban and agricultural water supply—remains largely unknown, particularly in semi-arid mountainous watersheds co-dominated by winter snowmelt and the North American monsoon. To address this question, we selected three New Mexico watersheds—the Jemez (1223 km2), Mogollon (191 km2), and Gila (4807 km2)—that together have been affected by over 100 wildfires since 1982. We then applied climate-driven linear models to test for effects of fire on streamflow metrics after controlling for climatic variability. Here we show that, after controlling for climatic and snowpack variability, significantly more streamflow discharged from the Gila watershed for three to five years following wildfires, consistent with increased regional water yield due to enhanced infiltration-excess overland flow and groundwater recharge at the large watershed scale. In contrast, we observed no such increase in discharge from the Jemez watershed following wildfires. Fire regimes represent a key difference between the contrasting responses of the Jemez and Gila watersheds with the latter experiencing more frequent wildfires, many caused by lightning strikes. While hydrologic dynamics at the scale of large watersheds were previously thought to be climatically dominated, these results suggest that if one fifth or more of a large watershed has been burned in the previous three to five years, significant increases in water yield can be expected.

  6. Timescales for detecting a significant acceleration in sea level rise.

    Science.gov (United States)

    Haigh, Ivan D; Wahl, Thomas; Rohling, Eelco J; Price, René M; Pattiaratchi, Charitha B; Calafat, Francisco M; Dangendorf, Sönke

    2014-04-14

    There is observational evidence that global sea level is rising and there is concern that the rate of rise will increase, significantly threatening coastal communities. However, considerable debate remains as to whether the rate of sea level rise is currently increasing and, if so, by how much. Here we provide new insights into sea level accelerations by applying the main methods that have been used previously to search for accelerations in historical data, to identify the timings (with uncertainties) at which accelerations might first be recognized in a statistically significant manner (if not apparent already) in sea level records that we have artificially extended to 2100. We find that the most important approach to earliest possible detection of a significant sea level acceleration lies in improved understanding (and subsequent removal) of interannual to multidecadal variability in sea level records.
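
The search for an acceleration in a sea level record is commonly framed as fitting a quadratic trend and testing whether its curvature term differs from zero. The following is a minimal OLS sketch on synthetic data, not the authors' exact method:

```python
import numpy as np

def acceleration(t, y):
    """Fit y = a + b*t + c*t**2 by OLS; sea-level acceleration is 2*c.

    Returns the acceleration estimate and its standard error.
    """
    X = np.column_stack([np.ones_like(t), t, t ** 2])
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ coef
    sigma2 = resid @ resid / (len(t) - 3)          # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)          # OLS covariance matrix
    return 2 * coef[2], 2 * np.sqrt(cov[2, 2])

# Synthetic 120-year record: 1.8 mm/yr trend, 0.02 mm/yr^2 acceleration,
# plus alternating 'interannual variability' of +/- 0.5 mm.
t = np.arange(120.0)
y = 1.8 * t + 0.01 * t ** 2 + 0.5 * (-1.0) ** np.arange(120)
accel, accel_se = acceleration(t, y)
```

Detection timing then amounts to asking at what record length the estimated acceleration exceeds roughly twice its standard error.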

  7. The significance of Good Chair as part of children’s school and home environment in the preventive treatment of body statistics distortions

    OpenAIRE

    Mirosław Mrozkowiak; Hanna Żukowska

    2015-01-01

    Mrozkowiak Mirosław, Żukowska Hanna. Znaczenie Dobrego Krzesła, jako elementu szkolnego i domowego środowiska ucznia, w profilaktyce zaburzeń statyki postawy ciała = The significance of Good Chair as part of children’s school and home environment in the preventive treatment of body statistics distortions. Journal of Education, Health and Sport. 2015;5(7):179-215. ISSN 2391-8306. DOI 10.5281/zenodo.19832 http://ojs.ukw.edu.pl/index.php/johs/article/view/2015%3B5%287%29%3A179-215 https:...

  8. Can a significance test be genuinely Bayesian?

    OpenAIRE

    Pereira, Carlos A. de B.; Stern, Julio Michael; Wechsler, Sergio

    2008-01-01

    The Full Bayesian Significance Test, FBST, is extensively reviewed. Its test statistic, a genuine Bayesian measure of evidence, is discussed in detail. Its behavior in some problems of statistical inference like testing for independence in contingency tables is discussed.
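
The FBST evidence value can be sketched for the simplest case. Assuming a binomial model with a uniform Beta(1,1) prior and H0: theta = 0.5 (an illustrative choice, not an example from the paper), a Monte Carlo approximation is:

```python
import math
import random

def beta_logpdf(x, a, b):
    """Log density of the Beta(a, b) distribution at x."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def fbst_evalue(successes, trials, theta0=0.5, draws=50_000, seed=1):
    """Monte Carlo FBST e-value for H0: theta = theta0.

    Beta(1,1) prior -> Beta(s+1, n-s+1) posterior.  The e-value is
    1 minus the posterior mass of the tangential set
    {theta : p(theta | x) > p(theta0 | x)}.
    """
    rng = random.Random(seed)
    a, b = successes + 1, trials - successes + 1
    ref = beta_logpdf(theta0, a, b)
    tangential = sum(
        beta_logpdf(rng.betavariate(a, b), a, b) > ref for _ in range(draws)
    )
    return 1 - tangential / draws

ev_balanced = fbst_evalue(50, 100)   # data agree with H0 -> e-value near 1
ev_skewed   = fbst_evalue(80, 100)   # data far from H0 -> e-value near 0
```

Large e-values support the null, small ones count as evidence against it, mirroring the behavior the abstract describes.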

  9. Statistical Tutorial | Center for Cancer Research

    Science.gov (United States)

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. ST is designed as a follow-up to Statistical Analysis of Research Data (SARD), held in April 2018. The tutorial will apply the general principles of statistical analysis of research data, including descriptive statistics, z- and t-tests of means and mean

  10. Infections and mixed infections with the selected species of Borrelia burgdorferi sensu lato complex in Ixodes ricinus ticks collected in eastern Poland: a significant increase in the course of 5 years.

    Science.gov (United States)

    Wójcik-Fatla, Angelina; Zając, Violetta; Sawczyn, Anna; Sroka, Jacek; Cisak, Ewa; Dutkiewicz, Jacek

    2016-02-01

    In the years 2008-2009 and 2013-2014, 1620 and 1500 questing Ixodes ricinus ticks, respectively, were examined on the territory of the Lublin province (eastern Poland). The presence of three pathogenic species causing Lyme disease was investigated: Borrelia burgdorferi sensu stricto, B. afzelii and B. garinii. The proportion of I. ricinus ticks infected with B. burgdorferi sensu lato showed a highly significant increase between 2008-2009 and 2013-2014, from 6.0 to 15.3%. A significant increase was noted with regard to all types of infections with individual species: single (4.7-7.8%), dual (1.2-6.6%), and triple (0.1-0.9%). When expressed as the percent of all infections, the frequency of mixed infections increased from 21.4 to 49.2%. Statistical analysis performed with two methods (by calculating of odds ratios and by Fisher's exact test) showed that the frequencies of mixed infections in most cases proved to be significantly greater than expected. The strongest associations were found between B. burgdorferi s. s. and B. afzelii, and between B. burgdorferi s. s. and B. garinii. They appeared to be highly significant (P eastern Poland, and dramatic enhancement of mixed infections with individual species, which may result in mixed infections of humans and exacerbation of the clinical course of Lyme disease cases on the studied area.
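
A test of whether two Borrelia species co-occur in ticks more often than chance would predict, of the kind the authors describe, can be sketched with a one-sided Fisher's exact test built directly from the hypergeometric distribution. The counts below are hypothetical, not the study's data:

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Returns P(X >= a) under the hypergeometric null, i.e. the probability
    of an association at least as strong as the one observed.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    hi = min(row1, col1)
    tail = sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, hi + 1))
    return tail / comb(n, col1)

# Hypothetical counts: ticks positive/negative for B. afzelii,
# cross-tabulated against B. burgdorferi s.s. status.
p = fisher_exact_greater(30, 70, 20, 380)
```

With 30 co-infected ticks against an expectation of 10 under independence, the association is overwhelmingly significant, the qualitative pattern the study reports.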

  11. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

    As a result of steady advances of computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver. Note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper is towards the understanding of the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to its statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) to increase the number of individual Monte Carlo histories; 2) to increase the number of time steps; 3) to run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors including both the local statistical error and the propagated statistical error. (authors)
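
The batch depletion idea, running several independent Monte Carlo depletions and taking the spread across replicas as the overall statistical error, can be sketched as follows. The "depletion" here is a stand-in random function, not a real transport/burnup solver:

```python
import random
import statistics

def batch_statistics(run_depletion, n_batches=20, seed0=0):
    """Estimate a depleted number density and its overall statistical error
    from independent Monte Carlo depletion runs.  The spread across
    independent replicas captures both the local and the propagated
    statistical error.
    """
    results = [run_depletion(seed) for seed in range(seed0, seed0 + n_batches)]
    mean = statistics.fmean(results)
    stderr = statistics.stdev(results) / len(results) ** 0.5
    return mean, stderr

# Stand-in for a transport + burnup calculation: a noisy estimate of a
# nuclide number density (arbitrary units).
def fake_depletion(seed):
    rng = random.Random(seed)
    return 1.0 + rng.gauss(0, 0.05)

mean, stderr = batch_statistics(fake_depletion)
```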

  12. Increased river alkalinization in the Eastern U.S.

    Science.gov (United States)

    Kaushal, Sujay S; Likens, Gene E; Utz, Ryan M; Pace, Michael L; Grese, Melissa; Yepsen, Metthea

    2013-09-17

    The interaction between human activities and watershed geology is accelerating long-term changes in the carbon cycle of rivers. We evaluated changes in bicarbonate alkalinity, a product of chemical weathering, and tested for long-term trends at 97 sites in the eastern United States draining over 260,000 km(2). We observed statistically significant increasing trends in alkalinity at 62 of the 97 sites, while remaining sites exhibited no significant decreasing trends. Over 50% of study sites also had statistically significant increasing trends in concentrations of calcium (another product of chemical weathering) where data were available. River alkalinization rates were significantly related to watershed carbonate lithology, acid deposition, and topography. These three variables explained ~40% of variation in river alkalinization rates. The strongest predictor of river alkalinization rates was carbonate lithology. The most rapid rates of river alkalinization occurred at sites with highest inputs of acid deposition and highest elevation. The rise of alkalinity in many rivers throughout the Eastern U.S. suggests human-accelerated chemical weathering, in addition to previously documented impacts of mining and land use. Increased river alkalinization has major environmental implications including impacts on water hardness and salinization of drinking water, alterations of air-water exchange of CO2, coastal ocean acidification, and the influence of bicarbonate availability on primary production.
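
A long-term trend test of the kind applied at each site can be sketched as an OLS slope with its t statistic. This is a generic illustration on a hypothetical alkalinity series, not the study's method or data:

```python
import math

def trend_test(years, values):
    """OLS slope of values on years, with the t statistic for H0: slope = 0."""
    n = len(years)
    mean_t = sum(years) / n
    mean_v = sum(values) / n
    sxx = sum((t - mean_t) ** 2 for t in years)
    sxy = sum((t - mean_t) * (v - mean_v) for t, v in zip(years, values))
    slope = sxy / sxx
    resid = [v - mean_v - slope * (t - mean_t) for t, v in zip(years, values)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope, slope / se

# Hypothetical alkalinity record (ueq/L): ~1 unit/yr rise plus
# alternating year-to-year noise.
years = list(range(1970, 2010))
alk = [800 + (y - 1970) + 3 * (-1) ** y for y in years]
slope, t_stat = trend_test(years, alk)
```

A site would be counted as having a significant increasing trend when the slope is positive and its t statistic exceeds the critical value for the chosen significance level.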

  13. The case for increasing the statistical power of eddy covariance ecosystem studies: why, where and how?

    Science.gov (United States)

    Hill, Timothy; Chocholek, Melanie; Clement, Robert

    2017-06-01

    Eddy covariance (EC) continues to provide invaluable insights into the dynamics of Earth's surface processes. However, despite its many strengths, spatial replication of EC at the ecosystem scale is rare. High equipment costs are likely to be partially responsible. This contributes to the low sampling, and even lower replication, of ecoregions in Africa, Oceania (excluding Australia) and South America. The level of replication matters as it directly affects statistical power. While the ergodicity of turbulence and temporal replication allow an EC tower to provide statistically robust flux estimates for its footprint, these principles do not extend to larger ecosystem scales. Despite the challenge of spatially replicating EC, it is clearly of interest to be able to use EC to provide statistically robust flux estimates for larger areas. We ask: How much spatial replication of EC is required for statistical confidence in our flux estimates of an ecosystem? We provide the reader with tools to estimate the number of EC towers needed to achieve a given statistical power. We show that for a typical ecosystem, around four EC towers are needed to have 95% statistical confidence that the annual flux of an ecosystem is nonzero. Furthermore, if the true flux is small relative to instrument noise and spatial variability, the number of towers needed can rise dramatically. We discuss approaches for improving statistical power and describe one solution: an inexpensive EC system that could help by making spatial replication more affordable. However, we note that diverting limited resources from other key measurements in order to allow spatial replication may not be optimal, and a balance needs to be struck. While individual EC towers are well suited to providing fluxes from the flux footprint, we emphasize that spatial replication is essential for statistically robust fluxes if a wider ecosystem is being studied. © 2016 The Authors Global Change Biology Published by John Wiley
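
The back-of-envelope version of the "how many towers?" question is the standard normal-approximation sample-size formula. The flux and spatial-variability figures below are hypothetical, and this is a generic power calculation rather than the authors' exact procedure:

```python
import math
from statistics import NormalDist

def towers_needed(mean_flux, spatial_sd, alpha=0.05, power=0.95):
    """Normal-approximation replication count for detecting a nonzero
    mean annual flux: n = ((z_{1-alpha/2} + z_power) * sd / |flux|)^2.
    """
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha / 2) + z(power)) * spatial_sd / abs(mean_flux)) ** 2
    return math.ceil(n)

# Hypothetical ecosystem: mean NEE of -200 g C m-2 yr-1, with a
# between-tower spatial standard deviation of 120 g C m-2 yr-1.
n_towers = towers_needed(-200.0, 120.0)
```

The formula makes the abstract's warning concrete: as the true flux shrinks relative to the spatial standard deviation, the required number of towers grows with the square of that ratio.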

  14. Wind energy statistics 2012; Vindkraftsstatistik 2012

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-04-15

    The publication 'Wind Energy Statistics' is an annual publication. Since 2010, statistics on installed capacity, number of plants, and regional distribution have also been reported semi-annually, in tabular form, on the Agency's website. The publication is produced in a new way this year, so some data differ from previous publications. Thanks to the electricity certificate system, this publication contains essentially complete statistics on wind energy, presented in several breakdowns. We present the regional distribution, i.e. how the number of turbines and installed capacity are allocated across counties and municipalities, and, where confidentiality allows, the electricity produced by county. Wind power is becoming increasingly important in the Swedish energy system, which increases the demand for statistics and for breakdowns other than those presented in the official statistics. This publication, which is not official statistics, has therefore been developed.

  15. Statistics for experimentalists

    CERN Document Server

    Cooper, B E

    2014-01-01

    Statistics for Experimentalists aims to provide experimental scientists with a working knowledge of statistical methods and search approaches to the analysis of data. The book first elaborates on probability and continuous probability distributions. Discussions focus on properties of continuous random variables and normal variables, independence of two random variables, central moments of a continuous distribution, prediction from a normal distribution, binomial probabilities, and multiplication of probabilities and independence. The text then examines estimation and tests of significance. Topics include estimators and estimates, expected values, minimum variance linear unbiased estimators, sufficient estimators, methods of maximum likelihood and least squares, and the test of significance method. The manuscript ponders on distribution-free tests, Poisson process and counting problems, correlation and function fitting, balanced incomplete randomized block designs and the analysis of covariance, and experiment...

  16. Are studies reporting significant results more likely to be published?

    Science.gov (United States)

    Koletsi, Despina; Karagianni, Anthi; Pandis, Nikolaos; Makou, Margarita; Polychronopoulou, Argy; Eliades, Theodore

    2009-11-01

    Our objective was to assess the hypothesis that there are variations of the proportion of articles reporting a significant effect, with a higher percentage of those articles published in journals with impact factors. The contents of 5 orthodontic journals (American Journal of Orthodontics and Dentofacial Orthopedics, Angle Orthodontist, European Journal of Orthodontics, Journal of Orthodontics, and Orthodontics and Craniofacial Research), published between 2004 and 2008, were hand-searched. Articles with statistical analysis of data were included in the study and classified into 4 categories: behavior and psychology, biomaterials and biomechanics, diagnostic procedures and treatment, and craniofacial growth, morphology, and genetics. In total, 2622 articles were examined, with 1785 included in the analysis. Univariate and multivariate logistic regression analyses were applied with statistical significance as the dependent variable, and whether the journal had an impact factor, the subject, and the year were the independent predictors. A higher percentage of articles showed significant results relative to those without significant associations (on average, 88% vs 12%) for those journals. Overall, these journals published significantly more studies with significant results, ranging from 75% to 90% (P = 0.02). Multivariate modeling showed that journals with impact factors had a 100% increased probability of publishing a statistically significant result compared with journals with no impact factor (odds ratio [OR], 1.99; 95% CI, 1.19-3.31). Compared with articles on biomaterials and biomechanics, all other subject categories showed lower probabilities of significant results. Nonsignificant findings in behavior and psychology and diagnosis and treatment were 1.8 (OR, 1.75; 95% CI, 1.51-2.67) and 3.5 (OR, 3.50; 95% CI, 2.27-5.37) times more likely to be published, respectively. Journals seem to prefer reporting significant results; this might be because of authors
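
The reported effect for impact-factor journals can be illustrated with the standard 2x2 odds ratio and Woolf (log-odds) confidence interval. The counts are hypothetical, chosen only so the OR lands near the paper's value of about 2:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and Woolf 95% CI for a 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: significant vs nonsignificant articles in journals
# with an impact factor (top row) and without one (bottom row).
or_, (lo, hi) = odds_ratio(450, 50, 360, 80)
```

The interval excluding 1 is what licenses the abstract's claim of an increased probability of publishing significant results.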

  17. Incorporating an Interactive Statistics Workshop into an Introductory Biology Course-Based Undergraduate Research Experience (CURE) Enhances Students’ Statistical Reasoning and Quantitative Literacy Skills †

    Science.gov (United States)

    Olimpo, Jeffrey T.; Pevey, Ryan S.; McCabe, Thomas M.

    2018-01-01

    Course-based undergraduate research experiences (CUREs) provide an avenue for student participation in authentic scientific opportunities. Within the context of such coursework, students are often expected to collect, analyze, and evaluate data obtained from their own investigations. Yet, limited research has been conducted that examines mechanisms for supporting students in these endeavors. In this article, we discuss the development and evaluation of an interactive statistics workshop that was expressly designed to provide students with an open platform for graduate teaching assistant (GTA)-mentored data processing, statistical testing, and synthesis of their own research findings. Mixed methods analyses of pre/post-intervention survey data indicated a statistically significant increase in students’ reasoning and quantitative literacy abilities in the domain, as well as enhancement of student self-reported confidence in and knowledge of the application of various statistical metrics to real-world contexts. Collectively, these data reify an important role for scaffolded instruction in statistics in preparing emergent scientists to be data-savvy researchers in a globally expansive STEM workforce. PMID:29904549

  18. A shift from significance test to hypothesis test through power analysis in medical research.

    Science.gov (United States)

    Singh, G

    2006-01-01

    Medical research literature until recently, exhibited substantial dominance of the Fisher's significance test approach of statistical inference concentrating more on probability of type I error over Neyman-Pearson's hypothesis test considering both probability of type I and II error. Fisher's approach dichotomises results into significant or not significant results with a P value. The Neyman-Pearson's approach talks of acceptance or rejection of null hypothesis. Based on the same theory these two approaches deal with same objective and conclude in their own way. The advancement in computing techniques and availability of statistical software have resulted in increasing application of power calculations in medical research and thereby reporting the result of significance tests in the light of power of the test also. Significance test approach, when it incorporates power analysis contains the essence of hypothesis test approach. It may be safely argued that rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to Neyman-Pearson's hypothesis test procedure.
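
The interplay of significance testing and power that the author describes can be made concrete with the power function of a two-sided one-sample z-test. This is a textbook sketch, not an example from the paper:

```python
from statistics import NormalDist

def power_z_test(effect, sd, n, alpha=0.05):
    """Power of a two-sided one-sample z-test of mean = 0 when the true
    mean is `effect` and observations have standard deviation `sd`.
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    shift = effect / (sd / n ** 0.5)   # noncentrality of the test statistic
    return nd.cdf(shift - z_crit) + nd.cdf(-shift - z_crit)

p_small = power_z_test(0.5, 2.0, 25)    # modest effect, n = 25
p_large = power_z_test(0.5, 2.0, 100)   # same effect, four times the data
```

Reporting a nonsignificant result alongside its power, roughly 24% versus 71% in this sketch, is exactly the shift from a bare Fisher P value toward the Neyman-Pearson framing.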

  19. Game statistics for the island of Olkiluoto in 2005-2006

    International Nuclear Information System (INIS)

    Oja, S.

    2006-11-01

    The game statistics for the island of Olkiluoto were updated in February 2006. The game populations in Olkiluoto were estimated on the basis of interviews with local hunters and available statistical material, and the collected data were compared with earlier studies of game animals in Olkiluoto. The populations of Elk and White-tailed Deer are stable, and the population of Roe Deer is increasing significantly. The populations of small mammalian predators (American Mink, Raccoon Dog, Red Fox) remain at a very high level despite intensive hunting. Other game animals, such as waterfowl, are hunted moderately and the catches are small. (orig.)

  20. Statistical tests for power-law cross-correlated processes

    Science.gov (United States)

    Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene

    2011-12-01

    For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlations analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically calculated the Cauchy inequality -1≤ρDCCA(T,n)≤1. Here we derive -1≤ρDCCA(T,n)≤1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine—and for nonoverlapping windows we derive—that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
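
A minimal implementation of the detrended cross-correlation coefficient for non-overlapping windows can be sketched as below; it follows the standard DFA/DCCA recipe (integrate, detrend linearly in each window, normalize the detrended covariance by the two detrended variances), and omits refinements such as overlapping windows:

```python
import numpy as np

def rho_dcca(x, y, n):
    """Detrended cross-correlation coefficient rho_DCCA(T, n),
    using non-overlapping windows of size n and linear detrending.
    """
    X = np.cumsum(x - np.mean(x))   # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(n)
    f_xy = f_xx = f_yy = 0.0
    for i in range(len(x) // n):
        xs, ys = X[i * n:(i + 1) * n], Y[i * n:(i + 1) * n]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # detrended residuals
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f_xy += np.mean(rx * ry)
        f_xx += np.mean(rx * rx)
        f_yy += np.mean(ry * ry)
    return f_xy / np.sqrt(f_xx * f_yy)

rng = np.random.default_rng(0)
a = rng.standard_normal(1000)
b = 0.8 * a + 0.6 * rng.standard_normal(1000)   # cross-correlated with a
r_same = rho_dcca(a, a, 20)    # identical series -> exactly 1
r_corr = rho_dcca(a, b, 20)    # positive, below 1
```

By construction the coefficient respects the Cauchy bounds discussed in the abstract, equaling 1 only when the two series coincide.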

  1. On two methods of statistical image analysis

    NARCIS (Netherlands)

    Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, K.L.

    1999-01-01

    The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition,

  2. Increasing lateral tibial slope: is there an association with articular cartilage changes in the knee?

    Energy Technology Data Exchange (ETDEWEB)

    Khan, Nasir; Shepel, Michael; Leswick, David A.; Obaid, Haron [University of Saskatchewan, Department of Medical Imaging, Royal University Hospital, and College of Medicine, Saskatoon, Saskatchewan (Canada)

    2014-04-15

    The geometry of the lateral tibial slope (LTS) plays an important role in the overall biomechanics of the knee. In this study, we assessed the impact of LTS on cartilage degeneration in the knee. A retrospective analysis was performed of 93 knee MRI scans (1.5 T or 3 T) from patients aged 20-45 years with no history of trauma or knee surgery and no internal derangement. The LTS was calculated using the circle method. Chondropathy was graded from 0 (normal) to 3 (severe). Linear regression analysis was used for statistical analysis (p < 0.05). In our cohort, a statistically significant association was seen between increasing LTS and worsening cartilage degenerative changes in the medial patellar articular surface and the lateral tibial articular surface (p < 0.05). There was no statistically significant association between increasing LTS and worsening chondropathy of the lateral patellar, medial trochlear, lateral trochlear, medial femoral, lateral femoral, and medial tibial articular surfaces. Our results show a statistically significant association between increasing LTS and worsening cartilage degenerative changes in the medial patella and the lateral tibial plateau. We speculate that an increased LTS may result in increased femoral glide over the lateral tibial plateau, with subsequent increased external rotation of the femur predisposing to patellofemoral articular changes. Future arthroscopic studies are needed to confirm our findings. (orig.)

  3. Increasing lateral tibial slope: is there an association with articular cartilage changes in the knee?

    International Nuclear Information System (INIS)

    Khan, Nasir; Shepel, Michael; Leswick, David A.; Obaid, Haron

    2014-01-01

    The geometry of the lateral tibial slope (LTS) plays an important role in the overall biomechanics of the knee. In this study, we assessed the impact of LTS on cartilage degeneration in the knee. A retrospective analysis was performed of 93 knee MRI scans (1.5 T or 3 T) from patients aged 20-45 years with no history of trauma or knee surgery and no internal derangement. The LTS was calculated using the circle method. Chondropathy was graded from 0 (normal) to 3 (severe). Linear regression analysis was used for statistical analysis (p < 0.05). In our cohort, a statistically significant association was seen between increasing LTS and worsening cartilage degenerative changes in the medial patellar articular surface and the lateral tibial articular surface (p < 0.05). There was no statistically significant association between increasing LTS and worsening chondropathy of the lateral patellar, medial trochlear, lateral trochlear, medial femoral, lateral femoral, and medial tibial articular surfaces. Our results show a statistically significant association between increasing LTS and worsening cartilage degenerative changes in the medial patella and the lateral tibial plateau. We speculate that an increased LTS may result in increased femoral glide over the lateral tibial plateau, with subsequent increased external rotation of the femur predisposing to patellofemoral articular changes. Future arthroscopic studies are needed to confirm our findings. (orig.)

  4. Statistical Analysis of Compressive and Flexural Test Results on the Sustainable Adobe Reinforced with Steel Wire Mesh

    Science.gov (United States)

    Jokhio, Gul A.; Syed Mohsin, Sharifah M.; Gul, Yasmeen

    2018-04-01

    It has been established that Adobe provides, in addition to being sustainable and economical, better indoor air quality without the extensive energy expenditure of modern synthetic materials. The material, however, exhibits weak structural behaviour when subjected to adverse loading conditions, and a wide range of mechanical properties has been reported in the literature owing to a lack of research and standardization. The present paper presents a statistical analysis of the results obtained from compressive and flexural tests on Adobe samples. Adobe specimens with and without wire mesh reinforcement were tested and the results reported. The analysis shows that the compressive strength of Adobe increases by about 43% after adding a single layer of wire mesh reinforcement, and this increase is statistically significant. The flexural response of Adobe also improved with the addition of wire mesh reinforcement; however, the statistical significance of this improvement could not be established.
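
The significance of a strength increase like the ~43% reported here would typically be assessed with a two-sample t test. A sketch using Welch's statistic on hypothetical strength values (the paper's raw data are not reproduced here):

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and its degrees of freedom."""
    ma, mb = statistics.fmean(sample_a), statistics.fmean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    t = (mb - ma) / math.sqrt(va / na + vb / nb)
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical compressive strengths (MPa): plain adobe vs
# wire-mesh-reinforced adobe.
plain = [1.9, 2.1, 2.0, 2.2, 1.8, 2.0]
mesh = [2.7, 2.9, 3.0, 2.8, 2.6, 3.1]
t, df = welch_t(plain, mesh)
```

A t statistic this far from zero at these degrees of freedom corresponds to a very small p-value, i.e. a statistically significant increase.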

  5. Increased serum urea to creatinine ratio and its negative correlation with arterial pressure in canine babesiosis.

    Science.gov (United States)

    Zygner, Wojciech; Gójska-Zygner, Olga

    2014-09-01

    The increase of the serum urea to creatinine ratio (UCR) was observed in dogs infected with Babesia canis. Previous studies have suggested that decrease of blood pressure can be one of the reasons for this phenomenon. In this work statistically significant increase of the UCR was observed in dogs with babesiosis. Comparison of the UCR between 23 azotaemic dogs and 25 non-azotaemic dogs infected with Babesia canis showed statistically significantly higher mean of the UCR in azotaemic dogs. Correlations between UCR and systolic, diastolic and mean arterial pressure (SAP, DAP and MAP) in 48 dogs infected with B. canis were negative (UCR and SAP: r = -0.3909; UCR and DAP: r = -0.3182; UCR and MAP: r = -0.3682) and statistically significant (p high, and there was no statistically significant correlation between UCR and arterial pressures in azotaemic dogs. Thus, it seems that decrease of blood pressure in dogs with babesiosis explains only partially the cause of increased UCR in infected dogs. The other authors suggested hyperureagenesis and myocardial injury as a potential reason for the increased UCR in canine babesiosis. Thus, further studies are needed to determine causes of increased UCR in dogs with babesiosis, especially on the connection between UCR changes and the concentrations of plasma cardiac troponins and ammonia, and the occurrence of occult blood on fecal examination.
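
The negative UCR-pressure correlations can be illustrated with a plain Pearson coefficient; the paired values below are hypothetical, not the study's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired readings: UCR against mean arterial pressure (mmHg),
# constructed so higher UCR accompanies lower pressure.
ucr = [40, 55, 48, 62, 70, 45, 58, 66]
map_ = [105, 95, 100, 88, 82, 103, 92, 85]
r = pearson_r(ucr, map_)
```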

  6. Statistics Using Just One Formula

    Science.gov (United States)

    Rosenthal, Jeffrey S.

    2018-01-01

    This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc) from that one formula. It is argued that this approach will…
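
The single formula in question is of the familiar margin-of-error form z * sd / sqrt(n); the sketch below assumes the usual 95% value z = 1.96, which may differ from the article's exact presentation:

```python
import math

def margin_of_error(sd, n, z=1.96):
    """95% margin of error for a sample mean: z * sd / sqrt(n)."""
    return z * sd / math.sqrt(n)

# A poll of 1000 respondents: a proportion has sd at most 0.5, so the
# margin of error is at most about 3 percentage points.
moe = margin_of_error(0.5, 1000)
```

A confidence interval is then estimate +/- moe, and a significance test asks whether a hypothesized value falls outside that interval, which is how the standard introductory concepts can all be derived from the one formula.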

  7. Statistics Anxiety among Postgraduate Students

    Science.gov (United States)

    Koh, Denise; Zawi, Mohd Khairi

    2014-01-01

    Most postgraduate programmes, that have research components, require students to take at least one course of research statistics. Not all postgraduate programmes are science based, there are a significant number of postgraduate students who are from the social sciences that will be taking statistics courses, as they try to complete their…

  8. P values in display items are ubiquitous and almost invariably significant: A survey of top science journals.

    Science.gov (United States)

    Cristea, Ioana Alina; Ioannidis, John P A

    2018-01-01

    P values represent a widely used, but pervasively misunderstood and fiercely contested, method of scientific inference. Display items, such as figures and tables, often containing the main results, are an important source of P values. We conducted a survey comparing the overall use of P values and the occurrence of significant P values in display items of a sample of articles in the three top multidisciplinary journals (Nature, Science, PNAS) in 2017 and in 1997, respectively. We also examined the reporting of multiplicity corrections and its potential influence on the proportion of statistically significant P values. Our findings demonstrated substantial and growing reliance on P values in display items, with increases of 2.5 to 14.5 times in 2017 compared to 1997. The overwhelming majority of P values (94%, 95% confidence interval [CI] 92% to 96%) were statistically significant. Methods to adjust for multiplicity were almost non-existent in 1997, but reported in many articles relying on P values in 2017 (Nature 68%, Science 48%, PNAS 38%). In their absence, almost all reported P values were statistically significant (98%, 95% CI 96% to 99%). Conversely, when any multiplicity corrections were described, 88% (95% CI 82% to 93%) of reported P values were statistically significant. Use of Bayesian methods was scant (2.5%), and articles only rarely (0.7%) relied exclusively on Bayesian statistics. Overall, wider appreciation of the need for multiplicity corrections is a welcome evolution, but the rapid growth of reliance on P values and implausibly high rates of reported statistical significance are worrisome.
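
Interval estimates like the quoted "94%, 95% CI 92% to 96%" follow from the normal approximation for a proportion. A sketch with hypothetical counts:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Hypothetical: 940 of 1000 sampled P values fall below 0.05.
lo, hi = wald_ci(940, 1000)
```

For proportions this close to 1, a Wilson or exact interval is usually preferred; the Wald form is shown only because it is the simplest illustration.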

  9. Phytohormone supplementation significantly increases growth of Chlamydomonas reinhardtii cultivated for biodiesel production.

    Science.gov (United States)

    Park, Won-Kun; Yoo, Gursong; Moon, Myounghoon; Kim, Chul Woong; Choi, Yoon-E; Yang, Ji-Won

    2013-11-01

    Cultivation is the most expensive step in the production of biodiesel from microalgae, and substantial research has been devoted to developing more cost-effective cultivation methods. Plant hormones (phytohormones) are chemical messengers that regulate various aspects of growth and development and are typically active at very low concentrations. In this study, we investigated the effect of different phytohormones on microalgal growth and biodiesel production in Chlamydomonas reinhardtii and their potential to lower the overall cost of commercial biofuel production. The results indicated that all five of the tested phytohormones (indole-3-acetic acid, gibberellic acid, kinetin, 1-triacontanol, and abscisic acid) promoted microalgal growth. In particular, hormone treatment increased biomass production by 54 to 69 % relative to the control growth medium (Tris-acetate-phosphate, TAP). Phytohormone treatments also affected microalgal cell morphology but had no effect on the yields of fatty acid methyl esters (FAMEs) as a percent of biomass. We also tested the effect of these phytohormones on microalgal growth in nitrogen-limited media by supplementation in the early stationary phase. Maximum cell densities after addition of phytohormones were higher than in TAP medium, even when the nitrogen source was reduced to 40 % of that in TAP medium. Taken together, our results indicate that phytohormones significantly increased microalgal growth, particularly in nitrogen-limited media, and have potential for use in the development of efficient microalgal cultivation for biofuel production.

  10. An Application of Multivariate Statistical Analysis for Query-Driven Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Gosink, Luke J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Garth, Christoph [Univ. of California, Davis, CA (United States); Anderson, John C. [Univ. of California, Davis, CA (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Joy, Kenneth I. [Univ. of California, Davis, CA (United States)

    2011-03-01

    Driven by the ability to generate ever-larger, increasingly complex data, there is an urgent need in the scientific community for scalable analysis methods that can rapidly identify salient trends in scientific data. Query-Driven Visualization (QDV) strategies are among the small subset of techniques that can address both large and highly complex datasets. This paper extends the utility of QDV strategies with a statistics-based framework that integrates non-parametric distribution estimation techniques with a new segmentation strategy to visually identify statistically significant trends and features within the solution space of a query. In this framework, query distribution estimates help users to interactively explore their query's solution and visually identify the regions where the combined behavior of constrained variables is most important, statistically, to their inquiry. Our new segmentation strategy extends the distribution estimation analysis by visually conveying the individual importance of each variable to these regions of high statistical significance. We demonstrate the analysis benefits these two strategies provide and show how they may be used to facilitate the refinement of constraints over variables expressed in a user's query. We apply our method to datasets from two different scientific domains to demonstrate its broad applicability.
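
    As a loose illustration of the distribution-estimation idea (not the paper's implementation), the sketch below selects the records satisfying a query and then estimates, non-parametrically, how a constrained variable is distributed inside the query's solution set. The data are synthetic.

```python
# Query-driven sketch: restrict synthetic records with a query predicate,
# then build a simple non-parametric (histogram) density estimate of one
# variable within the query's solution set.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(300.0, 10.0, size=10_000)
pressure = rng.normal(1.0, 0.1, size=10_000)

# Query: records with unusually high temperature.
solution = pressure[temperature > 315.0]

# Non-parametric estimate of pressure's distribution inside the solution set.
density, edges = np.histogram(solution, bins=20, density=True)
print(len(solution), density.argmax())
```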

  11. A scan statistic to extract causal gene clusters from case-control genome-wide rare CNV data

    Directory of Open Access Journals (Sweden)

    Scherer Stephen W

    2011-05-01

    Full Text Available Abstract Background Several statistical tests have been developed for analyzing genome-wide association data by incorporating gene pathway information in terms of gene sets. Using these methods, hundreds of gene sets are typically tested, and the tested gene sets often overlap. This overlapping greatly increases the probability of generating false positives, and the results obtained are difficult to interpret, particularly when many gene sets show statistical significance. Results We propose a flexible statistical framework to circumvent these problems. Inspired by spatial scan statistics for detecting clustering of disease occurrence in the field of epidemiology, we developed a scan statistic to extract disease-associated gene clusters from a whole gene pathway. Extracting one or a few significant gene clusters from a global pathway limits the overall false positive probability, which results in increased statistical power, and facilitates the interpretation of test results. In the present study, we applied our method to genome-wide association data for rare copy-number variations, which have been strongly implicated in common diseases. Application of our method to a simulated dataset demonstrated the high accuracy of this method in detecting disease-associated gene clusters in a whole gene pathway. Conclusions The scan statistic approach proposed here shows a high level of accuracy in detecting gene clusters in a whole gene pathway. This study has provided a sound statistical framework for analyzing genome-wide rare CNV data by incorporating topological information on the gene pathway.
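
    The core idea of a scan statistic can be sketched in a toy form (our simplification of the general approach, not the authors' exact statistic): slide a window along an ordered list of genes and find the window with the largest excess of case CNV hits over control hits. All counts are invented.

```python
# Toy scan statistic over a gene pathway: the window maximising the
# case-minus-control hit count marks a candidate disease-associated
# gene cluster.
case_hits    = [0, 1, 5, 6, 4, 1, 0, 0, 1, 0]   # CNV hits per gene, cases
control_hits = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1]   # CNV hits per gene, controls

def best_window(case, control, width=3):
    """Return (start index, excess) of the window maximising case - control hits."""
    scores = [
        (sum(case[i:i + width]) - sum(control[i:i + width]), i)
        for i in range(len(case) - width + 1)
    ]
    excess, start = max(scores)
    return start, excess

start, excess = best_window(case_hits, control_hits)
print(start, excess)   # the cluster of genes 2..4 stands out
```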

  12. How to construct the statistic network? An association network of herbaceous

    Directory of Open Access Journals (Sweden)

    WenJun Zhang

    2012-06-01

    Full Text Available In the present study I defined a new type of network, the statistic network. The statistic network is a weighted and non-deterministic network. In the statistic network, a connection value, i.e., connection weight, represents connection strength and connection likelihood between two nodes and its absolute value falls in the interval (0,1]. The connection value is expressed as a statistical measure such as correlation coefficient, association coefficient, or Jaccard coefficient, etc. In addition, all connections of the statistic network can be statistically tested for their validity. A connection is true if the connection value is statistically significant. If all connection values of a node are not statistically significant, it is an isolated node. An isolated node has no connection to other nodes in the statistic network. Positive and negative connection values denote distinct connection types (positive or negative association or interaction). In the statistic network, two nodes with the greater connection value will show a more similar trend in the change of their states. At any time we can obtain a sample network of the statistic network. A sample network is a non-weighted and deterministic network. The statistic network, in particular the plant association network constructed from field sampling, is mostly an information network. Most of the interspecific relationships in a plant community are competition and cooperation. Therefore, in comparison to animal networks, the statistic network methodology is more suitable for constructing plant association networks. Some conclusions were drawn from this study: (1) in the plant association network, most connections are weak and positive interactions. The association network constructed from Spearman rank correlation has the most connections and isolated taxa are fewer. From net linear correlation, linear correlation, to Spearman rank correlation, the practical number of connections and connectance in the
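
    A minimal sketch of building such a statistic network (an illustration with invented abundance data, using SciPy's `spearmanr` as the connection measure): an edge is kept only when its association is statistically significant, and a node with no significant connections is isolated.

```python
# Build a tiny association (statistic) network: connect two taxa only
# when the Spearman correlation between their abundance series is
# significant at p < 0.05. Data and threshold are illustrative.
from itertools import combinations
from scipy.stats import spearmanr

abundance = {                        # abundance of each taxon across 8 plots
    "A": [1, 2, 3, 4, 5, 6, 7, 8],
    "B": [2, 1, 4, 3, 6, 5, 8, 7],   # tracks A closely -> positive link
    "C": [8, 7, 6, 5, 4, 3, 2, 1],   # mirrors A -> negative link
    "D": [3, 8, 1, 6, 2, 7, 4, 5],   # noise -> likely an isolated node
}

edges = {}
for u, v in combinations(abundance, 2):
    rho, p = spearmanr(abundance[u], abundance[v])
    if p < 0.05:                     # keep only statistically valid connections
        edges[(u, v)] = round(float(rho), 3)

print(edges)
```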

  13. Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science.

    Science.gov (United States)

    Veldkamp, Coosje L S; Nuijten, Michèle B; Dominguez-Alvarez, Linda; van Assen, Marcel A L M; Wicherts, Jelte M

    2014-01-01

    Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
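
    The automated consistency check described above can be sketched in a simplified form (ours, not the actual tool used in the study): recompute the two-sided p-value implied by a reported t statistic and its degrees of freedom, and flag reported p-values that disagree beyond a rounding tolerance. The reported numbers below are invented.

```python
# Recompute p from a reported t statistic and degrees of freedom, and
# flag gross inconsistencies (a simplified sketch of the idea).
from scipy import stats

def p_inconsistent(t, df, reported_p, tol=0.01):
    """True if a reported two-sided p disagrees with t and df beyond tol."""
    recomputed = 2 * stats.t.sf(abs(t), df)
    return abs(recomputed - reported_p) > tol

# t = 2.2 with df = 30 implies p of roughly .036, so a reported p = .04 is
# within rounding, while a reported p = .30 is a gross inconsistency.
print(p_inconsistent(t=2.2, df=30, reported_p=0.04))
print(p_inconsistent(t=2.2, df=30, reported_p=0.30))
```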

  14. Comparing geological and statistical approaches for element selection in sediment tracing research

    Science.gov (United States)

    Laceby, J. Patrick; McMahon, Joe; Evrard, Olivier; Olley, Jon

    2015-04-01

    Elevated suspended sediment loads reduce reservoir capacity and significantly increase the cost of operating water treatment infrastructure, making the management of sediment supply to reservoirs increasingly important. Sediment fingerprinting techniques can be used to determine the relative contributions of different sources of sediment accumulating in reservoirs. The objective of this research is to compare geological and statistical approaches to element selection for sediment fingerprinting modelling. Time-integrated samplers (n=45) were used to obtain source samples from four major subcatchments flowing into the Baroon Pocket Dam in South East Queensland, Australia. The geochemistry of potential sources was compared to the geochemistry of sediment cores (n=12) sampled in the reservoir. The geochemical approach selected elements for modelling that provided expected, observed and statistical discrimination between sediment sources. Two statistical approaches selected elements for modelling with the Kruskal-Wallis H-test and Discriminatory Function Analysis (DFA). In particular, two different significance levels (0.05 & 0.35) for the DFA were included to investigate the importance of element selection on modelling results. A distribution model determined the relative contributions of different sources to sediment sampled in the Baroon Pocket Dam. Elemental discrimination was expected between one subcatchment (Obi Obi Creek) and the remaining subcatchments (Lexys, Falls and Bridge Creek). Six major elements were expected to provide discrimination. Of these six, only Fe2O3 and SiO2 provided expected, observed and statistical discrimination. Modelling results with this geological approach indicated 36% (+/- 9%) of sediment sampled in the reservoir cores were from mafic-derived sources and 64% (+/- 9%) were from felsic-derived sources.
The geological and the first statistical approach (DFA0.05) differed by only 1% (σ 5%) for 5 out of 6 model groupings with only
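
    The Kruskal-Wallis element-selection step named above can be sketched with invented concentrations (the subcatchment names come from the abstract; the numbers and SciPy usage are our assumptions): an element is kept for modelling only if it significantly discriminates between source groups.

```python
# Kruskal-Wallis H-test as an element-selection filter: keep an element
# only if its concentrations differ significantly across source groups.
from scipy.stats import kruskal

fe2o3 = {          # invented Fe2O3 concentrations (%) by source subcatchment
    "Obi Obi": [9.1, 8.8, 9.4, 9.0, 9.3],
    "Lexys":   [5.2, 5.6, 5.1, 5.4, 5.0],
    "Falls":   [5.9, 6.3, 6.1, 5.8, 6.4],
}

h, p = kruskal(*fe2o3.values())
keep = p < 0.05        # element discriminates sources -> usable as a tracer
print(round(float(h), 2), keep)
```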

  15. On the statistical assessment of classifiers using DNA microarray data

    Directory of Open Access Journals (Sweden)

    Carella M

    2006-08-01

    Full Text Available Abstract Background In this paper we present a method for the statistical assessment of cancer predictors which make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected in Casa Sollievo della Sofferenza Hospital, Foggia – Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to give answers to some questions which are relevant for the automatic diagnosis of cancer such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. Results We estimate the generalization error, evaluated through the Leave-K-Out Cross Validation error, for three different classification schemes by varying the number of training examples and the number of the genes used. The statistical significance of the error rate is measured by using a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing e = 21% (p = 0.045) as an error rate. This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machines (SVM) classifiers can learn with only 15 training examples, with an error rate of e = 19% (p = 0.035) and e = 18% (p = 0.037) respectively. Moreover, the error rate
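
    The permutation test used to attach a significance level to an error rate can be sketched generically (invented labels and predictions, not the paper's data or classifiers): shuffle the true labels many times and count how often a random labelling yields an error at least as low as the observed one.

```python
# Permutation test for a classifier's error rate: under shuffled labels
# the expected error is ~0.5, so an observed error of 0.125 should get a
# very small permutation p-value.
import random

random.seed(1)

def error_rate(pred, labels):
    """Fraction of positions where prediction and label disagree."""
    return sum(p != y for p, y in zip(pred, labels)) / len(labels)

true_labels = [0] * 20 + [1] * 20
predictions = [0] * 18 + [1] * 2 + [1] * 17 + [0] * 3   # 5 mistakes

observed = error_rate(predictions, true_labels)          # 0.125

n_perm, hits = 2000, 0
for _ in range(n_perm):
    shuffled = true_labels[:]
    random.shuffle(shuffled)
    if error_rate(predictions, shuffled) <= observed:
        hits += 1

p_value = (hits + 1) / (n_perm + 1)                      # permutation p-value
print(observed, p_value)
```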

  16. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    Full Text Available For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles demonstrated that this method was problematic. In summary, null hypothesis testing (NHT is unfalsifiable, its results depend directly on sample size and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect and probabilistic undetermination. The demonstration includes a complete example.
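
    The three-value logic can be sketched directly by comparing a confidence interval with the minimal value of a substantial effect; the 0.2 threshold below is an arbitrary illustration, not a value from the article.

```python
# Three-value logic for interpreting a confidence interval against a
# minimal "substantial" effect size (threshold chosen arbitrarily).
def interpret(ci_low, ci_high, substantial=0.2):
    """Classify an effect from its CI, per the three-value logic."""
    if ci_low >= substantial or ci_high <= -substantial:
        return "probable presence of a substantial effect"
    if -substantial <= ci_low and ci_high <= substantial:
        return "probable absence of a substantial effect"
    return "probabilistic undetermination"

print(interpret(0.35, 0.80))    # CI entirely beyond the threshold
print(interpret(-0.10, 0.15))   # CI entirely within the trivial range
print(interpret(0.05, 0.60))    # CI straddles the threshold
```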

  17. Statistics 101 for Radiologists.

    Science.gov (United States)

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
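
    A few of the diagnostic-test measures reviewed here can be computed from a small 2x2 confusion table; the counts below are invented for illustration.

```python
# Sensitivity, specificity, accuracy, and the positive likelihood ratio
# from a made-up 2x2 table of test results.
tp, fn = 90, 10     # diseased patients: test positive / test negative
fp, tn = 30, 170    # healthy patients:  test positive / test negative

sensitivity = tp / (tp + fn)                     # true positive rate
specificity = tn / (tn + fp)                     # true negative rate
accuracy = (tp + tn) / (tp + fn + fp + tn)
lr_pos = sensitivity / (1 - specificity)         # positive likelihood ratio
print(sensitivity, specificity, round(lr_pos, 1))
```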

  18. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  19. Basics of statistical physics

    CERN Document Server

    Müller-Kirsten, Harald J W

    2013-01-01

    Statistics links microscopic and macroscopic phenomena, and requires for this reason a large number of microscopic elements like atoms. The results are values of maximum probability or of averaging. This introduction to statistical physics concentrates on the basic principles, and attempts to explain these in simple terms supplemented by numerous examples. These basic principles include the difference between classical and quantum statistics, a priori probabilities as related to degeneracies, the vital aspect of indistinguishability as compared with distinguishability in classical physics, the differences between conserved and non-conserved elements, the different ways of counting arrangements in the three statistics (Maxwell-Boltzmann, Fermi-Dirac, Bose-Einstein), the difference between maximization of the number of arrangements of elements, and averaging in the Darwin-Fowler method. Significant applications to solids, radiation and electrons in metals are treated in separate chapters, as well as Bose-Eins...

  20. Renyi statistics in equilibrium statistical mechanics

    International Nuclear Information System (INIS)

    Parvan, A.S.; Biro, T.S.

    2010-01-01

    The Renyi statistics in the canonical and microcanonical ensembles is examined both in general and in particular for the ideal gas. In the microcanonical ensemble the Renyi statistics is equivalent to the Boltzmann-Gibbs statistics. By the exact analytical results for the ideal gas, it is shown that in the canonical ensemble, taking the thermodynamic limit, the Renyi statistics is also equivalent to the Boltzmann-Gibbs statistics. Furthermore it satisfies the requirements of the equilibrium thermodynamics, i.e. the thermodynamical potential of the statistical ensemble is a homogeneous function of first degree of its extensive variables of state. We conclude that the Renyi statistics arrives at the same thermodynamical relations, as those stemming from the Boltzmann-Gibbs statistics in this limit.

  1. Why Wait? The Influence of Academic Self-Regulation, Intrinsic Motivation, and Statistics Anxiety on Procrastination in Online Statistics

    Science.gov (United States)

    Dunn, Karee

    2014-01-01

    Online graduate education programs are expanding rapidly. Many of these programs require a statistics course, resulting in an increasing need for online statistics courses. The study reported here grew from experiences teaching online, graduate statistics courses. In seeking answers on how to improve this class, I discovered that research has yet…

  2. Statistical modeling for degradation data

    CERN Document Server

    Lio, Yuhlong; Ng, Hon; Tsai, Tzong-Ru

    2017-01-01

    This book focuses on the statistical aspects of the analysis of degradation data. In recent years, degradation data analysis has come to play an increasingly important role in different disciplines such as reliability, public health sciences, and finance. For example, information on products’ reliability can be obtained by analyzing degradation data. In addition, statistical modeling and inference techniques have been developed on the basis of different degradation measures. The book brings together experts engaged in statistical modeling and inference, presenting and discussing important recent advances in degradation data analysis and related applications. The topics covered are timely and have considerable potential to impact both statistics and reliability engineering.

  3. A shift from significance test to hypothesis test through power analysis in medical research

    Directory of Open Access Journals (Sweden)

    Singh Girish

    2006-01-01

    Full Text Available Until recently, the medical research literature exhibited substantial dominance of Fisher's significance test approach to statistical inference, which concentrates on the probability of type I error, over Neyman-Pearson's hypothesis test, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant with a P value. The Neyman-Pearson approach talks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches address the same objective but conclude in their own ways. The advancement in computing techniques and availability of statistical software have resulted in increasing application of power calculations in medical research and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to Neyman-Pearson's hypothesis test procedure.
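
    The power calculations mentioned above can be sketched with the usual normal approximation for a two-sample comparison (a generic textbook formula, not this article's method): with effect size d = 0.5 and 64 subjects per group at alpha = .05 two-sided, power comes out near the familiar 0.80 benchmark.

```python
# Approximate power of a two-sided two-sample test via the normal
# approximation: power = Phi(d * sqrt(n/2) - z_crit).
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(d, n_per_group, z_crit=1.96):
    """Approximate power for effect size d at alpha = .05, two-sided."""
    return phi(d * sqrt(n_per_group / 2.0) - z_crit)

print(round(power_two_sample(0.5, 64), 2))
```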

  4. Death Certification Errors and the Effect on Mortality Statistics.

    Science.gov (United States)

    McGivern, Lauri; Shulman, Leanne; Carney, Jan K; Shapiro, Steven; Bundock, Elizabeth

    Errors in cause and manner of death on death certificates are common and affect families, mortality statistics, and public health research. The primary objective of this study was to characterize errors in the cause and manner of death on death certificates completed by non-Medical Examiners. A secondary objective was to determine the effects of errors on national mortality statistics. We retrospectively compared 601 death certificates completed between July 1, 2015, and January 31, 2016, from the Vermont Electronic Death Registration System with clinical summaries from medical records. Medical Examiners, blinded to original certificates, reviewed summaries, generated mock certificates, and compared mock certificates with original certificates. They then graded errors using a scale from 1 to 4 (higher numbers indicated increased impact on interpretation of the cause) to determine the prevalence of minor and major errors. They also compared International Classification of Diseases, 10th Revision (ICD-10) codes on original certificates with those on mock certificates. Of 601 original death certificates, 319 (53%) had errors; 305 (51%) had major errors; and 59 (10%) had minor errors. We found no significant differences by certifier type (physician vs nonphysician). We did find significant differences in major errors by place of death, and such errors carry through into national mortality statistics. Surveillance and certifier education must expand beyond local and state efforts. Simplifying and standardizing underlying literal text for cause of death may improve accuracy, decrease coding errors, and improve national mortality statistics.

  5. Significant increase of Echinococcus multilocularis prevalence in foxes, but no increased predicted risk for humans

    NARCIS (Netherlands)

    Maas, M.; Dam-Deisz, W.D.C.; Roon, van A.M.; Takumi, K.; Giessen, van der J.W.B.

    2014-01-01

    The emergence of the zoonotic tapeworm Echinococcus multilocularis, causative agent of alveolar echinococcosis (AE), poses a public health risk. A previously designed risk map model predicted a spread of E. multilocularis and increasing numbers of alveolar echinococcosis patients in the province of

  6. Statistics: a Bayesian perspective

    National Research Council Canada - National Science Library

    Berry, Donald A

    1996-01-01

    ...: it is the only introductory textbook based on Bayesian ideas, it combines concepts and methods, it presents statistics as a means of integrating data into the significant process, it develops ideas...

  7. Philosophy of statistics

    CERN Document Server

    Forster, Malcolm R

    2011-01-01

    Statisticians and philosophers of science have many common interests but restricted communication with each other. This volume aims to remedy these shortcomings. It provides state-of-the-art research in the area of philosophy of statistics by encouraging numerous experts to communicate with one another without feeling "restricted" by their disciplines or thinking "piecemeal" in their treatment of issues. A second goal of this book is to present work in the field without bias toward any particular statistical paradigm. Broadly speaking, the essays in this Handbook are concerned with problems of induction, statistics and probability. For centuries, foundational problems like induction have been among philosophers' favorite topics; recently, however, non-philosophers have increasingly taken a keen interest in these issues. This volume accordingly contains papers by both philosophers and non-philosophers, including scholars from nine academic disciplines.

  8. Statistical Analysis and Evaluation of the Depth of the Ruts on Lithuanian State Significance Roads

    Directory of Open Access Journals (Sweden)

    Erinijus Getautis

    2011-04-01

    Full Text Available The aim of this work is to gather information about the depth of ruts on national flexible-pavement roads, to determine their statistical dispersion indices and to determine whether they meet the applicable requirements. An analysis of scientific work on the appearance of ruts in asphalt and their influence on driving is presented, as are dynamic models of rut formation in asphalt. Experimental data on the dispersion of rut depth on the Lithuanian national highway Vilnius – Kaunas are prepared. Conclusions are formulated and presented. Article in Lithuanian

  9. Reducing Anxiety and Increasing Self-Efficacy within an Advanced Graduate Psychology Statistics Course

    Science.gov (United States)

    McGrath, April L.; Ferns, Alyssa; Greiner, Leigh; Wanamaker, Kayla; Brown, Shelley

    2015-01-01

    In this study we assessed the usefulness of a multifaceted teaching framework in an advanced statistics course. We sought to expand on past findings by using this framework to assess changes in anxiety and self-efficacy, and we collected focus group data to ascertain whether students attribute such changes to a multifaceted teaching approach.…

  10. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, Kurt Schaldemose

    2004-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge on wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of high-sampled full-scale time series measurements...... are consistent, given the inevitable uncertainties associated with the model as well as with the extreme value data analysis. Keywords: statistical model, extreme wind conditions, statistical analysis, turbulence, wind loading, wind shear, wind turbines.

  11. Application of scl - pbl method to increase quality learning of industrial statistics course in department of industrial engineering pancasila university

    Science.gov (United States)

    Darmawan, M.; Hidayah, N. Y.

    2017-12-01

    Currently, there has been a change of paradigm in learning models in higher education, from the Teacher Centered Learning (TCL) model to Student Centered Learning (SCL). It is generally assumed that the SCL model is better than the TCL model. The Industrial Statistics 2 course in the Department of Industrial Engineering, Pancasila University, belongs to the Basic Engineering group. So far, the applied learning model has followed the TCL model, and field data show that the learning outcomes are less than satisfactory: over three consecutive even semesters (2013/2014, 2014/2015, and 2015/2016), the average grades were 56.0, 61.1, and 60.5. In the even semester of 2016/2017, Classroom Action Research (CAR) was conducted for this course through the implementation of the SCL model with the Problem Based Learning (PBL) method. The hypothesis proposed is that the SCL-PBL model will be able to improve the final grade of the course. The results show that the average grade of the course increased to 73.27. This value was then tested using ANOVA, and the test concluded that the average grade was significantly different from the average grades of the previous three semesters.
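
    The ANOVA comparison described above can be sketched with SciPy's `f_oneway`; the per-student grades below are invented to approximately match the reported semester averages, since the raw data are not given in the record.

```python
# One-way ANOVA across four semesters of (invented) per-student grades,
# with group means near the reported averages 56.0, 61.1, 60.5, 73.27.
from scipy.stats import f_oneway

sem_2013 = [50, 54, 56, 58, 62]          # mean 56.0
sem_2014 = [55, 59, 61, 63, 67]          # mean 61.0
sem_2015 = [54, 58, 60, 62, 68]          # mean 60.4
sem_scl  = [67, 71, 73, 75, 80]          # mean 73.2, after SCL-PBL

f_stat, p = f_oneway(sem_2013, sem_2014, sem_2015, sem_scl)
print(round(float(f_stat), 2), p < 0.05)
```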

  12. A study of statistics anxiety levels of graduate dental hygiene students.

    Science.gov (United States)

    Welch, Paul S; Jacks, Mary E; Smiley, Lynn A; Walden, Carolyn E; Clark, William D; Nguyen, Carol A

    2015-02-01

    In light of increased emphasis on evidence-based practice in the profession of dental hygiene, it is important that today's dental hygienist comprehend statistical measures to fully understand research articles, and thereby apply scientific evidence to practice. Therefore, the purpose of this study was to investigate statistics anxiety among graduate dental hygiene students in the U.S. A web-based self-report, anonymous survey was emailed to directors of 17 MSDH programs in the U.S. with a request to distribute to graduate students. The survey collected data on statistics anxiety, sociodemographic characteristics and evidence-based practice. Statistics anxiety was assessed using the Statistical Anxiety Rating Scale. Study significance level was α=0.05. Only 8 of the 17 invited programs participated in the study. Statistical Anxiety Rating Scale data revealed graduate dental hygiene students experience low to moderate levels of statistics anxiety. Specifically, the level of anxiety on the Interpretation Anxiety factor indicated this population could struggle with making sense of scientific research. A decisive majority (92%) of students indicated statistics is essential for evidence-based practice and should be a required course for all dental hygienists. This study served to identify statistics anxiety in a previously unexplored population. The findings should be useful in both theory building and practical applications. Furthermore, the results can be used to direct future research. Copyright © 2015 The American Dental Hygienists’ Association.

  13. Understanding Statistics - Cancer Statistics

    Science.gov (United States)

    Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

  14. Environmental radiations and childhood dynamic statistics

    International Nuclear Information System (INIS)

    Sakka, Masatoshi

    1981-01-01

    In Fukushima prefecture, the first nuclear power plant attained criticality in 1971; since then, six reactors have been in operation. There is increasing concern about possible adverse effects of ionizing radiation released from nuclear reactors. As the radiation level around nuclear power plants is usually low, any induced effects are necessarily delayed ones that require tens of years to appear. Among tissues, embryos and foetuses are the most radiosensitive, so induced effects would appear as changes in childhood dynamic statistics. In this report, dynamic statistics including stillbirths, perinatal deaths, neonatal deaths, infant deaths, and third-year examinations were surveyed in 18 health centers in the prefecture from 1961 to 1979. Environmental radiation levels in each district (health center) were compared and ranked (1, 2, ... etc.), and the dynamic statistics for each district were ranked likewise. Order correlation coefficients were calculated and a linear relationship between radiation level and health status was tested. No significant values were obtained; correlation coefficients ranged from 0.66 to -0.43. On average, stillbirths have decreased by 4.4%/y since 1963, while neonatal deaths have decreased by 6.7%/y and infant deaths by 8.7%/y since 1957. These decreases were negatively correlated with the expansion of water supply and sewage services and the increase in physicians across the 18 districts, including the 2 under continuous observation of environmental radiation released from nuclear power plants. Childhood dynamic statistics have improved over the last 10 years in prefectures differing by 47 mR/y (a lowest average of 56 mR/y in 3 prefectures and a highest of 103 mR/y in 4). Environmental radiation may initiate adverse effects on prenatal lives, but the hygienic improvements of recent years may have suppressed the promotion of those effects. This may be a plausible explanation. (author)
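
    The district-ranking approach described above corresponds to a rank (order) correlation. A minimal sketch using Spearman's rank correlation, with hypothetical district values rather than the Fukushima survey data:

    ```python
    # Spearman rank correlation sketch; the 18 district values below are
    # invented for illustration, not the survey's measurements.
    from scipy import stats

    # 18 districts: annual radiation level (mR/y) and a health indicator
    # (e.g. stillbirth rate per 1000) -- both hypothetical
    radiation = [56, 60, 63, 67, 70, 72, 75, 78, 80,
                 82, 85, 88, 90, 93, 95, 98, 100, 103]
    stillbirth = [8.1, 7.4, 9.0, 6.8, 8.5, 7.9, 7.2, 8.8, 6.9,
                  7.7, 8.3, 7.0, 9.1, 7.5, 8.0, 7.3, 8.6, 7.6]

    # spearmanr ranks both series internally, then correlates the ranks
    rho, p = stats.spearmanr(radiation, stillbirth)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
    ```

    A rho near zero with a large p-value, as in the study, would indicate no monotonic relationship between district radiation level and the health indicator.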

  15. Statistical principles for prospective study protocols:

    DEFF Research Database (Denmark)

    Christensen, Robin; Langberg, Henning

    2012-01-01

    In the design of scientific studies it is essential to decide on which scientific questions one aims to answer, just as it is important to decide on the correct statistical methods to use to answer these questions. The correct use of statistical methods is crucial in all aspects of research...... to quantify relationships in data. Despite an increased focus on statistical content and complexity of biomedical research these topics remain difficult for most researchers. Statistical methods enable researchers to condense large spreadsheets with data into means, proportions, and difference between means...... the statistical principles for trial protocols in terms of design, analysis, and reporting of findings....

  16. Funding source and primary outcome changes in clinical trials registered on ClinicalTrials.gov are associated with the reporting of a statistically significant primary outcome: a cross-sectional study [v2; ref status: indexed, http://f1000r.es/5bj

    Directory of Open Access Journals (Sweden)

    Sreeram V Ramagopalan

    2015-04-01

    Full Text Available Background: We and others have shown a significant proportion of interventional trials registered on ClinicalTrials.gov have their primary outcomes altered after the listed study start and completion dates. The objectives of this study were to investigate whether changes made to primary outcomes are associated with the likelihood of reporting a statistically significant primary outcome on ClinicalTrials.gov. Methods: A cross-sectional analysis of all interventional clinical trials registered on ClinicalTrials.gov as of 20 November 2014 was performed. The main outcome was any change made to the initially listed primary outcome and the time of the change in relation to the trial start and end date. Findings: 13,238 completed interventional trials were registered with ClinicalTrials.gov that also had study results posted on the website. 2555 (19.3%) had one or more statistically significant primary outcomes. Statistical analysis showed that registration year, funding source and primary outcome change after trial completion were associated with reporting a statistically significant primary outcome. Conclusions: Funding source and primary outcome change after trial completion are associated with a statistically significant primary outcome report on ClinicalTrials.gov.

  17. Managing Macroeconomic Risks by Using Statistical Simulation

    Directory of Open Access Journals (Sweden)

    Merkaš Zvonko

    2017-06-01

    Full Text Available The paper analyzes the possibilities of using statistical simulation in the measurement of macroeconomic risks. At the level of the whole world, macroeconomic risks have increased significantly due to excessive imbalances. Using analytical statistical methods and Monte Carlo simulation, the authors interpret the collected data sets, comparing and analyzing them in order to mitigate potential risks. The empirical part of the study is a qualitative case study that uses statistical methods and Monte Carlo simulation for managing macroeconomic risks, the central theme of this work. Statistical simulation is necessary because the system for which a model must be specified is too complex for an analytical approach. The objective of the paper is to highlight the need to consider significant macroeconomic risks, particularly the number of unemployed in society, the movement of gross domestic product, and the country’s credit rating, and to use data previously processed by statistical methods, through statistical simulation, to analyze the existing model of managing macroeconomic risks and to suggest elements for developing a management model that minimizes the probability and consequences of recent macroeconomic risks. The stochastic characteristics of the system, defined by random input variables with given probability distributions, require a large number of iterations over which the model's output is recorded and mathematical expectations are calculated. The paper expounds the basic procedures and techniques of discrete statistical simulation applied to systems characterized by events representing circumstances that change the system's state, and the possibility of its application in the field of assessment of macroeconomic risks. The method has no
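
    The simulation approach the abstract describes consists of random input variables with given distributions, many iterations, and a mathematical expectation computed over the recorded outputs. A minimal Monte Carlo sketch of that loop follows; the risk indicator, distributions, and parameters are invented for illustration, not taken from the paper:

    ```python
    # Minimal Monte Carlo sketch: draw random macro inputs, record a model
    # output per iteration, estimate its expectation and a tail probability.
    # All distributions and parameters here are hypothetical.
    import random

    random.seed(42)
    N = 100_000
    outcomes = []
    for _ in range(N):
        gdp_growth = random.gauss(2.0, 1.5)    # % change, assumed normal
        unemployment = random.gauss(8.0, 2.0)  # %, assumed normal
        # Hypothetical composite risk indicator combining the two inputs
        risk = max(0.0, unemployment - 2.0 * gdp_growth)
        outcomes.append(risk)

    expected_risk = sum(outcomes) / N              # mathematical expectation
    prob_high = sum(r > 10 for r in outcomes) / N  # tail-risk probability
    print(f"E[risk] = {expected_risk:.2f}, P(risk > 10) = {prob_high:.3f}")
    ```

    The large iteration count is what the abstract's "large number of iterations" refers to: the sample mean converges on the expectation of the stochastic output.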

  18. Changes in rocket salad phytochemicals within the commercial supply chain: Glucosinolates, isothiocyanates, amino acids and bacterial load increase significantly after processing.

    Science.gov (United States)

    Bell, Luke; Yahya, Hanis Nadia; Oloyede, Omobolanle Oluwadamilola; Methven, Lisa; Wagstaff, Carol

    2017-04-15

    Five cultivars of Eruca sativa and a commercial variety of Diplotaxis tenuifolia were grown in the UK (summer) and subjected to commercial growth, harvesting and processing, with subsequent shelf life storage. Glucosinolates (GSL), isothiocyanates (ITC), amino acids (AA), free sugars, and bacterial loads were analysed throughout the supply chain to determine the effects on phytochemical compositions. Bacterial load of leaves increased significantly over time and peaked during shelf life storage. Significant correlations were observed with GSL and AA concentrations, suggesting a previously unknown relationship between plants and endemic leaf bacteria. GSLs, ITCs and AAs increased significantly after processing and during shelf life. The supply chain did not significantly affect glucoraphanin concentrations, and its ITC sulforaphane significantly increased during shelf life in E. sativa cultivars. We hypothesise that commercial processing may increase the nutritional value of the crop, and have added health benefits for the consumer. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. STATISTICAL INSIGHT INTO THE BINDING REGIONS IN DISORDERED HUMAN PROTEOME

    Directory of Open Access Journals (Sweden)

    Uttam Pal

    2016-03-01

    Full Text Available The human proteome contains a significant number of intrinsically disordered proteins (IDPs). They show unusual structural features that enable them to participate in diverse cellular functions and play significant roles in cell signaling and reorganization processes. In addition, the actions of IDPs, their functional cooperativity, conformational alterations and folding often accompany binding to a target macromolecule. Applying bioinformatics approaches with the aid of statistical methodologies, we investigated the statistical parameters of binding regions (BRs) found in the disordered human proteome. In this report, we detail the bioinformatics analysis of binding regions found in the IDPs. Statistical models for the occurrence of BRs, their length distribution and percent occupancy in the parent proteins are shown. The frequency of BRs followed a Poisson distribution, with expectation increasing with the degree of disorder. The length of the individual BRs also followed a Poisson distribution with a mean of 6 residues, whereas the percentage of residues in BRs showed a normal distribution pattern. We also explored physicochemical properties such as the grand average of hydropathy (GRAVY) and the theoretical isoelectric points (pIs). The theoretical pIs of the BRs followed a bimodal distribution, as in the parent proteins; however, the mean acidic/basic pIs were significantly lower/higher than those of the proteins, respectively. We further showed that the amino acid composition of BRs was enriched in hydrophobic residues such as Ala, Val, Ile, Leu and Phe compared to the average sequence content of the proteins. Sequences in a BR showed conformational adaptability, mostly towards a flexible coil structure followed by helix; however, ordered secondary structural conformation was significantly lower in BRs than in the proteins. Combining and comparing this statistical information on BRs with other methods may be useful for high
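
    A Poisson fit of the kind reported can be checked with a chi-square goodness-of-fit test. The sketch below uses simulated segment lengths (mean 6, matching the reported BR length mean) rather than the paper's data:

    ```python
    # Chi-square goodness-of-fit check against a Poisson model; the "observed"
    # lengths are simulated stand-ins, not the study's binding-region data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    lengths = rng.poisson(lam=6, size=500)  # stand-in for observed BR lengths

    lam_hat = lengths.mean()  # maximum-likelihood estimate of the Poisson mean
    # Compare observed vs expected counts for lengths 0..14
    observed = np.bincount(lengths, minlength=15)[:15]
    expected = stats.poisson.pmf(np.arange(15), lam_hat) * len(lengths)
    # Rescale expected so both sum to the same total (required by chisquare)
    expected *= observed.sum() / expected.sum()
    chi2, p = stats.chisquare(observed, expected, ddof=1)  # 1 estimated param
    print(f"lambda ~ {lam_hat:.2f}, chi-square p = {p:.3f}")
    ```

    A large p-value means the observed length counts are consistent with a Poisson model, which is the shape of conclusion the abstract reports.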

  20. Statistical analysis and data management

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    This report provides an overview of the history of the WIPP Biology Program. The recommendations of the American Institute of Biological Sciences (AIBS) for the WIPP biology program are summarized. The data sets available for statistical analyses and problems associated with these data sets are also summarized. Biological studies base maps are presented. A statistical model is presented to evaluate any correlation between climatological data and small mammal captures. No statistically significant relationship between variance in small mammal captures on Dr. Gennaro's 90m x 90m grid and precipitation records from the Duval Potash Mine was found.

  1. Triglyceride content in remnant lipoproteins is significantly increased after food intake and is associated with plasma lipoprotein lipase.

    Science.gov (United States)

    Nakajima, Katsuyuki; Tokita, Yoshiharu; Sakamaki, Koji; Shimomura, Younosuke; Kobayashi, Junji; Kamachi, Keiko; Tanaka, Akira; Stanhope, Kimber L; Havel, Peter J; Wang, Tao; Machida, Tetsuo; Murakami, Masami

    2017-02-01

    Previous large population studies reported that non-fasting plasma triglyceride (TG) levels reflect a higher risk for cardiovascular disease than TG in fasting plasma. This is suggestive of a higher concentration of remnant lipoproteins (RLP) in postprandial plasma. TG and RLP-TG, together with other lipids, lipoproteins and lipoprotein lipase (LPL), were determined in both fasting and postprandial plasma of generally healthy volunteers and of patients with coronary artery disease (CAD) after consuming a fat load or a more typical moderate meal. The RLP-TG/TG ratio (concentration) and RLP-TG/RLP-C ratio (particle size) were significantly increased in the postprandial plasma of both healthy controls and CAD patients compared with fasting plasma. The LPL/RLP-TG ratio demonstrated the interaction between RLP concentration and LPL activity. The increased RLP-TG after fat consumption contributed approximately 90% of the increase in plasma TG, versus approximately 60% after a typical meal. Plasma LPL in postprandial plasma was not significantly altered after either type of meal. RLP-TG concentration and RLP particle size are significantly increased in postprandial plasma compared with fasting plasma. Therefore, non-fasting TG determination better reflects the presence of higher RLP concentrations in plasma. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  2. INCREASE OF QUEUING SYSTEM EFFECTIVENESS OF TRADING ENTERPRISE BY MEANS OF NUMERICAL STATISTICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    Knyazheva Yu. V.

    2014-06-01

    Full Text Available The market economy creates a need for economic analysis first of all at the micro level, that is, at the level of individual enterprises, as enterprises are the basis of a market economy. Improving the queuing system of a trading enterprise is therefore an important economic problem. Analytical solutions to queuing problems described in the theory do not correspond to the real operating conditions of queuing systems. Therefore, in this article, optimization of the customer service process and improvement of the settlement and cash service system of a trading enterprise are carried out by means of numerical statistical simulation of the enterprise's queuing system. The article describes an integrated statistical numerical simulation model of the queuing systems of a trading enterprise working in nonstationary conditions, with reference to different distribution laws of the customer input stream. The model accounts for various behaviors of the customer output stream and includes a checkout service model that accounts for the cashier's rate of work, as well as a staff motivation model and profit earning and optimization models that take into account possible revenue and costs. The statistical numerical simulation model, when realized in a suitable software environment, allows optimization of the most important parameters of the system. With a convenient user interface, the model can become a component of a decision-support system for rationalizing organizational structure and optimizing the management of a trading enterprise.
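
    As a minimal illustration of the kind of queuing simulation the article describes, here is a single-cashier discrete-event sketch with exponential interarrival and service times; the rates are assumptions for illustration, not values from the article:

    ```python
    # Single-server (M/M/1-style) checkout queue, simulated numerically.
    # Arrival and service rates are illustrative assumptions.
    import random

    random.seed(7)
    ARRIVAL_RATE = 0.8   # customers per minute (assumed)
    SERVICE_RATE = 1.0   # customers per minute (assumed)
    N_CUSTOMERS = 10_000

    clock = 0.0          # arrival time of the current customer
    server_free_at = 0.0 # when the cashier next becomes idle
    total_wait = 0.0
    for _ in range(N_CUSTOMERS):
        clock += random.expovariate(ARRIVAL_RATE)  # next arrival
        start = max(clock, server_free_at)         # wait if cashier is busy
        total_wait += start - clock
        server_free_at = start + random.expovariate(SERVICE_RATE)

    avg_wait = total_wait / N_CUSTOMERS
    # M/M/1 theory predicts Wq = lambda / (mu * (mu - lambda)) = 4 min here
    print(f"average wait ~ {avg_wait:.2f} min")
    ```

    Varying the assumed rates (or swapping in non-exponential distributions, as the article's nonstationary model does) and re-running the loop is the numerical-optimization step the abstract refers to.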

  3. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  4. Energy statistics 1999

    International Nuclear Information System (INIS)

    2000-10-01

    Denmark's gross energy consumption increased in 1999 by almost 0.5%, while CO2 emissions decreased by 1.4%. Energy Statistics 1999 shows that energy consumption in households and the production industries was the same as the year before, while consumption in the trade and service sectors and for transportation increased. The Danish production of petroleum, natural gas and renewable energy increased in 1999 to 1000 PJ, an increase of 17% compared to 1998. The degree of self-supply increased to 118%, meaning that energy production was 18% higher than energy consumption in 1999. This was primarily due to a very large 26% increase in petroleum production. (LN)

  5. Long-term use of amiodarone before heart transplantation significantly reduces early post-transplant atrial fibrillation and is not associated with increased mortality after heart transplantation

    Directory of Open Access Journals (Sweden)

    Rivinius R

    2016-02-01

    group (P=0.0123). There was no statistically significant difference between patients with and without long-term use of amiodarone prior to HTX in 1-year (P=0.8596), 2-year (P=0.8620), 5-year (P=0.2737), or overall follow-up mortality after HTX (P=0.1049). Moreover, Kaplan–Meier survival analysis showed no statistically significant difference in overall survival (P=0.1786). Conclusion: Long-term use of amiodarone in patients before HTX significantly reduces early post-transplant AF and is not associated with increased mortality after HTX. Keywords: amiodarone, atrial fibrillation, heart failure, heart transplantation, mortality

  6. Statistical Seismology and Induced Seismicity

    Science.gov (United States)

    Tiampo, K. F.; González, P. J.; Kazemian, J.

    2014-12-01

    While seismicity triggered or induced by natural resource production such as mining or water impoundment in large dams has long been recognized, the recent increase in the unconventional production of oil and gas has been linked to a rapid rise in seismicity in many places, including central North America (Ellsworth et al., 2012; Ellsworth, 2013). Worldwide, induced events of M~5 have occurred and, although rare, have resulted in both damage and public concern (Horton, 2012; Keranen et al., 2013). In addition, over the past twenty years, the increase in both the number and coverage of seismic stations has resulted in an unprecedented ability to precisely record the magnitude and location of large numbers of small-magnitude events. The increase in the number and type of seismic sequences available for detailed study has revealed differences in their statistics that were previously difficult to quantify. For example, seismic swarms that produce significant numbers of foreshocks as well as aftershocks have been observed in different tectonic settings, including California, Iceland, and the East Pacific Rise (McGuire et al., 2005; Shearer, 2012; Kazemian et al., 2014). Similarly, smaller events have been observed prior to larger induced events in several occurrences from energy production. The field of statistical seismology has long focused on the question of triggering and the mechanisms responsible (Stein et al., 1992; Hill et al., 1993; Steacy et al., 2005; Parsons, 2005; Main et al., 2006). For example, in most cases the associated stress perturbations are much smaller than the earthquake stress drop, suggesting an inherent sensitivity to relatively small stress changes (Nalbant et al., 2005). Induced seismicity provides the opportunity to investigate triggering and, in particular, the differences between long- and short-range triggering.
Here we investigate the statistics of induced seismicity sequences from around the world, including central North America and Spain, and

  7. Prognostic Significance of Creatinine Increases During an Acute Heart Failure Admission in Patients With and Without Residual Congestion: A Post Hoc Analysis of the PROTECT Data.

    Science.gov (United States)

    Metra, Marco; Cotter, Gad; Senger, Stefanie; Edwards, Christopher; Cleland, John G; Ponikowski, Piotr; Cursack, Guillermo C; Milo, Olga; Teerlink, John R; Givertz, Michael M; O'Connor, Christopher M; Dittrich, Howard C; Bloomfield, Daniel M; Voors, Adriaan A; Davison, Beth A

    2018-05-01

    The importance of a serum creatinine increase, traditionally considered worsening renal function (WRF), during admission for acute heart failure has recently been debated, with data suggesting an interaction between congestion and creatinine changes. In post hoc analyses, we analyzed the association of WRF with length of hospital stay, 30-day death or cardiovascular/renal readmission, and 90-day mortality in the PROTECT study (Placebo-Controlled Randomized Study of the Selective A1 Adenosine Receptor Antagonist Rolofylline for Patients Hospitalized With Acute Decompensated Heart Failure and Volume Overload to Assess Treatment Effect on Congestion and Renal Function). Daily creatinine changes from baseline were categorized as WRF (an increase of 0.3 mg/dL or more) or not. Daily congestion scores were computed by summing scores for orthopnea, edema, and jugular venous pressure. Of the 2033 patients randomized, 1537 had both measures available at study day 14. Length of hospital stay was longer and 30-day cardiovascular/renal readmission or death more common in patients with WRF. However, these findings were driven by significant associations in patients with concomitant congestion at the time of assessment of renal function. The mean difference in length of hospital stay attributable to WRF was 3.51 (95% confidence interval, 1.29-5.73) more days (P=0.0019), and the hazard ratio for WRF on 30-day death or heart failure hospitalization was 1.49 (95% confidence interval, 1.06-2.09) times higher (P=0.0205), in significantly congested than in nonsignificantly congested patients. A similar trend was observed for 90-day mortality, although not statistically significant. In patients admitted for acute heart failure, WRF defined as a creatinine increase of ≥0.3 mg/dL was associated with longer length of hospital stay and worse 30- and 90-day outcomes. However, effects were largely driven by patients who had residual congestion at the time of renal function assessment. URL: https

  8. Social marketing campaign significantly associated with increases in syphilis testing among gay and bisexual men in San Francisco.

    Science.gov (United States)

    Montoya, Jorge A; Kent, Charlotte K; Rotblatt, Harlan; McCright, Jacque; Kerndt, Peter R; Klausner, Jeffrey D

    2005-07-01

    Between 1999 and 2002, San Francisco experienced a sharp increase in early syphilis among gay and bisexual men. In response, the San Francisco Department of Public Health launched a social marketing campaign to increase testing for syphilis, and awareness and knowledge about syphilis, among gay and bisexual men. A convenience sample of 244 gay and bisexual men (18-60 years of age) was surveyed to evaluate the effectiveness of the campaign. Respondents were interviewed to elicit unaided and aided awareness of the campaign, knowledge about syphilis, recent sexual behaviors, and syphilis testing behavior. After controlling for other potential confounders, unaided campaign awareness was a significant correlate of having a syphilis test in the last 6 months (odds ratio, 3.21; 95% confidence interval, 1.30-7.97) compared with no awareness of the campaign. A comparison of respondents aware of the campaign with those not aware also revealed significant increases in awareness and knowledge about syphilis. The Healthy Penis 2002 campaign achieved its primary objective of increasing syphilis testing, and awareness and knowledge about syphilis, among gay and bisexual men in San Francisco.
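
    The reported association is an odds ratio (3.21) with a 95% confidence interval. A sketch of that computation from a 2×2 table, using hypothetical counts chosen only so the odds ratio comes out near the reported value:

    ```python
    # Odds ratio with a Wald 95% CI from a 2x2 table. The counts below are
    # invented for illustration; only the ~3.2 odds ratio echoes the abstract.
    import math

    # rows: campaign-aware vs unaware; cols: tested in last 6 months vs not
    a, b = 60, 40   # aware:   tested / not tested  (assumed counts)
    c, d = 30, 64   # unaware: tested / not tested  (assumed counts)

    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log odds ratio
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
    ```

    Because the interval excludes 1, the association would be judged statistically significant, which is the form of the abstract's finding (the study itself additionally adjusted for confounders).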

  9. The significance of climate change in the Netherlands. An analysis of historical and future trends (1901-2020) in weather conditions, weather extremes and temperature-related impacts

    Energy Technology Data Exchange (ETDEWEB)

    Visser, H.

    2005-07-01

    A rigorous statistical analysis reveals changes in Dutch climate that are statistically significant over the last century. Annually averaged temperatures have increased by 1.5 ± 0.5 degrees Centigrade; the number of summer days has roughly doubled from 14 ± 5 to 27 ± 9 days; annual precipitation has increased by 120 ± 100 mm; and the number of extremely wet days has increased by about 40%, from 19 ± 3 to 26 ± 3 days. Several other changes in Dutch climate, such as spring temperatures rising more rapidly than winter temperatures, the 0.9 degrees Centigrade increase of the coldest temperature in each year, and the annual maximum day sum of precipitation, turn out to be not (yet) statistically significant. The changes in Dutch climate have already led to several statistically significant impacts. The length of the growing season has increased by nearly a month, and the number of heating-degree days, a measure of the energy needed for heating houses and buildings, has decreased by 14 ± 5%. Projections of future temperature increase in 2020 based on the statistical analysis closely resemble projections based on climate models: temperatures continue to increase from 10.4 ± 0.4 degrees Centigrade in 2003 to 10.7 ± 0.6 degrees Centigrade in 2010 and 11.1 ± 1.0 degrees Centigrade in 2020. The energy needed for heating houses and buildings is expected to decrease further. This warming effect is expected to lower projections of future Dutch greenhouse-gas emissions by 3.5 Mton CO2 equivalents, which is relevant in the context of commitments under the Kyoto Protocol. Finally, over the course of the 20th century the chance of an 'Elfstedentocht', an outdoor skating event in the Netherlands, has decreased from once every five years to once every ten years. Even though this impact change is not yet statistically significant, it resides 'on the edge' of significance: within a few years more evidence may

  10. The prognostic significance of angina pectoris experienced during the first month following acute myocardial infarction

    DEFF Research Database (Denmark)

    Jespersen, C M

    1997-01-01

    BACKGROUND: Angina pectoris accompanied by transient ST-segment changes during the in-hospital phase of acute myocardial infarction (AMI) is a well established marker of subsequent cardiac death and reinfarction. HYPOTHESIS: This study was undertaken to record the prognostic significance of angina...... on study treatment 1 month after discharge. Of these patients, 311 (39%) reported chest pain during the first month following discharge. RESULTS: Patients with angina pectoris had a significantly increased risk of reinfarction [hazard 1.71; 95%-confidence limit (CL): 1.09, 2.69] and increased mortality...... risk which, however, only reached borderline statistical significance (hazard 1.52; 95%-CL: 0.96, 2.40). When patients were subdivided according to both angina pectoris and heart failure, those with one or both of these risk markers had significantly increased mortality (p = 0.03) and reinfarction (p 0...

  11. National Statistical Commission and Indian Official Statistics*

    Indian Academy of Sciences (India)

    IAS Admin


  12. Statistical methods for nuclear material management

    International Nuclear Information System (INIS)

    Bowen, W.M.; Bennett, C.A.

    1988-12-01

    This book is intended as a reference manual of statistical methodology for nuclear material management practitioners. It describes statistical methods currently or potentially important in nuclear material management, explains the choice of methods for specific applications, and provides examples of practical applications to nuclear material management problems. Together with the accompanying training manual, which contains fully worked out problems keyed to each chapter, this book can also be used as a textbook for courses in statistical methods for nuclear material management. It should provide increased understanding and guidance to help improve the application of statistical methods to nuclear material management problems

  13. Statistical methods for nuclear material management

    Energy Technology Data Exchange (ETDEWEB)

    Bowen W.M.; Bennett, C.A. (eds.)

    1988-12-01

    This book is intended as a reference manual of statistical methodology for nuclear material management practitioners. It describes statistical methods currently or potentially important in nuclear material management, explains the choice of methods for specific applications, and provides examples of practical applications to nuclear material management problems. Together with the accompanying training manual, which contains fully worked out problems keyed to each chapter, this book can also be used as a textbook for courses in statistical methods for nuclear material management. It should provide increased understanding and guidance to help improve the application of statistical methods to nuclear material management problems.

  14. Official Statistics and Statistics Education: Bridging the Gap

    Directory of Open Access Journals (Sweden)

    Gal Iddo

    2017-03-01

    Full Text Available This article aims to challenge official statistics providers and statistics educators to ponder how to help non-specialist adult users of statistics develop those aspects of statistical literacy that pertain to official statistics. We first document the gap in the literature in terms of the conceptual basis and educational materials needed for such an undertaking. We then review skills and competencies that may help adults to make sense of statistical information in areas of importance to society. Based on this review, we identify six elements related to official statistics about which non-specialist adult users should possess knowledge in order to be considered literate in official statistics: (1) the system of official statistics and its work principles; (2) the nature of statistics about society; (3) indicators; (4) statistical techniques and big ideas; (5) research methods and data sources; and (6) awareness and skills for citizens’ access to statistical reports. Based on this ad hoc typology, we discuss directions that official statistics providers, in cooperation with statistics educators, could take in order to (1) advance the conceptualization of skills needed to understand official statistics, and (2) expand educational activities and services, specifically by developing a collaborative digital textbook and a modular online course, to improve public capacity for understanding official statistics.

  15. Statistical power and the Rorschach: 1975-1991.

    Science.gov (United States)

    Acklin, M W; McDowell, C J; Orndoff, S

    1992-10-01

    The Rorschach Inkblot Test has been the source of long-standing controversies as to its nature and its psychometric properties. Consistent with behavioral science research in general, the concept of statistical power has been entirely ignored by Rorschach researchers. The concept of power is introduced and discussed, and a power survey of the Rorschach literature published between 1975 and 1991 in the Journal of Personality Assessment, Journal of Consulting and Clinical Psychology, Journal of Abnormal Psychology, Journal of Clinical Psychology, Journal of Personality, Psychological Bulletin, American Journal of Psychiatry, and Journal of Personality and Social Psychology was undertaken. Power was calculated for 2,300 statistical tests in 158 journal articles. Power to detect small, medium, and large effect sizes was .13, .56, and .85, respectively. Similar to the findings in other power surveys conducted on behavioral science research, we concluded that Rorschach research is underpowered to detect the differences under investigation. This undoubtedly contributes to the inconsistency of research findings which has been a source of controversy and criticism over the decades. It appears that research conducted according to the Comprehensive System for the Rorschach is more powerful. Recommendations are offered for improving power and strengthening the design sensitivity of Rorschach research, including increasing sample sizes, use of parametric statistics, reduction of error variance, more accurate reporting of findings, and editorial policies reflecting concern about the magnitude of relationships beyond an exclusive focus on levels of statistical significance.
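
    Power figures of the kind reported in such surveys can be reproduced from the noncentral t distribution. The sketch below computes power for a two-sample t-test at Cohen's small, medium, and large effect sizes (d = 0.2, 0.5, 0.8) with an assumed 30 subjects per group; the surveyed studies' actual sample sizes varied, so the numbers only illustrate the method:

    ```python
    # Power of a two-sided, two-sample t-test via the noncentral t distribution.
    # The per-group n of 30 is an assumption for illustration.
    import math
    from scipy import stats

    def power_two_sample_t(d, n_per_group, alpha=0.05):
        """Power to detect standardized effect size d with equal group sizes."""
        df = 2 * n_per_group - 2
        nc = d * math.sqrt(n_per_group / 2)      # noncentrality parameter
        t_crit = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
        # P(|T| > t_crit) when T follows the noncentral t with parameter nc
        return (1 - stats.nct.cdf(t_crit, df, nc)
                + stats.nct.cdf(-t_crit, df, nc))

    for d in (0.2, 0.5, 0.8):
        print(f"d = {d}: power ~ {power_two_sample_t(d, 30):.2f}")
    ```

    The pattern matches the survey's message: at conventional sample sizes, power to detect small effects is very low, and only large effects are detected reliably.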

  16. Is there a clinically significant seasonal component to hospital admissions for atrial fibrillation?

    Directory of Open Access Journals (Sweden)

    Moineddin Rahim

    2004-03-01

Full Text Available Abstract Background Atrial fibrillation is a common cardiac dysrhythmia, particularly in the elderly. Recent studies have indicated a statistically significant seasonal component to atrial fibrillation hospitalizations. Methods We conducted a retrospective population cohort study using time series analysis to evaluate seasonal patterns of atrial fibrillation hospitalizations for the province of Ontario for the years 1988 to 2001. Five different time series methods were used to analyze the data: spectral analysis, X11, R-squared, the autocorrelation function and monthly aggregation. Results This study found evidence of weak seasonality, most apparent at aggregate levels including both ages and sexes. There was a dramatic increase in hospitalizations for atrial fibrillation over the years studied and an age-dependent increase in rates per 100,000. Overall, the magnitude of the seasonal difference between peak and trough months is on the order of 1.4 admissions per 100,000 population. The peaks for hospitalizations were predominantly in April, and the troughs in August. Conclusions Our study confirms statistical evidence of seasonality for atrial fibrillation hospitalizations. This effect is small in absolute terms and likely not significant for policy or etiological research purposes.
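One of the methods listed, the autocorrelation function, flags an annual cycle as a strong correlation at lag 12 in monthly data. A minimal sketch on a synthetic monthly series with an April peak (the Ontario data are not reproduced here; the series and amplitudes below are hypothetical):

```python
import math

def autocorrelation(series, lag):
    """Sample autocorrelation of a series at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag))
    return cov / var

# Ten years of hypothetical monthly admission rates per 100,000,
# peaking in April (month index 3, with January as index 0)
months = range(120)
rates = [10 + 0.7 * math.cos(2 * math.pi * (m - 3) / 12) for m in months]
print(round(autocorrelation(rates, 12), 3))  # → 0.9 (strong annual cycle)
```

A weak seasonal component like the one reported would give a lag-12 autocorrelation only slightly above zero rather than near one.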

  17. Assessing Fun Items' Effectiveness in Increasing Learning of College Introductory Statistics Students: Results of a Randomized Experiment

    Science.gov (United States)

    Lesser, Lawrence M.; Pearl, Dennis K.; Weber, John J., III

    2016-01-01

    There has been a recent emergence of scholarship on the use of fun in the college statistics classroom, with at least 20 modalities identified. While there have been randomized experiments that suggest that fun can enhance student achievement or attitudes in statistics, these studies have generally been limited to one particular fun modality or…

  18. Statistical Analysis of the Polarimetric Cloud Analysis and Seeding Test (POLCAST) Field Projects

    Science.gov (United States)

    Ekness, Jamie Lynn

The North Dakota farming industry brings in more than $4.1 billion annually in cash receipts. Unfortunately, agriculture sales vary significantly from year to year, due in large part to weather events such as hail storms and droughts. One method to mitigate drought is to use hygroscopic seeding to increase the precipitation efficiency of clouds. The North Dakota Atmospheric Research Board (NDARB) sponsored the Polarimetric Cloud Analysis and Seeding Test (POLCAST) research project to determine the effectiveness of hygroscopic seeding in North Dakota. The POLCAST field projects obtained airborne and radar observations while conducting randomized cloud seeding. The Thunderstorm Identification Tracking and Nowcasting (TITAN) program is used to analyze radar data (33 usable cases) to determine differences in storm duration, rain rate and total rain amount between seeded and non-seeded clouds. The single ratio of seeded to non-seeded cases is 1.56 (0.28 mm/0.18 mm), a 56% increase in average hourly rainfall during the first 60 minutes after target selection. A seeding effect is indicated, with the lifetime of the storms increasing by 41% between seeded and non-seeded clouds for the first 60 minutes past the seeding decision. A double ratio statistic, comparing the radar-derived rain amount of the last 40 minutes of a case (seed/non-seed) to that of the first 20 minutes (seed/non-seed), is used to account for the natural variability of the cloud system and gives a double ratio of 1.85. The Mann-Whitney test on the double ratio of seeded to non-seeded cases (33 cases) gives a significance (p-value) of 0.063. Bootstrapping analysis of the POLCAST data set indicates that 50 cases would provide statistically significant results based on the Mann-Whitney test of the double ratio. All of the statistical analyses conducted on the POLCAST data set indicate that hygroscopic seeding in North Dakota does increase precipitation. While an additional POLCAST field
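The single and double ratio statistics described above can be sketched as simple functions (the 0.28 mm and 0.18 mm hourly means are the values quoted in the abstract; the function names are assumptions):

```python
def single_ratio(seeded_mean, unseeded_mean):
    """Ratio of mean rainfall in seeded vs. non-seeded cases."""
    return seeded_mean / unseeded_mean

def double_ratio(seed_late, noseed_late, seed_early, noseed_early):
    """Late-window seed/no-seed rain ratio normalized by the
    early-window ratio, controlling for natural cloud variability."""
    return (seed_late / noseed_late) / (seed_early / noseed_early)

# Hourly rainfall means quoted above (mm)
print(round(single_ratio(0.28, 0.18), 2))  # → 1.56
```

Normalizing by the pre-seeding window is what lets the double ratio separate a seeding effect from clouds that simply happened to rain more.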

  19. Decompression of keratocystic odontogenic tumors leading to increased fibrosis, but without any change in epithelial proliferation.

    Science.gov (United States)

    Awni, Sarah; Conn, Brendan

    2017-06-01

    The aim of this study was to investigate whether decompression treatment induces changes in the histology or biologic behavior of keratocystic odontogenic tumor (KCOT). Seventeen patients with KCOT underwent decompression treatment with or without enucleation. Histologic evaluation and immunohistochemical expression of p53, Ki-67, and Bcl-2 were analyzed by using conventional microscopy. KCOT showed significantly increased fibrosis (P = .01) and a subjective reduction in mitotic activity (P = .03) after decompression. There were no statistically significant changes in the expression of proliferation markers. An increase in daughter-cysts or epithelial rests was seen after decompression (P = .04). Recurrence was noted in four of 16 cases, and expression of p53 was strongly correlated with prolonged duration of treatment (P = .01) and intense inflammatory changes (P = .02). Structural changes in the KCOT epithelium or capsule following decompression facilitate surgical removal of the tumor. There was no statistical evidence that decompression influences expression of proliferation markers in the lining, indicating that the potential for recurrence may not be restricted to the cellular level. The statistically significant increase of p53 expression with increased duration of treatment and increase of inflammation may also indicate the possibility of higher rates of recurrence with prolonged treatment and significant inflammatory changes. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  20. Significance of atmospheric effects of heat rejection from energy centers in the semi arid northwest

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Drake, R.L.; Young, J.R.

    1976-01-01

The results presented in this paper have been obtained using simple atmospheric models in an attempt to optimize heat sink management in a conceptual nuclear energy center (NEC) at Hanford. The models have been designed to be conservative in the sense that they are biased toward overprediction of the impact of cooling system effluents on humidity and fog. Thus the models are screening tools to be used to identify subjects for further, more realistic examination. Within this context the following conclusions have been reached: the evaluation of any atmospheric impact postulated for heat dissipation must be conducted in quantitative terms which can be used to determine the significance of the impact; of the potential atmospheric impacts of large heat releases from energy centers, the one most amenable to quantitative evaluation in meaningful terms is the increase in fog; a postulated increase in the frequency of fog can be translated into terms of visibility, and both can be evaluated statistically; the translation of an increase in fog into visibility terms permits economic evaluation of the impact; and the predicted impact of the HNEC on fog and visibility is statistically significant whether the energy center consists of 20 or 40 units.

  1. Free ammonia pre-treatment of secondary sludge significantly increases anaerobic methane production.

    Science.gov (United States)

    Wei, Wei; Zhou, Xu; Wang, Dongbo; Sun, Jing; Wang, Qilin

    2017-07-01

Energy recovery in the form of methane from sludge/wastewater is restricted by the poor and slow biodegradability of secondary sludge. An innovative pre-treatment technology using free ammonia (FA, i.e. NH₃) was proposed in this study to increase anaerobic methane production. The solubilisation of secondary sludge was significantly increased after FA pre-treatment at up to 680 mg NH₃-N/L for 1 day, under which the solubilisation (i.e. 0.4 mg SCOD/mg VS; SCOD: soluble chemical oxygen demand; VS: volatile solids) was >10 times higher than that without FA pre-treatment (i.e. 0.03 mg SCOD/mg VS). Biochemical methane potential assays showed that FA pre-treatment at above 250 mg NH₃-N/L is effective in improving anaerobic methane production. The highest improvement in biochemical methane potential (B₀) and hydrolysis rate (k) was achieved at FA concentrations of 420-680 mg NH₃-N/L, and was determined as approximately 22% (from 160 to 195 L CH₄/kg VS added) and 140% (from 0.22 to 0.53 d⁻¹) compared to the secondary sludge without pre-treatment. More analysis revealed that the FA-induced improvement in B₀ and k could be attributed to the rapidly biodegradable substances rather than the slowly biodegradable substances. Economic and environmental analyses showed that the FA-based technology is economically favourable and environmentally friendly. Since this FA technology aims to use the wastewater treatment plants' (WWTPs) waste (i.e. anaerobic digestion liquor) to enhance methane production from the WWTPs, it will set an example for the paradigm shift of the WWTPs from 'linear economy' to 'circular economy'. Copyright © 2017 Elsevier Ltd. All rights reserved.
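The reported biochemical methane potential B0 and hydrolysis rate k are the parameters of the first-order kinetic model commonly fitted to such assays, B(t) = B0 * (1 - exp(-k*t)). A minimal sketch with the values quoted above (assuming that model; the 7-day time point is arbitrary):

```python
import math

def methane_yield(t_days, b0, k):
    """First-order biochemical methane potential model: cumulative
    methane (L CH4/kg VS added) after t_days of digestion."""
    return b0 * (1.0 - math.exp(-k * t_days))

# Parameters quoted above: untreated sludge vs. FA pre-treated sludge
untreated = methane_yield(7, b0=160, k=0.22)
pretreated = methane_yield(7, b0=195, k=0.53)
print(round(untreated), round(pretreated))  # → 126 190
```

The higher k means the pre-treated sludge approaches its (also higher) ultimate yield much sooner, which is why both parameters matter for digester sizing.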

  2. Changes of serum tumor markers, immunoglobulins, TNF-α and hs-CRP levels in patients with breast cancer and its clinical significance

    Institute of Scientific and Technical Information of China (English)

    Jian-Gang Dai; Yong-Feng Wu; Mei Li

    2017-01-01

Objective: To study the concentrations of serum tumor markers, immunoglobulins, TNF-α and hs-CRP in breast cancer at different pathological stages, and to analyze their clinical significance for the early diagnosis of breast cancer. Methods: A total of 130 patients with breast cancer were divided into stages I, II, III and IV according to clinical pathology. In addition, 40 patients with benign breast disease and 35 healthy subjects were selected as the benign breast disease group and the control group. Serum tumor marker, immunoglobulin, TNF-α and hs-CRP concentrations were measured and compared across all subjects. Results: There were no significant differences in serum tumor markers, immunoglobulins or inflammatory factors between the control group and the benign breast disease group. The level of serum tumor markers in the breast cancer group was significantly higher than that in the control group and the benign breast disease group. The levels of serum CA125, CA153 and CEA increased gradually with severity from stage I to IV of breast cancer, and the difference was statistically significant. The level of serum immunoglobulins in the breast cancer group was significantly higher than that in the control group and the benign breast disease group. The levels of serum IgG and IgM increased gradually with severity from stage I to IV of breast cancer, and the difference was statistically significant. The levels of serum TNF-α and hs-CRP in the breast cancer group were significantly higher than those in the control group and the benign breast disease group. The serum levels of TNF-α and hs-CRP increased gradually with severity from stage I to IV of breast cancer, and the difference was statistically significant. Conclusion: The level of serum tumor markers in breast cancer patients is increased. Humoral and inflammatory responses are activated to varying degrees and increase with the aggravation of disease. They may be involved in regulating the occurrence and metastasis of breast cancer.

  3. Development of free statistical software enabling researchers to calculate confidence levels, clinical significance curves and risk-benefit contours

    International Nuclear Information System (INIS)

    Shakespeare, T.P.; Mukherjee, R.K.; Gebski, V.J.

    2003-01-01

Confidence levels, clinical significance curves, and risk-benefit contours are tools improving analysis of clinical studies and minimizing misinterpretation of published results; however, no software has been available for their calculation. The objective was to develop software to help clinicians utilize these tools. Excel 2000 spreadsheets were designed using only built-in functions, without macros. The workbook was protected and encrypted so that users can modify only input cells. The workbook has 4 spreadsheets for use in studies comparing two patient groups. Sheet 1 comprises instructions and graphic examples for use. Sheet 2 allows the user to input the main study results (e.g. survival rates) into a 2-by-2 table. Confidence intervals (95%), p-value and the confidence level for Treatment A being better than Treatment B are automatically generated. An additional input cell allows the user to determine the confidence associated with a specified level of benefit. For example if the user wishes to know the confidence that Treatment A is at least 10% better than B, 10% is entered. Sheet 2 automatically displays clinical significance curves, graphically illustrating confidence levels for all possible benefits of one treatment over the other. Sheet 3 allows input of toxicity data, and calculates the confidence that one treatment is more toxic than the other. It also determines the confidence that the relative toxicity of the most effective arm does not exceed user-defined tolerability. Sheet 4 automatically calculates risk-benefit contours, displaying the confidence associated with a specified scenario of minimum benefit and maximum risk of one treatment arm over the other. The spreadsheet is freely downloadable at www.ontumor.com/professional/statistics.htm. A simple, self-explanatory, freely available spreadsheet calculator was developed using Excel 2000.
The incorporated decision-making tools can be used for data analysis and improve the reporting of results of any
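The abstract does not give the spreadsheet's exact formulas, but the core quantity it reports, the confidence that one treatment beats the other by a specified margin, can be sketched with a normal approximation to the difference of two proportions (the function name and input counts are hypothetical):

```python
import math

def confidence_a_exceeds_b(success_a, n_a, success_b, n_b, margin=0.0):
    """One-sided confidence that treatment A's true success rate
    exceeds treatment B's by at least `margin`, using a normal
    approximation to the difference of two proportions."""
    pa, pb = success_a / n_a, success_b / n_b
    se = math.sqrt(pa * (1 - pa) / n_a + pb * (1 - pb) / n_b)
    z = (pa - pb - margin) / se
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical 2-by-2 input: 60/100 survive on A vs. 50/100 on B
print(round(confidence_a_exceeds_b(60, 100, 50, 100), 3))        # → 0.923
print(round(confidence_a_exceeds_b(60, 100, 50, 100, 0.10), 3))  # → 0.5
```

Sweeping `margin` over a range of benefits and plotting the resulting confidences is exactly what a clinical significance curve displays.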

  4. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as well

  5. Short communication Increase in metal extractability after liming of ...

    African Journals Online (AJOL)

    ... decreased in extractability (Statistically significant differences could not be determined for the trial due to the trial not having been designed for the results that were obtained). Similar results were reported in the literature for EDTA metal extraction but the phenomenon was not elaborated upon, except for Cr. The increased ...

  6. Statistics: The stethoscope of a thinking urologist

    Directory of Open Access Journals (Sweden)

    Arun S Sivanandam

    2009-01-01

Full Text Available Understanding statistical terminology and the ability to appraise clinical research findings and statistical tests are critical to the practice of evidence-based medicine. Urologists require statistics in their toolbox of skills in order to successfully sift through increasingly complex studies and realize the drawbacks of statistical tests. Currently, the level of evidence in urology literature is low and the majority of research abstracts published for the American Urological Association (AUA) meetings lag behind for full-text publication because of a lack of statistical reporting. Underlying these issues is a distinct deficiency in solid comprehension of statistics in the literature and a discomfort with the application of statistics for clinical decision-making. This review examines the plight of statistics in urology and investigates the reason behind the white-coat aversion to biostatistics. Resources such as evidence-based medicine websites, primers in statistics, and guidelines for statistical reporting exist for quick reference by urologists. Ultimately, educators should take charge of monitoring statistical knowledge among trainees by bolstering competency requirements and creating sustained opportunities for statistics and methodology exposure.

  7. Review of the Statistical Techniques in Medical Sciences | Okeh ...

    African Journals Online (AJOL)

    ... medical researcher in selecting the appropriate statistical techniques. Of course, all statistical techniques have certain underlying assumptions, which must be checked before the technique is applied. Keywords: Variable, Prospective Studies, Retrospective Studies, Statistical significance. Bio-Research Vol. 6 (1) 2008: pp.

  8. Increased NMDA receptor inhibition at an increased Sevoflurane MAC

    Directory of Open Access Journals (Sweden)

    Brosnan Robert J

    2012-06-01

Full Text Available Abstract Background Sevoflurane potently enhances glycine receptor currents and more modestly decreases NMDA receptor currents, each of which may contribute to immobility. This modest NMDA receptor antagonism by sevoflurane at a minimum alveolar concentration (MAC) could be reciprocally related to large potentiation of other inhibitory ion channels. If so, then reduced glycine receptor potency should increase NMDA receptor antagonism by sevoflurane at MAC. Methods Indwelling lumbar subarachnoid catheters were surgically placed in 14 anesthetized rats. Rats were anesthetized with sevoflurane the next day, and a pre-infusion sevoflurane MAC was measured in duplicate using a tail clamp method. Artificial CSF (aCSF) containing either 0 or 4 mg/mL strychnine was then infused intrathecally at 4 μL/min, and the post-infusion baseline sevoflurane MAC was measured. Finally, aCSF containing strychnine (either 0 or 4 mg/mL) plus 0.4 mg/mL dizocilpine (MK-801) was administered intrathecally at 4 μL/min, and the post-dizocilpine sevoflurane MAC was measured. Results Pre-infusion sevoflurane MAC was 2.26%. Intrathecal aCSF alone did not affect MAC, but intrathecal strychnine significantly increased the sevoflurane requirement. Addition of dizocilpine significantly decreased MAC in all rats, but this decrease was two times larger in rats without intrathecal strychnine compared to rats with intrathecal strychnine, a statistically significant difference. Conclusions Glycine receptor antagonism increases NMDA receptor antagonism by sevoflurane at MAC. The magnitude of anesthetic effects on a given ion channel may therefore depend on the magnitude of its effects on other receptors that modulate neuronal excitability.

  9. Effect size and statistical power in the rodent fear conditioning literature - A systematic review.

    Science.gov (United States)

    Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B

    2018-01-01

    Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.
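The sample-size estimate discussed above can be sketched with the standard normal-approximation formula n = 2(z_alpha/2 + z_beta)^2 / d^2 per group, for 80% power at two-sided alpha = 0.05 (a sketch with a standardized effect size d; the effect sizes below are illustrative, not the review's data, and the normal approximation slightly understates the n an exact t-test calculation would give):

```python
import math

def n_per_group(d, z_alpha=1.959964, z_beta=0.841621):
    """Smallest per-group n for ~80% power at two-sided alpha = 0.05
    in a two-sample comparison (normal approximation)."""
    return math.ceil(2.0 * (z_alpha + z_beta) ** 2 / d ** 2)

for d in (0.5, 1.0, 1.5):
    print(d, n_per_group(d))
# → 0.5 63 / 1.0 16 / 1.5 7
```

The steep 1/d² dependence explains why group sizes adequate for large behavioral effects are badly underpowered for moderate ones.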

  10. Spectral and cross-spectral analysis of uneven time series with the smoothed Lomb-Scargle periodogram and Monte Carlo evaluation of statistical significance

    Science.gov (United States)

    Pardo-Igúzquiza, Eulogio; Rodríguez-Tovar, Francisco J.

    2012-12-01

    Many spectral analysis techniques have been designed assuming sequences taken with a constant sampling interval. However, there are empirical time series in the geosciences (sediment cores, fossil abundance data, isotope analysis, …) that do not follow regular sampling because of missing data, gapped data, random sampling or incomplete sequences, among other reasons. In general, interpolating an uneven series in order to obtain a succession with a constant sampling interval alters the spectral content of the series. In such cases it is preferable to follow an approach that works with the uneven data directly, avoiding the need for an explicit interpolation step. The Lomb-Scargle periodogram is a popular choice in such circumstances, as there are programs available in the public domain for its computation. One new computer program for spectral analysis improves the standard Lomb-Scargle periodogram approach in two ways: (1) It explicitly adjusts the statistical significance to any bias introduced by variance reduction smoothing, and (2) it uses a permutation test to evaluate confidence levels, which is better suited than parametric methods when neighbouring frequencies are highly correlated. Another novel program for cross-spectral analysis offers the advantage of estimating the Lomb-Scargle cross-periodogram of two uneven time series defined on the same interval, and it evaluates the confidence levels of the estimated cross-spectra by a non-parametric computer intensive permutation test. Thus, the cross-spectrum, the squared coherence spectrum, the phase spectrum, and the Monte Carlo statistical significance of the cross-spectrum and the squared-coherence spectrum can be obtained. Both of the programs are written in ANSI Fortran 77, in view of its simplicity and compatibility. The program code is of public domain, provided on the website of the journal (http://www.iamg.org/index.php/publisher/articleview/frmArticleID/112/). Different examples (with simulated and
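A minimal implementation of the classical Lomb-Scargle periodogram (without the variance-reduction smoothing or permutation-test extensions the programs described above add) might look like this; the unevenly sampled test series is synthetic:

```python
import math

def lomb_scargle(t, x, freqs):
    """Classical normalized Lomb-Scargle periodogram for unevenly
    sampled data. t: sample times, x: values, freqs: ordinary
    frequencies (cycles per unit time)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((xi - mean) ** 2 for xi in x) / (n - 1)
    xc = [xi - mean for xi in x]
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # Time offset tau makes the sine and cosine terms orthogonal
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2.0 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        xc_dot_c = sum(xi * ci for xi, ci in zip(xc, c))
        xc_dot_s = sum(xi * si for xi, si in zip(xc, s))
        power.append((xc_dot_c ** 2 / sum(ci * ci for ci in c)
                      + xc_dot_s ** 2 / sum(si * si for si in s)) / (2.0 * var))
    return power

# Unevenly sampled, noise-free sinusoid at 0.1 cycles per unit time
times = [0.0, 1.3, 2.1, 4.0, 5.7, 6.2, 8.9, 9.5, 11.1, 13.4,
         14.8, 16.0, 17.7, 19.2, 21.6, 23.0, 24.9, 26.3, 28.8, 30.1]
values = [math.sin(2 * math.pi * 0.1 * ti) for ti in times]
freqs = [0.02 * k for k in range(1, 26)]
pwr = lomb_scargle(times, values, freqs)
print(round(freqs[pwr.index(max(pwr))], 2))  # → 0.1 (the true frequency)
```

A permutation test of the kind the article describes would shuffle `values` over the fixed `times` many times and compare the observed peak power against the shuffled peaks.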

  11. Ibuprofen therapy resulted in significantly decreased tissue bacillary loads and increased survival in a new murine experimental model of active tuberculosis.

    Science.gov (United States)

    Vilaplana, Cristina; Marzo, Elena; Tapia, Gustavo; Diaz, Jorge; Garcia, Vanesa; Cardona, Pere-Joan

    2013-07-15

    C3HeB/FeJ mice infected with Mycobacterium tuberculosis were used in an experimental animal model mimicking active tuberculosis in humans to evaluate the effect of antiinflammatory agents. No other treatment but ibuprofen was given, and it was administered when the animals' health started to deteriorate. Animals treated with ibuprofen had statistically significant decreases in the size and number of lung lesions, decreases in the bacillary load, and improvements in survival, compared with findings for untreated animals. Because antiinflammatory agents are already on the market, further clinical trials should be done to evaluate this effect in humans as soon as possible, to determine their suitability as coadjuvant tuberculosis treatment.

  12. Predictor of increase in caregiver burden for disabled elderly at home.

    Science.gov (United States)

    Okamoto, Kazushi; Harasawa, Yuko

    2009-01-01

In order to identify early those caregivers at high risk of an increase in burden, linear discriminant analysis was performed to obtain an effective discriminant model differentiating the presence or absence of an increase in caregiver burden. The data, obtained by self-administered questionnaire from 193 caregivers of frail elderly from January to February of 2005, were used. The discriminant analysis yielded a statistically significant function explaining 35.0% of the variance (Rc=0.59; d.f.=6; p=0.0001). The configuration indicated that the psychological predictors of change in caregiver burden, namely much perceived stress (1.47), high caregiver burden at baseline (1.28), emotional control (0.75), effort to achieve (-0.28), symptomatic depression (0.20) and "ikigai" (purpose in life) (0.18), made statistically significant contributions to the differentiation between no increase and an increase in caregiver burden. The discriminant function showed a sensitivity of 86% and specificity of 81%, and successfully classified 83% of the caregivers. The function at baseline is a simple and useful method for screening for an increase in caregiver burden among caregivers of the frail elderly at home.
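The screening performance figures quoted above are straightforward functions of a 2x2 classification table; for example (the counts below are hypothetical, chosen only to roughly match the reported 86%, 81% and 83%):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and overall accuracy from a 2x2
    classification table (tp/fn: increased-burden cases classified
    correctly/incorrectly; tn/fp: no-increase cases likewise)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts roughly matching the reported percentages
sens, spec, acc = screening_metrics(tp=86, fp=19, tn=81, fn=14)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # → 0.86 0.81 0.83
```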

  13. On quantum statistical inference

    NARCIS (Netherlands)

    Barndorff-Nielsen, O.E.; Gill, R.D.; Jupp, P.E.

    2003-01-01

    Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, developments in the theory of quantum measurements have

  14. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

Full Text Available Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.

  15. Statistical thermodynamics

    International Nuclear Information System (INIS)

    Lim, Gyeong Hui

    2008-03-01

This book consists of 15 chapters, covering: the basic concepts and meaning of statistical thermodynamics; Maxwell-Boltzmann statistics; ensembles; thermodynamic functions and fluctuations; the statistical dynamics of independent particle systems; ideal molecular systems; chemical equilibrium and chemical reaction rates in ideal gas mixtures; classical statistical thermodynamics; the ideal lattice model; lattice statistics and nonideal lattice models; imperfect gas theory and liquids; the theory of solutions; the statistical thermodynamics of interfaces; the statistical thermodynamics of macromolecular systems; and quantum statistics.

  16. Flaws and fallacies in statistical thinking

    CERN Document Server

    Campbell, Stephen K

    2004-01-01

This book was written with a dual purpose: first, the author was motivated to relieve his distress over the faulty conclusions drawn from the frequent misuse of relatively simple statistical tools such as percents, graphs, and averages. Second, his objective was to create a nontechnical book that would help people make better-informed decisions by increasing their ability to judge the quality of statistical evidence. This volume achieves both, serving as a supplemental text for students taking their first course in statistics, and as a self-help guide for anyone wishing to evaluate statistical evidence.

  17. Teaching statistics a bag of tricks

    CERN Document Server

    Gelman, Andrew

    2002-01-01

Students in the sciences, economics, psychology, social sciences, and medicine take introductory statistics. Statistics is increasingly offered at the high school level as well. However, statistics can be notoriously difficult to teach as it is seen by many students as difficult and boring, if not irrelevant to their subject of choice. To help dispel these misconceptions, Gelman and Nolan have put together this fascinating and thought-provoking book. Based on years of teaching experience, the book provides a wealth of demonstrations, examples and projects that involve active student participation.

  18. Matrix Tricks for Linear Statistical Models

    CERN Document Server

    Puntanen, Simo; Styan, George PH

    2011-01-01

    In teaching linear statistical models to first-year graduate students or to final-year undergraduate students there is no way to proceed smoothly without matrices and related concepts of linear algebra; their use is really essential. Our experience is that making some particular matrix tricks very familiar to students can substantially increase their insight into linear statistical models (and also multivariate statistical analysis). In matrix algebra, there are handy, sometimes even very simple "tricks" which simplify and clarify the treatment of a problem - both for the student and

  19. [Statistics for statistics?--Thoughts about psychological tools].

    Science.gov (United States)

    Berger, Uwe; Stöbel-Richter, Yve

    2007-12-01

Statistical methods occupy a prominent place in psychologists' educational programs. Known as difficult to understand and hard to learn, this content is feared by students. Those who do not aspire to a research career at a university quickly forget the drilled material. Furthermore, because at first glance it does not apply to work with patients and other target groups, the methodological education as a whole has often been questioned. For many psychological practitioners, statistical education makes sense only for commanding respect from other professions, namely physicians. For their own work, statistics is rarely taken seriously as a professional tool. The reason seems clear: statistics treats numbers, while psychotherapy treats subjects. So, is statistics an end in itself? With this article, we try to answer the question of whether and how statistical methods are represented in psychotherapeutic and psychological research. To do so, we analyzed 46 original articles from a complete volume of the journal Psychotherapy, Psychosomatics, Psychological Medicine (PPmP). Within the volume, 28 different analysis methods were applied, of which 89 percent were directly based on statistics. Being able to write and critically read original articles, the backbone of research, presupposes a high degree of statistical education. To ignore statistics means to ignore research and, not least, to surrender one's own professional work to arbitrariness.

  20. Increased Mortality in Diabetic Foot Ulcer Patients: The Significance of Ulcer Type

    Science.gov (United States)

    Chammas, N. K.; Hill, R. L. R.; Edmonds, M. E.

    2016-01-01

    Diabetic foot ulcer (DFU) patients have a greater than twofold increase in mortality compared with nonulcerated diabetic patients. We investigated (a) cause of death in DFU patients, (b) age at death, and (c) relationship between cause of death and ulcer type. This was an eleven-year retrospective study on DFU patients who attended King's College Hospital Foot Clinic and subsequently died. A control group of nonulcerated diabetic patients was matched for age and type of diabetes mellitus. The cause of death was identified from death certificates (DC) and postmortem (PM) examinations. There were 243 DFU patient deaths during this period. Ischaemic heart disease (IHD) was the major cause of death in 62.5% on PM compared to 45.7% on DC. Mean age at death from IHD on PM was 5 years lower in DFU patients compared to controls (68.2 ± 8.7 years versus 73.1 ± 8.0 years, P = 0.015). IHD as a cause of death at PM was significantly linked to neuropathic foot ulcers (OR 3.064, 95% CI 1.003–9.366, and P = 0.049). Conclusions. IHD is the major cause of premature mortality in DFU patients with the neuropathic foot ulcer patients being at a greater risk. PMID:27213157

  1. Accident statistics and the human-factor element.

    Science.gov (United States)

    Shuckburgh, J S

    1975-01-01

    The number of fatal accidents involving public transport aircraft has increased significantly in recent years and, because more and more "wide-bodied" aircraft have been coming into service, this has resulted in a rapid increase in the number of fatalities. A combined attack on the problem by all concerned with flight safety is required to improve the situation. The collection and analysis of aircraft accident data can contribute to safety in two ways: by giving an indication of where to concentrate future effort and by showing how successful past efforts have been. An analysis of worldwide accident statistics by phase of flight and causal factor shows that the largest percentage of accidents occurs in the approach and landing phase and is caused by "pilot error". Further research is needed to find out why pilots make errors and how such errors can be eliminated.

  2. Statistical analysis and digital processing of the Mössbauer spectra

    International Nuclear Information System (INIS)

    Prochazka, Roman; Tucek, Jiri; Mashlan, Miroslav; Pechousek, Jiri; Tucek, Pavel; Marek, Jaroslav

    2010-01-01

    This work is focused on using the statistical methods and development of the filtration procedures for signal processing in Mössbauer spectroscopy. Statistical tools for noise filtering in the measured spectra are used in many scientific areas. The use of a pure statistical approach in accumulated Mössbauer spectra filtration is described. In Mössbauer spectroscopy, the noise can be considered as a Poisson statistical process with a Gaussian distribution for high numbers of observations. This noise is a superposition of the non-resonant photons counting with electronic noise (from γ-ray detection and discrimination units), and the velocity system quality that can be characterized by the velocity nonlinearities. The possibility of a noise-reducing process using a new design of statistical filter procedure is described. This mathematical procedure improves the signal-to-noise ratio and thus makes it easier to determine the hyperfine parameters of the given Mössbauer spectra. The filter procedure is based on a periodogram method that makes it possible to assign the statistically important components in the spectral domain. The significance level for these components is then feedback-controlled using the correlation coefficient test results. The estimation of the theoretical correlation coefficient level which corresponds to the spectrum resolution is performed. The correlation coefficient test is based on comparison of the theoretical and the experimental correlation coefficients given by the Spearman method. The correctness of this solution was analyzed by a series of statistical tests and confirmed by many spectra measured with increasing statistical quality for a given sample (absorber). The effect of this filter procedure depends on the signal-to-noise ratio, and the applicability of this method has binding conditions.

  3. Statistical analysis and digital processing of the Mössbauer spectra

    Science.gov (United States)

    Prochazka, Roman; Tucek, Pavel; Tucek, Jiri; Marek, Jaroslav; Mashlan, Miroslav; Pechousek, Jiri

    2010-02-01

    This work is focused on using the statistical methods and development of the filtration procedures for signal processing in Mössbauer spectroscopy. Statistical tools for noise filtering in the measured spectra are used in many scientific areas. The use of a pure statistical approach in accumulated Mössbauer spectra filtration is described. In Mössbauer spectroscopy, the noise can be considered as a Poisson statistical process with a Gaussian distribution for high numbers of observations. This noise is a superposition of the non-resonant photons counting with electronic noise (from γ-ray detection and discrimination units), and the velocity system quality that can be characterized by the velocity nonlinearities. The possibility of a noise-reducing process using a new design of statistical filter procedure is described. This mathematical procedure improves the signal-to-noise ratio and thus makes it easier to determine the hyperfine parameters of the given Mössbauer spectra. The filter procedure is based on a periodogram method that makes it possible to assign the statistically important components in the spectral domain. The significance level for these components is then feedback-controlled using the correlation coefficient test results. The estimation of the theoretical correlation coefficient level which corresponds to the spectrum resolution is performed. Correlation coefficient test is based on comparison of the theoretical and the experimental correlation coefficients given by the Spearman method. The correctness of this solution was analyzed by a series of statistical tests and confirmed by many spectra measured with increasing statistical quality for a given sample (absorber). The effect of this filter procedure depends on the signal-to-noise ratio and the applicability of this method has binding conditions.
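
    The periodogram idea described above can be illustrated with a minimal sketch. The `periodogram_filter` helper below is hypothetical and much simpler than the authors' procedure (which feedback-controls the significance level with a Spearman correlation test): it keeps only the statistically dominant spectral components and transforms back.

```python
import numpy as np

def periodogram_filter(signal, keep_fraction=0.02):
    """Zero all Fourier components except the strongest `keep_fraction`."""
    spectrum = np.fft.rfft(signal)
    power = np.abs(spectrum) ** 2
    threshold = np.quantile(power, 1.0 - keep_fraction)
    filtered = np.where(power >= threshold, spectrum, 0.0)
    return np.fft.irfft(filtered, n=len(signal))

rng = np.random.default_rng(0)
velocity = np.linspace(-10, 10, 1024)
# A Moessbauer-like transmission spectrum: flat baseline with one absorption dip.
clean = 1.0 - 0.3 * np.exp(-0.5 * ((velocity - 1.5) / 0.4) ** 2)
noisy = clean + rng.normal(0.0, 0.05, velocity.size)  # counting noise, Gaussian limit

denoised = periodogram_filter(noisy)
# The filtered spectrum is closer to the true signal than the raw data.
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

    Because the broad absorption line concentrates its energy in a few low-frequency components while the counting noise is spread evenly across the spectrum, a hard threshold on the periodogram already improves the signal-to-noise ratio markedly.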

  4. Statistical Dependence of Pipe Breaks on Explanatory Variables

    Directory of Open Access Journals (Sweden)

    Patricia Gómez-Martínez

    2017-02-01

    Aging infrastructure is the main challenge currently faced by water suppliers. Estimating asset lifetimes requires reliable criteria for planning asset repair and renewal strategies, and pipe break prediction is one of the most important inputs. This paper analyzes the statistical dependence of pipe breaks on explanatory variables, determining their optimal combination and quantifying their influence on failure prediction accuracy. A large set of registered data from the Madrid water supply network, managed by Canal de Isabel II, has been filtered, classified and studied. Several statistical Bayesian models have been built and validated from the available information with a technique that combines reference periods of time as well as geographical location. Statistical models of increasing complexity are built from zero up to five explanatory variables following two approaches: a set of independent variables, or a combination of two joint variables plus an additional number of independent variables. With the aim of finding the variable combination that provides the most accurate prediction, models are compared following an objective validation procedure based on the model's skill at predicting the number of pipe breaks in a large set of geographical locations. As expected, model performance improves as the number of explanatory variables increases. However, the rate of improvement is not constant: performance metrics improve significantly up to three variables, but the tendency softens for higher-order models, especially in trunk mains, where performance is reduced. Slight differences are found between trunk mains and distribution lines when selecting the most influential variables and models.

  5. On a curvature-statistics theorem

    International Nuclear Information System (INIS)

    Calixto, M; Aldaya, V

    2008-01-01

    The spin-statistics theorem in quantum field theory relates the spin of a particle to the statistics obeyed by that particle. Here we investigate an interesting correspondence or connection between curvature (κ = ±1) and quantum statistics (Fermi-Dirac and Bose-Einstein, respectively). The interrelation between both concepts is established through vacuum coherent configurations of zero modes in quantum field theory on the compact O(3) and noncompact O(2; 1) (spatial) isometry subgroups of de Sitter and Anti de Sitter spaces, respectively. The high-frequency limit is retrieved as a (zero curvature) group contraction to the Newton-Hooke (harmonic oscillator) group. We also make some comments on the physical significance of the vacuum energy density and the cosmological constant problem.

  6. On a curvature-statistics theorem

    Energy Technology Data Exchange (ETDEWEB)

    Calixto, M [Departamento de Matematica Aplicada y Estadistica, Universidad Politecnica de Cartagena, Paseo Alfonso XIII 56, 30203 Cartagena (Spain); Aldaya, V [Instituto de Astrofisica de Andalucia, Apartado Postal 3004, 18080 Granada (Spain)], E-mail: Manuel.Calixto@upct.es

    2008-08-15

    The spin-statistics theorem in quantum field theory relates the spin of a particle to the statistics obeyed by that particle. Here we investigate an interesting correspondence or connection between curvature (κ = ±1) and quantum statistics (Fermi-Dirac and Bose-Einstein, respectively). The interrelation between both concepts is established through vacuum coherent configurations of zero modes in quantum field theory on the compact O(3) and noncompact O(2; 1) (spatial) isometry subgroups of de Sitter and Anti de Sitter spaces, respectively. The high-frequency limit is retrieved as a (zero curvature) group contraction to the Newton-Hooke (harmonic oscillator) group. We also make some comments on the physical significance of the vacuum energy density and the cosmological constant problem.

  7. Illinois forest statistics, 1985.

    Science.gov (United States)

    Jerold T. Hahn

    1987-01-01

    The third inventory of the timber resource of Illinois shows a 1% increase in commercial forest area and a 40% gain in growing-stock volume between 1962 and 1985. Presented are highlights and statistics on area, volume, growth, mortality, removals, utilization, and biomass.

  8. The increasing number of surgical procedures for female genital fistula in England: analysis of Hospital Episode Statistics (HES) data.

    Science.gov (United States)

    Ismail, S I M F

    2015-01-01

    The aim of this study was to describe the number and trend of surgical procedures for female genital fistula in England. An online search of Hospital Episode Statistics (HES) data was carried out. Data were available for the 4-year period from 2002-03 until 2005-06. The total number of surgical procedures carried out for female genital fistula steadily increased by 28.7% from 616 in 2002-03 to 793 in 2005-06. The number of surgical procedures performed for rectovaginal fistula exceeded the total number of surgical procedures carried out for vesicovaginal and urethrovaginal fistula in each year of the study period. This pattern needs to be monitored and investigated further.

  9. Increased mortality associated with extreme-heat exposure in King County, Washington, 1980-2010

    Science.gov (United States)

    Isaksen, Tania Busch; Fenske, Richard A.; Hom, Elizabeth K.; Ren, You; Lyons, Hilary; Yost, Michael G.

    2016-01-01

    Extreme heat has been associated with increased mortality, particularly in temperate climates. Few epidemiologic studies have considered the Pacific Northwest region in their analyses. This study quantified the historical (May to September, 1980-2010) heat-mortality relationship in the most populous Pacific Northwest County, King County, Washington. A relative risk (RR) analysis was used to explore the relationship between heat and all-cause mortality on 99th percentile heat days, while a time series analysis, using a piece-wise linear model fit, was used to estimate the effect of heat intensity on mortality, adjusted for temporal trends. For all ages, all causes, we found a 10 % (1.10 (95 % confidence interval (CI), 1.06, 1.14)) increase in the risk of death on a heat day versus non-heat day. When considering the intensity effect of heat on all-cause mortality, we found a 1.69 % (95 % CI, 0.69, 2.70) increase in the risk of death per unit of humidex above 36.0 °C. Mortality stratified by cause and age produced statistically significant results using both types of analyses for: all-cause, non-traumatic, circulatory, cardiovascular, cerebrovascular, and diabetes causes of death. All-cause mortality was statistically significantly modified by synoptic weather type. These results demonstrate that heat, expressed as humidex, is associated with increased mortality on heat days, and that risk increases with heat's intensity. While age was the only individual-level characteristic found to modify mortality risks, statistically significant increases in diabetes-related mortality for the 45-64 age group suggest that underlying health status may contribute to these risks.
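
    The relative-risk estimate with its confidence interval can be reproduced in miniature. The `relative_risk` helper and the counts below are hypothetical illustrations, not the study's data: a point estimate from two rates, and an approximate Wald interval on the log scale.

```python
import math

def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed, z=1.96):
    """Relative risk with an approximate 95% Wald CI computed on the log scale."""
    rr = (events_exposed / n_exposed) / (events_unexposed / n_unexposed)
    # Standard error of log(RR) for count data.
    se = math.sqrt(1 / events_exposed - 1 / n_exposed
                   + 1 / events_unexposed - 1 / n_unexposed)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 1100 deaths over 10 heat days vs. 10000 deaths over
# 100 comparison days, with equal population at risk per day.
rr, lo, hi = relative_risk(1100, 10 * 100_000, 10_000, 100 * 100_000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")  # RR = 1.10 (95% CI 1.03, 1.17)
```

    With these made-up counts the interval excludes 1.0, the same qualitative situation as the study's all-cause estimate of 1.10 (1.06, 1.14).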

  10. Statistical ecology comes of age

    Science.gov (United States)

    Gimenez, Olivier; Buckland, Stephen T.; Morgan, Byron J. T.; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M.; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M.; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-01-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1–4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151

  11. Statistical ecology comes of age.

    Science.gov (United States)

    Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-12-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.

  12. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment.With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the

  13. Adjusted scaling of FDG positron emission tomography images for statistical evaluation in patients with suspected Alzheimer's disease.

    Science.gov (United States)

    Buchert, Ralph; Wilke, Florian; Chakrabarti, Bhismadev; Martin, Brigitte; Brenner, Winfried; Mester, Janos; Clausen, Malte

    2005-10-01

    Statistical parametric mapping (SPM) has gained increasing acceptance for the voxel-based statistical evaluation of brain positron emission tomography (PET) with the glucose analog 2-[18F]-fluoro-2-deoxy-d-glucose (FDG) in patients with suspected Alzheimer's disease (AD). To increase the sensitivity for detection of local changes, individual differences of total brain FDG uptake are usually compensated for by proportional scaling. However, in cases of extensive hypometabolic areas, proportional scaling overestimates scaled uptake. This may cause significant underestimation of the extent of hypometabolic areas by the statistical test. To detect this problem, the authors tested for hypermetabolism. In patients with no visual evidence of true focal hypermetabolism, significant clusters of hypermetabolism in the presence of extended hypometabolism were interpreted as false-positive findings, indicating relevant overestimation of scaled uptake. In this case, scaled uptake was reduced step by step until there were no more significant clusters of hypermetabolism. In 22 consecutive patients with suspected AD, proportional scaling resulted in relevant overestimation of scaled uptake in 9 patients. Scaled uptake had to be reduced by 11.1% +/- 5.3% in these cases to eliminate the artifacts. Adjusted scaling resulted in extension of existing and appearance of new clusters of hypometabolism. The total volume of the additional voxels with significant hypometabolism depended linearly on the extent of the additional scaling and was 202 +/- 118 mL on average. Adjusted scaling helps to identify characteristic metabolic patterns in patients with suspected AD. It is expected to increase the specificity of FDG PET in this group of patients.
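
    Why proportional scaling overestimates scaled uptake when hypometabolism is extensive can be seen in a toy numerical example (illustrative numbers only, not patient data): a large hypometabolic region pulls the global mean down, which inflates every scaled value.

```python
import numpy as np

# Toy "brain" of 100 voxels with true uptake 1.0 everywhere.
healthy = np.ones(100)

# A patient with an extended hypometabolic region: 40% of voxels at 60% uptake.
patient = np.ones(100)
patient[:40] = 0.6

# Proportional scaling divides each scan by its own global mean.
scaled_healthy = healthy / healthy.mean()
scaled_patient = patient / patient.mean()

# Unaffected voxels of the patient now appear *hyper*metabolic (about 1.19)...
print(scaled_patient[50])
# ...which shrinks the apparent deficit in the affected region (0.71 vs. true 0.6).
print(scaled_patient[0] / scaled_healthy[0])
```

    This is exactly the artifact the adjusted-scaling procedure hunts for: spurious clusters of "hypermetabolism" in the unaffected tissue signal that the global reference has been dragged down.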

  14. Official statistics and Big Data

    Directory of Open Access Journals (Sweden)

    Peter Struijs

    2014-07-01

    The rise of Big Data changes the context in which organisations producing official statistics operate. Big Data provides opportunities, but in order to make optimal use of Big Data, a number of challenges have to be addressed. This stimulates increased collaboration between National Statistical Institutes, Big Data holders, businesses and universities. In time, this may lead to a shift in the role of statistical institutes in the provision of high-quality and impartial statistical information to society. In this paper, the changes in context, the opportunities, the challenges and the way to collaborate are addressed. The collaboration between the various stakeholders will involve each partner building on and contributing different strengths. For national statistical offices, traditional strengths include, on the one hand, the ability to collect data and combine data sources with statistical products and, on the other hand, their focus on quality, transparency and sound methodology. In the Big Data era of competing and multiplying data sources, they continue to have a unique knowledge of official statistical production methods. And their impartiality and respect for privacy as enshrined in law uniquely position them as a trusted third party. Based on this, they may advise on the quality and validity of information of various sources. By thus positioning themselves, they will be able to play their role as key information providers in a changing society.

  15. Meta-analysis : High-dosage vitamin E supplementation may increase all-cause mortality

    NARCIS (Netherlands)

    Miller, ER; Pastor-Barriuso, R; Dalal, D; Riemersma, RA; Appel, LJ; Guallar, E

    2005-01-01

    Background: Experimental models and observational studies suggest that vitamin E supplementation may prevent cardiovascular disease and cancer. However, several trials of high-dosage vitamin E supplementation showed non-statistically significant increases in total mortality. Purpose: To perform a

  16. Increased hospital admissions associated with extreme-heat exposure in King County, Washington, 1990-2010.

    Science.gov (United States)

    Isaksen, Tania Busch; Yost, Michael G; Hom, Elizabeth K; Ren, You; Lyons, Hilary; Fenske, Richard A

    2015-01-01

    Increased morbidity and mortality have been associated with extreme heat events, particularly in temperate climates. Few epidemiologic studies have considered the impact of extreme heat events on hospitalization rates in the Pacific Northwest region. This study quantifies the historic (May to September 1990-2010) heat-morbidity relationship in the most populous Pacific Northwest County, King County, Washington. A relative risk (RR) analysis was used to explore the association between heat and all non-traumatic hospitalizations on 99th percentile heat days, whereas a time series analysis using a piecewise linear model approximation was used to estimate the effect of heat intensity on hospitalizations, adjusted for temporal trends and day of the week. A non-statistically-significant 2% increase in hospitalization risk (RR 1.02; 95% CI 0.98, 1.05) on a heat day vs. a non-heat day was noted for all ages and all non-traumatic causes. When considering the effect of heat intensity on admissions, we found a statistically significant 1.59% (95% CI: 0.9%, 2.29%) increase in admissions per degree increase in humidex above 37.4°C. Admissions stratified by cause and age produced statistically significant results with both relative risk and time series analyses for nephritis and nephrotic syndromes, acute renal failure, and natural heat exposure hospitalizations. This study demonstrates that heat, expressed as humidex, is associated with increased hospital admissions. When stratified by age and cause of admission, the non-elderly age groups (<85 years) experience significant risk for nephritis and nephrotic syndromes, acute renal failure, natural heat exposure, chronic obstructive pulmonary disease, and asthma hospitalizations.
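
    Humidex, the exposure metric in both King County studies, combines air temperature and dew point into a single "felt" temperature. A sketch using the standard Environment Canada formulation (the papers' exact implementation is not given here, so treat this as an assumption):

```python
import math

def humidex(temp_c, dewpoint_c):
    """Humidex from air temperature and dew point (Environment Canada formula)."""
    dewpoint_k = dewpoint_c + 273.16
    # Vapour pressure in hPa, derived from the dew point.
    e = 6.11 * math.exp(5417.7530 * (1 / 273.16 - 1 / dewpoint_k))
    return temp_c + 0.5555 * (e - 10.0)

# A hot, humid day: 30 C air with a 25 C dew point lands above the 37.4 C
# humidex threshold at which the study's admission risk begins to climb.
print(round(humidex(30.0, 25.0), 1))
```

    Note that two days with the same air temperature can sit on opposite sides of the threshold depending on humidity, which is why humidex rather than raw temperature is the natural exposure variable here.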

  17. Expression and prognostic significance of lysozyme in male breast cancer

    International Nuclear Information System (INIS)

    Serra, Carlos; Baltasar, Aniceto; Medrano, Justo; Vizoso, Francisco; Alonso, Lorena; Rodríguez, Juan C; González, Luis O; Fernández, María; Lamelas, María L; Sánchez, Luis M; García-Muñiz, José L

    2002-01-01

    Lysozyme, one of the major protein components of human milk that is also synthesized by a significant percentage of breast carcinomas, is associated with lesions that have a favorable outcome in female breast cancer. Here we evaluate the expression and prognostic value of lysozyme in male breast cancer (MBC). Lysozyme expression was examined by immunohistochemical methods in a series of 60 MBC tissue sections and in 15 patients with gynecomastia. Staining was quantified using the HSCORE (histological score) system, which considers both the intensity and the percentage of cells staining at each intensity. The prognostic value of lysozyme was retrospectively evaluated by multivariate analysis taking into account conventional prognostic factors. Lysozyme immunostaining was negative in all cases of gynecomastia. A total of 27 of 60 MBC sections (45%) stained positively for this protein, but there were clear differences among them with regard to the intensity and percentage of stained cells. Statistical analysis showed no significant association between lysozyme HSCORE values and age, tumor size, nodal status, histological grade, estrogen receptor status, metastasis, or histological type. Univariate analysis confirmed that both nodal involvement and lysozyme values were significant predictors of short-term relapse-free survival. Multivariate analysis, according to Cox's regression model, also showed that nodal status and lysozyme levels were significant independent indicators of short-term relapse-free survival. Tumor expression of lysozyme is associated with lesions that have an unfavorable outcome in male breast cancer. This milk protein may be a new prognostic factor in patients with breast cancer.

  18. Calculating statistical distributions from operator relations: The statistical distributions of various intermediate statistics

    International Nuclear Information System (INIS)

    Dai, Wu-Sheng; Xie, Mi

    2013-01-01

    In this paper, we give a general discussion on the calculation of the statistical distribution from a given operator relation of creation, annihilation, and number operators. Our result shows that as long as the relation between the number operator and the creation and annihilation operators can be expressed as a†b = Λ(N) or N = Λ⁻¹(a†b), where N, a†, and b denote the number, creation, and annihilation operators, i.e., N is a function of the quadratic product of the creation and annihilation operators, the corresponding statistical distribution is the Gentile distribution, a statistical distribution in which the maximum occupation number is an arbitrary integer. As examples, we discuss the statistical distributions corresponding to various operator relations. In particular, besides the Bose–Einstein and Fermi–Dirac cases, we discuss the statistical distributions for various schemes of intermediate statistics, especially various q-deformation schemes. Our result shows that the statistical distributions corresponding to various q-deformation schemes are various Gentile distributions with different maximum occupation numbers which are determined by the deformation parameter q. This result shows that the results given in much of the literature on the q-deformation distribution are inaccurate or incomplete. Highlights: a general discussion on calculating the statistical distribution from relations of creation, annihilation, and number operators; a systematic study of the statistical distributions corresponding to various q-deformation schemes; an argument that many results on q-deformation distributions in the literature are inaccurate or incomplete.
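
    The Gentile distribution named above has a closed-form mean occupation number, n̄ = 1/(e^x − 1) − (G + 1)/(e^{(G+1)x} − 1) with x = β(ε − μ) and maximum occupation G, which interpolates between the two familiar cases. A quick numerical check of the limits (the standard textbook form, not code from the paper):

```python
import math

def gentile(x, G):
    """Mean occupation number for Gentile statistics with maximum occupation G."""
    return 1.0 / (math.exp(x) - 1.0) - (G + 1) / (math.exp((G + 1) * x) - 1.0)

x = 0.5
fermi = 1.0 / (math.exp(x) + 1.0)   # Fermi-Dirac occupation
bose = 1.0 / (math.exp(x) - 1.0)    # Bose-Einstein occupation

# G = 1 reproduces Fermi-Dirac exactly; large G approaches Bose-Einstein.
assert abs(gentile(x, 1) - fermi) < 1e-12
assert abs(gentile(x, 100) - bose) < 1e-12
```

    The G = 1 identity follows algebraically: 1/(e^x − 1) − 2/(e^{2x} − 1) = (e^x − 1)/(e^{2x} − 1) = 1/(e^x + 1).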

  19. Increased accumulation of dendritic cells in celiac disease associates with increased expression of autophagy protein LC3

    Directory of Open Access Journals (Sweden)

    Paramaguru Rajaguru

    2013-01-01

    Background: Celiac disease (CD), an immune-mediated disorder, is associated with accumulation of dendritic cells (DCs) in the duodenal mucosa. Autophagy has recently been implicated in autoantigen formation, but its role in CD is still unknown. Aim: To examine the role of the autophagic protein LC3 expressed by activated DCs in CD. Materials and Methods: Thirty CD patients were analyzed at initial presentation and after 6 months of a gluten-free diet (GFD). Duodenal biopsies were studied for histological changes and for CD11c, CD86, and MAP1LC3A expression by double immunohistochemistry (IHC). Masson's trichrome (MT) staining was used to assess basement membrane (BM) thickness and Oil Red O (ORO) staining to detect mucosal lipid deposits. Polymerase chain reaction (PCR) was performed for the HLA-DQ system. Statistical analysis was done using paired and unpaired t tests, the chi-square test, Fisher's exact test, and the McNemar-Bowker test; a P-value <0.05 was considered statistically significant. Results: HLA-DQ2 and HLA-DQ8 alleles were present in all studied patients. Increased BM thickness was observed in 63%, and 73% had ORO-positive lipid in the surface lining epithelium. Pre-treatment biopsies showed increased numbers of DCs expressing LC3, which were significantly fewer in follow-up biopsies. The follow-up biopsies also showed significant reductions in BM thickness and ORO positivity. Conclusion: Histological improvement in duodenal biopsies was associated with a reduction in activated DCs expressing autophagic protein, which probably plays an important role in the pathogenesis of an autoimmune disorder like CD.

  20. Statistical and theoretical research

    International Nuclear Information System (INIS)

    Anon.

    1983-01-01

    Significant accomplishments include the creation of field designs to detect population impacts, new census procedures for small mammals, and methods for designing studies to determine where, and how much of, a contaminant is present over certain landscapes. A book describing these statistical methods is currently being written and will apply to a variety of environmental contaminants, including radionuclides. PNL scientists also have devised an analytical method for predicting the success of field experiments on wild populations. Two highlights of current research are the discoveries that populations of free-roaming horse herds can double in four years and that grizzly bear populations may be substantially smaller than once thought. As stray horses become a public nuisance at DOE and other large Federal sites, it is important to determine their number. Similar statistical theory can be readily applied to other situations where wild animals are a problem of concern to other government agencies. Another book, on statistical aspects of radionuclide studies, is written specifically for researchers in radioecology.

  1. Effect size and statistical power in the rodent fear conditioning literature – A systematic review

    Science.gov (United States)

    Macleod, Malcolm R.

    2018-01-01

    Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science. PMID:29698451
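
    The sample-size arithmetic behind figures like "15 animals per group for 80% power" can be sketched with the usual normal-approximation formula for a two-sample comparison. The `sample_size_per_group` helper is illustrative (an approximation; exact t-based calculations give slightly larger n):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison of a standardized
    effect size d, using the normal approximation to the t-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(sample_size_per_group(1.0))   # 16 per group for a 1-SD effect
print(sample_size_per_group(0.5))   # 63 per group for a 0.5-SD effect
```

    The quadratic dependence on 1/d is the key point: halving the effect size quadruples the required group size, which is why typical rodent group sizes are underpowered for the modest effects the review reports.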

  2. Significance analysis of lexical bias in microarray data

    Directory of Open Access Journals (Sweden)

    Falkow Stanley

    2003-04-01

    Full Text Available Abstract Background Genes that are determined to be significantly differentially regulated in microarray analyses often appear to have functional commonalities, such as being components of the same biochemical pathway. This results in certain words being under- or overrepresented in the list of genes. Distinguishing between biologically meaningful trends and artifacts of annotation and analysis procedures is of the utmost importance, as only true biological trends are of interest for further experimentation. A number of sophisticated methods for identification of significant lexical trends are currently available, but these methods are generally too cumbersome for practical use by most microarray users. Results We have developed a tool, LACK, for calculating the statistical significance of apparent lexical bias in microarray datasets. The frequency of a user-specified list of search terms in a list of differentially regulated genes is assessed for statistical significance by comparison to randomly generated datasets. The simplicity of the input files and user interface targets the average microarray user who wishes to have a statistical measure of apparent lexical trends in analyzed datasets without the need for bioinformatics skills. The software is available as Perl source or a Windows executable. Conclusion We have used LACK in our laboratory to generate biological hypotheses based on our microarray data. We demonstrate the program's utility using an example in which we confirm significant upregulation of the SPI-2 pathogenicity island of Salmonella enterica serovar Typhimurium by the cation chelator dipyridyl.
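    The core of such a test, comparing a term's frequency in the regulated gene list against randomly drawn gene sets of the same size, can be sketched as a Monte Carlo p-value. The function and input names here are hypothetical, not LACK's actual interface:

```python
import random

def lexical_bias_pvalue(annotations, regulated, term, n_perm=10000, seed=0):
    """Monte Carlo p-value for over-representation of `term` among the
    annotations of regulated genes, relative to random gene sets of the
    same size drawn from the whole array (a simplified sketch)."""
    rng = random.Random(seed)
    genes = list(annotations)

    def hits(gene_set):
        return sum(term in annotations[g] for g in gene_set)

    observed = hits(regulated)
    as_extreme = sum(
        hits(rng.sample(genes, len(regulated))) >= observed
        for _ in range(n_perm)
    )
    return (as_extreme + 1) / (n_perm + 1)  # conservative add-one estimator
```

    A small p-value indicates that the term appears in the regulated list more often than chance alone would produce.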

  3. Whither Statistics Education Research?

    Science.gov (United States)

    Watson, Jane

    2016-01-01

    This year marks the 25th anniversary of the publication of a "National Statement on Mathematics for Australian Schools", which was the first curriculum statement this country had including "Chance and Data" as a significant component. It is hence an opportune time to survey the history of the related statistics education…

  4. Identification and Analysis of Significant Factors Influencing Visitor Satisfaction at Heritage Sites – The Case of Serbian Medieval Fortresses

    Directory of Open Access Journals (Sweden)

    Ivana Blešić

    2013-01-01

    Full Text Available With the increased appreciation of the general public, heritage sites have gained more attention in contemporary tourism and management studies. Accordingly, the assessment of visitor satisfaction at these sites is an important tool for both financial and organizational management. The aim of this research is to identify the main (statistically significant) factors that influence visitor satisfaction. Data were obtained by a survey conducted during visits to three medieval fortresses in Serbia, with the aim of capturing tourists' expectations and perceptions of ten given attributes. The results of factor and descriptive statistical analyses indicate three factors significant for visitor satisfaction at the investigated heritage sites: "regional settings", "marketing", and "aesthetic appeal".

  5. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  6. Log-concave Probability Distributions: Theory and Statistical Testing

    DEFF Research Database (Denmark)

    An, Mark Yuing

    1996-01-01

    This paper studies the broad class of log-concave probability distributions that arise in economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete...... and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test one of the two implications of log-concavity: increasing hazard rates and new-is-better-than-used (NBU) property. The tests for increasing hazard...... rates are based on normalized spacing of the sample order statistics. The tests for NBU property fall into the category of Hoeffding's U-statistics...
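    A minimal sketch of the ingredient these tests build on, the normalized spacings of the sample order statistics, which are i.i.d. exponential under a constant-hazard (exponential) null, assuming a nonnegative sample and X_(0) = 0:

```python
def normalized_spacings(sample):
    """Normalized spacings D_i = (n - i + 1) * (X_(i) - X_(i-1)), with
    X_(0) = 0. Under an exponential null these are i.i.d. exponential;
    a systematic downward trend in the D_i is evidence of an increasing
    hazard rate."""
    xs = sorted(sample)
    n = len(xs)
    prev = 0.0
    spacings = []
    for i, x in enumerate(xs, start=1):
        spacings.append((n - i + 1) * (x - prev))
        prev = x
    return spacings
```

    The sum of the normalized spacings equals the sample total (the "total time on test"), which is a useful sanity check on any implementation.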

  7. Industrial commodity statistics yearbook 2001. Production statistics (1992-2001)

    International Nuclear Information System (INIS)

    2003-01-01

    This is the thirty-fifth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title The Growth of World industry and the next eight editions under the title Yearbook of Industrial Statistics. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. The statistics refer to the ten-year period 1992-2001 for about 200 countries and areas

  8. Industrial commodity statistics yearbook 2002. Production statistics (1993-2002)

    International Nuclear Information System (INIS)

    2004-01-01

    This is the thirty-sixth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title 'The Growth of World industry' and the next eight editions under the title 'Yearbook of Industrial Statistics'. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. The statistics refer to the ten-year period 1993-2002 for about 200 countries and areas

  9. Industrial commodity statistics yearbook 2000. Production statistics (1991-2000)

    International Nuclear Information System (INIS)

    2002-01-01

    This is the thirty-third in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title The Growth of World industry and the next eight editions under the title Yearbook of Industrial Statistics. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. Most of the statistics refer to the ten-year period 1991-2000 for about 200 countries and areas

  10. Statistical methods in spatial genetics

    DEFF Research Database (Denmark)

    Guillot, Gilles; Leblois, Raphael; Coulon, Aurelie

    2009-01-01

    The joint analysis of spatial and genetic data is rapidly becoming the norm in population genetics. More and more studies explicitly describe and quantify the spatial organization of genetic variation and try to relate it to underlying ecological processes. As it has become increasingly difficult...... to keep abreast with the latest methodological developments, we review the statistical toolbox available to analyse population genetic data in a spatially explicit framework. We mostly focus on statistical concepts but also discuss practical aspects of the analytical methods, highlighting not only...

  11. Increase in Jumping Height Associated with Maximal Effort Vertical Depth Jumps.

    Science.gov (United States)

    Bedi, John F.; And Others

    1987-01-01

    In order to assess if there existed a statistically significant increase in jumping performance when dropping from different heights, 32 males, aged 19 to 26, performed a series of maximal effort vertical jumps after dropping from eight heights onto a force plate. Results are analyzed. (Author/MT)

  12. Clinical significance of quantitative analysis of facial nerve enhancement on MRI in Bell's palsy.

    Science.gov (United States)

    Song, Mee Hyun; Kim, Jinna; Jeon, Ju Hyun; Cho, Chang Il; Yoo, Eun Hye; Lee, Won-Sang; Lee, Ho-Ki

    2008-11-01

    Quantitative analysis of the facial nerve on the lesion side as well as the normal side, which allowed for more accurate measurement of facial nerve enhancement in patients with facial palsy, showed statistically significant correlation with the initial severity of facial nerve inflammation, although little prognostic significance was shown. This study investigated the clinical significance of quantitative measurement of facial nerve enhancement in patients with Bell's palsy by analyzing the enhancement pattern and correlating MRI findings with initial severity of facial palsy and clinical outcome. Facial nerve enhancement was measured quantitatively by using the region of interest on pre- and postcontrast T1-weighted images in 44 patients diagnosed with Bell's palsy. The signal intensity increase on the lesion side was first compared with that of the contralateral side and then correlated with the initial degree of facial palsy and prognosis. The lesion side showed significantly higher signal intensity increase compared with the normal side in all of the segments except for the mastoid segment. Signal intensity increase at the internal auditory canal and labyrinthine segments showed correlation with the initial degree of facial palsy but no significant difference was found between different prognostic groups.

  13. How the Mastery Rubric for Statistical Literacy Can Generate Actionable Evidence about Statistical and Quantitative Learning Outcomes

    Directory of Open Access Journals (Sweden)

    Rochelle E. Tractenberg

    2016-12-01

    Full Text Available Statistical literacy is essential to an informed citizenry; and two emerging trends highlight a growing need for training that achieves this literacy. The first trend is towards “big” data: while automated analyses can exploit massive amounts of data, the interpretation—and possibly more importantly, the replication—of results are challenging without adequate statistical literacy. The second trend is that science and scientific publishing are struggling with insufficient/inappropriate statistical reasoning in writing, reviewing, and editing. This paper describes a model for statistical literacy (SL and its development that can support modern scientific practice. An established curriculum development and evaluation tool—the Mastery Rubric—is integrated with a new, developmental, model of statistical literacy that reflects the complexity of reasoning and habits of mind that scientists need to cultivate in order to recognize, choose, and interpret statistical methods. This developmental model provides actionable evidence, and explicit opportunities for consequential assessment that serves students, instructors, developers/reviewers/accreditors of a curriculum, and institutions. By supporting the enrichment, rather than increasing the amount, of statistical training in the basic and life sciences, this approach supports curriculum development, evaluation, and delivery to promote statistical literacy for students and a collective quantitative proficiency more broadly.

  14. Teaching statistics to social science students: Making it valuable ...

    African Journals Online (AJOL)

    In this age of rapid information expansion and technology, statistics is playing an ever increasing role in education, particularly also in the training of social scientists. Statistics enables the social scientist to obtain a quantitative awareness of socioeconomic phenomena hence is essential in their training. Statistics, however ...

  15. Childhood-compared to adolescent-onset bipolar disorder has more statistically significant clinical correlates.

    Science.gov (United States)

    Holtzman, Jessica N; Miller, Shefali; Hooshmand, Farnaz; Wang, Po W; Chang, Kiki D; Hill, Shelley J; Rasgon, Natalie L; Ketter, Terence A

    2015-07-01

    The strengths and limitations of considering childhood- and adolescent-onset bipolar disorder (BD) separately versus together remain to be established. We assessed this issue. BD patients referred to the Stanford Bipolar Disorder Clinic during 2000-2011 were assessed with the Systematic Treatment Enhancement Program for BD Affective Disorders Evaluation. Patients with childhood- and adolescent-onset were compared to those with adult-onset for 7 unfavorable bipolar illness characteristics with replicated associations with early-onset patients. Among 502 BD outpatients, those with childhood- (<13 years, N=110) and adolescent- (13-18 years, N=218) onset had significantly higher rates for 4/7 unfavorable illness characteristics, including lifetime comorbid anxiety disorder, at least ten lifetime mood episodes, lifetime alcohol use disorder, and prior suicide attempt, than those with adult-onset (>18 years, N=174). Childhood- but not adolescent-onset BD patients also had significantly higher rates of first-degree relative with mood disorder, lifetime substance use disorder, and rapid cycling in the prior year. Patients with pooled childhood/adolescent- compared to adult-onset had significantly higher rates for 5/7 of these unfavorable illness characteristics, while patients with childhood- compared to adolescent-onset had significantly higher rates for 4/7 of these unfavorable illness characteristics. The Caucasian, insured, suburban, low-substance-abuse, American specialty clinic-referred sample limits generalizability. Onset age is based on retrospective recall. Childhood- compared to adolescent-onset BD was more robustly related to unfavorable bipolar illness characteristics, so pooling these groups attenuated such relationships. Further study is warranted to determine the extent to which adolescent-onset BD represents an intermediate phenotype between childhood- and adult-onset BD. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Encounter Probability of Significant Wave Height

    DEFF Research Database (Denmark)

    Liu, Z.; Burcharth, H. F.

    The determination of the design wave height (often given as the significant wave height) is usually based on statistical analysis of long-term extreme wave height measurement or hindcast. The result of such extreme wave height analysis is often given as the design wave height corresponding to a c...
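    The abstract is truncated, but the encounter probability in the title is conventionally defined as the chance that the T-year design wave is exceeded at least once within a structure's lifetime of L years. A sketch of that standard formula (not taken from this paper):

```python
def encounter_probability(return_period_years, lifetime_years):
    """P(at least one exceedance of the T-year event in L years),
    assuming independent years: 1 - (1 - 1/T)**L."""
    return 1.0 - (1.0 - 1.0 / return_period_years) ** lifetime_years

# e.g. a 100-year design wave over a 50-year structure lifetime
print(encounter_probability(100, 50))
```

    Note that a 100-year event is far from safe over a long design life: over 100 years its encounter probability is about 63%, which is why design codes work with encounter probability rather than return period alone.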

  17. Statistical principles for prospective study protocols:

    DEFF Research Database (Denmark)

    Christensen, Robin; Langberg, Henning

    2012-01-01

    In the design of scientific studies it is essential to decide on which scientific questions one aims to answer, just as it is important to decide on the correct statistical methods to use to answer these questions. The correct use of statistical methods is crucial in all aspects of research...... to quantify relationships in data. Despite an increased focus on statistical content and complexity of biomedical research these topics remain difficult for most researchers. Statistical methods enable researchers to condense large spreadsheets with data into means, proportions, and difference between means......, risk differences, and other quantities that convey information. One of the goals in biomedical research is to develop parsimonious models - meaning as simple as possible. This approach is valid if the subsequent research report (the article) is written independent of whether the results...

  18. Statistical and Computational Techniques in Manufacturing

    CERN Document Server

    2012-01-01

    In recent years, interest in developing statistical and computational techniques for applied manufacturing engineering has increased. Today, due to the great complexity of manufacturing engineering and the high number of parameters involved, conventional approaches are no longer sufficient. Therefore, statistical and computational techniques have found several applications in manufacturing, namely, modelling and simulation of manufacturing processes, optimization of manufacturing parameters, monitoring and control, computer-aided process planning, etc. The present book aims to provide recent information on statistical and computational techniques applied in manufacturing engineering. The content is suitable for final undergraduate engineering courses or as a subject on manufacturing at the postgraduate level. This book serves as a useful reference for academics, statistical and computational science researchers, mechanical, manufacturing and industrial engineers, and professionals in industries related to manu...

  19. Walking with a four wheeled walker (rollator) significantly reduces EMG lower-limb muscle activity in healthy subjects.

    Science.gov (United States)

    Suica, Zorica; Romkes, Jacqueline; Tal, Amir; Maguire, Clare

    2016-01-01

    To investigate the immediate effect of four-wheeled-walker (rollator) walking on lower-limb muscle activity and trunk-sway in healthy subjects. In this cross-sectional design, electromyographic (EMG) data were collected in six lower-limb muscle groups and trunk-sway was measured as peak-to-peak angular displacement of the centre-of-mass (level L2/3) in the sagittal and frontal planes using the SwayStar balance system. 19 subjects walked at self-selected speed, firstly without a rollator and then, in randomised order, 1. with a rollator and 2. with a rollator with increased weight-bearing. Rollator-walking caused statistically significant reductions in EMG activity in lower-limb muscle groups, and effect sizes were medium to large. Increased weight-bearing increased the effect. Trunk-sway in the sagittal and frontal planes showed no statistically significant difference between conditions. Rollator-walking reduces lower-limb muscle activity but trunk-sway remains unchanged, as stability is likely gained through forces generated by the upper limbs. Short-term stability is gained but the long-term effect is unclear and requires investigation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Diagnostic and Prognostic Significance of Serum and Tissue Galectin 3 Expression in Patients with Carcinoma of the Bladder

    Science.gov (United States)

    Gendy, Hoda El; Madkour, Bothina; Abdelaty, Sara; Essawy, Fayza; Khattab, Dina; Hammam, Olfat; Nour, Hani H.

    2014-01-01

    Background Galectins are a group of proteins found in the cytoplasm, nucleus, cell surface and extracellular matrix. Galectin 3 (Gal-3) displays pathological expression in a variety of processes such as tumorigenesis. Patients and Method 70 patients, classified into a control group, a cystitis group, a transitional cell carcinoma group, and a squamous cell carcinoma group, were enrolled in this study, which aimed to detect the serum level and the intensity of tissue expression of Gal-3. Results Both the serum level and tissue expression of Gal-3 were significantly higher in bladder cancer patients compared to the other groups. Gal-3 expression increased from low- to high-grade urothelial tumors, with a statistically significant increase in its level and expression between muscle-invasive and non-muscle-invasive Ta urothelial tumors. Conclusion The serum Gal-3 level is sensitive and specific for the diagnosis of bladder cancer. The prognostic significance of tissue expression remains to be confirmed. PMID:26195948

  1. [Comment on] Statistical discrimination

    Science.gov (United States)

    Chinn, Douglas

    In the December 8, 1981, issue of Eos, a news item reported the conclusion of a National Research Council study that sexual discrimination against women with Ph.D.'s exists in the field of geophysics. Basically, the item reported that even when allowances are made for motherhood the percentage of female Ph.D.'s holding high university and corporate positions is significantly lower than the percentage of male Ph.D.'s holding the same types of positions. The sexual discrimination conclusion, based only on these statistics, assumes that there are no basic psychological differences between men and women that might cause different populations in the employment group studied. Therefore, the reasoning goes, after taking into account possible effects from differences related to anatomy, such as women stopping their careers in order to bear and raise children, the statistical distributions of positions held by male and female Ph.D.'s ought to be very similar to one another. Any significant differences between the distributions must be caused primarily by sexual discrimination.
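    The kind of comparison underlying such an argument, namely whether two groups' proportions differ by more than sampling noise, is a two-proportion z-test. The counts below are purely illustrative, not the NRC study's data:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for H0: p1 == p2, using the pooled proportion.
    Returns (z, p-value). Inputs are successes and group sizes."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))
```

    As the comment makes clear, a significant z-statistic only establishes that the proportions differ; attributing the difference to discrimination versus other causes is exactly the inferential step the letter disputes.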

  2. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose; Larsen, Gunner Chr.

    2005-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of full-scale measurements recorded with a high sampling rate...

  3. Artefactual increasing frequency of omphaloceles in the northern Netherlands : lessons for systematic analysis of apparent epidemics

    NARCIS (Netherlands)

    Reefhuis, J; de Walle, HEK; Cornel, MC

    Background While monitoring birth defects in a registry, statistically significant increases in prevalence occasionally occur. In the European Registration Of Congenital Anomalies (EUROCAT) in the Northern Netherlands 20 000 births are monitored every year. For omphaloceles, a steady increase in the

  4. Harmonic statistics

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2017-01-01

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.

  5. Harmonic statistics

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    2017-05-15

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.

  6. Statistical mechanics for a class of quantum statistics

    International Nuclear Information System (INIS)

    Isakov, S.B.

    1994-01-01

    Generalized statistical distributions for identical particles are introduced for the case where filling a single-particle quantum state by particles depends on filling states of different momenta. The system of one-dimensional bosons with a two-body potential that can be solved by means of the thermodynamic Bethe ansatz is shown to be equivalent thermodynamically to a system of free particles obeying statistical distributions of the above class. The quantum statistics arising in this way are completely determined by the two-particle scattering phases of the corresponding interacting systems. An equation determining the statistical distributions for these statistics is derived

  7. Costs and risks of the import of RES statistics by the Dutch government

    Energy Technology Data Exchange (ETDEWEB)

    Klessmann, C.; De Jager, D.; Gephart, M.; Winkel, T.

    2012-11-15

    This paper presents a first estimate of the costs and risks of a potential import of renewable energy statistics by the Dutch Government in order to meet the binding renewable energy (RE) target of 14% by 2020. Recently, the new government has announced that it will increase the ambition from 14% to 16%. Progress so far has been slow, however, and meeting these targets requires near-maximum realisable deployment rates of all relevant technologies. This points to the necessity of increasing national policy measures (spatial, political, financial, etc.) for all renewable energy technologies or, alternatively, of applying the cooperation mechanisms and/or importing RES statistics from other countries. It is generally assumed that imported RE statistics, through the cooperation mechanisms of the European RES Directive, will have lower costs than supporting the potentially more expensive domestic technologies that would be needed to meet the targets fully by domestic production. This paper shows that this assumption is questionable, and that the risks of pursuing an import strategy may be significant. The analysis shows that the use of statistical transfers, which in principle may be a viable option for realising part of the Dutch RE target, is linked to high uncertainties. Important aspects contributing to these uncertainties are: the effectiveness and efficiency of policies in the European Member States to meet domestic RE targets by and up to 2020, and hence the related surplus/shortfall of RE production and resulting market prices for statistical transfers; the price-setting mechanisms that will be established between Member States, including the anticipated cost of infringement in case of non-fulfilment of the 2020 targets. Imports will likely be charged against the market prices for (statistical) transfers, not against the cost prices of RE technologies. The price of statistical transfers can be expected to be higher in the case of a clear 'buyer market' in which

  8. Data-driven inference for the spatial scan statistic.

    Science.gov (United States)

    Almeida, Alexandre C L; Duarte, Anderson R; Duczmal, Luiz H; Oliveira, Fernando L P; Takahashi, Ricardo H C

    2011-08-02

    Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is done, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, regarding the correctness of the decision based in this inference. A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.
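    The modified inference described above can be sketched as a conditional Monte Carlo p-value: the observed scan statistic is compared only against null replications whose most likely cluster has the same size k. The null statistics and sizes are assumed to come from a separate Monte Carlo scan (hypothetical inputs, not an implementation of the full scan statistic):

```python
def conditional_pvalue(obs_stat, obs_size, null_stats, null_sizes):
    """Monte Carlo p-value of the observed scan statistic, conditioned on
    the size k of the most likely cluster: only null replications whose
    most likely cluster also has size k enter the comparison."""
    matched = [s for s, k in zip(null_stats, null_sizes) if k == obs_size]
    as_extreme = sum(s >= obs_stat for s in matched)
    return (as_extreme + 1) / (len(matched) + 1)
```

    Conditioning on k reduces the number of usable null replications, so in practice more Monte Carlo runs are needed than for the unconditional test.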

  9. Thermal activation in statistical clusters of magnetic nanoparticles

    International Nuclear Information System (INIS)

    Hovorka, O

    2017-01-01

    This article presents a kinetic Monte-Carlo study of thermally activated magnetisation dynamics in clusters of statistically distributed magnetic nanoparticles. The structure of clusters is assumed to be of fractal nature, consistently with recent observations of magnetic particle aggregation in cellular environments. The computed magnetisation relaxation decay and frequency-dependent hysteresis loops are seen to significantly depend on the fractal dimension of aggregates, leading to accelerated magnetisation relaxation and reduction in the size of hysteresis loops as the fractal dimension increases from one-dimensional-like to three-dimensional-like clusters. Discussed are implications for applications in nanomedicine, such as magnetic hyperthermia or magnetic particle imaging. (paper)

  10. Statistical estimation of fast-reactor fuel-element lifetime

    International Nuclear Information System (INIS)

    Proshkin, A.A.; Likhachev, Yu.I.; Tuzov, A.N.; Zabud'ko, L.M.

    1980-01-01

    On the basis of a statistical analysis, the main parameters having a significant influence on the theoretical determination of fuel-element lifetimes in the operation of power fast reactors in steady power conditions are isolated. These include the creep and swelling of the fuel and shell materials, prolonged-plasticity lag, shell-material corrosion, gap contact conductivity, and the strain diagrams of the shell and fuel materials obtained for irradiated materials at the corresponding strain rates. By means of deeper investigation of these properties of the materials, it is possible to increase significantly the reliability of fuel-element lifetime predictions in designing fast reactors and to optimize the structure of fuel elements more correctly. The results of such calculations must obviously be taken into account in the cost-benefit analysis of projected new reactors and in choosing the optimal fuel burnup. 9 refs

  11. Medical Statistics – Mathematics or Oracle? Farewell Lecture

    Directory of Open Access Journals (Sweden)

    Gaus, Wilhelm

    2005-06-01

    Full Text Available Certainty is rare in medicine. This is a direct consequence of the individuality of each and every human being and the reason why we need medical statistics. However, statistics have their pitfalls, too. Fig. 1 shows that the suicide rate peaks in youth, while in Fig. 2 the rate is highest in midlife and Fig. 3 in old age. Which of these contradictory messages is right? After an introduction to the principles of statistical testing, this lecture examines the probability with which statistical test results are correct. For this purpose the level of significance and the power of the test are compared with the sensitivity and specificity of a diagnostic procedure. The probability of obtaining correct statistical test results is the same as that for the positive and negative correctness of a diagnostic procedure and therefore depends on prevalence. The focus then shifts to the problem of multiple statistical testing. The lecture demonstrates that for each data set of reasonable size at least one test result proves to be significant - even if the data set is produced by a random number generator. It is extremely important that a hypothesis is generated independently from the data used for its testing. These considerations enable us to understand the gradation of "lame excuses, lies and statistics" and the difference between pure truth and the full truth. Finally, two historical oracles are cited.
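    The lecture's point about multiple testing — that a large enough batch of tests on pure noise will almost always produce "significant" results — is easy to demonstrate. The sketch below runs a normal-approximation two-sample test on 200 outcomes of random-number-generator data; all sample sizes and names are arbitrary:

    ```python
    import math
    import random

    def two_sample_p(a, b):
        """Two-sided p-value for a difference in means, using the normal
        approximation to the t statistic (adequate for n = 50 per group)."""
        na, nb = len(a), len(b)
        ma, mb = sum(a) / na, sum(b) / nb
        va = sum((x - ma) ** 2 for x in a) / (na - 1)
        vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
        z = (ma - mb) / math.sqrt(va / na + vb / nb)
        return math.erfc(abs(z) / math.sqrt(2))   # P(|Z| > z) for standard normal

    random.seed(0)
    # 200 "outcomes" of pure noise: no real group difference exists anywhere.
    p_values = []
    for _ in range(200):
        group_a = [random.gauss(0, 1) for _ in range(50)]
        group_b = [random.gauss(0, 1) for _ in range(50)]
        p_values.append(two_sample_p(group_a, group_b))

    false_positives = sum(p < 0.05 for p in p_values)   # expect around 10
    ```

    At the 5% level, roughly one test in twenty comes out "significant" by chance alone, which is exactly why a hypothesis must be generated independently of the data used to test it.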

  12. Testing for significance of phase synchronisation dynamics in the EEG.

    Science.gov (United States)

    Daly, Ian; Sweeney-Reed, Catherine M; Nasuto, Slawomir J

    2013-06-01

    A number of tests exist to check for statistical significance of phase synchronisation within the Electroencephalogram (EEG); however, the majority suffer from a lack of generality and applicability. They may also fail to account for temporal dynamics in the phase synchronisation, regarding synchronisation as a constant state instead of a dynamical process. Therefore, a novel test is developed for identifying the statistical significance of phase synchronisation based upon a combination of work characterising temporal dynamics of multivariate time-series and Markov modelling. We show how this method is better able to assess the significance of phase synchronisation than a range of commonly used significance tests. We also show how the method may be applied to identify and classify significantly different phase synchronisation dynamics in both univariate and multivariate datasets.
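    A common baseline for this kind of significance test — against which a Markov-based dynamical method like the one above can be contrasted — is the phase-locking value (PLV) with circular time-shift surrogates. The sketch below is that baseline, not the authors' method, and the synthetic phase series are invented for illustration:

    ```python
    import math
    import random

    def plv(pa, pb):
        """Phase-locking value: modulus of the mean phase-difference vector
        (1 = perfect locking, near 0 = no consistent phase relation)."""
        n = len(pa)
        re = sum(math.cos(a - b) for a, b in zip(pa, pb)) / n
        im = sum(math.sin(a - b) for a, b in zip(pa, pb)) / n
        return math.hypot(re, im)

    def shift_surrogate_p(pa, pb, n_surr=200, rng=None):
        """Significance via circular time-shift surrogates: shifting one
        phase series destroys genuine coupling while preserving its own
        dynamics, giving a null distribution for the PLV."""
        rng = rng or random.Random(0)
        observed = plv(pa, pb)
        n = len(pb)
        exceed = sum(
            plv(pa, pb[s:] + pb[:s]) >= observed
            for s in (rng.randrange(1, n) for _ in range(n_surr))
        )
        return observed, (exceed + 1) / (n_surr + 1)

    rng = random.Random(1)
    # An oscillator with phase diffusion, plus a noisily phase-locked partner.
    a = [0.0]
    for _ in range(499):
        a.append(a[-1] + 0.5 + rng.gauss(0, 0.3))
    b = [p + 0.3 + rng.gauss(0, 0.2) for p in a]

    observed, p = shift_surrogate_p(a, b)
    ```

    Note that this baseline treats synchronisation as a constant state over the whole window — precisely the limitation the abstract attributes to conventional tests.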

  13. Conversion factors and oil statistics

    International Nuclear Information System (INIS)

    Karbuz, Sohbet

    2004-01-01

    World oil statistics, in scope and accuracy, are often far from perfect. They can easily lead to misguided conclusions regarding the state of market fundamentals. Without proper attention directed at statistical caveats, the ensuing interpretation of oil market data opens the door to unnecessary volatility, and can distort perception of market fundamentals. Among the numerous caveats associated with the compilation of oil statistics, conversion factors, used to produce aggregated data, play a significant role. Interestingly enough, little attention is paid to conversion factors, i.e. to the relation between different units of measurement for oil. Additionally, the underlying information regarding the choice of a specific factor when trying to produce measurements of aggregated data remains scant. The aim of this paper is to shed some light on the impact of conversion factors for two commonly encountered issues: mass-to-volume equivalencies (barrels to tonnes) and broad energy measures encountered in world oil statistics. This paper will seek to demonstrate how inappropriate and misused conversion factors can yield wildly varying results and ultimately distort oil statistics. Examples will show that while discrepancies in commonly used conversion factors may seem trivial, their impact on the assessment of a world oil balance is far from negligible. A unified and harmonised convention for conversion factors is necessary to achieve accurate comparisons and aggregate oil statistics for the benefit of both end-users and policy makers.
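    The impact of conversion-factor choice is easy to reproduce with toy numbers. Barrels-per-tonne factors vary with crude density, so applying one generic factor to every stream distorts the aggregate; the factors and production figures below are illustrative, not taken from the paper:

    ```python
    # Barrels-per-tonne factors differ by crude (illustrative values only).
    factors = {
        "Arabian Light": 7.33,
        "Brent":         7.57,
        "Urals":         7.28,
    }
    generic = 7.33   # a one-size-fits-all factor sometimes applied in practice

    production_tonnes = {"Arabian Light": 100_000, "Brent": 80_000, "Urals": 90_000}

    # Aggregate barrels, per-crude factors vs. a single generic factor.
    accurate = sum(production_tonnes[c] * factors[c] for c in factors)
    crude_sum = sum(production_tonnes.values()) * generic

    discrepancy_pct = 100 * (crude_sum - accurate) / accurate
    ```

    Even with factors that differ by only a few percent, the aggregate is off by well over half a percent here — small per stream, but material once scaled to a world oil balance.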

  14. Increased Academic Productivity of Orthopaedic Surgery Residents Following 2011 Duty Hour Reform.

    Science.gov (United States)

    Johnson, Joey P; Savage, Kevin; Gil, Joseph A; Eberson, Craig P; Mulcahey, Mary K

    2017-12-19

    In 2003 and again in 2011, the Accreditation Council for Graduate Medical Education (ACGME) mandated increasingly stringent resident duty hour restrictions. With less time required at the hospital, residents theoretically have more time for other academic activities, such as research. Our study seeks to examine whether the number of research publications by orthopaedic residents increased following implementation of the 2011 ACGME duty hour restrictions. PubMed was queried using publicly available alumni lists from programs across the United States. The years 2008 to 2011 were included to assess pre-2011 productivity; the years 2012 to 2015 were included in the post-2011 group. Paired t tests were used to assess differences between groups, with statistical significance set at p < 0.05. In our study, there was a statistically significant increase in publications after 2011; however, the number of publications between NIH-funded and non-NIH-funded programs did not differ. Our study is the first to demonstrate that with increasing duty hour restrictions, orthopaedic surgery residents may be using more of their free time to conduct research. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  15. Significant social events and increasing use of life-sustaining treatment: trend analysis using extracorporeal membrane oxygenation as an example.

    Science.gov (United States)

    Chen, Yen-Yuan; Chen, Likwang; Huang, Tien-Shang; Ko, Wen-Je; Chu, Tzong-Shinn; Ni, Yen-Hsuan; Chang, Shan-Chwen

    2014-03-04

    Most studies have examined the outcomes of patients supported by extracorporeal membrane oxygenation as a life-sustaining treatment. It is unclear whether significant social events are associated with the use of life-sustaining treatment. This study aimed to compare the trend of extracorporeal membrane oxygenation use in Taiwan with that in the world, and to examine the influence of significant social events on the trend of extracorporeal membrane oxygenation use in Taiwan. Records of Taiwan's extracorporeal membrane oxygenation use from 2000 to 2009 were collected from the National Health Insurance Research Dataset. The number of worldwide extracorporeal membrane oxygenation cases was mainly estimated using the Extracorporeal Life Support Registry Report International Summary July 2012. The trend of Taiwan's crude annual incidence rate of extracorporeal membrane oxygenation use was compared with that of the rest of the world. Each trend of extracorporeal membrane oxygenation use was examined using joinpoint regression. The measurement was the crude annual incidence rate of extracorporeal membrane oxygenation use. Each of Taiwan's crude annual incidence rates was much higher than the worldwide one in the same year. Both the trends of Taiwan's and the worldwide crude annual incidence rates have increased significantly since 2000. Joinpoint regression selected the model of Taiwan's trend with one joinpoint in 2006 as the best-fitted model, implying that the significant social events of 2006 were associated with the change in the trend of extracorporeal membrane oxygenation use after 2006. In addition, significant social events highlighted by the media appear more strongly associated with the increase in extracorporeal membrane oxygenation use than full coverage by National Health Insurance.
Significant social events, such as a well-known person's successful extracorporeal membrane oxygenation use highlighted by the mass media, are associated with the use of

  16. Profile of Research Methodology and Statistics Training of ...

    African Journals Online (AJOL)

    Background: Medical practitioners need to have knowledge of statistics and research principles, especially with the increasing emphasis on evidence-based medicine. The aim of this study was to determine the profile of research methodology and statistics training of undergraduate medical students at South African ...

  17. Statistics with JMP graphs, descriptive statistics and probability

    CERN Document Server

    Goos, Peter

    2015-01-01

    Peter Goos, Department of Statistics, University of Leuven, Faculty of Bio-Science Engineering, and University of Antwerp, Faculty of Applied Economics, Belgium. David Meintrup, Department of Mathematics and Statistics, University of Applied Sciences Ingolstadt, Faculty of Mechanical Engineering, Germany. Thorough presentation of introductory statistics and probability theory, with numerous examples and applications using JMP. Descriptive Statistics and Probability provides an accessible and thorough overview of the most important descriptive statistics for nominal, ordinal and quantitative data with partic

  18. Testing University Rankings Statistically: Why this Perhaps is not such a Good Idea after All. Some Reflections on Statistical Power, Effect Size, Random Sampling and Imaginary Populations

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2012-01-01

    In this paper we discuss and question the use of statistical significance tests in relation to university rankings as recently suggested. We outline the assumptions behind and interpretations of statistical significance tests and relate this to examples from the recent SCImago Institutions Rankin...

  19. Statistical Analysis of Research Data | Center for Cancer Research

    Science.gov (United States)

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. The Statistical Analysis of Research Data (SARD) course will be held on April 5-6, 2018 from 9 a.m.-5 p.m. at the National Institutes of Health's Natcher Conference Center, Balcony C on the Bethesda Campus. SARD is designed to provide an overview on the general principles of statistical analysis of research data.  The first day will feature univariate data analysis, including descriptive statistics, probability distributions, one- and two-sample inferential statistics.

  20. Increased risk of hyperthyroidism among patients hospitalized with bipolar disorder

    DEFF Research Database (Denmark)

    Thomsen, Anders F; Kessing, Lars V

    2005-01-01

    OBJECTIVES: Hyperthyroidism has been associated with affective disorder in many cross-sectional studies, but longitudinal studies in this connection are scarce. We assessed whether hospitalization with depressive disorder or bipolar disorder was a risk factor for development of hyperthyroidism. ... METHODS: We conducted a historical cohort study using the Danish register data. The observational period was 1977-99. Three study cohorts were identified: all patients with a first hospital admission with resulting index discharge diagnoses of depression, bipolar disorder, or osteoarthritis. The risks ... with depressive disorder did not have an increased risk of hyperthyroidism, whereas patients with bipolar disorder had an increased risk on the margin of statistical significance when compared to patients with osteoarthritis. Patients with bipolar disorder had a significantly increased risk of hyperthyroidism ...

  1. Statistical parametric mapping and statistical probabilistic anatomical mapping analyses of basal/acetazolamide Tc-99m ECD brain SPECT for efficacy assessment of endovascular stent placement for middle cerebral artery stenosis

    International Nuclear Information System (INIS)

    Lee, Tae-Hong; Kim, Seong-Jang; Kim, In-Ju; Kim, Yong-Ki; Kim, Dong-Soo; Park, Kyung-Pil

    2007-01-01

    Statistical parametric mapping (SPM) and statistical probabilistic anatomical mapping (SPAM) were applied to basal/acetazolamide Tc-99m ECD brain perfusion SPECT images in patients with middle cerebral artery (MCA) stenosis to assess the efficacy of endovascular stenting of the MCA. Enrolled in the study were 11 patients (8 men and 3 women, mean age 54.2 ± 6.2 years) who had undergone endovascular stent placement for MCA stenosis. Using SPM and SPAM analyses, we compared the number of significant voxels and cerebral counts in basal and acetazolamide SPECT images before and after stenting, and assessed the perfusion changes and cerebral vascular reserve index (CVRI). The numbers of hypoperfusion voxels in SPECT images decreased from 10,083 ± 8,326 to 4,531 ± 5,091 in basal images (P = 0.0317) and from 13,398 ± 14,222 to 7,699 ± 10,199 in acetazolamide images (P = 0.0142) after MCA stenting. On SPAM analysis, the increases in cerebral counts were significant in acetazolamide images (90.9 ± 2.2 to 93.5 ± 2.3, P = 0.0098) but not in basal images (91 ± 2.7 to 92 ± 2.6, P = 0.1602). The CVRI also showed a statistically significant increase from before stenting (median 0.32; 95% CI -2.19 to 2.37) to after stenting (median 1.59; 95% CI -0.85 to 4.16; P = 0.0068). This study revealed the usefulness of voxel-based analysis of basal/acetazolamide brain perfusion SPECT after MCA stent placement. This study showed that SPM and SPAM analyses of basal/acetazolamide Tc-99m brain SPECT could be used to evaluate the short-term hemodynamic efficacy of successful MCA stent placement. (orig.)

  2. Uncertainty the soul of modeling, probability & statistics

    CERN Document Server

    Briggs, William

    2016-01-01

    This book presents a philosophical approach to probability and probabilistic thinking, considering the underpinnings of probabilistic reasoning and modeling, which effectively underlie everything in data science. The ultimate goal is to call into question many standard tenets and lay the philosophical and probabilistic groundwork and infrastructure for statistical modeling. It is the first book devoted to the philosophy of data aimed at working scientists and calls for a new consideration in the practice of probability and statistics to eliminate what has been referred to as the "Cult of Statistical Significance". The book explains the philosophy of these ideas and not the mathematics, though there are a handful of mathematical examples. The topics are logically laid out, starting with basic philosophy as related to probability, statistics, and science, and stepping through the key probabilistic ideas and concepts, and ending with statistical models. Its jargon-free approach asserts that standard methods, suc...

  3. Templated assembly of photoswitches significantly increases the energy-storage capacity of solar thermal fuels.

    Science.gov (United States)

    Kucharski, Timothy J; Ferralis, Nicola; Kolpak, Alexie M; Zheng, Jennie O; Nocera, Daniel G; Grossman, Jeffrey C

    2014-05-01

    Large-scale utilization of solar-energy resources will require considerable advances in energy-storage technologies to meet ever-increasing global energy demands. Other than liquid fuels, existing energy-storage materials do not provide the requisite combination of high energy density, high stability, easy handling, transportability and low cost. New hybrid solar thermal fuels, composed of photoswitchable molecules on rigid, low-mass nanostructures, transcend the physical limitations of molecular solar thermal fuels by introducing local sterically constrained environments in which interactions between chromophores can be tuned. We demonstrate this principle of a hybrid solar thermal fuel using azobenzene-functionalized carbon nanotubes. We show that, on composite bundling, the amount of energy stored per azobenzene more than doubles from 58 to 120 kJ mol(-1), and the material also maintains robust cyclability and stability. Our results demonstrate that solar thermal fuels composed of molecule-nanostructure hybrids can exhibit significantly enhanced energy-storage capabilities through the generation of template-enforced steric strain.

  4. Asymmetric statistical features of the Chinese domestic and international gold price fluctuation

    Science.gov (United States)

    Cao, Guangxi; Zhao, Yingchao; Han, Yan

    2015-05-01

    Analyzing the statistical features of fluctuation is remarkably significant for financial risk identification and measurement. In this study, the asymmetric detrended fluctuation analysis (A-DFA) method was applied to evaluate asymmetric multifractal scaling behaviors in the Shanghai and New York gold markets. Our findings showed that the multifractal features of the Chinese and international gold spot markets were asymmetric. The gold return series persisted longer in an increasing trend than in a decreasing trend. Moreover, the asymmetric degree of multifractals in the Chinese and international gold markets decreased with the increase in fluctuation range. In addition, the empirical analysis using sliding window technology indicated that multifractal asymmetry in the Chinese and international gold markets was characterized by its time-varying feature. However, the Shanghai and international gold markets basically shared a similar asymmetric degree evolution pattern. The American subprime mortgage crisis (2008) and the European debt crisis (2010) enhanced the asymmetric degree of the multifractal features of the Chinese and international gold markets. Furthermore, we also perform statistical tests for the results of multifractality and asymmetry, and discuss their origin. Finally, results of the empirical analysis using the threshold autoregressive conditional heteroskedasticity (TARCH) and exponential generalized autoregressive conditional heteroskedasticity (EGARCH) models exhibited that good news had a more significant effect on the cyclical fluctuation of the gold market than bad news. Moreover, good news exerted a more significant effect on the Chinese gold market than on the international gold market.

  5. Statistical-Based Insights in Spence’s Theory of Honest Signaling

    Directory of Open Access Journals (Sweden)

    Mihaela Grecu

    2015-09-01

    Full Text Available Since Michael Spence revealed the secrets of (dis)honest signalling on the labour market, an increasing body of literature in various fields has struggled to find the best way to solve the game under imperfect information that describes the interaction between the employer and the employee. Despite the value of the signal originally acknowledged by Spence, the university degree, a recent trend of increasing unemployment rates among graduates of higher education suggests that the connection between higher education and the labour market may be less significant than universities claim, potentially resulting in a decreasing power of the signal consisting of a university diploma. The aim of this study is to provide statistical evidence of the connection between higher education and the labour market in Romania and to discuss some of the factors that potentially cause young people to choose a particular study program. Based on statistical analysis, we investigate the gap between the number of graduates in Law and the labour market capacity in the field, and draw conclusions regarding the accuracy of the mechanism that leads to equilibrium between supply and demand on the university market.

  6. Statistical process control in nursing research.

    Science.gov (United States)

    Polit, Denise F; Chaboyer, Wendy

    2012-02-01

    In intervention studies in which randomization to groups is not possible, researchers typically use quasi-experimental designs. Time series designs are strong quasi-experimental designs but are seldom used, perhaps because of technical and analytic hurdles. Statistical process control (SPC) is an alternative analytic approach to testing hypotheses about intervention effects using data collected over time. SPC, like traditional statistical methods, is a tool for understanding variation and involves the construction of control charts that distinguish between normal, random fluctuations (common cause variation), and statistically significant special cause variation that can result from an innovation. The purpose of this article is to provide an overview of SPC and to illustrate its use in a study of a nursing practice improvement intervention. Copyright © 2011 Wiley Periodicals, Inc.
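    The control-chart logic described above can be sketched with an individuals (XmR) chart: limits are computed from pre-intervention data, and post-intervention points falling outside the 3-sigma limits are flagged as special-cause variation. A minimal stdlib sketch — the data and the intervention effect are invented for illustration, and the article itself does not prescribe this particular chart type:

    ```python
    import statistics

    def xmr_limits(baseline):
        """Individuals (XmR) chart limits: center line from the baseline mean,
        sigma estimated from the average moving range (MR-bar / d2, d2 = 1.128
        for moving ranges of size 2)."""
        center = statistics.fmean(baseline)
        mr_bar = statistics.fmean(
            [abs(b - a) for a, b in zip(baseline, baseline[1:])]
        )
        sigma = mr_bar / 1.128
        return center, center - 3 * sigma, center + 3 * sigma

    # Hypothetical clinical indicator, measured over time.
    baseline = [50, 52, 49, 51, 50, 48, 51, 50, 49, 52]   # common-cause variation
    after    = [60, 61, 59, 62, 60]                       # post-intervention data

    center, lcl, ucl = xmr_limits(baseline)
    special_cause = [x for x in after if x > ucl or x < lcl]   # flagged points
    ```

    Points inside the limits reflect common-cause variation; here every post-intervention observation falls above the upper control limit, the signal that an innovation has shifted the process.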

  7. MSD Recombination Method in Statistical Machine Translation

    Science.gov (United States)

    Gros, Jerneja Žganec

    2008-11-01

    Freely available tools and language resources were used to build the VoiceTRAN statistical machine translation (SMT) system. Various configuration variations of the system are presented and evaluated. The VoiceTRAN SMT system outperformed the baseline conventional rule-based MT system in all English-Slovenian in-domain test setups. To further increase the generalization capability of the translation model for lower-coverage out-of-domain test sentences, an "MSD-recombination" approach was proposed. This approach not only allows a better exploitation of conventional translation models, but also performs well in the more demanding translation direction; that is, into a highly inflectional language. Using this approach in the out-of-domain setup of the English-Slovenian JRC-ACQUIS task, we have achieved significant improvements in translation quality.

  8. Beyond P Values and Hypothesis Testing: Using the Minimum Bayes Factor to Teach Statistical Inference in Undergraduate Introductory Statistics Courses

    Science.gov (United States)

    Page, Robert; Satake, Eiki

    2017-01-01

    While interest in Bayesian statistics has been growing in statistics education, the treatment of the topic is still inadequate in both textbooks and the classroom. Because so many fields of study lead to careers that involve a decision-making process requiring an understanding of Bayesian methods, it is becoming increasingly clear that Bayesian…
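    The minimum Bayes factor in the title has a simple closed form for tests based on a normal (z) statistic: Goodman's bound BF_min = exp(-z^2 / 2), the strongest evidence against the null that a given two-sided p-value can represent. A small sketch, assuming a normal test statistic:

    ```python
    import math
    from statistics import NormalDist

    def minimum_bayes_factor(p_value):
        """Goodman's minimum Bayes factor for a two-sided p-value from a
        normal test statistic: BF_min = exp(-z^2 / 2)."""
        z = NormalDist().inv_cdf(1 - p_value / 2)   # z corresponding to p
        return math.exp(-z * z / 2)

    # A "significant" p = 0.05 corresponds at best to BF ~ 0.147, i.e. the
    # data lower the odds of the null by at most a factor of about 7.
    bf_05 = minimum_bayes_factor(0.05)
    bf_01 = minimum_bayes_factor(0.01)
    ```

    This is the pedagogical point such a course can make concrete: even the best case for p = 0.05 is far weaker evidence than students typically assume.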

  9. Facts and Findings on the Statistical Discrepancies of the Korean Balance of Payments

    Directory of Open Access Journals (Sweden)

    Sangyong Joo

    2000-03-01

    Full Text Available Korea has experienced drastic increases in the statistical discrepancy of its balance of payments since 1997. In general, the expansion and growing complexity of external transactions induced by capital account liberalization have contributed to this. The abolition of the export/import approval system seems to have accelerated the mismeasurement of, and omissions in, external transactions. Of course, the influence of the currency crisis cannot be ruled out. Among economic factors, Won/dollar exchange rate volatility is found to have significant explanatory power for the magnitude of the statistical discrepancy, while exchange rate returns do not. We interpret this as demand for a relatively safe currency rising in the presence of uncertainty in domestic currency values.

  10. Statistics Anxiety and Business Statistics: The International Student

    Science.gov (United States)

    Bell, James A.

    2008-01-01

    Does the international student suffer from statistics anxiety? To investigate this, the Statistics Anxiety Rating Scale (STARS) was administered to sixty-six beginning statistics students, including twelve international students and fifty-four domestic students. Due to the small number of international students, nonparametric methods were used to…

  11. Statistical disclosure control for microdata methods and applications in R

    CERN Document Server

    Templ, Matthias

    2017-01-01

    This book on statistical disclosure control presents the theory, applications and software implementation of the traditional approach to (micro)data anonymization, including data perturbation methods, disclosure risk, data utility, information loss and methods for simulating synthetic data. Introducing readers to the R packages sdcMicro and simPop, the book also features numerous examples and exercises with solutions, as well as case studies with real-world data, accompanied by the underlying R code to allow readers to reproduce all results. The demand for and volume of data from surveys, registers or other sources containing sensitive information on persons or enterprises have increased significantly over the last several years. At the same time, privacy protection principles and regulations have imposed restrictions on the access and use of individual data. Proper and secure microdata dissemination calls for the application of statistical disclosure control methods to the data before release. This book is in...

  12. Applying Statistical Mechanics to pixel detectors

    International Nuclear Information System (INIS)

    Pindo, Massimiliano

    2002-01-01

    Pixel detectors, being made of a large number of active cells of the same kind, can be considered as significant sets to which Statistical Mechanics variables and methods can be applied. By properly redefining well known statistical parameters in order to let them match the ones that actually characterize pixel detectors, an analysis of the way they work can be performed in a totally new perspective. A deeper understanding of pixel detectors is attained, helping in the evaluation and comparison of their intrinsic characteristics and performance

  13. Introductory statistics for the behavioral sciences

    CERN Document Server

    Welkowitz, Joan; Cohen, Jacob

    1971-01-01

    Introductory Statistics for the Behavioral Sciences provides an introduction to statistical concepts and principles. This book emphasizes the robustness of parametric procedures wherein such significant tests as t and F yield accurate results even if such assumptions as equal population variances and normal population distributions are not well met.Organized into three parts encompassing 16 chapters, this book begins with an overview of the rationale upon which much of behavioral science research is based, namely, drawing inferences about a population based on data obtained from a samp

  14. Using Statistical Process Control to Enhance Student Progression

    Science.gov (United States)

    Hanna, Mark D.; Raichura, Nilesh; Bernardes, Ednilson

    2012-01-01

    Public interest in educational outcomes has markedly increased in the most recent decade; however, quality management and statistical process control have not deeply penetrated the management of academic institutions. This paper presents results of an attempt to use Statistical Process Control (SPC) to identify a key impediment to continuous…

  15. [Characteristic and clinical significance of DNA methyltransferase 3B overexpression in endometrial carcinoma].

    Science.gov (United States)

    Dong, Y; Zhou, M; Ba, X J; Si, J W; Li, W T; Wang, Y; Li, D; Li, T

    2016-10-18

    To determine the clinicopathological significance of the DNA methyltransferase 3B (DNMT3B) overexpression in endometrial carcinomas and to evaluate its correlation with hormone receptor status. Immunohistochemistry was performed to assess the expression of DNMT3B and hormone receptors in 104 endometrial carcinomas. DNMT3B overexpression occurred more frequently in endometrioid carcinoma (EC, 54.8%) than in nonendometrioid carcinoma (NEC, 30.0%), with statistical significance (P = 0.028). Furthermore, there was a trend that EC with worse clinicopathological variables and shorter survival had a higher DNMT3B expression, and the correlation between DNMT3B and tumor grade reached statistical significance (P = 0.019). A negative correlation between DNMT3B and estrogen receptor (ER) or progesterone receptor (PR) expression was found in EC. DNMT3B overexpression occurred more frequently in the ER- or PR-negative subgroups (78.9%, 86.7%) than in the positive subgroups (47.7%, 47.8%), with statistical significance (P = 0.016, P = 0.006). In addition, DNMT3B overexpression increased in tumors with both ER and PR negative expression (92.9%, P = 0.002). However, no such correlation was found in NEC (P > 0.05). Sequence analyses demonstrated multiple ER and PR binding sites in the promoter regions of the DNMT3B gene. This study showed that the expression of DNMT3B in EC and NEC was different. DNMT3B overexpression in EC was associated with worse clinicopathological variables and might have predictive value. The methylation status of EC and NEC may be different. In addition, in EC, DNMT3B overexpression negatively correlated with ER or PR expression. In NEC, the correlation between DNMT3B and ER or PR status was not present.

  16. Spreadsheets as tools for statistical computing and statistics education

    OpenAIRE

    Neuwirth, Erich

    2000-01-01

    Spreadsheets are a ubiquitous program category, and we will discuss their use in statistics and statistics education on various levels, ranging from very basic examples to extremely powerful methods. Since the spreadsheet paradigm is very familiar to many potential users, using it as the interface to statistical methods can make statistics more easily accessible.

  17. A comprehensive investigation on static and dynamic friction coefficients of wheat grain with the adoption of statistical analysis.

    Science.gov (United States)

    Shafaei, S M; Kamgar, S

    2017-07-01

    This paper deals with studying and modeling the static friction coefficient (SFC) and dynamic friction coefficient (DFC) of wheat grain as affected by several treatments. The significance of the single effect (SE) and dual interaction effect (DIE) of treatments (moisture content and contact surface) on SFC, and of the SE, DIE, and triple interaction effect (TIE) of treatments (moisture content, contact surface, and sliding velocity) on DFC, was determined using statistical analysis methods. Multiple linear regression (MLR) modeling was employed to predict SFC and DFC on different contact surfaces. The predictive ability of the developed MLR models was evaluated using statistical parameters (coefficient of determination (R²), root mean square error (RMSE), and mean relative deviation modulus (MRDM)). Results indicated that the significant increasing DIE of treatments on SFC was 3.2 and 3 times greater than the significant increasing SE of moisture content and contact surface, respectively. In the case of DFC, the significant increasing TIE of treatments was 8.8, 3.7, and 8.9 times greater than the SE of moisture content, contact surface, and sliding velocity, respectively. It was also found that the SE of contact surface on SFC was 1.1 times greater than that of moisture content, and the SE of contact surface on DFC was 2.4 times greater than that of moisture content or sliding velocity. According to the reasonable average values of the statistical parameters (R² = 0.955, RMSE = 0.01788, and MRDM = 3.152%), SFC and DFC could be successfully predicted by the suggested MLR models. Practically, it is recommended to apply the models for direct prediction of SFC and DFC, respective to each contact surface, based on moisture content and sliding velocity.
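    The evaluation pipeline used above — an MLR fit scored with R², RMSE, and MRDM — can be sketched with synthetic data. The coefficients and friction values below are invented for illustration; only the three goodness-of-fit formulas follow their standard definitions:

    ```python
    import math

    def fit_mlr(X, y):
        """Ordinary least squares for y = b0 + b1*x1 + b2*x2 + ... via the
        normal equations, solved by Gaussian elimination (stdlib only)."""
        rows = [[1.0] + list(x) for x in X]
        k = len(rows[0])
        A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
        b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
        for col in range(k):                          # forward elimination
            piv = max(range(col, k), key=lambda r: abs(A[r][col]))
            A[col], A[piv] = A[piv], A[col]
            b[col], b[piv] = b[piv], b[col]
            for r in range(col + 1, k):
                f = A[r][col] / A[col][col]
                for c in range(col, k):
                    A[r][c] -= f * A[col][c]
                b[r] -= f * b[col]
        beta = [0.0] * k
        for r in reversed(range(k)):                  # back substitution
            beta[r] = (b[r] - sum(A[r][c] * beta[c]
                                  for c in range(r + 1, k))) / A[r][r]
        return beta

    def goodness_of_fit(y, y_hat):
        """R^2, RMSE and the mean relative deviation modulus (MRDM, %)."""
        n = len(y)
        mean_y = sum(y) / n
        ss_res = sum((a - p) ** 2 for a, p in zip(y, y_hat))
        ss_tot = sum((a - mean_y) ** 2 for a in y)
        rmse = math.sqrt(ss_res / n)
        mrdm = 100.0 / n * sum(abs(a - p) / a for a, p in zip(y, y_hat))
        return 1 - ss_res / ss_tot, rmse, mrdm

    # Synthetic DFC data: rises with moisture (%) and sliding velocity (m/s),
    # plus a small deterministic perturbation standing in for noise.
    X = [(m, v) for m in (8, 12, 16, 20) for v in (0.1, 0.2, 0.3)]
    y = [0.30 + 0.02 * m + 0.05 * v + 0.002 * ((i % 5) - 2)
         for i, (m, v) in enumerate(X)]

    beta = fit_mlr(X, y)
    y_hat = [beta[0] + beta[1] * m + beta[2] * v for m, v in X]
    r2, rmse, mrdm = goodness_of_fit(y, y_hat)
    ```

    With a near-linear relation the fit recovers the generating coefficients and yields R² close to 1 with small RMSE and MRDM, which is the pattern the paper reports for its wheat-grain models.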

  18. Evaluation of undergraduate nursing students' attitudes towards statistics courses, before and after a course in applied statistics.

    Science.gov (United States)

    Hagen, Brad; Awosoga, Olu; Kellett, Peter; Dei, Samuel Ofori

    2013-09-01

    Undergraduate nursing students must often take a course in statistics, yet there is scant research to inform teaching pedagogy. The objectives of this study were to assess nursing students' overall attitudes towards statistics courses - including (among other things) overall fear and anxiety, preferred learning and teaching styles, and the perceived utility and benefit of taking a statistics course - before and after taking a mandatory course in applied statistics. The authors used a pre-experimental research design (a one-group pre-test/post-test research design), by administering a survey to nursing students at the beginning and end of the course. The study was conducted at a University in Western Canada that offers an undergraduate Bachelor of Nursing degree. Participants included 104 nursing students, in the third year of a four-year nursing program, taking a course in statistics. Although students only reported moderate anxiety towards statistics, student anxiety about statistics had dropped by approximately 40% by the end of the course. Students also reported a considerable and positive change in their attitudes towards learning in groups by the end of the course, a potential reflection of the team-based learning that was used. Students identified preferred learning and teaching approaches, including the use of real-life examples, visual teaching aids, clear explanations, timely feedback, and a well-paced course. Students also identified preferred instructor characteristics, such as patience, approachability, in-depth knowledge of statistics, and a sense of humor. Unfortunately, students only indicated moderate agreement with the idea that statistics would be useful and relevant to their careers, even by the end of the course. 
Our findings validate anecdotal reports on statistics teaching pedagogy, although more research is clearly needed, particularly on how to increase students' perceptions of the benefit and utility of statistics courses for their nursing

  19. HYPE: a WFD tool for the identification of significant and sustained upward trends in groundwater time series

    Science.gov (United States)

    Lopez, Benjamin; Croiset, Nolwenn; Laurence, Gourcy

    2014-05-01

    The Water Framework Directive 2006/11/CE (WFD) on the protection of groundwater against pollution and deterioration requires Member States to identify significant and sustained upward trends in all bodies or groups of bodies of groundwater that are characterised as being at risk in accordance with Annex II to Directive 2000/60/EC. The Directive indicates that the procedure for the identification of significant and sustained upward trends must be based on a statistical method. Moreover, where concentrations of pollutants increase significantly, trend reversals must be identified, so the method must also be able to detect significant trend reversals. A specific tool, named HYPE, has been developed to help stakeholders working on groundwater trend assessment. HYPE, written in R, provides statistical analysis of groundwater time series. It follows several studies on the relevance of the use of statistical tests on groundwater data series (Lopez et al., 2011) and other case studies on the topic (Bourgine et al., 2012). It integrates the most powerful and robust statistical tests for hydrogeological applications. HYPE is linked to the French national database on groundwater data (ADES), so monitoring data gathered by the Water Agencies can be processed directly. HYPE has two main modules: a characterisation module, which allows users to visualize time series (HYPE calculates the main statistical characteristics and provides graphical representations), and a trend module, which identifies significant breaks, trends, and trend reversals in time series, providing a result table and a graphical representation. Additional modules are implemented to identify regional and seasonal trends and to sample time series in a relevant way. HYPE was used successfully in 2012 by the French Water Agencies to satisfy requirements of the WFD, concerning characterization of groundwater bodies' qualitative status and evaluation of the risk of non-achievement of
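    The abstract does not specify which tests HYPE implements; a common robust choice for detecting sustained upward trends in groundwater quality series is the Mann-Kendall test. A minimal sketch (normal approximation, no correction for ties or serial correlation), run on hypothetical nitrate concentrations:

    ```python
    import math
    from itertools import combinations

    def mann_kendall(series):
        """Mann-Kendall trend test (no tie correction; normal approximation).
        Returns the S statistic, the Z score, and a two-sided p-value."""
        n = len(series)
        # S counts concordant minus discordant pairs over all i < j.
        s = sum((x2 > x1) - (x2 < x1)
                for (x1, x2) in combinations(series, 2))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        if s > 0:
            z = (s - 1) / math.sqrt(var_s)
        elif s < 0:
            z = (s + 1) / math.sqrt(var_s)
        else:
            z = 0.0
        # Two-sided p-value from the standard normal CDF.
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return s, z, p

    # Hypothetical nitrate concentrations (mg/L), one value per year.
    rising = [31, 32, 31, 33, 34, 35, 34, 36, 37, 38]
    s, z, p = mann_kendall(rising)
    print(f"S={s}  Z={z:.2f}  p={p:.4f}")  # small p: significant upward trend
    ```

    A production tool such as HYPE would additionally handle ties, seasonality, censored values, and trend reversals, none of which this sketch attempts.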

  20. An Exploration of the Perceived Usefulness of the Introductory Statistics Course and Students’ Intentions to Further Engage in Statistics

    Directory of Open Access Journals (Sweden)

    Rossi Hassad

    2018-01-01

    Full Text Available Students' attitude, including perceived usefulness, is generally associated with academic success. The related research in statistics education has focused almost exclusively on the role of attitude in explaining and predicting academic learning outcomes, hence there is a paucity of research evidence on how attitude (particularly perceived usefulness) impacts students' intentions to use and stay engaged in statistics beyond the introductory course. This study explored the relationship between college students' perception of the usefulness of an introductory statistics course, their beliefs about where statistics will be most useful, and their intentions to take another statistics course. A cross-sectional study of 106 students was conducted. The mean rating for usefulness was 4.7 (out of 7), with no statistically significant differences based on gender or age. Sixty-four percent reported that they would consider taking another statistics course, and this subgroup rated the course as more useful (p = .01). The majority (67%) reported that statistics would be most useful for either graduate school or research, whereas 14% indicated their job, and 19% were undecided. The 'undecided' students had the lowest mean rating for usefulness of the course (p = .001). Addressing data in the context of real-world problem-solving and decision-making could help students better appreciate the usefulness and practicality of statistics. Qualitative research methods could help to elucidate these findings.

  1. Data-driven inference for the spatial scan statistic

    Directory of Open Access Journals (Sweden)

    Duczmal Luiz H

    2011-08-01

    Full Text Available Abstract Background Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. Results A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is proposed, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under the null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, regarding the correctness of the decision based on this inference. Conclusions A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.
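    The modified inference question can be illustrated with a toy one-dimensional scan: simulate null maps, record both the best window statistic and its size, and condition the p-value on the observed cluster size k. This sketch uses a simple standardized-excess statistic rather than Kulldorff's likelihood ratio, and a line of areas rather than a real map; all numbers are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def most_likely_cluster(counts, baseline):
        """Scan all contiguous windows; return (best statistic, window size).
        Toy statistic: standardized excess (O - E)/sqrt(E), not Kulldorff's LLR."""
        n = len(counts)
        best, best_size = -np.inf, 0
        for i in range(n):
            for j in range(i + 1, n + 1):
                o, e = counts[i:j].sum(), baseline[i:j].sum()
                stat = (o - e) / np.sqrt(e)
                if stat > best:
                    best, best_size = stat, j - i
        return best, best_size

    baseline = np.full(20, 10.0)            # expected counts per area
    observed = rng.poisson(baseline)
    observed[5:8] += 6                       # implant a cluster of size 3
    obs_stat, obs_size = most_likely_cluster(observed, baseline)

    # Null replicates: Poisson maps under H0, keeping statistic AND size.
    null = [most_likely_cluster(rng.poisson(baseline), baseline)
            for _ in range(500)]

    # Usual (unconditional) p-value vs. the size-conditioned p-value the
    # abstract proposes: compare only against nulls whose best cluster
    # has the same size as the observed one.
    p_usual = np.mean([s >= obs_stat for s, _ in null])
    same_k = [s for s, k in null if k == obs_size]
    p_cond = np.mean([s >= obs_stat for s in same_k]) if same_k else float("nan")
    print(f"size={obs_size}  p_usual={p_usual:.3f}  p_conditional={p_cond:.3f}")
    ```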

  2. Implementing statistical equating for MRCP(UK) Parts 1 and 2.

    Science.gov (United States)

    McManus, I C; Chis, Liliana; Fox, Ray; Waller, Derek; Tang, Peter

    2014-09-26

    The MRCP(UK) exam, in 2008 and 2010, changed the standard-setting of its Part 1 and Part 2 examinations from a hybrid Angoff/Hofstee method to statistical equating using Item Response Theory, the reference group being UK graduates. The present paper considers the implementation of the change, the question of whether the pass rate increased amongst non-UK candidates, any possible role of Differential Item Functioning (DIF), and changes in examination predictive validity after the change. Data from the MRCP(UK) Part 1 exam from 2003 to 2013 and the Part 2 exam from 2005 to 2013 were analysed. Inspection suggested that Part 1 pass rates were stable after the introduction of statistical equating, but showed greater annual variation, probably due to stronger candidates taking the examination earlier. Pass rates seemed to have increased in non-UK graduates after equating was introduced, but this was not associated with any change in DIF after statistical equating. Statistical modelling of the pass rates for non-UK graduates found that pass rates, in both Part 1 and Part 2, were increasing year on year, with the changes probably beginning before the introduction of equating. The predictive validity of Part 1 for Part 2 was higher with statistical equating than with the previous hybrid Angoff/Hofstee method, confirming the utility of IRT-based statistical equating. Statistical equating was successfully introduced into the MRCP(UK) Part 1 and Part 2 written examinations, resulting in higher predictive validity than the previous Angoff/Hofstee standard-setting. Concerns about an artefactual increase in pass rates for non-UK candidates after equating were shown not to be well founded. Most likely the changes resulted from a genuine increase in candidate ability, albeit for reasons which remain unclear, coupled with a cognitive illusion giving the impression of a step-change immediately after equating began. Statistical equating provides a robust standard-setting method, with a better
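    IRT-based equating rests on calibrating item difficulties on a common scale (here, anchored to the reference group), so that candidate ability, and hence a fixed pass mark, is comparable across sittings. A minimal sketch of the ability-estimation step under the one-parameter (Rasch) model; the five item difficulties and the response vector are hypothetical, not MRCP(UK) data:

    ```python
    import math

    def rasch_p(theta, b):
        """Rasch model: probability that a candidate of ability theta
        answers an item of difficulty b correctly."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def ability_mle(responses, difficulties, iters=50):
        """Newton-Raphson MLE of ability given calibrated item difficulties.
        Equating rests on the difficulties being fixed on a common scale."""
        theta = 0.0
        for _ in range(iters):
            p = [rasch_p(theta, b) for b in difficulties]
            grad = sum(r - pi for r, pi in zip(responses, p))   # score
            info = sum(pi * (1 - pi) for pi in p)               # information
            theta += grad / info
        return theta

    # Hypothetical 5-item exam calibrated on a reference group.
    difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
    responses = [1, 1, 1, 0, 0]               # one candidate's answers
    theta = ability_mle(responses, difficulties)
    print(f"estimated ability: {theta:.2f}")
    ```

    In practice a pass mark defined once on the theta scale then carries over to every sitting whose items are calibrated to that scale; real equating also handles anchor items and candidates with perfect or zero scores, which this sketch does not.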

  3. Register-based statistics statistical methods for administrative data

    CERN Document Server

    Wallgren, Anders

    2014-01-01

    This book provides a comprehensive and up-to-date treatment of theory and practical implementation in register-based statistics. It begins by defining the area, before explaining how to structure such systems, as well as detailing alternative approaches. It explains how to create statistical registers, how to implement quality assurance, and the use of IT systems for register-based statistics. Further to this, clear details are given about the practicalities of implementing such statistical methods, such as protection of privacy and the coordination and coherence of such an undertaking. Thi

  4. Cancer Statistics

    Science.gov (United States)

    ... Cancer has a major impact on society in ... success of efforts to control and manage cancer. Statistics at a Glance: The Burden of Cancer in ...

  5. Skipping one or more dialysis sessions significantly increases mortality: measuring the impact of non-adherence

    Directory of Open Access Journals (Sweden)

    Eduardo Gottlieb

    2014-06-01

    Full Text Available Introduction: Non-adherence to the prescribed frequency of dialysis sessions ranges from 2% to 50% of patients. The objective of this study was to evaluate the impact of detecting and measuring non-adherence to the prescribed dialysis frequency and to determine the importance of a multidisciplinary approach aimed at improving adherence. Methods: A longitudinal cohort study including 8,164 prevalent hemodialysis patients in April 2010, with more than 90 days of treatment, in Fresenius Medical Care Argentina units, monitored for 3 years. The survey evaluated interruption of at least one dialysis session in a month, or shortening of a dialysis session by at least 10 minutes in a month, during the 6 months prior to the survey. Relative mortality risks were evaluated among groups. Results: 648 patients (7.9%) interrupted dialysis sessions: 320 (3.9%) interrupted one session per month and 328 (4.01%) interrupted more than one session per month. After 3 years of monitoring, 349 patients (53.8%) remained active in hemodialysis and 299 were inactive for different reasons: 206 deceased (31.8%), 47 transfers or losses to follow-up (7.25%), 36 transplanted (5.55%), 8 changes to PD modality (1.2%), and 2 who recovered their kidney function (0.3%). Interrupting one session per month significantly increased the mortality risk compared with non-interrupters: RR 2.65 (95% CI 2.24-3.14). Interrupting more than one dialysis session per month also significantly increased the mortality risk compared with non-interrupters: RR 2.8 (95% CI 2.39-3.28). After 3 years of monitoring, 41.6% of the initial interrupters had improved their adherence through a multidisciplinary quality-improvement program. Conclusion: Overall mortality was greater among patients who interrupted dialysis sessions. A considerable proportion of initially interrupting patients modified their behavior through the implementation of a multidisciplinary program of quality
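    The relative risks quoted above come with 95% confidence intervals; the standard construction is a Wald interval on the log scale, with se = sqrt(1/a - 1/n1 + 1/b - 1/n2) for a 2x2 table. A sketch with hypothetical counts (the abstract reports RR 2.65 with CI 2.24-3.14 but not the non-interrupters' death count, so the unexposed figures below are assumptions):

    ```python
    import math

    def relative_risk(a, n1, b, n2, z=1.96):
        """Relative risk of the event in exposed (a/n1) vs. unexposed (b/n2)
        with a Wald 95% confidence interval computed on the log scale."""
        rr = (a / n1) / (b / n2)
        se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
        lo = math.exp(math.log(rr) - z * se)
        hi = math.exp(math.log(rr) + z * se)
        return rr, lo, hi

    # Hypothetical 2x2 table: 206 deaths among 648 interrupters vs. an
    # assumed 902 deaths among the 7,516 non-interrupters.
    rr, lo, hi = relative_risk(206, 648, 902, 7516)
    print(f"RR={rr:.2f}  95% CI [{lo:.2f}, {hi:.2f}]")
    ```

    The study's actual intervals would come from its own event counts (and possibly adjusted models), so the sketch reproduces the method, not the published numbers.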

  6. Accelerator driven reactors, - the significance of the energy distribution of spallation neutrons on the neutron statistics

    Energy Technology Data Exchange (ETDEWEB)

    Fhager, V

    2000-01-01

    In order to make correct predictions of the second moment of statistical nuclear variables, such as the number of fissions and the number of thermalized neutrons, the dependence of the energy distribution of the source particles on their number should be considered. It has been pointed out recently that neglecting this number dependence in accelerator driven systems might result in bad estimates of the second moment, and this paper contains qualitative and quantitative estimates of the size of these effects. We proceed towards the requested results in two steps. First, models of the number-dependent energy distributions of the neutrons that are ejected in the spallation reactions are constructed, both by simple assumptions and by extracting energy distributions of spallation neutrons from a high-energy particle transport code. Then, the second moment of nuclear variables in a sub-critical reactor, into which spallation neutrons are injected, is calculated. The results from second-moment calculations using number-dependent energy distributions for the source neutrons are compared to those where only the average energy distribution is used. Two physical models are employed to simulate the neutron transport in the reactor. One is analytical, treating only slowing down of neutrons by elastic scattering in the core material. For this model, equations are written down and solved for the second moment of thermalized neutrons that include the distribution of energy of the spallation neutrons. The other model utilizes Monte Carlo methods for tracking the source neutrons as they travel inside the reactor material. Fast and thermal fission reactions are considered, as well as neutron capture and elastic scattering, and the second moment of the number of fissions, the number of neutrons that leaked out of the system, etc. are calculated. Both models use a cylindrical core with a homogeneous mixture of core material. Our results indicate that the number dependence of the energy

  7. Accelerator driven reactors, - the significance of the energy distribution of spallation neutrons on the neutron statistics

    International Nuclear Information System (INIS)

    Fhager, V.

    2000-01-01

    In order to make correct predictions of the second moment of statistical nuclear variables, such as the number of fissions and the number of thermalized neutrons, the dependence of the energy distribution of the source particles on their number should be considered. It has been pointed out recently that neglecting this number dependence in accelerator driven systems might result in bad estimates of the second moment, and this paper contains qualitative and quantitative estimates of the size of these effects. We proceed towards the requested results in two steps. First, models of the number-dependent energy distributions of the neutrons that are ejected in the spallation reactions are constructed, both by simple assumptions and by extracting energy distributions of spallation neutrons from a high-energy particle transport code. Then, the second moment of nuclear variables in a sub-critical reactor, into which spallation neutrons are injected, is calculated. The results from second-moment calculations using number-dependent energy distributions for the source neutrons are compared to those where only the average energy distribution is used. Two physical models are employed to simulate the neutron transport in the reactor. One is analytical, treating only slowing down of neutrons by elastic scattering in the core material. For this model, equations are written down and solved for the second moment of thermalized neutrons that include the distribution of energy of the spallation neutrons. The other model utilizes Monte Carlo methods for tracking the source neutrons as they travel inside the reactor material. Fast and thermal fission reactions are considered, as well as neutron capture and elastic scattering, and the second moment of the number of fissions, the number of neutrons that leaked out of the system, etc. are calculated. Both models use a cylindrical core with a homogeneous mixture of core material. Our results indicate that the number dependence of the energy

  8. Testing for Statistical Discrimination based on Gender

    DEFF Research Database (Denmark)

    Lesner, Rune Vammen

    This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure but increases in job transitions and that the fraction of women in high-ranking positions within a firm does not affect the level of statistical discrimination by gender.

  9. The increasing financial impact of chronic kidney disease in australia.

    Science.gov (United States)

    Tucker, Patrick S; Kingsley, Michael I; Morton, R Hugh; Scanlan, Aaron T; Dalbo, Vincent J

    2014-01-01

    The aim of this investigation was to determine and compare current and projected expenditure associated with chronic kidney disease (CKD), renal replacement therapy (RRT), and cardiovascular disease (CVD) in Australia. Data published by the Australia and New Zealand Dialysis and Transplant Registry, the Australian Institute of Health and Welfare, and the World Bank were used to compare CKD-, RRT-, and CVD-related expenditure and prevalence rates. Prevalence and expenditure predictions were made using a linear regression model. Direct statistical comparisons of rates of annual increase utilised indicator variables in combined regressions. Statistical significance was set at P < 0.05. The analyses indicated a growing financial burden of CKD and RRT on Australia's healthcare system, compared to CVD. Research focusing on novel preventative/therapeutic interventions is warranted.
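    Projecting expenditure with a linear regression model, as the abstract describes, reduces to fitting spend = b0 + b1·year by least squares and extrapolating. A sketch on hypothetical figures (the registry's actual expenditure series is not reproduced here):

    ```python
    import numpy as np

    # Hypothetical annual expenditure series (AUD millions); illustrative
    # only, not the study's registry figures.
    years = np.arange(2005, 2013)
    spend = np.array([540, 575, 620, 655, 700, 745, 780, 830], dtype=float)

    # Fit spend = b0 + b1 * year by least squares and project forward.
    b1, b0 = np.polyfit(years, spend, 1)   # slope first, then intercept
    project = lambda yr: b0 + b1 * yr
    print(f"annual increase ~ {b1:.1f} M/yr; projected 2020: {project(2020):.0f} M")
    ```

    Comparing two diseases' rates of annual increase, as the study does, would add an indicator variable and an interaction term to a combined regression rather than fitting each series separately.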

  10. Statistical Compression for Climate Model Output

    Science.gov (United States)

    Hammerling, D.; Guinness, J.; Soh, Y. J.

    2017-12-01

    Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing the full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured, while allowing for fast decompression and conditional emulation on modest computers.
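    The compress/decompress cycle described above can be sketched in miniature: store block means plus a noise scale as the summary statistics, then decompress either to the conditional expectation (smooth) or to a conditional simulation (smooth plus realistic small-scale noise). This toy version assumes independent within-block noise, far simpler than the nonstationary spatial model the authors describe:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def compress(field, block=10):
        """Store block means and a pooled within-block standard deviation."""
        blocks = field.reshape(-1, block)
        return blocks.mean(axis=1), blocks.std(axis=1).mean()

    def decompress(means, sd, block=10, simulate=False):
        """Conditional expectation (smooth) or conditional simulation
        (adds realistic noise) given the stored summary statistics."""
        smooth = np.repeat(means, block)
        if simulate:
            return smooth + rng.normal(0, sd, smooth.size)
        return smooth

    # Toy 'daily temperature' series: seasonal cycle plus weather noise.
    t = np.arange(360)
    field = 15 + 10 * np.sin(2 * np.pi * t / 360) + rng.normal(0, 1.5, 360)

    means, sd = compress(field)              # 37 stored values instead of 360
    expectation = decompress(means, sd)      # oversmoothed best estimate
    simulation = decompress(means, sd, simulate=True)
    rmse = np.sqrt(np.mean((expectation - field) ** 2))
    print(f"stored {means.size + 1} values; reconstruction RMSE {rmse:.2f}")
    ```

    The conditional expectation minimises reconstruction error but is visibly smoother than the original; the simulation restores the right roughness at the cost of pointwise accuracy, which is exactly the trade-off the abstract describes.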

  11. Parsing statistical machine translation output

    NARCIS (Netherlands)

    Carter, S.; Monz, C.; Vetulani, Z.

    2009-01-01

    Despite increasing research into the use of syntax during statistical machine translation, the incorporation of syntax into language models has seen limited success. We present a study of the discriminative abilities of generative syntax-based language models, over and above standard n-gram models,

  12. Software Used to Generate Cancer Statistics - SEER Cancer Statistics

    Science.gov (United States)

    Videos that highlight topics and trends in cancer statistics and definitions of statistical terms. Also software tools for analyzing and reporting cancer statistics, which are used to compile SEER's annual reports.

  13. Culture of Fear and Control in Costa Rica (I). Crime Statistics and Law Enforcement

    OpenAIRE

    Huhn, Sebastian

    2009-01-01

    The Costa Rican talk of crime is fundamentally based on the assumption that crime rates have increased significantly in recent years and that there is today a vast and alarming amount of crime. On the basis of this assumption, fear of crime, the call for the “iron fist,” and drastic law enforcement actions are continually increasing. While crime statistics are the logical basis for the hypothesis on the far-reaching extent of delinquency, they are used in a problematic way in the talk of crim...

  14. The use and misuse of statistical methodologies in pharmacology research.

    Science.gov (United States)

    Marino, Michael J

    2014-01-01

    Descriptive, exploratory, and inferential statistics are necessary components of hypothesis-driven biomedical research. Despite the ubiquitous need for these tools, the emphasis on statistical methods in pharmacology has become dominated by inferential methods often chosen more by the availability of user-friendly software than by any understanding of the data set or the critical assumptions of the statistical tests. Such frank misuse of statistical methodology and the quest to reach the mystical α < 0.05 threshold reflect, at least in part, gaps in statistical training. Perhaps more critically, a poor understanding of statistical tools limits the conclusions that may be drawn from a study by divorcing the investigator from their own data. The net result is a decrease in quality and confidence in research findings, fueling recent controversies over the reproducibility of high-profile findings and effects that appear to diminish over time. The recent development of "omics" approaches leading to the production of massive higher dimensional data sets has amplified these issues, making it clear that new approaches are needed to appropriately and effectively mine this type of data. Unfortunately, statistical education in the field has not kept pace. This commentary provides a foundation for an intuitive understanding of statistics that fosters an exploratory approach and an appreciation for the assumptions of various statistical tests that hopefully will increase the correct use of statistics, the application of exploratory data analysis, and the use of statistical study design, with the goal of increasing reproducibility and confidence in the literature. Copyright © 2013. Published by Elsevier Inc.

  15. Understanding Statistics and Statistics Education: A Chinese Perspective

    Science.gov (United States)

    Shi, Ning-Zhong; He, Xuming; Tao, Jian

    2009-01-01

    In recent years, statistics education in China has made great strides. However, there still exists a fairly large gap with the advanced levels of statistics education in more developed countries. In this paper, we identify some existing problems in statistics education in Chinese schools and make some proposals as to how they may be overcome. We…

  16. [Significant increase in the colonisation of Staphylococcus aureus among medical students during their hospital practices].

    Science.gov (United States)

    Rodríguez-Avial, Carmen; Alvarez-Novoa, Andrea; Losa, Azucena; Picazo, Juan J

    2013-10-01

    Staphylococcus aureus is a pathogen of major concern. The emergence of methicillin-resistant S. aureus (MRSA) has increasingly complicated the therapeutic approach to hospital-acquired infections. Surveillance of MRSA and control measures must be implemented in different healthcare settings, including screening programs for carriers. Our first aim was to determine the prevalence of methicillin-susceptible S. aureus (MSSA) and MRSA nasal carriage in medical students from the Clínico San Carlos Hospital (Madrid). As the MRSA carrier rate in healthcare workers is higher than in the general population, we hypothesised that the carrier rate could increase during the clinical practice of their last three years. We performed an epidemiological study of the prevalence of S. aureus colonisation among a group of medical students, who were sampled in 2008 in their third year, and in 2012 when this class was in its sixth year. We found a significant increase in MSSA carriage, from 27% to 46%. There were no MRSA colonisations in the third-year group, but one was found in the sixth-year group. The large majority of strains (89%) were resistant to penicillin, and 27% to erythromycin and clindamycin. As 19 methicillin-resistant coagulase-negative Staphylococcus isolates were also identified, a horizontal transfer of genes, such as the mecA gene, to S. aureus could have occurred. Medical students are both at risk of acquiring, and a potential source of, nosocomial pathogens, mainly MSSA. They should therefore take special care with hygienic precautions, such as frequent and proper hand washing, while working in the hospital. Copyright © 2012 Elsevier España, S.L. All rights reserved.

  17. 9th Symposium on Computational Statistics

    CERN Document Server

    Mildner, Vesna

    1990-01-01

    Although no-one is, probably, too enthused about the idea, it is a fact that the development of most empirical sciences to a great extent depends on the development of data analysis methods and techniques, which, due to the necessity of applying computers for that purpose, means that it practically depends on the advancement and orientation of computational statistics. Every other year the International Association for Statistical Computing sponsors the organization of meetings of individuals professionally involved in computational statistics. Since these meetings attract professionals from all over the world, they are a good sample for the estimation of trends in this area, which some believe is statistics proper while others claim it is computer science. It seems, though, that an increasing number of colleagues treat it as an independent scientific or at least technical discipline. This volume contains six invited papers, 41 contributed papers and, finally, two papers which are, formally, softwa...

  18. Mathematical statistics

    CERN Document Server

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  19. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: An SPSS method to analyze univariate data

    NARCIS (Netherlands)

    Maric, M.; de Haan, M.; Hogendoorn, S.M.; Wolters, L.H.; Huizenga, H.M.

    2015-01-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a

  20. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: an SPSS method to analyze univariate data

    NARCIS (Netherlands)

    Maric, Marija; de Haan, Else; Hogendoorn, Sanne M.; Wolters, Lidewij H.; Huizenga, Hilde M.

    2015-01-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a

  1. Statistical inference and visualization in scale-space for spatially dependent images

    KAUST Repository

    Vaughan, Amy

    2012-03-01

    SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests for spatially dependent images. The spatial SiZer utilizes a family of kernel estimates of the image and provides not only exploratory data analysis but also statistical inference with spatial correlation taken into account. It is also capable of comparing the observed image with a specific null model being tested by adjusting the statistical inference using an assumed covariance structure. Pixel locations having statistically significant differences between the image and a given null model are highlighted by arrows. The spatial SiZer is compared with the existing independent SiZer via the analysis of simulated data with and without signal on both planar and spherical domains. We apply the spatial SiZer method to the decadal temperature change over some regions of the Earth. © 2011 The Korean Statistical Society.
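    The core SiZer idea, flagging locations where a kernel estimate of the derivative differs significantly from zero at a given bandwidth, can be sketched in one dimension. This version assumes independent noise with known variance and an evenly spaced design; the paper's contribution is precisely to account for spatial correlation, which this sketch omits:

    ```python
    import numpy as np

    def sizer_row(x, y, sigma2, h, z=1.96):
        """One SiZer scale-space row: at bandwidth h, flag where the kernel
        derivative estimate differs significantly from zero.
        Returns +1 (sig. increasing), -1 (sig. decreasing), 0 (flat)."""
        flags = np.zeros(len(x), dtype=int)
        for i, x0 in enumerate(x):
            w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
            wd = w * (x - x0) / h**2                 # derivative-of-kernel weights
            denom = w.sum()
            # Local derivative estimate (unbiased for a locally linear trend
            # in the interior of an evenly spaced design) and its variance
            # under independent noise of variance sigma2.
            deriv = np.sum(wd * y) / denom
            var = sigma2 * np.sum(wd ** 2) / denom ** 2
            if abs(deriv) > z * np.sqrt(var):
                flags[i] = 1 if deriv > 0 else -1
        return flags

    rng = np.random.default_rng(7)
    x = np.linspace(0, 1, 200)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, 200)
    flags = sizer_row(x, y, sigma2=0.01, h=0.1)
    print("significantly increasing at", (flags == 1).sum(), "locations")
    ```

    A full SiZer map repeats this row over a range of bandwidths h, giving the scale-space picture; the spatial SiZer further replaces the independent-noise variance with one derived from an assumed covariance structure.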

  2. Increasing vaginal progesterone gel supplementation after frozen-thawed embryo transfer significantly increases the delivery rate

    DEFF Research Database (Denmark)

    Alsbjerg, Birgit; Polyzos, Nikolaos P; Elbaek, Helle Olesen

    2013-01-01

    The aim of this study was to evaluate the reproductive outcome in patients receiving frozen-thawed embryo transfer before and after doubling of the vaginal progesterone gel supplementation. The study was a retrospective study performed in The Fertility Clinic, Skive Regional Hospital, Denmark. A total of 346 infertility patients with oligoamenorrhoea undergoing frozen-thawed embryo transfer after priming with oestradiol and vaginal progesterone gel were included. The vaginal progesterone dose was changed from 90 mg (Crinone) once a day to twice a day, and the reproductive outcome during the two periods was compared. The early pregnancy loss rate differed significantly between the two regimens (8.7% versus 20.5%, respectively; P=0.002). Doubling of the vaginal progesterone gel supplementation during frozen-thawed embryo transfer cycles decreased the early pregnancy loss rate, resulting in a significantly higher delivery rate. This study evaluated the reproductive outcome of 346 women

  3. The Role of Statistics in Business and Industry

    CERN Document Server

    Hahn, Gerald J

    2011-01-01

    An insightful guide to the use of statistics for solving key problems in modern-day business and industry This book has been awarded the Technometrics Ziegel Prize for the best book reviewed by the journal in 2010. Technometrics is a journal of statistics for the physical, chemical and engineering sciences, published jointly by the American Society for Quality and the American Statistical Association. Criteria for the award include that the book brings together in one volume a body of material previously only available in scattered research articles and having the potential to significantly im

  4. Sampling, Probability Models and Statistical Reasoning: Statistical Inference

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 1, Issue 5. Sampling, Probability Models and Statistical Reasoning: Statistical Inference. Mohan Delampady, V R Padmawar. General Article, May 1996, pp 49-58.

  5. Statistical Symbolic Execution with Informed Sampling

    Science.gov (United States)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

    Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that the informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
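The Bayesian estimation step can be illustrated with a minimal sketch (my own illustration, not the Symbolic PathFinder implementation): Monte Carlo samples of program inputs stand in for sampled symbolic paths, and a conjugate Beta(1, 1) prior on the probability of reaching the target event is updated with the observed hit count. The toy program and its threshold are hypothetical.

```python
import random

def hits_target(x, y):
    # Hypothetical program event of interest, e.g. an assert violation.
    return x + y > 150

def estimate_event_probability(n_samples=10_000, seed=0):
    """Monte Carlo sampling with Bayesian updating of a Beta(1, 1) prior
    on the probability of reaching the target event."""
    rng = random.Random(seed)
    hits = sum(hits_target(rng.randint(0, 99), rng.randint(0, 99))
               for _ in range(n_samples))
    # Conjugate update: the posterior is Beta(1 + hits, 1 + misses).
    alpha, beta = 1 + hits, 1 + (n_samples - hits)
    return alpha / (alpha + beta)  # posterior mean

p = estimate_event_probability()
# The exact event probability here is 1176/10000 = 0.1176; the posterior
# mean should land close to it, and tightens as n_samples grows.
```

Informed sampling would additionally prune exactly-analyzed high-probability paths and renormalize the sampler over the remainder; the sketch above shows only the plain statistical estimate.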

  6. Statistical shape analysis with applications in R

    CERN Document Server

    Dryden, Ian L

    2016-01-01

    A thoroughly revised and updated edition of this introduction to modern statistical methods for shape analysis Shape analysis is an important tool in the many disciplines where objects are compared using geometrical features. Examples include comparing brain shape in schizophrenia; investigating protein molecules in bioinformatics; and describing growth of organisms in biology. This book is a significant update of the highly-regarded `Statistical Shape Analysis’ by the same authors. The new edition lays the foundations of landmark shape analysis, including geometrical concepts and statistical techniques, and extends to include analysis of curves, surfaces, images and other types of object data. Key definitions and concepts are discussed throughout, and the relative merits of different approaches are presented. The authors have included substantial new material on recent statistical developments and offer numerous examples throughout the text. Concepts are introduced in an accessible manner, while reta...

  7. Ontologies and tag-statistics

    Science.gov (United States)

    Tibély, Gergely; Pollner, Péter; Vicsek, Tamás; Palla, Gergely

    2012-05-01

    Due to the increasing popularity of collaborative tagging systems, the research on tagged networks, hypergraphs, ontologies, folksonomies and other related concepts is becoming an important interdisciplinary area with great potential and relevance for practical applications. In most collaborative tagging systems the tagging by the users is completely ‘flat’, while in some cases they are allowed to define a shallow hierarchy for their own tags. However, usually no overall hierarchical organization of the tags is given, and one of the interesting challenges of this area is to provide an algorithm generating the ontology of the tags from the available data. In contrast, there are also other types of tagged networks available for research, where the tags are already organized into a directed acyclic graph (DAG), encapsulating the ‘is a sub-category of’ type of hierarchy between each other. In this paper, we study how this DAG affects the statistical distribution of tags on the nodes marked by the tags in various real networks. The motivation for this research was the fact that understanding the tagging based on a known hierarchy can help in revealing the hidden hierarchy of tags in collaborative tagging systems. We analyse the relation between the tag-frequency and the position of the tag in the DAG in two large sub-networks of the English Wikipedia and a protein-protein interaction network. We also study the tag co-occurrence statistics by introducing a two-dimensional (2D) tag-distance distribution preserving both the difference in the levels and the absolute distance in the DAG for the co-occurring pairs of tags. Our most interesting finding is that the local relevance of tags in the DAG (i.e. their rank or significance as characterized by, e.g., the length of the branches starting from them) is much more important than their global distance from the root. Furthermore, we also introduce a simple tagging model based on random walks on the DAG, capable of

  8. Ontologies and tag-statistics

    International Nuclear Information System (INIS)

    Tibély, Gergely; Vicsek, Tamás; Pollner, Péter; Palla, Gergely

    2012-01-01

    Due to the increasing popularity of collaborative tagging systems, the research on tagged networks, hypergraphs, ontologies, folksonomies and other related concepts is becoming an important interdisciplinary area with great potential and relevance for practical applications. In most collaborative tagging systems the tagging by the users is completely ‘flat’, while in some cases they are allowed to define a shallow hierarchy for their own tags. However, usually no overall hierarchical organization of the tags is given, and one of the interesting challenges of this area is to provide an algorithm generating the ontology of the tags from the available data. In contrast, there are also other types of tagged networks available for research, where the tags are already organized into a directed acyclic graph (DAG), encapsulating the ‘is a sub-category of’ type of hierarchy between each other. In this paper, we study how this DAG affects the statistical distribution of tags on the nodes marked by the tags in various real networks. The motivation for this research was the fact that understanding the tagging based on a known hierarchy can help in revealing the hidden hierarchy of tags in collaborative tagging systems. We analyse the relation between the tag-frequency and the position of the tag in the DAG in two large sub-networks of the English Wikipedia and a protein-protein interaction network. We also study the tag co-occurrence statistics by introducing a two-dimensional (2D) tag-distance distribution preserving both the difference in the levels and the absolute distance in the DAG for the co-occurring pairs of tags. Our most interesting finding is that the local relevance of tags in the DAG (i.e. their rank or significance as characterized by, e.g., the length of the branches starting from them) is much more important than their global distance from the root. Furthermore, we also introduce a simple tagging model based on random walks on the DAG, capable of

  9. Statistics

    International Nuclear Information System (INIS)

    2005-01-01

    For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time-series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004.) The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity GWh, Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees

  10. Targeting Change: Assessing a Faculty Learning Community Focused on Increasing Statistics Content in Life Science Curricula

    Science.gov (United States)

    Parker, Loran Carleton; Gleichsner, Alyssa M.; Adedokun, Omolola A.; Forney, James

    2016-01-01

    Transformation of research in all biological fields necessitates the design, analysis, and interpretation of large data sets. Preparing students with the requisite skills in experimental design, statistical analysis and interpretation, and mathematical reasoning will require both curricular reform and faculty who are willing and able to integrate…

  11. Endogenous and exogenous testosterone and prostate cancer: decreased-, increased- or null-risk?

    Science.gov (United States)

    Lopez, David S; Advani, Shailesh; Tsilidis, Konstantinos K; Wang, Run; Canfield, Steven

    2017-06-01

    For more than 70 years, the contention that high levels of testosterone or that the use of testosterone therapy (TTh) increases the development and progression of prostate cancer (PCa) has been widely accepted and practiced. Yet, the increasing and emerging evidence on testosterone research seems to challenge that contention. To review literature on the associations of endogenous and exogenous testosterone with decreased-, increased-, or null-risk of PCa, and to further evaluate only those studies that reported magnitude of associations from multivariable modeling as it minimizes confounding effects. We conducted a literature search to identify studies that investigated the association of endogenous total testosterone [continuous (per 1 unit increment and 5 nmol/L increment) and categorical (high vs. low)] and use of TTh with PCa events [1990-2016]. Emphasis was given to studies/analyses that reported magnitude of associations [odds ratio (OR), relative risk (RR) and hazard ratios (HRs)] from multivariable analyses to determine risk of PCa and their statistical significance. Most identified studies/analyses included observational and randomized placebo-controlled trials. This review was organized in three parts: (I) association of endogenous total testosterone (per 1 unit increment and 5 nmol/L increment) with PCa; (II) relationship of endogenous total testosterone (categorical high vs. low) with PCa; and (III) association of use of TTh with PCa in meta-analyses of randomized placebo-controlled trials. The first part included 31 observational studies [20 prospective (per 5 nmol/L increment) and 11 prospective and retrospective cohort studies (per 1 unit increment)]. None of the 20 prospective studies found a significant association between total testosterone (5 nmol/L increment) and increased- or decreased-risk of PCa. Two out of the 11 studies/analyses showed a significant decreased-risk of PCa for total testosterone per 1 unit increment, but also two other

  12. One stone, two birds: silica nanospheres significantly increase photocatalytic activity and colloidal stability of photocatalysts

    Science.gov (United States)

    Rasamani, Kowsalya D.; Foley, Jonathan J., IV; Sun, Yugang

    2018-03-01

    Silver-doped silver chloride [AgCl(Ag)] nanoparticles represent a unique class of visible-light-driven photocatalysts, in which the silver dopants introduce electron-abundant mid-gap energy levels to lower the bandgap of AgCl. However, free-standing AgCl(Ag) nanoparticles, particularly those with small sizes and large surface areas, exhibit low colloidal stability and low compositional stability upon exposure to light irradiation, leading to easy aggregation and conversion to metallic silver and thus a loss of photocatalytic activity. These problems could be eliminated by attaching the small AgCl(Ag) nanoparticles to the surfaces of spherical dielectric silica particles with submicrometer sizes. The high optical transparency in the visible spectral region (400-800 nm), colloidal stability, and chemical/electronic inertness displayed by the silica spheres make them ideal for supporting photocatalysts and significantly improving their stability. The spherical morphology of the dielectric silica particles can support light scattering resonances to generate significantly enhanced electric fields near the silica particle surfaces, on which the optical absorption cross-section of the AgCl(Ag) nanoparticles is dramatically increased to promote their photocatalytic activity. The hybrid silica/AgCl(Ag) structures exhibit superior photocatalytic activity and stability, suitable for supporting photocatalysis sustainably; for instance, their efficiency in the photocatalytic decomposition of methylene blue decreases by only ~9% even after ten cycles of operation.

  13. The Statistics of Health and Longevity

    DEFF Research Database (Denmark)

    Zarulli, Virginia

    Increases in human longevity have made it critical to distinguish healthy longevity from longevity without regard to health. We present a new method for calculating the statistics of healthy longevity which extends, in several directions, current calculations of health expectancy (HE) and disability-adjusted life years (DALYs), from data on prevalence of health conditions. Current methods focus on binary conditions (e.g., disabled or not disabled) or on categorical classifications (e.g. in good, poor, or very bad health) and report only expectations. Our method, based on Markov chain theory…
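The Markov chain machinery behind such calculations can be sketched in miniature (the states and transition probabilities below are hypothetical, not the authors' model): with transient states "healthy" and "disabled" and an absorbing "dead" state, the fundamental matrix N = (I − Q)⁻¹ gives the expected years spent in each health state before death.

```python
def healthy_longevity(q_hh, q_hd, q_dh, q_dd):
    """Expected years spent in each transient state, starting healthy, for a
    two-state (healthy/disabled) Markov chain with an absorbing death state.
    Q holds yearly transition probabilities among transient states; the
    fundamental matrix N = (I - Q)^(-1) is inverted here in closed 2x2 form."""
    a, b = 1.0 - q_hh, -q_hd
    c, d = -q_dh, 1.0 - q_dd
    det = a * d - b * c
    # First row of N: expected visits (years) in each state, starting healthy.
    years_healthy = d / det
    years_disabled = -b / det
    return years_healthy, years_disabled

# Hypothetical yearly transitions (the remainder of each row goes to death).
healthy, disabled = healthy_longevity(q_hh=0.85, q_hd=0.10, q_dh=0.05, q_dd=0.80)
# healthy life expectancy: 8 years; total life expectancy: 8 + 4 = 12 years
```

Reporting `years_healthy` alongside the total is exactly the distinction between healthy longevity and longevity without regard to health; the paper's method generalizes this beyond binary states and beyond expectations.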

  14. Frog Statistics

    Science.gov (United States)

    Whole Frog Project and Virtual Frog Dissection statistics: wwwstats output for January 1 through …, with duplicate or extraneous accesses filtered. For example, in these statistics, … a POST requesting an image is … as well. Note that this under-represents the bytes requested. Starting date for following statistics …

  15. Teaching biology through statistics: application of statistical methods in genetics and zoology courses.

    Science.gov (United States)

    Colon-Berlingeri, Migdalisel; Burrowes, Patricia A

    2011-01-01

    Incorporation of mathematics into biology curricula is critical to underscore for undergraduate students the relevance of mathematics to most fields of biology and the usefulness of developing quantitative process skills demanded in modern biology. At our institution, we have made significant changes to better integrate mathematics into the undergraduate biology curriculum. The curricular revision included changes in the suggested course sequence, addition of statistics and precalculus as prerequisites to core science courses, and incorporation of interdisciplinary (math-biology) learning activities in genetics and zoology courses. In this article, we describe the activities developed for these two courses and the assessment tools used to measure the learning that took place with respect to biology and statistics. We assessed the effectiveness of these learning opportunities in helping students improve their understanding of the math and statistical concepts addressed and, more importantly, their ability to apply them to solve a biological problem. We also identified areas that need emphasis in both biology and mathematics courses. In light of our observations, we recommend best practices that biology and mathematics academic departments can implement to train undergraduates for the demands of modern biology.

  16. Myriocin significantly increases the mortality of a non-mammalian model host during Candida pathogenesis.

    Directory of Open Access Journals (Sweden)

    Nadja Rodrigues de Melo

    Candida albicans is a major human pathogen whose treatment is challenging due to antifungal drug toxicity, drug resistance and the paucity of available antifungal agents. Myriocin (MYR) inhibits synthesis of sphingosine, a precursor of sphingolipids, an important cell membrane and signaling molecule component. MYR also has dual immunosuppressive and antifungal properties, potentially modulating mammalian immunity and simultaneously reducing fungal infection risk. Wax moth (Galleria mellonella) larvae, alternatives to mice, were used to establish whether MYR suppressed insect immunity and increased survival of C. albicans-infected insects. MYR effects were studied in vivo and in vitro, and compared alone and combined with those of the approved antifungal drugs fluconazole (FLC) and amphotericin B (AMPH). Insect immune defenses failed to inhibit C. albicans, with high mortalities. In insects pretreated with the drug followed by C. albicans inoculation, MYR+C. albicans significantly increased mortality to 93% from 67% with C. albicans alone 48 h post-infection, whilst AMPH+C. albicans and FLC+C. albicans showed only 26% and 0% mortalities, respectively. MYR combinations with other antifungal drugs in vivo also enhanced larval mortalities, contrasting with the synergistic antifungal effect of the MYR+AMPH combination in vitro. MYR treatment influenced immunity and stress-management gene expression during C. albicans pathogenesis, modulating transcripts putatively associated with signal transduction/regulation of cytokines, the I-kappaB kinase/NF-kappaB cascade, G-protein-coupled receptors and inflammation. In contrast, all stress-management gene expression was down-regulated in FLC- and AMPH-pretreated C. albicans-infected insects. Results are discussed with their implications for clinical use of MYR to treat sphingolipid-associated disorders.

  17. The clinical significances of the abnormal expressions of Piwil1 and Piwil2 in colonic adenoma and adenocarcinoma

    Directory of Open Access Journals (Sweden)

    Wang HL

    2015-05-01

    Hai-Ling Wang,1 Bei-Bei Chen,1 Xin-Guang Cao,1 Jin Wang,2 Xiu-Feng Hu,1 Xiao-Qian Mu,1 Xiao-Bing Chen1 1The Affiliated Cancer Hospital of Zhengzhou University, Henan Cancer Hospital, Zhengzhou, People’s Republic of China; 2The First Affiliated Hospital of Zhengzhou University, Zhengzhou, People’s Republic of China Objective: To study the clinical significance of the abnormal expression of Piwil1 and Piwil2 protein in colonic adenoma and adenocarcinoma. Methods: Immunohistochemistry was applied to detect Piwil1 and Piwil2 protein expression levels in 45 cases of tissue adjacent to carcinoma (distance to cancerous tissue above 5 cm), 41 cases of colonic adenoma and 92 cases of colon cancer tissue. Analysis: The correlation between the two expressions and their relationship with clinicopathological features of colon cancer was analyzed. Results: Positive expression rates of Piwil1 in tissue adjacent to carcinoma, colonic adenoma, and colon cancer were 11.1% (5/45), 53.7% (22/41), and 80.4% (74/92), respectively; the expression rates increased, and the comparisons between each two groups were statistically significant (P<0.05). In each group, the positive expression rates of Piwil2 were 24.4% (11/45), 75.6% (31/41), and 92.4% (85/92); expression rates increased, and the comparisons between each two groups were statistically significant (P<0.05). The correlations of Piwil1 expression with degree of differentiation, TNM stage, and lymph node metastasis were statistically significant (P<0.05). The correlations of Piwil2 expression with degree of differentiation, tumor node metastasis (TNM) stage, and lymph node metastasis were not statistically significant (P>0.05). In colon cancer tissue, Piwil1 and Piwil2 expressions were positively correlated (r=0.262, P<0.05). Conclusion: The results showed that the abnormal expression of Piwil1 and Piwil2 might play an important role in
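The pairwise rate comparisons reported in abstracts like this are typically Pearson chi-square tests on 2×2 tables. As an illustrative sketch (my own, not the authors' code), the adenoma-versus-carcinoma comparison for Piwil1 can be checked from the counts given above:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Piwil1-positive / -negative counts: adenoma 22 of 41, carcinoma 74 of 92.
chi2 = chi_square_2x2(22, 41 - 22, 74, 92 - 74)
significant = chi2 > 3.841  # 5% critical value for 1 degree of freedom
# chi2 is about 10.1, so this pairwise difference is significant at P < 0.05,
# consistent with the abstract.
```

For small expected cell counts one would switch to Yates' correction or Fisher's exact test; the plain statistic is enough to reproduce the qualitative conclusion here.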

  18. The adipokine leptin increases skeletal muscle mass and significantly alters skeletal muscle miRNA expression profile in aged mice

    International Nuclear Information System (INIS)

    Hamrick, Mark W.; Herberg, Samuel; Arounleut, Phonepasong; He, Hong-Zhi; Shiver, Austin; Qi, Rui-Qun; Zhou, Li; Isales, Carlos M.

    2010-01-01

    Research highlights: → Aging is associated with muscle atrophy and loss of muscle mass, known as the sarcopenia of aging. → We demonstrate that age-related muscle atrophy is associated with marked changes in miRNA expression in muscle. → Treating aged mice with the adipokine leptin significantly increased muscle mass and the expression of miRNAs involved in muscle repair. → Recombinant leptin therapy may therefore be a novel approach for treating age-related muscle atrophy. -- Abstract: Age-associated loss of muscle mass, or sarcopenia, contributes directly to frailty and an increased risk of falls and fractures among the elderly. Aged mice and elderly adults both show decreased muscle mass as well as relatively low levels of the fat-derived hormone leptin. Here we demonstrate that loss of muscle mass and myofiber size with aging in mice is associated with significant changes in the expression of specific miRNAs. Aging altered the expression of 57 miRNAs in mouse skeletal muscle, and many of these miRNAs are now reported to be associated specifically with age-related muscle atrophy. These include miR-221, previously identified in studies of myogenesis and muscle development as playing a role in the proliferation and terminal differentiation of myogenic precursors. We also treated aged mice with recombinant leptin, to determine whether leptin therapy could improve muscle mass and alter the miRNA expression profile of aging skeletal muscle. Leptin treatment significantly increased hindlimb muscle mass and extensor digitorum longus fiber size in aged mice. Furthermore, the expression of 37 miRNAs was altered in muscles of leptin-treated mice. In particular, leptin treatment increased the expression of miR-31 and miR-223, miRNAs known to be elevated during muscle regeneration and repair. These findings suggest that aging in skeletal muscle is associated with marked changes in the expression of specific miRNAs, and that nutrient-related hormones such as leptin

  19. The adipokine leptin increases skeletal muscle mass and significantly alters skeletal muscle miRNA expression profile in aged mice

    Energy Technology Data Exchange (ETDEWEB)

    Hamrick, Mark W., E-mail: mhamrick@mail.mcg.edu [Department of Cellular Biology and Anatomy, Institute of Molecular Medicine and Genetics, Medical College of Georgia, Augusta, GA (United States); Department of Orthopaedic Surgery, Institute of Molecular Medicine and Genetics, Medical College of Georgia, Augusta, GA (United States); Herberg, Samuel; Arounleut, Phonepasong [Department of Cellular Biology and Anatomy, Institute of Molecular Medicine and Genetics, Medical College of Georgia, Augusta, GA (United States); Department of Orthopaedic Surgery, Institute of Molecular Medicine and Genetics, Medical College of Georgia, Augusta, GA (United States); He, Hong-Zhi [Henry Ford Immunology Program, Henry Ford Health System, Detroit, MI (United States); Department of Dermatology, Henry Ford Health System, Detroit, MI (United States); Shiver, Austin [Department of Cellular Biology and Anatomy, Institute of Molecular Medicine and Genetics, Medical College of Georgia, Augusta, GA (United States); Department of Orthopaedic Surgery, Institute of Molecular Medicine and Genetics, Medical College of Georgia, Augusta, GA (United States); Qi, Rui-Qun [Henry Ford Immunology Program, Henry Ford Health System, Detroit, MI (United States); Department of Dermatology, Henry Ford Health System, Detroit, MI (United States); Zhou, Li [Henry Ford Immunology Program, Henry Ford Health System, Detroit, MI (United States); Department of Dermatology, Henry Ford Health System, Detroit, MI (United States); Department of Internal Medicine, Henry Ford Health System, Detroit, MI (United States); Isales, Carlos M. [Department of Cellular Biology and Anatomy, Institute of Molecular Medicine and Genetics, Medical College of Georgia, Augusta, GA (United States); Department of Orthopaedic Surgery, Institute of Molecular Medicine and Genetics, Medical College of Georgia, Augusta, GA (United States); and others

    2010-09-24

    Research highlights: → Aging is associated with muscle atrophy and loss of muscle mass, known as the sarcopenia of aging. → We demonstrate that age-related muscle atrophy is associated with marked changes in miRNA expression in muscle. → Treating aged mice with the adipokine leptin significantly increased muscle mass and the expression of miRNAs involved in muscle repair. → Recombinant leptin therapy may therefore be a novel approach for treating age-related muscle atrophy. -- Abstract: Age-associated loss of muscle mass, or sarcopenia, contributes directly to frailty and an increased risk of falls and fractures among the elderly. Aged mice and elderly adults both show decreased muscle mass as well as relatively low levels of the fat-derived hormone leptin. Here we demonstrate that loss of muscle mass and myofiber size with aging in mice is associated with significant changes in the expression of specific miRNAs. Aging altered the expression of 57 miRNAs in mouse skeletal muscle, and many of these miRNAs are now reported to be associated specifically with age-related muscle atrophy. These include miR-221, previously identified in studies of myogenesis and muscle development as playing a role in the proliferation and terminal differentiation of myogenic precursors. We also treated aged mice with recombinant leptin, to determine whether leptin therapy could improve muscle mass and alter the miRNA expression profile of aging skeletal muscle. Leptin treatment significantly increased hindlimb muscle mass and extensor digitorum longus fiber size in aged mice. Furthermore, the expression of 37 miRNAs was altered in muscles of leptin-treated mice. In particular, leptin treatment increased the expression of miR-31 and miR-223, miRNAs known to be elevated during muscle regeneration and repair. These findings suggest that aging in skeletal muscle is associated with marked changes in the expression of specific miRNAs, and that nutrient

  20. Statistical learning and selective inference.

    Science.gov (United States)

    Taylor, Jonathan; Tibshirani, Robert J

    2015-06-23

    We describe the problem of "selective inference." This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have "cherry-picked"--searched for the strongest associations--means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.
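The cherry-picking effect is easy to simulate (an illustrative sketch of my own, not from the paper): under the null, a single z-statistic exceeds the usual two-sided 5% cutoff about 5% of the time, but the best of twenty does so roughly 64% of the time, which is exactly why selective inference must set a higher bar.

```python
import random

def max_z_exceeds(k, trials=20_000, z=1.96, seed=1):
    """Fraction of trials in which the largest of k null |z|-scores exceeds
    the two-sided 5% cutoff. Cherry-picking the best of k inflates this
    well past the nominal 5% level."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if max(abs(rng.gauss(0.0, 1.0)) for _ in range(k)) > z:
            hits += 1
    return hits / trials

single = max_z_exceeds(k=1)       # about 0.05, by construction
best_of_20 = max_z_exceeds(k=20)  # about 1 - 0.95**20, i.e. roughly 0.64
```

Selective-inference procedures (e.g. for forward stepwise or the lasso) compute p-values conditional on the selection event instead of pretending the winning feature was chosen in advance.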

  1. Statistical assessment of coal charge effect on metallurgical coke quality

    Directory of Open Access Journals (Sweden)

    Pavlína Pustějovská

    2016-06-01

    The paper studies coke quality. Blast furnace research has long concentrated on the iron ore charge; coke, meanwhile, received little attention because, under previous conditions, it seemed good enough. Nowadays, requirements for blast furnace coke have risen, especially requirements for coke reactivity. The level of the reactivity parameter is determined primarily by the composition and properties of the coal mixture used for coking. The paper presents a statistical analysis of the strength and character of the relationship between selected properties of the coal mixture and coke reactivity. The Statgraphics software was used for the calculations, applying both simple and multiple linear regression. The obtained regression equations provide a statistically significant prediction of the reactivity of coke (its strength after reduction of CO2), and thus allow its subsequent management through changes in the composition and properties of the coal mixture. The CSR/CRI indexes were determined for the coke. Fifty-four results were acquired in the experimental part, where correlations between the CRI index and coal components were studied. For simple linear regression the coefficient of determination was 55.0204%; between the parameters CRI and inertinite it was 21.5873%. For regression between CRI and coal components it was 31.03%, and for multiple linear regression between CRI and three feedstock components it was 34.0691%. The final correlations showed a decrease in final coke reactivity for higher ash, while a higher content of volatile combustibles in coal increases the total coke reactivity and a higher amount of inertinite in coal increases the reactivity. Generally, coke quality is significantly affected by coal processing, carbonization and the maceral content of the coal mixture.
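The simple-linear-regression machinery behind those determination coefficients can be sketched as follows (the ash/CRI numbers below are invented for illustration; the paper's data are not reproduced here):

```python
def ols_fit(xs, ys):
    """Ordinary least squares y = a + b*x, plus the coefficient of
    determination R^2 (the percentage 'determinants' quoted in such
    analyses are R^2 values)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                 # slope
    a = my - b * mx               # intercept
    ss_res = sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical coal ash content (%) versus coke reactivity index (CRI):
ash = [6.5, 7.0, 7.8, 8.1, 8.9, 9.4]
cri = [32.0, 30.8, 29.5, 29.9, 27.6, 26.8]
a, b, r2 = ols_fit(ash, cri)  # negative slope: higher ash, lower CRI here
```

Multiple linear regression extends the same least-squares idea to several coal components at once, which is how the 34.0691% figure for three feedstock components would be obtained.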

  2. Increased serum estrone and estradiol following spironolactone administration in hypertensive men

    Energy Technology Data Exchange (ETDEWEB)

    Miyatake, A; Noma, K; Nakao, K; Morimoto, Y; Yamamura, Y [Osaka Univ. (Japan). Faculty of Medicine

    1978-12-01

    This study was undertaken to evaluate long-term effects of spironolactone on basal serum estrone, estradiol, testosterone, LH and prolactin concentrations in hypertensive male patients. Serum prolactin response to TRH was also evaluated. There were two groups: (a) six males with essential hypertension given 75 - 150 mg spironolactone daily for 12 weeks, and (b) two males with idiopathic hyperaldosteronism given 300 mg daily for over 40 weeks. In the conventional-dosage group, serum estrone concentrations significantly increased (P < 0.01) at 12 weeks; serum estradiol gradually increased, but not statistically significantly (P < 0.2). Basal serum testosterone, LH and prolactin concentrations did not show significant changes. There was no increase in serum prolactin response to TRH. In the high-dosage group, serum estrone levels remained high, and serum estradiol increased with the development of gynaecomastia. Serum testosterone, LH and prolactin concentrations showed no marked changes. The elevations in circulating oestrogens could well explain the oestrogenic side-effects of spironolactone treatment.

  3. Increased serum oestrone and oestradiol following spironolactone administration in hypertensive men

    International Nuclear Information System (INIS)

    Miyatake, A.; Noma, K.; Nakao, K.; Morimoto, Y.; Yamamura, Y.

    1978-01-01

This study was undertaken to evaluate long-term effects of spironolactone on basal serum oestrone, oestradiol, testosterone, LH and prolactin concentrations in hypertensive male patients. Serum prolactin response to TRH was also evaluated. There were two groups: (a) six males with essential hypertension given 75-150 mg spironolactone daily for 12 weeks, and (b) two males with idiopathic hyperaldosteronism given 300 mg daily for over 40 weeks. In the conventional-dosage group, serum oestrone concentrations significantly increased (P < 0.01) at 12 weeks; serum oestradiol gradually increased, but not statistically significantly (P < 0.2). Basal serum testosterone, LH and prolactin concentrations did not show significant changes. There was no increase in serum prolactin response to TRH. In the high-dosage group, serum oestrone levels remained high, and serum oestradiol increased with the development of gynaecomastia. Serum testosterone, LH and prolactin concentrations showed no marked changes. The elevations in circulating oestrogens could well explain the oestrogenic side-effects of spironolactone treatment. (author)

  4. Statistics for Learning Genetics

    Science.gov (United States)

    Charles, Abigail Sheena

This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and, more directly, students' perceptions of, and performance in, doing statistically-based genetics problems. This issue is at the emerging edge of modern college-level genetics instruction, and this study attempts to identify key theoretical components for creating a specialized biological statistics curriculum. The goal of this curriculum will be to prepare biology students with the skills for assimilating quantitatively-based genetic processes, increasingly at the forefront of modern genetics. To fulfill this, two college-level classes at two universities were surveyed, one located in the northeastern US and the other in the West Indies. The sample comprised 42 students, and a supplementary interview was administered to 9 selected students. Interviews were also administered to professors in the field in order to gain insight into the teaching of statistics in genetics. Key findings indicated that most students (55%) had very little to no background in statistics. Although students did perform well on exams, with 60% of the population receiving an A or B grade, 77% of them did not offer good explanations on a probability question associated with the normal distribution provided in the survey. The scope and presentation of the applicable statistics/mathematics in some of the most-used genetics textbooks, as well as the genetics syllabi used by instructors, do not help the issue. It was found that the textbooks oftentimes either did not give effective explanations for students or completely left out certain topics. The omission of certain statistical/mathematical topics was also seen in the genetics syllabi reviewed for this study. Nonetheless

  5. Statistical reporting inconsistencies in experimental philosophy.

    Science.gov (United States)

    Colombo, Matteo; Duev, Georgi; Nuijten, Michèle B; Sprenger, Jan

    2018-01-01

    Experimental philosophy (x-phi) is a young field of research in the intersection of philosophy and psychology. It aims to make progress on philosophical questions by using experimental methods traditionally associated with the psychological and behavioral sciences, such as null hypothesis significance testing (NHST). Motivated by recent discussions about a methodological crisis in the behavioral sciences, questions have been raised about the methodological standards of x-phi. Here, we focus on one aspect of this question, namely the rate of inconsistencies in statistical reporting. Previous research has examined the extent to which published articles in psychology and other behavioral sciences present statistical inconsistencies in reporting the results of NHST. In this study, we used the R package statcheck to detect statistical inconsistencies in x-phi, and compared rates of inconsistencies in psychology and philosophy. We found that rates of inconsistencies in x-phi are lower than in the psychological and behavioral sciences. From the point of view of statistical reporting consistency, x-phi seems to do no worse, and perhaps even better, than psychological science.
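The core check that statcheck (an R package) performs can be sketched in a few lines, shown here in Python rather than R: recompute the p-value implied by a reported test statistic and its degrees of freedom, and flag a report whose stated p is inconsistent with it. The reported numbers below are invented examples, not values from the study:

```python
# Minimal statcheck-style consistency check for a reported t-test result.
# Reported values are hypothetical illustrations.
from scipy import stats

def check_t_report(t, df, reported_p, two_tailed=True, tol=0.005):
    """Return (recomputed_p, consistent?) for a reported t-test result."""
    p = 2 * stats.t.sf(abs(t), df) if two_tailed else stats.t.sf(abs(t), df)
    return p, abs(p - reported_p) < tol

p, ok = check_t_report(t=2.10, df=28, reported_p=0.045)
print(f"recomputed p = {p:.4f}, consistent: {ok}")

p2, ok2 = check_t_report(t=2.10, df=28, reported_p=0.30)  # gross mismatch
print(f"recomputed p = {p2:.4f}, consistent: {ok2}")
```

Real statcheck parses APA-formatted strings from article text and handles rounding conventions; this sketch only shows the recomputation step.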

  7. A proposal for the measurement of graphical statistics effectiveness: Does it enhance or interfere with statistical reasoning?

    International Nuclear Information System (INIS)

    Agus, M; Penna, M P; Peró-Cebollero, M; Guàrdia-Olmos, J

    2015-01-01

Numerous studies have examined students' difficulties in understanding notions related to statistical problems. Some authors have observed that presenting distinct visual representations can improve statistical reasoning, supporting the principle of graphical facilitation. Other researchers disagree with this viewpoint, emphasising the drawbacks of illustrations that can overload the cognitive system with irrelevant data. In this work we compare probabilistic statistical reasoning across two formats of problem presentation: graphical and verbal-numerical. We conceived and presented five pairs of homologous simple problems in the verbal-numerical and graphical formats to 311 undergraduate Psychology students (n=156 in Italy and n=155 in Spain) without statistical expertise. The purpose of our work was to evaluate the effect of graphical facilitation in probabilistic statistical reasoning. Each undergraduate solved every pair of problems in both formats, in different problem presentation orders and sequences. Data analyses highlighted that the effect of graphical facilitation is infrequent in psychology undergraduates. The effect is related to many factors (such as knowledge, abilities, attitudes, and anxiety); moreover, it might be considered the result of an interaction between individual and task characteristics

  8. Statistical Analysis of Big Data on Pharmacogenomics

    Science.gov (United States)

    Fan, Jianqing; Liu, Han

    2013-01-01

This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods: estimating a large covariance matrix for understanding correlation structure, the inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differentially expressed genes and proteins and genetic markers for complex diseases, and high-dimensional variable selection for identifying important molecules and understanding molecular mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
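One standard tool from the large-scale simultaneous-testing literature reviewed here can be sketched briefly: Benjamini-Hochberg control of the false discovery rate across many gene-level p-values. The p-values below are invented for illustration:

```python
# Benjamini-Hochberg step-up procedure: reject the k hypotheses with the
# smallest p-values, where k is the largest rank r with p_(r) <= r*alpha/m.
def benjamini_hochberg(p_values, alpha=0.05):
    """Return sorted indices of hypotheses rejected at FDR level alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * alpha / m:
            k_max = rank
    return sorted(order[:k_max])

p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.36]
print(benjamini_hochberg(p))   # prints [0, 1]
```

Unlike a Bonferroni correction, this controls the expected proportion of false discoveries, which is usually the quantity of interest when screening thousands of genes.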

  9. Analysis of statistical misconception in terms of statistical reasoning

    Science.gov (United States)

    Maryati, I.; Priatna, N.

    2018-05-01

Reasoning skill is needed by everyone in the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, interpret, and draw conclusions from information. Developing this skill can be done through various levels of education. However, the skill remains low because many people, students included, assume that statistics is just the ability to count and use formulas. Students still have a negative attitude toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample was 32 students of the mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If 65 is the minimum value for achieving the standard course competence, the students' mean values are below that standard. The results of the misconception study emphasize which subtopics should be considered. Based on the assessment results, students' misconceptions occur in: 1) writing mathematical sentences and symbols well, 2) understanding basic definitions, 3) determining the concept to be used in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  10. Statistical Inference at Work: Statistical Process Control as an Example

    Science.gov (United States)

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

To characterise statistical inference in the workplace, this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
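The contrast drawn above can be made concrete with a minimal Shewhart-style individuals chart, the prototypical SPC tool: rather than testing a one-off hypothesis, SPC flags observations falling outside mean ± 3σ control limits estimated from in-control data. The measurements are invented for illustration:

```python
# Toy individuals control chart: the first nine measurements serve as the
# in-control baseline; the last one is a deliberate out-of-control point.
import statistics

measurements = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 9.7, 10.0, 10.1, 12.9]
baseline = measurements[:-1]
mean = statistics.fmean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma       # control limits

out_of_control = [x for x in measurements if not (lcl <= x <= ucl)]
print(f"limits: [{lcl:.2f}, {ucl:.2f}], flagged: {out_of_control}")
```

Note the different logic from hypothesis testing: there is no single null hypothesis rejected once; every new observation is monitored against standing limits.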

  11. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, development of, and validation process for such models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  12. What can we learn from noise? - Mesoscopic nonequilibrium statistical physics.

    Science.gov (United States)

    Kobayashi, Kensuke

    2016-01-01

Mesoscopic systems - small electric circuits working in the quantum regime - offer us a unique experimental stage to explore quantum transport in a tunable and precise way. The purpose of this Review is to show how they can contribute to statistical physics. We introduce the significance of fluctuation, or equivalently noise, as noise measurement enables us to address the fundamental aspects of a physical system. The significance of the fluctuation theorem (FT) in statistical physics is noted. We explain what information can be deduced from current noise measurements in mesoscopic systems. As an important application of noise measurement to statistical physics, we describe our experimental work on the current and current noise in an electron interferometer, which is the first experimental test of the FT in the quantum regime. Our attempt will shed new light on the research field of mesoscopic quantum statistical physics.

  13. Big data integration shows Australian bush-fire frequency is increasing significantly.

    Science.gov (United States)

    Dutta, Ritaban; Das, Aruneema; Aryal, Jagannath

    2016-02-01

Increasing Australian bush-fire frequencies over the last decade indicate a major climatic change in the coming future. Understanding of such climatic change for Australian bush-fires is limited, and there is an urgent need for scientific research capable of contributing to Australian society. The frequency of bush-fires carries information on the spatial, temporal and climatic aspects of bush-fire events, and provides contextual information for modelling various climate data to accurately predict future bush-fire hot spots. In this study, we develop an ensemble method based on a two-layered machine learning model to establish the relationship between fire incidence and climatic data. In a 336-week data trial, we demonstrate that the model provides highly accurate bush-fire incidence hot-spot estimation (91% global accuracy) from the weekly climatic surfaces. Our analysis also indicates that Australian weekly bush-fire frequencies increased by 40% over the last 5 years, particularly during summer months, implying a serious climatic shift.

  14. Watt-Lite; Energy Statistics Made Tangible

    DEFF Research Database (Denmark)

    Jönsson, Li; Broms, Loove; Katzeff, Cecilia

    2011-01-01

    of consumers its consequences are poorly understood. In order to better understand how we can use design to increase awareness of electricity consumption in everyday life, we will discuss the design of Watt-Lite, a set of three oversized torches projecting real time energy statistics of a factory...... in the physical environments of its employees. The design of Watt-Lite is meant to explore ways of representing, understanding and interacting with electricity in industrial workspaces. We discuss three design inquiries and their implications for the design of Watt-Lite: the use of tangible statistics...

  15. A Statistical Primer: Understanding Descriptive and Inferential Statistics

    OpenAIRE

    Gillian Byrne

    2007-01-01

As libraries and librarians move more towards evidence‐based decision making, the data being generated in libraries is growing. Understanding the basics of statistical analysis is crucial for evidence‐based practice (EBP), in order to correctly design and analyze research as well as to evaluate the research of others. This article covers the fundamentals of descriptive and inferential statistics, from hypothesis construction to sampling to common statistical techniques including chi‐square, co...

  16. Solution of the statistical bootstrap with Bose statistics

    International Nuclear Information System (INIS)

    Engels, J.; Fabricius, K.; Schilling, K.

    1977-01-01

    A brief and transparent way to introduce Bose statistics into the statistical bootstrap of Hagedorn and Frautschi is presented. The resulting bootstrap equation is solved by a cluster expansion for the grand canonical partition function. The shift of the ultimate temperature due to Bose statistics is determined through an iteration process. We discuss two-particle spectra of the decaying fireball (with given mass) as obtained from its grand microcanonical level density

  17. MIDAS: Regionally linear multivariate discriminative statistical mapping.

    Science.gov (United States)

    Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos

    2018-07-01

    Statistical parametric maps formed via voxel-wise mass-univariate tests, such as the general linear model, are commonly used to test hypotheses about regionally specific effects in neuroimaging cross-sectional studies where each subject is represented by a single image. Despite being informative, these techniques remain limited as they ignore multivariate relationships in the data. Most importantly, the commonly employed local Gaussian smoothing, which is important for accounting for registration errors and making the data follow Gaussian distributions, is usually chosen in an ad hoc fashion. Thus, it is often suboptimal for the task of detecting group differences and correlations with non-imaging variables. Information mapping techniques, such as searchlight, which use pattern classifiers to exploit multivariate information and obtain more powerful statistical maps, have become increasingly popular in recent years. However, existing methods may lead to important interpretation errors in practice (i.e., misidentifying a cluster as informative, or failing to detect truly informative voxels), while often being computationally expensive. To address these issues, we introduce a novel efficient multivariate statistical framework for cross-sectional studies, termed MIDAS, seeking highly sensitive and specific voxel-wise brain maps, while leveraging the power of regional discriminant analysis. In MIDAS, locally linear discriminative learning is applied to estimate the pattern that best discriminates between two groups, or predicts a variable of interest. This pattern is equivalent to local filtering by an optimal kernel whose coefficients are the weights of the linear discriminant. By composing information from all neighborhoods that contain a given voxel, MIDAS produces a statistic that collectively reflects the contribution of the voxel to the regional classifiers as well as the discriminative power of the classifiers. Critically, MIDAS efficiently assesses the
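The per-neighborhood discriminative step described above can be illustrated, in heavily simplified form, by a Fisher linear discriminant whose weight vector plays the role of the "optimal kernel" mentioned in the abstract. This is a toy sketch on one invented four-voxel patch, not the authors' implementation, which aggregates such patterns over all overlapping neighborhoods:

```python
# Toy Fisher linear discriminant on a single hypothetical 4-voxel patch:
# w = S_w^{-1} (mu1 - mu0), where S_w is the pooled within-class scatter.
# Group sizes, means, and effect sizes are invented.
import numpy as np

rng = np.random.default_rng(1)
n_per_group, n_voxels = 40, 4
mu0 = np.zeros(n_voxels)
mu1 = np.array([1.0, 0.5, 0.0, 0.0])          # only voxels 0-1 differ
group0 = rng.normal(mu0, 1.0, (n_per_group, n_voxels))
group1 = rng.normal(mu1, 1.0, (n_per_group, n_voxels))

s_w = np.cov(group0, rowvar=False) + np.cov(group1, rowvar=False)
w = np.linalg.solve(s_w, group1.mean(0) - group0.mean(0))
w /= np.linalg.norm(w)                         # unit-norm discriminant
print("discriminant weights:", np.round(w, 3))
```

In the full method, a statistic for each voxel is then built by composing the weights and discriminability of every neighborhood that contains it.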

  18. Correlation in the statistical analysis of a reverse Fourier neutron time-of-flight experiment. Pt. 2

    International Nuclear Information System (INIS)

    Tilli, K.J.

    1982-01-01

The significance of the correlation in the statistical analysis of reverse Fourier neutron time-of-flight observations has been evaluated by applying different methods of estimation to diffraction patterns containing peaks with Gaussian line shapes. Effects of the correlation between adjacent channels of a spectrum arise both from the incorrect weighting of the experiment's independent variables and from the misinterpretation of the number of independent observations in the data. The incorrect weighting has the greatest effect on the width parameter of a Gaussian profile, and it leads to an increase in the relative weights of the broadest peaks of the diffraction pattern. If the correlation is ignored in the analysis, the estimates obtained for the parameters of a model will not be exactly the same as those evaluated from minimum variance estimation, in which the correlation is taken into account; however, the differences will not be statistically significant. Nevertheless, the standard deviations will then be underestimated, typically by a factor of two, which has serious consequences for every aspect of statistical inference. (orig.)
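The factor-of-two underestimate can be reproduced numerically with a toy model: compare the naive standard error of a mean, which assumes independent channels, with the empirical one when adjacent channels follow an AR(1) correlation. The correlation value ρ = 0.6 is hypothetical, chosen because the asymptotic ratio sqrt((1+ρ)/(1−ρ)) is then exactly 2:

```python
# Monte Carlo check that ignoring channel-to-channel correlation
# underestimates the standard error of the mean by about a factor of two.
import numpy as np

rng = np.random.default_rng(2)
n, rho, n_trials = 200, 0.6, 2000
means = np.empty(n_trials)
for k in range(n_trials):
    z = rng.normal(size=n)
    e = np.empty(n)
    e[0] = z[0]
    for i in range(1, n):          # AR(1), unit marginal variance
        e[i] = rho * e[i - 1] + np.sqrt(1 - rho**2) * z[i]
    means[k] = e.mean()

naive_se = 1 / np.sqrt(n)          # assumes independent channels
true_se = means.std()              # empirical spread of the sample mean
print(f"naive SE = {naive_se:.4f}, true SE = {true_se:.4f}, "
      f"ratio = {true_se / naive_se:.2f}")
```

The printed ratio should come out close to 2, matching the abstract's observation about underestimated standard deviations.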

  19. Descriptive and inferential statistical methods used in burns research.

    Science.gov (United States)

    Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars

    2010-05-01

Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11 (22%) were randomised controlled trials, 18 (35%) were cohort studies, 11 (22%) were case control studies and 11 (22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49 (96%) articles. Data dispersion was calculated by standard deviation in 30 (59%). Standard error of the mean was quoted in 19 (37%). The statistical software product was named in 33 (65%). Of the 49 articles that used inferential statistics, the tests were named in 47 (96%). The six most common tests used (Student's t-test (53%), analysis of variance/covariance (33%), chi-squared test (27%), Wilcoxon and Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43 (88%) and exact significance levels were reported in 28 (57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals
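For reference, the common tests surveyed above are all one-liners in scipy.stats. The toy samples here are invented and have no connection to the surveyed articles:

```python
# The most commonly reported tests in the survey, run on invented data.
from scipy import stats

a = [12.1, 13.4, 11.8, 14.0, 12.9, 13.1]
b = [14.2, 15.1, 13.8, 15.5, 14.9, 14.4]

t, p_t = stats.ttest_ind(a, b)                  # Student's t-test
f, p_f = stats.f_oneway(a, b,
                        [13.0, 13.5, 13.2, 13.8, 14.1, 12.9])  # ANOVA
u, p_u = stats.mannwhitneyu(a, b)               # Mann-Whitney U
chi2, p_c = stats.chisquare([18, 22])           # chi-squared goodness of fit
odds, p_e = stats.fisher_exact([[8, 2], [1, 9]])    # Fisher's exact test
w, p_w = stats.wilcoxon([x - y for x, y in zip(a, b)])  # paired Wilcoxon

for name, p in [("t-test", p_t), ("ANOVA", p_f), ("Mann-Whitney", p_u),
                ("chi-squared", p_c), ("Fisher exact", p_e),
                ("Wilcoxon", p_w)]:
    print(f"{name}: p = {p:.4f}")
```

Choosing among them (parametric versus rank-based, paired versus unpaired) is exactly the kind of decision the authors argue burn care professionals should understand.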

  20. Statistical methods

    CERN Document Server

    Szulc, Stefan

    1965-01-01

Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field. Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then

  1. Statistical optics

    CERN Document Server

    Goodman, Joseph W

    2015-01-01

This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems. This book covers a variety of statistical problems in optics, including both theory and applications. The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced i

  2. An application of an optimal statistic for characterizing relative orientations

    Science.gov (United States)

    Jow, Dylan L.; Hill, Ryley; Scott, Douglas; Soler, J. D.; Martin, P. G.; Devlin, M. J.; Fissel, L. M.; Poidevin, F.

    2018-02-01

We present the projected Rayleigh statistic (PRS), a modification of the classic Rayleigh statistic, as a test for non-uniform relative orientation between two pseudo-vector fields. In the application here, this gives an effective way of investigating whether polarization pseudo-vectors (spin-2 quantities) are preferentially parallel or perpendicular to filaments in the interstellar medium. There are other potential applications in astrophysics, e.g. comparing small-scale orientations with larger-scale shear patterns. We compare the efficiency of the PRS against histogram-binning methods that have previously been used for characterizing the relative orientations of gas column density structures with the magnetic field projected on the plane of the sky. We examine data for the Vela C molecular cloud, where the column density is inferred from Herschel submillimetre observations, and the magnetic field from observations by the Balloon-borne Large-Aperture Submillimetre Telescope in the 250-, 350- and 500-μm wavelength bands. We find that the PRS has greater statistical power than approaches that bin the relative orientation angles, as it makes more efficient use of the information contained in the data. In particular, using the PRS to test for preferential alignment results in a higher statistical significance in each of the four Vela C regions, with the greatest increase being by a factor of 1.3 in the South-Nest region in the 250-μm band.
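The statistic itself is simple to compute. A sketch, assuming the usual convention for axial (spin-2) data that Z = Σᵢ cos(2θᵢ)/√(n/2) for relative angles θᵢ, so that Z is approximately standard normal for uniform orientations, strongly positive for preferentially parallel alignment, and strongly negative for perpendicular:

```python
# Toy projected-Rayleigh-statistic calculation on synthetic angles.
# The cos(2*theta) doubling handles the 180-degree ambiguity of
# pseudo-vectors; the angle samples below are invented.
import math
import random

def projected_rayleigh(angles):
    n = len(angles)
    return sum(math.cos(2 * t) for t in angles) / math.sqrt(n / 2)

random.seed(0)
uniform = [random.uniform(0, math.pi) for _ in range(1000)]
parallel = [random.gauss(0, 0.3) for _ in range(1000)]   # clustered near 0

print(f"uniform orientations:  Z = {projected_rayleigh(uniform):+.2f}")
print(f"parallel orientations: Z = {projected_rayleigh(parallel):+.2f}")
```

This is the sense in which the PRS outperforms angle-binning: it uses the full angle values rather than histogram counts.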

  3. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici

  4. Time to significant pain reduction following DETP application vs placebo for acute soft tissue injuries.

    Science.gov (United States)

    Yanchick, J; Magelli, M; Bodie, J; Sjogren, J; Rovati, S

    2010-08-01

Nonsteroidal anti-inflammatory drugs (NSAIDs) provide fast and effective acute pain relief, but systemic administration carries an increased risk of some adverse reactions. The diclofenac epolamine 1.3% topical patch (DETP) is a topical NSAID with demonstrated safety and efficacy in the treatment of acute pain from minor soft tissue injuries. Significant pain reduction has been observed in clinical trials within several hours following DETP application, suggesting rapid pain relief; however, this has not been extensively studied for topical NSAIDs in general. This retrospective post-hoc analysis examined time to onset of significant pain reduction after DETP application compared to a placebo patch for patients with mild-to-moderate acute ankle sprain, evaluating the primary efficacy endpoint from two nearly identical studies. Data from two double-blind, randomized, parallel-group, placebo-controlled studies (N = 274) of the safety and efficacy of the DETP applied once daily for 7 days for acute ankle sprain were evaluated post-hoc using statistical modeling to estimate time to onset of significant pain reduction following DETP application. Outcome measures included pain on active movement on a 100 mm Visual Analog Scale (VAS) recorded in patient diaries, physician- and patient-assessed tolerability, and adverse events. DETP treatment resulted in significant pain reduction within approximately 3 hours compared to placebo. Within-treatment post-hoc analysis based on a statistical model suggested significant pain reduction occurred as early as 1.27 hours for the DETP group. The study may have been limited by the retrospective nature of the analyses. In both studies, the DETP was well tolerated with few adverse events, limited primarily to application-site skin reactions. The DETP is an effective treatment for acute minor soft tissue injury, providing pain relief as rapidly as 1.27 hours post-treatment. Statistical modeling may be useful in estimating time to onset of pain relief for comparison of topical

  5. Significant yield increases from control of leaf diseases in maize - an overlooked problem?!

    DEFF Research Database (Denmark)

    Jørgensen, Lise Nistrup

    2012-01-01

    The area of maize has increased in several European countries in recent years. In Denmark, the area has increased from 10,000 ha in 1980 to 185,000 ha in 2011. Initially only silage maize was cultivated in Denmark, but in more recent years the area of grain maize has also increased. Farms growing...

  6. All of statistics a concise course in statistical inference

    CERN Document Server

    Wasserman, Larry

    2004-01-01

This book is for people who want to learn probability and statistics quickly. It brings together many of the main ideas in modern statistics in one place. The book is suitable for students and researchers in statistics, computer science, data mining and machine learning. This book covers a much wider range of topics than a typical introductory text on mathematical statistics. It includes modern topics like nonparametric curve estimation, bootstrapping and classification, topics that are usually relegated to follow-up courses. The reader is assumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. The text can be used at the advanced undergraduate and graduate level. Larry Wasserman is Professor of Statistics at Carnegie Mellon University. He is also a member of the Center for Automated Learning and Discovery in the School of Computer Science. His research areas include nonparametric inference, asymptotic theory, causality, and applications to astrophysics, bi...

  7. Computed statistics at streamgages, and methods for estimating low-flow frequency statistics and development of regional regression equations for estimating low-flow frequency statistics at ungaged locations in Missouri

    Science.gov (United States)

    Southard, Rodney E.

    2013-01-01

located in Region 1, 120 were located in Region 2, and 10 were located in Region 3. Streamgages located outside of Missouri were selected to extend the range of data used for the independent variables in the regression analyses. Streamgages included in the regression analyses had 10 or more years of record and were considered to be affected minimally by anthropogenic activities or trends. Regional regression analyses identified three characteristics as statistically significant for the development of regional equations. For Region 1, drainage area, longest flow path, and streamflow-variability index were statistically significant. The range in the standard error of estimate for Region 1 is 79.6 to 94.2 percent. For Region 2, drainage area and streamflow-variability index were statistically significant, and the range in the standard error of estimate is 48.2 to 72.1 percent. For Region 3, drainage area and streamflow-variability index also were statistically significant, with a range in the standard error of estimate of 48.1 to 96.2 percent. Limitations on estimating low-flow frequency statistics at ungaged locations depend on the method used. The first method outlined for use in Missouri, power curve equations, was developed to estimate the selected statistics for ungaged locations on 28 selected streams with multiple streamgages located on the same stream. A second method uses a drainage-area ratio to compute statistics at an ungaged location using data from a single streamgage on the same stream with 10 or more years of record. At ungaged locations on these streams, the ratio of the drainage area at the ungaged location to the drainage area at the streamgage location may be used to scale the selected statistic value from the streamgage location to the ungaged location. This method can be used if the drainage area of the ungaged location is within 40 to 150 percent of the streamgage drainage area.
The third method is the use of the regional regression equations.
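The drainage-area ratio method described above amounts to a single proportional scaling with an applicability check. A minimal sketch in Python (the function name and the exact form of the range check are illustrative assumptions; only the 40 to 150 percent criterion comes from the report):

```python
def drainage_area_ratio_estimate(stat_gaged, area_gaged, area_ungaged):
    """Scale a low-flow statistic from a gaged to an ungaged location
    on the same stream using the ratio of drainage areas.

    Applicable only when the ungaged drainage area is within
    40 to 150 percent of the streamgage drainage area.
    """
    ratio = area_ungaged / area_gaged
    if not 0.40 <= ratio <= 1.50:
        raise ValueError("Drainage-area ratio outside the 40-150 percent "
                         "range; method not applicable.")
    return stat_gaged * ratio

# Example: a low-flow statistic of 12.0 ft^3/s at a gage draining
# 250 mi^2, scaled to an ungaged site draining 300 mi^2 (ratio 1.2).
print(drainage_area_ratio_estimate(12.0, 250.0, 300.0))  # ≈ 14.4
```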

  8. MQSA National Statistics

    Science.gov (United States)

    ... Standards Act and Program MQSA Insights MQSA National Statistics ... but should level off with time. Archived Scorecard Statistics 2018 Scorecard Statistics 2017 Scorecard Statistics 2016 Scorecard ...

  9. Identification of sequence motifs significantly associated with antisense activity

    Directory of Open Access Journals (Sweden)

    Peek Andrew S

    2007-06-01

    Full Text Available Abstract Background Predicting the suppression activity of antisense oligonucleotide sequences is the main goal of the rational design of nucleic acids. To create an effective predictive model, it is important to know which properties of an oligonucleotide sequence associate significantly with antisense activity. Also, for the model to be efficient, we must know which properties do not associate significantly and can be omitted from the model. This paper discusses the results of a randomization procedure to find motifs that associate significantly with either high or low antisense suppression activity, an analysis of their properties, as well as the results of support vector machine modelling using these significant motifs as features. Results We discovered 155 motifs that associate significantly with high antisense suppression activity and 202 motifs that associate significantly with low suppression activity. The motifs range in length from 2 to 5 bases, contain several motifs that have previously been found to associate highly with antisense activity, and have thermodynamic properties consistent with previous work associating thermodynamic properties of sequences with their antisense activity. Statistical analysis revealed no correlation between a motif's position within an antisense sequence and that sequence's antisense activity. Also, many significant motifs existed as subwords of other significant motifs. Support vector regression experiments indicated that the feature set of significant motifs increased correlation compared to all possible motifs as well as several subsets of the significant motifs. Conclusion The thermodynamic properties of the significantly associated motifs support existing data correlating the thermodynamic properties of the antisense oligonucleotide with antisense efficiency, reinforcing our hypothesis that antisense suppression is strongly associated with probe/target thermodynamics, as there are no enzymatic

  10. Generalized quantum statistics

    International Nuclear Information System (INIS)

    Chou, C.

    1992-01-01

    In the paper, a non-anyonic generalization of quantum statistics is presented, in which Fermi-Dirac statistics (FDS) and Bose-Einstein statistics (BES) appear as two special cases. The new quantum statistics, which is characterized by the dimension of its single particle Fock space, contains three consistent parts, namely the generalized bilinear quantization, the generalized quantum mechanical description and the corresponding statistical mechanics

  11. IETS statement on worldwide ET statistics for 2010

    DEFF Research Database (Denmark)

    Stroud, Brad; Callesen, Henrik

    2012-01-01

    For the twentieth consecutive year, the Data Retrieval Committee of the international Embryo Transfer Society (IETS) can report global embryo transfer (ET) statistics. The number of bovine in vivoderived (IVD) embryos collected/flushed worldwide in 2010 increased to 732,000, a 4% increase from 2009...... the committee’s regional data collectors indicates that the embryo transfer industry is doing well. It is important to note that this report does not include every country’s statistics, and very few, if any, country has 100% of its activity represented; however, it is the best worldwide report available about...... the commercial embryo transfer business....

  12. STATISTICS IN SERVICE QUALITY ASSESSMENT

    Directory of Open Access Journals (Sweden)

    Dragana Gardašević

    2012-09-01

    Full Text Available For any quality evaluation in sports, science, education, and similar fields, it is useful to collect data in order to construct a strategy for improving the quality of the services offered to users. For this purpose, statistical software packages are used to process the collected data with the aim of increasing customer satisfaction. The principle is demonstrated with the example of student satisfaction ratings at Belgrade Polytechnic, where the students are the users of the institution's services. The emphasis here is on statistical analysis as a tool for quality control and improvement, rather than on the interpretation of results. The same approach can therefore be used as a model in sport to improve overall results.

  13. The culture of fear and control in Costa Rica (I): Crime statistics and law enforcement

    OpenAIRE

    Huhn, Sebastian

    2009-01-01

    The Costa Rican talk of crime is fundamentally based on the assumption that crime rates have increased significantly in recent years and that there is today a vast and alarming amount of crime. On the basis of this assumption, fear of crime, the call for the 'iron fist', and drastic law enforcement actions are continually increasing. While crime statistics are the logical basis for the hypothesis on the far-reaching extent of delinquency, they are used in a problematic way in the talk of crim...

  14. Statistics For Dummies

    CERN Document Server

    Rumsey, Deborah

    2011-01-01

    The fun and easy way to get down to business with statistics Stymied by statistics? No fear ? this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more.Tracks to a typical first semester statistics cou

  15. Descriptive statistics.

    Science.gov (United States)

    Nick, Todd G

    2007-01-01

    Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.

  16. Statistical Analysis of Development Trends in Global Renewable Energy

    Directory of Open Access Journals (Sweden)

    Marina D. Simonova

    2016-01-01

    Full Text Available The article focuses on the economic and statistical analysis of industries associated with the use of renewable energy sources in several countries. The dynamic development and implementation of technologies based on renewable energy sources (hereinafter RES) is the defining trend of world energy development. The uneven distribution of hydrocarbon reserves, the increasing demand of developing countries, and the environmental risks associated with the production and consumption of fossil resources have led to an increasing interest of many states in this field. Creating low-carbon economies involves implementing plans to increase the proportion of clean energy through renewable energy sources and energy efficiency and to reduce greenhouse gas emissions. The priority of this sector is a characteristic feature of the modern development of developed (USA, EU, Japan) and emerging economies (China, India, Brazil, etc.), as evidenced by the inclusion of this segment in state energy strategies and the revision of existing approaches to energy security. The analysis of the use of renewable energy and its contribution to the value added of producer countries is of particular interest. Over the last decade, the share of energy produced from renewable sources in the energy balances of the world's largest economies increased significantly. Power-generating capacity based on renewable energy grows every year; this trend is especially apparent in China, the USA, and European Union countries. There is a significant increase in direct investment in renewable energy: total investment over the past ten years increased by a factor of 5.6. The most rapidly developing technologies are solar energy and wind power.

  17. Statistical inference for template aging

    Science.gov (United States)

    Schuckers, Michael E.

    2006-04-01

    A change in classification error rates for a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first of these is the use of a generalized linear model to determine if these error rates change linearly over time. This approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimation, not on the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1. The results of these applications are discussed.
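The likelihood ratio approach can be illustrated for the simplest two-period binomial case: test whether the error rate in a later period differs from an earlier one by comparing pooled and separate fits. This is an illustrative reconstruction, not the authors' code; 3.84 is the chi-square critical value with 1 degree of freedom at alpha = 0.05.

```python
import math

def binom_loglik(k, n, p):
    # Log-likelihood of k errors in n trials (binomial constant omitted,
    # since it cancels in the likelihood ratio).
    return k * math.log(p) + (n - k) * math.log(1 - p)

def lr_test_two_periods(k1, n1, k2, n2):
    """Likelihood ratio statistic for H0: the two periods share a common
    error rate, vs. H1: each period has its own rate."""
    p_pool = (k1 + k2) / (n1 + n2)
    p1, p2 = k1 / n1, k2 / n2
    return 2 * (binom_loglik(k1, n1, p1) + binom_loglik(k2, n2, p2)
                - binom_loglik(k1, n1, p_pool) - binom_loglik(k2, n2, p_pool))

# Hypothetical data: 40 errors in 1000 attempts early on,
# 65 errors in 1000 attempts later.
lr = lr_test_two_periods(40, 1000, 65, 1000)
print(lr > 3.84)  # True: the change in error rate is significant at 5%
```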

  18. Testing statistical self-similarity in the topology of river networks

    Science.gov (United States)

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2010-01-01

    Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN) that combines self-similarity and randomness has been introduced to replicate important topological features observed in real river networks. We investigate if the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distribution of interior and exterior generators are tested to be statistically different and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.
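The claim that RSN generators obey a geometric distribution can be checked by fitting the single parameter by maximum likelihood. A minimal sketch (the support convention 0, 1, 2, ... for generator counts is an assumption for illustration, not taken from the paper):

```python
def fit_geometric(counts):
    """Maximum-likelihood estimate of the geometric parameter p for
    observed counts with support 0, 1, 2, ...: p_hat = 1 / (1 + mean)."""
    m = sum(counts) / len(counts)
    return 1.0 / (1.0 + m)

# Hypothetical generator counts read off a river network
generators = [0, 1, 0, 2, 1, 0, 3, 1]
p_hat = fit_geometric(generators)
print(round(p_hat, 3))  # 0.5 for a sample mean of 1.0
```

A goodness-of-fit test (e.g. chi-square against the fitted geometric probabilities) would then decide whether the geometric hypothesis is tenable for a given basin.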

  19. Do increases in cigarette prices lead to increases in sales of cigarettes with high tar and nicotine yields?

    Science.gov (United States)

    Farrelly, Matthew C; Loomis, Brett R; Mann, Nathan H

    2007-10-01

    We used scanner data on cigarette prices and sales collected from supermarkets across the United States from 1994 to 2004 to test the hypothesis that cigarette prices are positively correlated with sales of cigarettes with higher tar and nicotine content. During this period the average inflation-adjusted price for menthol cigarettes increased 55.8%. Price elasticities from multivariate regression models suggest that this price increase led to an increase of 1.73% in sales-weighted average tar yields and a 1.28% increase in sales-weighted average nicotine yields for menthol cigarettes. The 50.5% price increase of nonmenthol varieties over the same period yielded an estimated increase of 1% in tar per cigarette but no statistically significant increase in nicotine yields. An ordered probit model of the impact of cigarette prices on cigarette strength (ultra-light, light, full flavor, unfiltered) offers an explanation: As cigarette prices increase, the probability that stronger cigarette types will be sold increases. This effect is larger for menthol than for nonmenthol cigarettes. Our results are consistent with earlier population-based cross-sectional and longitudinal studies showing that higher cigarette prices and taxes are associated with increasing consumption of higher-yield cigarettes by smokers.

  20. Limit temperature for entanglement in generalized statistics

    International Nuclear Information System (INIS)

    Rossignoli, R.; Canosa, N.

    2004-01-01

    We discuss the main properties of general thermal states derived from non-additive entropic forms and their use for studying quantum entanglement. It is shown that all these states become more mixed as the temperature increases, approaching the full random state for T→∞. The formalism is then applied to examine the limit temperature for entanglement in a two-qubit XXZ Heisenberg chain, which exhibits the peculiar feature of being independent of the applied magnetic field in the conventional von Neumann based statistics. In contrast, this temperature is shown to be field dependent in a generalized statistics, even for small deviations from the standard form. Results for the Tsallis-based statistics are examined in detail

  1. Fulfilling the needs for statistical expertise at Aalborg Hospital

    DEFF Research Database (Denmark)

    Dethlefsen, Claus

    In 2005, the first statistician was employed at Aalborg Hospital due to expanding research activities as part of Aarhus University Hospital. Since then, there has been an increased demand for statistical expertise at all levels. In the talk, I will give an overview of the current staff...... of statisticians and the organisation. I will give examples from our statistical consultancy and illustrate some of the challenges that have led to research projects with heavy statistical involvement....

  2. All-carbon nanotube diode and solar cell statistically formed from macroscopic network

    Institute of Scientific and Technical Information of China (English)

    Albert G. Nasibulin [1,2,3]; Adinath M. Funde [3,4]; Ilya V. Anoshkin [3]; Igor A. Levitsky [5,6]

    2015-01-01

    Schottky diodes and solar cells are statistically created in the contact area between two macroscopic films of single-walled carbon nanotubes (SWNTs) at the junction of semiconducting and quasi-metallic bundles consisting of several high quality tubes. The n-doping of one of the films allows for photovoltaic action, owing to an increase in the built-in potential at the bundle-to-bundle interface. Statistical analysis demonstrates that the Schottky barrier device contributes significantly to the I-V characteristics, compared to the p-n diode. The upper limit of photovoltaic conversion efficiency has been estimated at ~20%, demonstrating that the light energy conversion is very efficient for such a unique solar cell. While there have been multiple studies on rectifying SWNT diodes in the nanoscale environment, this is the first report of a macroscopic all-carbon nanotube diode and solar cell.

  3. Gene cluster statistics with gene families.

    Science.gov (United States)

    Raghupathy, Narayanan; Durand, Dannie

    2009-05-01

    Identifying genomic regions that descended from a common ancestor is important for understanding the function and evolution of genomes. In distantly related genomes, clusters of homologous gene pairs are evidence of candidate homologous regions. Demonstrating the statistical significance of such "gene clusters" is an essential component of comparative genomic analyses. However, currently there are no practical statistical tests for gene clusters that model the influence of the number of homologs in each gene family on cluster significance. In this work, we demonstrate empirically that failure to incorporate gene family size in gene cluster statistics results in overestimation of significance, leading to incorrect conclusions. We further present novel analytical methods for estimating gene cluster significance that take gene family size into account. Our methods do not require complete genome data and are suitable for testing individual clusters found in local regions, such as contigs in an unfinished assembly. We consider pairs of regions drawn from the same genome (paralogous clusters), as well as regions drawn from two different genomes (orthologous clusters). Determining cluster significance under general models of gene family size is computationally intractable. By assuming that all gene families are of equal size, we obtain analytical expressions that allow fast approximation of cluster probabilities. We evaluate the accuracy of this approximation by comparing the resulting gene cluster probabilities with cluster probabilities obtained by simulating a realistic, power-law distributed model of gene family size, with parameters inferred from genomic data. Surprisingly, despite the simplicity of the underlying assumption, our method accurately approximates the true cluster probabilities. It slightly overestimates these probabilities, yielding a conservative test. We present additional simulation results indicating the best choice of parameter values for data

  4. Statistical versus Musical Significance: Commentary on Leigh VanHandel's 'National Metrical Types in Nineteenth Century Art Song'

    Directory of Open Access Journals (Sweden)

    Justin London

    2010-01-01

    Full Text Available In “National Metrical Types in Nineteenth Century Art Song” Leigh Van Handel gives a sympathetic critique of William Rothstein’s claim that in western classical music of the late 18th and 19th centuries there are discernable differences in the phrasing and metrical practice of German versus French and Italian composers. This commentary (a examines just what Rothstein means in terms of his proposed metrical typology, (b questions Van Handel on how she has applied it to a purely melodic framework, (c amplifies Van Handel’s critique of Rothstein, and then (d concludes with a rumination on the reach of quantitative (i.e., statistically-driven versus qualitative claims regarding such things as “national metrical types.”

  5. Statistical characterization report for Single-Shell Tank 241-T-107

    International Nuclear Information System (INIS)

    Cromar, R.D.; Wilmarth, S.R.; Jensen, L.

    1994-01-01

    This report contains the results of the statistical analysis of data from three core samples obtained from single-shell tank 241-T-107 (T-107). Four specific topics are addressed. They are summarized below. Section 3.0 contains mean concentration estimates of analytes found in T-107. The estimates of "error" associated with the concentration estimates are given as 95% confidence intervals (CI) on the mean. The results given are based on three types of samples: core composite samples, core segment samples, and drainable liquid samples. Section 4.0 contains estimates of the spatial variability (variability between cores and between segments) and the analytical variability (variability between the primary and the duplicate analysis). Statistical tests were performed to test the hypothesis that the between cores and the between segments spatial variability is zero. The results of the tests are as follows. Based on the core composite data, the between cores variance is significantly different from zero for 35 out of 74 analytes; i.e., for 53% of the analytes there is no statistically significant difference between the concentration means for two cores. Based on core segment data, the between segments variance is significantly different from zero for 22 out of 24 analytes and the between cores variance is significantly different from zero for 4 out of 24 analytes; i.e., for 8% of the analytes there is no statistically significant difference between segment means and for 83% of the analytes there is no difference between the means from the three cores. Section 5.0 contains the results of the application of multiple comparison methods to the core composite data, the core segment data, and the drainable liquid data. Section 6.0 contains the results of a statistical test conducted to determine the 222-S Analytical Laboratory's ability to homogenize solid core segments.
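The 95% confidence intervals on mean analyte concentrations described in Section 3.0 follow the standard t-based construction. A generic sketch (the sample values, units, and helper name are hypothetical; the t critical value must match n - 1 degrees of freedom, e.g. 4.303 for n = 3):

```python
import math
from statistics import mean, stdev

def mean_ci(samples, t_crit):
    """Two-sided confidence interval on the mean of replicate
    measurements: mean +/- t_crit * s / sqrt(n)."""
    m = mean(samples)
    half_width = t_crit * stdev(samples) / math.sqrt(len(samples))
    return m - half_width, m + half_width

# Hypothetical analyte concentrations (ug/g) from three core composites
lo, hi = mean_ci([105.0, 98.0, 112.0], t_crit=4.303)
print(f"95% CI: {lo:.1f} .. {hi:.1f}")
```

With only three cores the interval is wide, which is why the report also tests whether between-core variability differs significantly from zero.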

  6. Statistics in Schools

    Science.gov (United States)

    Educate your students about the value and everyday use of statistics. The Statistics in Schools program provides resources for teaching and learning with real life data. Explore the site for standards-aligned, classroom-ready activities.

  7. Direct Learning of Systematics-Aware Summary Statistics

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Complex machine learning tools, such as deep neural networks and gradient boosting algorithms, are increasingly being used to construct powerful discriminative features for High Energy Physics analyses. These methods are typically trained with simulated or auxiliary data samples by optimising some classification or regression surrogate objective. The learned feature representations are then used to build a sample-based statistical model to perform inference (e.g. interval estimation or hypothesis testing) over a set of parameters of interest. However, the effectiveness of the mentioned approach can be reduced by the presence of known uncertainties that cause differences between training and experimental data, included in the statistical model via nuisance parameters. This work presents an end-to-end algorithm, which leverages on existing deep learning technologies but directly aims to produce inference-optimal sample-summary statistics. By including the statistical model and a differentiable approximation of ...

  8. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation-(a) benchmark value, (b) benchmark estimate, and (c) benchmark effect-are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. Dynamics of EEG functional connectivity during statistical learning.

    Science.gov (United States)

    Tóth, Brigitta; Janacsek, Karolina; Takács, Ádám; Kóbor, Andrea; Zavecz, Zsófia; Nemeth, Dezso

    2017-10-01

    Statistical learning is a fundamental mechanism of the brain, which extracts and represents regularities of our environment. Statistical learning is crucial in predictive processing and in the acquisition of perceptual, motor, cognitive, and social skills. Although previous studies have revealed competitive neurocognitive processes underlying statistical learning, the neural communication of the related brain regions (functional connectivity, FC) has not yet been investigated. The present study aimed to fill this gap by investigating FC networks that promote statistical learning in humans. Young adults (N=28) performed a statistical learning task while 128-channel EEG was acquired. The task involved probabilistic sequences, which made it possible to measure incidental/implicit learning of conditional probabilities. Phase synchronization in seven frequency bands was used to quantify FC between cortical regions during the first, second, and third periods of the learning task, respectively. Here we show that statistical learning is negatively correlated with FC of the anterior brain regions in slow (theta) and fast (beta) oscillations. These negative correlations increased as the learning progressed. Our findings provide evidence that dynamic antagonist brain networks serve as a hallmark of statistical learning. Copyright © 2017 Elsevier Inc. All rights reserved.
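Phase synchronization between channel pairs is commonly quantified with the phase-locking value (PLV). The abstract does not specify the exact estimator, so the following is a generic sketch of that standard measure:

```python
import cmath
import math

def phase_locking_value(phases_a, phases_b):
    """Phase-locking value between two channels: magnitude of the mean
    unit phasor of the instantaneous phase differences.
    1.0 = perfectly consistent phase relation; near 0 = none."""
    s = sum(cmath.exp(1j * (pa - pb)) for pa, pb in zip(phases_a, phases_b))
    return abs(s) / len(phases_a)

# Two channels with a constant phase lag are perfectly locked:
t = [k * 0.01 for k in range(500)]
a = [2 * math.pi * 10 * x for x in t]   # 10 Hz phase ramp (radians)
b = [p + 0.7 for p in a]                # same ramp, constant 0.7 rad lag
print(round(phase_locking_value(a, b), 3))  # 1.0
```

In practice the instantaneous phases would come from band-pass filtered EEG via the Hilbert transform or a wavelet, computed per frequency band and per learning period.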

  10. Statistical Damage Detection of Civil Engineering Structures using ARMAV Models

    DEFF Research Database (Denmark)

    Andersen, P.; Kirkegaard, Poul Henning

    In this paper a statistically based damage detection of a lattice steel mast is performed. By estimation of the modal parameters and their uncertainties it is possible to detect whether some of the modal parameters have changed with a statistical significance. The estimation of the uncertainties ...

  11. Clinicopathological significance of c-MYC in esophageal squamous cell carcinoma.

    Science.gov (United States)

    Lian, Yu; Niu, Xiangdong; Cai, Hui; Yang, Xiaojun; Ma, Haizhong; Ma, Shixun; Zhang, Yupeng; Chen, Yifeng

    2017-07-01

    Esophageal squamous cell carcinoma is one of the most common malignant tumors. The oncogene c-MYC is thought to be important in the initiation, promotion, and therapy resistance of cancer. In this study, we aim to investigate the clinicopathologic roles of c-MYC in esophageal squamous cell carcinoma tissue. This study is aimed at discovering and analyzing c-MYC expression in a series of human esophageal tissues. A total of 95 esophageal squamous cell carcinoma samples were analyzed by western blotting and immunohistochemistry techniques. Then, the correlation of c-MYC expression with clinicopathological features of esophageal squamous cell carcinoma patients was statistically analyzed. In most esophageal squamous cell carcinoma cases, c-MYC expression was positive in tumor tissues. The positive rate of c-MYC expression in tumor tissues was 61.05%, markedly higher than in the adjacent normal tissues (8.42%, 8/92) and atypical hyperplasia tissues (19.75%, 16/95). There was a statistically significant difference among adjacent normal tissues, atypical hyperplasia tissues, and tumor tissues. Overexpression of c-MYC was detected in 61.05% (58/95) of esophageal squamous cell carcinomas, which was significantly correlated with the degree of differentiation (p = 0.004). The positive rate of c-MYC expression was 40.0% in well-differentiated esophageal tissues, a statistically significant difference (p = 0.004). The positive rate of c-MYC was 41.5% in T1 + T2 esophageal tissues and 74.1% in T3 + T4 esophageal tissues, a statistically significant difference (p = 0.001). The positive rate of c-MYC was 45.0% in I + II esophageal tissues and 72.2% in III + IV esophageal tissues, a statistically significant difference (p = 0.011). The c-MYC expression strongly correlated with clinical staging (p = 0.011), differentiation degree (p = 0.004), lymph node metastasis (p = 0.003), and invasion depth (p = 0.001) of patients with esophageal squamous cell carcinoma. The c-MYC was

  12. Endogenous and exogenous testosterone and prostate cancer: decreased-, increased- or null-risk?

    Science.gov (United States)

    Advani, Shailesh; Tsilidis, Konstantinos K.; Wang, Run; Canfield, Steven

    2017-01-01

    For more than 70 years, the contention that high levels of testosterone or that the use of testosterone therapy (TTh) increases the development and progression of prostate cancer (PCa) has been widely accepted and practiced. Yet, the increasing and emerging evidence on testosterone research seems to challenge that contention. To review literature on the associations of endogenous and exogenous testosterone with decreased-, increased-, or null-risk of PCa, and to further evaluate only those studies that reported magnitude of associations from multivariable modeling as it minimizes confounding effects. We conducted a literature search to identify studies that investigated the association of endogenous total testosterone [continuous (per 1 unit increment and 5 nmol/L increment) and categorical (high vs. low)] and use of TTh with PCa events [1990–2016]. Emphasis was given to studies/analyses that reported magnitude of associations [odds ratio (OR), relative risk (RR) and hazard ratios (HRs)] from multivariable analyses to determine risk of PCa and their statistical significance. Most identified studies/analyses included observational and randomized placebo-controlled trials. This review was organized in three parts: (I) association of endogenous total testosterone (per 1 unit increment and 5 nmol/L increment) with PCa; (II) relationship of endogenous total testosterone (categorical high vs. low) with PCa; and (III) association of use of TTh with PCa in meta-analyses of randomized placebo-controlled trials. The first part included 31 observational studies [20 prospective (per 5 nmol/L increment) and 11 prospective and retrospective cohort studies (per 1 unit increment)]. None of the 20 prospective studies found a significant association between total testosterone (5 nmol/L increment) and increased- or decreased-risk of PCa. Two out of the 11 studies/analyses showed a significant decreased-risk of PCa for total testosterone per 1 unit increment, but also two other

  13. Optimization of significant insolation distribution parameters - A new approach towards BIPV system design

    Energy Technology Data Exchange (ETDEWEB)

    Paul, D. [SSBB and Senior Member-ASQ, Kolkata (India); Mandal, S.N. [Kalyani Govt Engg College, Kalyani (India); Mukherjee, D.; Bhadra Chaudhuri, S.R. [Dept of E. and T. C. Engg, B.E.S.U., Shibpur (India)

    2010-10-15

    System efficiency and payback time are yet to attain a commercially viable level for solar photovoltaic energy projects. Despite huge developments in the prediction of solar radiation data, there is a gap in the extraction of pertinent information from such data. Hence the available data cannot be effectively utilized for engineering applications, which acts as a barrier for the emerging technology. For making accurate engineering and financial calculations regarding any solar energy project, it is crucial to identify and optimize the most significant statistic(s) representing the insolation available to the photovoltaic setup at the installation site. The Quality Function Deployment (QFD) technique has been applied to identify the statistic(s) that are of high significance from a project designer's point of view. A MATLAB™ program has been used to build the annual frequency distribution of hourly insolation over any module plane at a given location. Descriptive statistical analysis of such distributions is done through MINITAB™. For a Building Integrated Photovoltaic (BIPV) installation, similar statistical analysis has been carried out for the composite frequency distribution, which is formed by weighted summation of the insolation distributions for the different module planes used in the installation. The most influential statistic(s) of the composite distribution have been optimized through Artificial Neural Network computation. This approach is expected to open up a new horizon in BIPV system design. (author)
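The weighted summation that forms the composite insolation frequency distribution can be sketched directly. The bin values, weights, and function name below are illustrative assumptions; the abstract only specifies that per-plane distributions are combined by weighted summation:

```python
def composite_distribution(plane_dists, weights):
    """Combine per-module-plane insolation frequency distributions into a
    composite by weighted summation.

    Each distribution maps an insolation bin (e.g. W/m^2) to a relative
    frequency; the weights (e.g. each plane's share of installed module
    area) should sum to 1 so the composite stays a valid distribution."""
    composite = {}
    for dist, w in zip(plane_dists, weights):
        for insolation_bin, freq in dist.items():
            composite[insolation_bin] = composite.get(insolation_bin, 0.0) + w * freq
    return composite

# Two hypothetical planes: a south-facing facade and a roof
facade = {200: 0.5, 400: 0.3, 600: 0.2}
roof   = {200: 0.2, 400: 0.3, 600: 0.5}
print(composite_distribution([facade, roof], [0.4, 0.6]))
```

Descriptive statistics (mean, spread, skew) of the resulting composite are then the candidate inputs for the optimization step described in the abstract.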

  14. PRIS-STATISTICS: Power Reactor Information System Statistical Reports. User's Manual

    International Nuclear Information System (INIS)

    2013-01-01

    The IAEA developed the Power Reactor Information System (PRIS)-Statistics application to assist PRIS end users with generating statistical reports from PRIS data. Statistical reports provide an overview of the status, specification and performance results of every nuclear power reactor in the world. This user's manual was prepared to facilitate the use of the PRIS-Statistics application and to provide guidelines and detailed information for each report in the application. Statistical reports support analyses of nuclear power development and strategies, and the evaluation of nuclear power plant performance. The PRIS database can be used for comprehensive trend analyses and benchmarking against best performers and industrial standards.

  15. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics

    Science.gov (United States)

    Paechter, Manuela; Macher, Daniel; Martskvishvili, Khatuna; Wimmer, Sigrid; Papousek, Ilona

    2017-01-01

    equation model and, therefore, contributed indirectly and negatively to performance. Furthermore, it had a direct negative impact on performance (probably via increased tension and worry in the exam). The results of the study speak for shared but also unique components of statistics anxiety and mathematics anxiety. They are also important for instruction and give recommendations to learners as well as to instructors. PMID:28790938

  17. Mathematics Anxiety and Statistics Anxiety. Shared but Also Unshared Components and Antagonistic Contributions to Performance in Statistics

    Directory of Open Access Journals (Sweden)

    Manuela Paechter

    2017-07-01

    the structural equation model and, therefore, contributed indirectly and negatively to performance. Furthermore, it had a direct negative impact on performance (probably via increased tension and worry in the exam). The results of the study speak for shared but also unique components of statistics anxiety and mathematics anxiety. They are also important for instruction and give recommendations to learners as well as to instructors.

  18. Statistical physics

    CERN Document Server

    Sadovskii, Michael V

    2012-01-01

    This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics, all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity, and the modern theory of critical phenomena. Beyond that, attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.

  19. Application of Bioorganic Fertilizer Significantly Increased Apple Yields and Shaped Bacterial Community Structure in Orchard Soil.

    Science.gov (United States)

    Wang, Lei; Li, Jing; Yang, Fang; E, Yaoyao; Raza, Waseem; Huang, Qiwei; Shen, Qirong

    2017-02-01

    and Rhodospirillaceae, were found to be significantly increased by the BOF addition; the genus Lysobacter may identify members of this group that are effective in biological control-based plant disease management, and members of the family Rhodospirillaceae play an important role in fixing molecular nitrogen. These results strengthen the understanding of responses to the BOF and of possible interactions within bacterial communities in soil that can be associated with disease suppression and the accumulation of carbon and nitrogen. The increase of apple yields after the application of BOF might be attributed to the fact that the application of BOF increased SOM and soil total nitrogen, and changed the bacterial community by enriching Rhodospirillaceae, Alphaproteobacteria, and Proteobacteria.

  20. Stupid statistics!

    Science.gov (United States)

    Tellinghuisen, Joel

    2008-01-01

    The method of least squares is probably the most powerful data analysis tool available to scientists. Toward a fuller appreciation of that power, this work begins with an elementary review of statistics fundamentals, and then progressively increases in sophistication as the coverage is extended to the theory and practice of linear and nonlinear least squares. The results are illustrated in application to data analysis problems important in the life sciences. The review of fundamentals includes the role of sampling and its connection to probability distributions, the Central Limit Theorem, and the importance of finite variance. Linear least squares is presented using matrix notation, and the significance of the key probability distributions (Gaussian, chi-square, and t) is illustrated with Monte Carlo calculations. The meaning of correlation is discussed, including its role in the propagation of error. When the data themselves are correlated, special methods are needed for the fitting, as they are also when fitting with constraints. Nonlinear fitting gives rise to nonnormal parameter distributions, but the 10% Rule of Thumb suggests that such problems will be insignificant when the parameter is sufficiently well determined. Illustrations include calibration with linear and nonlinear response functions, the dangers inherent in fitting inverted data (e.g., the Lineweaver-Burk equation), an analysis of the reliability of the van't Hoff analysis, the problem of correlated data in the Guggenheim method, and the optimization of isothermal titration calorimetry procedures using the variance-covariance matrix for experiment design. The work concludes with illustrations on assessing and presenting results.
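The link the abstract draws between least-squares fitting and sampling distributions can be made concrete with a short sketch: the normal-equation solution β = (XᵀX)⁻¹Xᵀy written out for a straight line, plus a Monte Carlo look at the slope's sampling distribution. The model, noise level, and sample sizes below are arbitrary choices for illustration, not taken from the work itself:

```python
import random
import statistics

random.seed(1)

def ols_line(x, y):
    """Straight-line least squares: beta = (X^T X)^{-1} X^T y, written out
    explicitly for the two-parameter model y = a + b*x."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    det = n * sxx - sx * sx
    a = (sxx * sy - sx * sxy) / det
    b = (n * sxy - sx * sy) / det
    return a, b

# Monte Carlo view of the sampling distribution of the slope estimate:
# refit many noisy realizations of the same underlying line.
true_a, true_b, sigma = 1.0, 2.0, 0.5
x = [i / 10 for i in range(11)]
slopes = []
for _ in range(2000):
    y = [true_a + true_b * xi + random.gauss(0.0, sigma) for xi in x]
    slopes.append(ols_line(x, y)[1])

mean_slope = statistics.mean(slopes)  # close to the true slope 2.0
```

The scatter of `slopes` around 2.0 is exactly the (near-Gaussian) parameter distribution the review illustrates with Monte Carlo calculations.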

  1. Behavioral investment strategy matters: a statistical arbitrage approach

    OpenAIRE

    Sun, David; Tsai, Shih-Chuan; Wang, Wei

    2011-01-01

    In this study, we employ a statistical arbitrage approach to demonstrate that momentum investment strategies tend to work better in periods longer than six months, a result different from findings in the past literature. Compared with standard parametric tests, the statistical arbitrage method shows more clearly that momentum strategies work only in longer formation and holding periods. Also, they yield positive significant returns in an up market, but negative yet insignificant returns in a down...

  2. Comparing identified and statistically significant lipids and polar metabolites in 15-year old serum and dried blood spot samples for longitudinal studies: Comparing lipids and metabolites in serum and DBS samples

    Energy Technology Data Exchange (ETDEWEB)

    Kyle, Jennifer E. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Casey, Cameron P. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Stratton, Kelly G. [National Security Directorate, Pacific Northwest National Laboratory, Richland WA USA; Zink, Erika M. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Kim, Young-Mo [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Zheng, Xueyun [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Monroe, Matthew E. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Weitz, Karl K. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Bloodsworth, Kent J. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Orton, Daniel J. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Ibrahim, Yehia M. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Moore, Ronald J. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Lee, Christine G. [Department of Medicine, Bone and Mineral Unit, Oregon Health and Science University, Portland OR USA; Research Service, Portland Veterans Affairs Medical Center, Portland OR USA; Pedersen, Catherine [Department of Medicine, Bone and Mineral Unit, Oregon Health and Science University, Portland OR USA; Orwoll, Eric [Department of Medicine, Bone and Mineral Unit, Oregon Health and Science University, Portland OR USA; Smith, Richard D. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Burnum-Johnson, Kristin E. 
[Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Baker, Erin S. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA

    2017-02-05

    The use of dried blood spots (DBS) has many advantages over traditional plasma and serum samples, such as the smaller blood volume required, storage at room temperature, and the ability to sample in remote locations. However, understanding the robustness of different analytes in DBS samples is essential, especially in older samples collected for longitudinal studies. Here we analyzed DBS samples collected in 2000-2001 and stored at room temperature and compared them to matched serum samples stored at -80°C to determine if they could be effectively used as specific time points in a longitudinal study following metabolic disease. Four hundred small molecules were identified in both the serum and DBS samples using gas chromatography-mass spectrometry (GC-MS), liquid chromatography-MS (LC-MS) and LC-ion mobility spectrometry-MS (LC-IMS-MS). The identified polar metabolites overlapped well between the sample types, though only one statistically significant polar metabolite in a case-control study was conserved, indicating that degradation occurs in the DBS samples and affects quantitation. Differences in the lipid identifications indicated that some oxidation occurs in the DBS samples. However, thirty-six statistically significant lipids correlated in both sample types, indicating that lipid quantitation was more stable across the sample types.

  3. Multiple Sclerosis Increases Fracture Risk: A Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Guixian Dong

    2015-01-01

    Full Text Available Purpose. The association between multiple sclerosis (MS) and fracture risk has been reported, but the results of previous studies remain controversial and ambiguous. To assess the association between MS and fracture risk, a meta-analysis was performed. Method. Based on comprehensive searches of PubMed, Embase, and Web of Science, we identified outcome data from all articles estimating the association between MS and fracture risk. The pooled risk ratios (RRs) with 95% confidence intervals (CIs) were calculated. Results. A significant association between MS and fracture risk was found. This result remained statistically significant when the adjusted RRs were combined. Subgroup analysis stratified by the site of fracture suggested significant associations between MS and tibia, femur, hip, pelvis, vertebrae, and humerus fracture risk. In the subgroup analysis by gender, female MS patients had increased fracture risk. When stratified by history of drug use, use of antidepressants, hypnotics/anxiolytics, anticonvulsants, and glucocorticoids increased the risk of fracture in MS patients. Conclusions. This meta-analysis demonstrated that MS was significantly associated with fracture risk.
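The pooling step behind such a meta-analysis can be sketched as fixed-effect, inverse-variance weighting on the log scale; the study-level estimates below are hypothetical stand-ins, not the values combined in this meta-analysis:

```python
import math

def pooled_rr(studies, z=1.96):
    """Fixed-effect inverse-variance pooling of risk ratios.
    Each study enters as (RR, lower 95% limit, upper 95% limit);
    the weight is 1/variance of log RR, recovered from the CI width."""
    num = den = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * z)
        w = 1.0 / se**2
        num += w * math.log(rr)
        den += w
    log_rr = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(log_rr),
            math.exp(log_rr - z * se_pooled),
            math.exp(log_rr + z * se_pooled))

# Hypothetical study-level (RR, 95% CI) estimates.
rr, lo, hi = pooled_rr([(1.8, 1.2, 2.7), (1.4, 1.0, 2.0), (2.1, 1.3, 3.4)])
```

A pooled CI that stays above 1.0, as here, is what "significantly associated with fracture risk" means operationally; a random-effects model would additionally widen the interval for between-study heterogeneity.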

  4. Bose and his statistics

    International Nuclear Information System (INIS)

    Venkataraman, G.

    1992-01-01

    Treating the radiation gas as a classical gas, Einstein derived Planck's law of radiation by considering the dynamic equilibrium between atoms and radiation. Dissatisfied with this treatment, S.N. Bose derived Planck's law in another, original way. He treated the problem in full generality: he counted how many cells were available for the photon gas in phase space and distributed the photons into these cells. In this manner of distribution, there were three radically new ideas: the indistinguishability of particles, the spin of the photon (with only two possible orientations) and the nonconservation of photon number. This gave rise to a new discipline of quantum statistical mechanics. The physics underlying Bose's discovery, its significance and its role in the development of the concepts of the ideal gas, the spin-statistics theorem and spin particles are described. The book has been written in a simple and direct language, in an informal style, aiming to stimulate the curiosity of the reader. (M.G.B.)

  5. Statistical analysis of cone penetration resistance of railway ballast

    Directory of Open Access Journals (Sweden)

    Saussine Gilles

    2017-01-01

    Full Text Available Dynamic penetrometer tests are widely used in geotechnical studies for soil characterization, but their implementation tends to be difficult. The light penetrometer test is able to give information about a cone resistance useful in the field of geotechnics and recently validated as a parameter for the case of coarse granular materials. In order to characterize directly the railway ballast on track and the sublayers of ballast, a huge test campaign has been carried out for more than 5 years in order to build up a database composed of 19,000 penetration tests, including endoscopic video records, on the French railway network. The main objective of this work is to give a first statistical analysis of the cone resistance in the coarse granular layer which represents a major component of the railway track: the ballast. The results show that the cone resistance (qd) increases with depth and presents strong variations corresponding to layers of different natures identified using the endoscopic records. In the first zone, corresponding to the top 30 cm, qd increases linearly with a slope of around 1 MPa/cm for fresh ballast and fouled ballast. In the second zone, below 30 cm deep, qd increases more slowly with a slope of around 0.3 MPa/cm and decreases below 50 cm. These results show that there is no clear difference between fresh and fouled ballast. Hence, the qd sensitivity is important and increases with depth. The qd distribution for a set of tests does not follow a normal distribution. In the upper 30 cm layer of ballast of the track, statistical treatment of the data shows that train load and speed do not have any significant impact on the qd distribution for clean ballast; they increase by 50% the average value of qd for fouled ballast and increase the thickness as well. Below the 30 cm upper layer, train load and speed have a clear impact on the qd distribution.

  6. Statistical trend analysis methodology for rare failures in changing technical systems

    International Nuclear Information System (INIS)

    Ott, K.O.; Hoffmann, H.J.

    1983-07-01

    A methodology for a statistical trend analysis (STA) of failure rates is presented. It applies primarily to relatively rare events in changing technologies or components. The formulation is more general and the assumptions are less restrictive than in a previously published version. Relations between the statistical analysis and probabilistic risk assessment (PRA) are discussed in terms of the categorization of decisions for action following particular failure events. The significance of tentatively identified trends is explored. In addition to statistical tests for trend significance, a combination of STA and PRA results quantifying the trend complement is proposed. The STA approach is compared with other concepts for trend characterization. (orig.)

  7. Statistical comparison of leaching behavior of incineration bottom ash using seawater and deionized water: Significant findings based on several leaching methods.

    Science.gov (United States)

    Yin, Ke; Dou, Xiaomin; Ren, Fei; Chan, Wei-Ping; Chang, Victor Wei-Chung

    2018-02-15

    Bottom ashes generated from municipal solid waste incineration have gained increasing popularity as alternative construction materials; however, they contain elevated heavy metals, posing a challenge to their free usage. Different leaching methods have been developed to quantify the leaching potential of incineration bottom ashes and thereby guide their environmentally friendly application. Yet IBA applications are diverse and the in situ environment is always complicated, which challenges regulation. In this study, leaching tests were conducted using batch and column leaching methods with seawater, as opposed to deionized water, to unveil the metal leaching potential of IBA subjected to a salty environment, which is commonly encountered when using IBA in land reclamation yet is not well understood. Statistical analysis for the different leaching methods suggested disparate performance between seawater and deionized water, primarily ascribed to ionic strength. Impacts of the leachant are metal-specific, depend on the leaching method, and are a function of the intrinsic characteristics of incineration bottom ashes. Leaching performances were further compared from additional perspectives, e.g., leaching approach and liquid-to-solid ratio, indicating sophisticated leaching potentials dominated by the combined geochemistry. It is necessary to develop application-oriented leaching methods with corresponding leaching criteria to preclude discrimination between different applications, e.g., terrestrial applications vs. land reclamation. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Targeting change: Assessing a faculty learning community focused on increasing statistics content in life science curricula.

    Science.gov (United States)

    Parker, Loran Carleton; Gleichsner, Alyssa M; Adedokun, Omolola A; Forney, James

    2016-11-12

    Transformation of research in all biological fields necessitates the design, analysis, and interpretation of large data sets. Preparing students with the requisite skills in experimental design, statistical analysis and interpretation, and mathematical reasoning will require both curricular reform and faculty who are willing and able to integrate mathematical and statistical concepts into their life science courses. A new Faculty Learning Community (FLC) was constituted each year for four years to assist in the transformation of the life sciences curriculum and faculty at a large, Midwestern research university. Participants were interviewed after participation and surveyed before and after participation to assess the impact of the FLC on their attitudes toward teaching, perceived pedagogical skills, and planned teaching practice. Overall, the FLC had a meaningful positive impact on participants' attitudes toward teaching, knowledge about teaching, and perceived pedagogical skills. Interestingly, confidence for viewing the classroom as a site for research about teaching declined. Implications for the creation and development of FLCs for science faculty are discussed. © 2016 by The International Union of Biochemistry and Molecular Biology, 44(6):517-525, 2016.

  9. Statistical distribution for generalized ideal gas of fractional-statistics particles

    International Nuclear Information System (INIS)

    Wu, Y.

    1994-01-01

    We derive the occupation-number distribution in a generalized ideal gas of particles obeying fractional statistics, including mutual statistics, by adopting a state-counting definition. When there is no mutual statistics, the statistical distribution interpolates between bosons and fermions, and respects a fractional exclusion principle (except for bosons). Anyons in a strong magnetic field at low temperatures constitute such a physical system. Applications to the thermodynamic properties of quasiparticle excitations in the Laughlin quantum Hall fluid are discussed
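For the no-mutual-statistics case described above, the interpolating occupation-number distribution can be evaluated numerically. A sketch assuming the standard form of Wu's result, n = 1/(w + g) with w solving w^g (1+w)^(1-g) = e^x and x = (ε - μ)/kT, so that g = 0 recovers Bose and g = 1 Fermi statistics:

```python
import math

def occupation(x, g):
    """Occupation number for fractional exclusion statistics:
    n = 1/(w + g), where w solves w**g * (1 + w)**(1 - g) = exp(x)
    and x = (eps - mu)/kT. For g = 0 (bosons) a solution requires x > 0."""
    target = math.exp(x)
    lo_w, hi_w = 1e-15, max(10.0, 10.0 * target)
    # Bisection on the monotone function f(w) = w^g (1+w)^(1-g) - target.
    for _ in range(200):
        mid = 0.5 * (lo_w + hi_w)
        if mid**g * (1.0 + mid)**(1.0 - g) < target:
            lo_w = mid
        else:
            hi_w = mid
    w = 0.5 * (lo_w + hi_w)
    return 1.0 / (w + g)

x = 1.0
bose   = occupation(x, 0.0)   # reduces to 1/(e^x - 1)
fermi  = occupation(x, 1.0)   # reduces to 1/(e^x + 1)
semion = occupation(x, 0.5)   # lies between the two
```

The g = 1/2 ("semion") value sitting strictly between the Bose and Fermi occupations illustrates the interpolation, and n ≤ 1/g for g > 0 expresses the fractional exclusion principle mentioned in the abstract.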

  10. Pre-service primary school teachers’ knowledge of informal statistical inference

    NARCIS (Netherlands)

    de Vetten, Arjen; Schoonenboom, Judith; Keijzer, Ronald; van Oers, Bert

    2018-01-01

    The ability to reason inferentially is increasingly important in today’s society. It is hypothesized here that engaging primary school students in informal statistical reasoning (ISI), defined as making generalizations without the use of formal statistical tests, will help them acquire the

  11. Childhood Cancer Statistics

    Science.gov (United States)

    Childhood Cancer Statistics – Graphs and Infographics: Number of Diagnoses, Incidence Rates ...

  12. Statistical Power in Longitudinal Network Studies

    NARCIS (Netherlands)

    Stadtfeld, Christoph; Snijders, Tom A. B.; Steglich, Christian; van Duijn, Marijtje

    2018-01-01

    Longitudinal social network studies may easily suffer from a lack of statistical power. This is the case in particular for studies that simultaneously investigate change of network ties and change of nodal attributes. Such selection and influence studies have become increasingly popular due to the

  13. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  14. Transabdominal cerclage: the significance of dual pathology and increased preterm delivery.

    Science.gov (United States)

    Farquharson, Roy G; Topping, Joanne; Quenby, Siobhan M

    2005-10-01

    Transabdominal cerclage is a recognised treatment for cervical weakness with a history of recurrent mid-trimester loss and a failed elective vaginal suture. The emergence of dual pathology, such as antiphospholipid syndrome and bacterial vaginosis, is associated with an increased risk of preterm delivery (RR 2.34, 95% CI 1.15-5.8). The first 40 cases are described where strict adherence to an investigation protocol and consistent treatment plan has been implemented.

  15. Critical analysis of adsorption data statistically

    Science.gov (United States)

    Kaushal, Achla; Singh, S. K.

    2017-10-01

    Experimental data can be presented, computed, and critically analysed in different ways using statistics. A variety of statistical tests are used to make decisions about the significance and validity of the experimental data. In the present study, adsorption was carried out to remove zinc ions from contaminated aqueous solution using mango leaf powder. The experimental data were analysed statistically by hypothesis testing, applying the t test, paired t test and chi-square test to (a) test the optimum value of the process pH, (b) verify the success of the experiment and (c) study the effect of adsorbent dose on zinc ion removal from aqueous solutions. Comparison of the calculated and tabulated values of t and χ2 showed the results in favour of the data collected from the experiment, and this has been shown on probability charts. The K value for the Langmuir isotherm was 0.8582 and the m value obtained for the Freundlich adsorption isotherm was 0.725, both for mango leaf powder.
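The calculated-versus-tabulated t comparison used in the study can be sketched as follows; the removal percentages, target value, and critical value are invented for illustration and are not the paper's data:

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """Calculated t statistic and degrees of freedom for
    H0: population mean == mu0."""
    n = len(sample)
    t = (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / math.sqrt(n))
    return t, n - 1

# Hypothetical % zinc-removal replicates at the candidate optimum pH,
# tested against a target removal of 85%.
removal = [88.1, 86.4, 89.0, 87.2, 88.5, 86.9]
t_calc, df = one_sample_t(removal, 85.0)
t_tab = 2.571  # two-tailed critical t, alpha = 0.05, df = 5
significant = abs(t_calc) > t_tab
```

If the calculated t exceeds the tabulated value, H0 is rejected at the chosen significance level, which is exactly the decision logic the abstract describes for the pH, experiment-success, and adsorbent-dose tests.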

  16. Can Money Buy Happiness? A Statistical Analysis of Predictors for User Satisfaction

    Science.gov (United States)

    Hunter, Ben; Perret, Robert

    2011-01-01

    2007 data from LibQUAL+™ and the ACRL Library Trends and Statistics database were analyzed to determine if there is a statistically significant correlation between library expenditures and usage statistics and library patron satisfaction across 73 universities. The results show that users of larger, better funded libraries have higher…

  17. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2010-01-01

    Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference.-Eugenia Stoimenova, Journal of Applied Statistics, June 2012… one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students.-Biometrics, 67, September 2011This excellently presente

  18. Statistics for Research

    CERN Document Server

    Dowdy, Shirley; Chilko, Daniel

    2011-01-01

    Praise for the Second Edition "Statistics for Research has other fine qualities besides superior organization. The examples and the statistical methods are laid out with unusual clarity by the simple device of using special formats for each. The book was written with great care and is extremely user-friendly."-The UMAP Journal Although the goals and procedures of statistical research have changed little since the Second Edition of Statistics for Research was published, the almost universal availability of personal computers and statistical computing application packages have made it possible f

  19. Head First Statistics

    CERN Document Server

    Griffiths, Dawn

    2009-01-01

    Wouldn't it be great if there were a statistics book that made histograms, probability distributions, and chi square analysis more enjoyable than going to the dentist? Head First Statistics brings this typically dry subject to life, teaching you everything you want and need to know about statistics through engaging, interactive, and thought-provoking material, full of puzzles, stories, quizzes, visual aids, and real-world examples. Whether you're a student, a professional, or just curious about statistical analysis, Head First's brain-friendly formula helps you get a firm grasp of statistics

  20. Treatment timing for an orthopedic approach to patients with increased vertical dimension.

    Science.gov (United States)

    Baccetti, Tiziano; Franchi, Lorenzo; Schulz, Scott O; McNamara, James A

    2008-01-01

    The aim of this study was to investigate the role of treatment timing on the effectiveness of vertical-pull chincup (V-PCC) therapy in conjunction with a bonded rapid maxillary expander (RME) in growing subjects with mild-to-severe hyperdivergent facial patterns. The records of 39 subjects treated with a bonded RME combined with a V-PCC were compared with 29 untreated subjects with similar vertical skeletal disharmonies. Lateral cephalograms were analyzed before (T1) and after treatment or observation (T2). Both the treated and the untreated samples were divided into prepubertal and pubertal groups on the basis of cervical vertebral maturation (prepubertal treated group, 21 subjects; pubertal treated group, 18 subjects; prepubertal control group, 15 subjects; pubertal control group, 14 subjects). Mean change differences from T2 to T1 were compared in the 2 prepubertal and the 2 pubertal groups with independent-sample t tests. No statistically significant differences between the 2 prepubertal groups were found for any cephalometric skeletal measures from T1 to T2. When compared with the untreated pubertal sample, the group treated with the RME and V-PCC at puberty showed a statistically significant reduction in the inclination of the mandibular plane to the Frankfort horizontal (-2.2 degrees), a statistically significant reduction in the inclination of the condylar axis to the mandibular plane (-2.2 degrees), and statistically significant supplementary growth of the mandibular ramus (1.7 mm). Treatment of increased vertical dimension with the RME and V-PCC protocol appears to produce better results during the pubertal growth spurt than before puberty, although the absolute amount of correction in the vertical skeletal parameters is limited.

  1. Beyond δ : Tailoring marked statistics to reveal modified gravity

    Science.gov (United States)

    Valogiannis, Georgios; Bean, Rachel

    2018-01-01

    Models that seek to explain cosmic acceleration through modifications to general relativity (GR) evade stringent Solar System constraints through a restoring, screening mechanism. Down-weighting the high-density, screened regions in favor of the low-density, unscreened ones offers the potential to enhance the amount of information carried in such modified gravity models. In this work, we assess the performance of a new "marked" transformation and perform a systematic comparison with the clipping and logarithmic transformations, in the context of ΛCDM and the symmetron and f(R) modified gravity models. Performance is measured in terms of the fractional boost in the Fisher information and the signal-to-noise ratio (SNR) for these models relative to the statistics derived from the standard density distribution. We find that all three statistics provide improved Fisher boosts over the basic density statistics. The model parameters for the marked and clipped transformations that best enhance signals and the Fisher boosts are determined. We also show that the mark is useful both as a Fourier- and real-space transformation; a marked correlation function also enhances the SNR relative to the standard correlation function, and can on mildly nonlinear scales show a significant difference between the ΛCDM and the modified gravity models. Our results demonstrate how a series of simple analytical transformations could dramatically increase the predicted information extracted on deviations from GR from large-scale surveys, and give the prospect of a much more feasible potential detection.
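The down-weighting of high-density, screened regions can be illustrated with a density-dependent mark of the kind used in marked-statistics studies; the functional form and parameter values below are an assumption for illustration and are not necessarily the paper's exact choice:

```python
def mark(delta, delta_s=0.25, p=2.0):
    """Hypothetical density-dependent mark that up-weights low-density
    (unscreened) regions and suppresses overdense (screened) ones:
    m(delta) = ((1 + delta_s) / (1 + delta_s + delta))**p.
    delta is the matter overdensity; delta_s and p are free parameters."""
    return ((1.0 + delta_s) / (1.0 + delta_s + delta)) ** p

# Overdense (screened) cells receive small marks, underdense cells large
# ones; the marked field m(delta)*(1 + delta) then feeds the marked power
# spectrum or correlation function.
weights = [mark(d) for d in (5.0, 0.0, -0.5)]
```

Tuning (delta_s, p) to maximize the Fisher boost is the optimization the abstract refers to for the marked transformation.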

  2. A Nineteenth Century Statistical Society that Abandoned Statistics

    NARCIS (Netherlands)

    Stamhuis, I.H.

    2007-01-01

    In 1857, a Statistical Society was founded in the Netherlands. Within this society, statistics was considered a systematic, quantitative, and qualitative description of society. In the course of time, the society attracted a wide and diverse membership, although the number of physicians on its rolls

  3. Statistical analysis of disruptions in JET

    International Nuclear Information System (INIS)

    De Vries, P.C.; Johnson, M.F.; Segui, I.

    2009-01-01

    The disruption rate (the percentage of discharges that disrupt) in JET was found to drop steadily over the years. Recent campaigns (2005-2007) show a yearly averaged disruption rate of only 6%, while from 1991 to 1995 it was often higher than 20%. Besides the disruption rate, the so-called disruptivity, or the likelihood of a disruption depending on the plasma parameters, has been determined. The disruptivity of plasmas was found to be significantly higher close to the three main operational boundaries for tokamaks: the low-q, high-density and β-limits. The frequency at which JET operated close to the density limit increased sixfold over the last decade; however, only a small reduction in disruptivity was found. Similarly, the disruptivity close to the low-q and β-limits was found to be unchanged. The most significant reduction in disruptivity was found far from the operational boundaries, leading to the conclusion that the improved disruption rate is due to a better technical capability of operating JET, rather than safer operation close to the physics limits. The statistics showed that a simple protection system was able to mitigate the forces of a large fraction of disruptions, although it has proved more difficult at present to ameliorate the heat flux.

  4. Statistics for economics

    CERN Document Server

    Naghshpour, Shahdad

    2012-01-01

    Statistics is the branch of mathematics that deals with real-life problems. As such, it is an essential tool for economists. Unfortunately, the way you and many other economists learn the concept of statistics is not compatible with the way economists think and learn. The problem is worsened by the use of mathematical jargon and complex derivations. Here's a book that proves none of this is necessary. All the examples and exercises in this book are constructed within the field of economics, thus eliminating the difficulty of learning statistics with examples from fields that have no relation to business, politics, or policy. Statistics is, in fact, not more difficult than economics. Anyone who can comprehend economics can understand and use statistics successfully within this field, including you! This book utilizes Microsoft Excel to obtain statistical results, as well as to perform additional necessary computations. Microsoft Excel is not the software of choice for performing sophisticated statistical analy...

  5. Active Learning with Rationales for Identifying Operationally Significant Anomalies in Aviation

    Science.gov (United States)

    Sharma, Manali; Das, Kamalika; Bilgic, Mustafa; Matthews, Bryan; Nielsen, David Lynn; Oza, Nikunj C.

    2016-01-01

    A major focus of the commercial aviation community is discovery of unknown safety events in flight operations data. Data-driven unsupervised anomaly detection methods are better at capturing unknown safety events compared to rule-based methods which only look for known violations. However, not all statistical anomalies that are discovered by these unsupervised anomaly detection methods are operationally significant (e.g., represent a safety concern). Subject Matter Experts (SMEs) have to spend significant time reviewing these statistical anomalies individually to identify a few operationally significant ones. In this paper we propose an active learning algorithm that incorporates SME feedback in the form of rationales to build a classifier that can distinguish between uninteresting and operationally significant anomalies. Experimental evaluation on real aviation data shows that our approach improves detection of operationally significant events by as much as 75% compared to the state-of-the-art. The learnt classifier also generalizes well to additional validation data sets.

  6. Prevalence of significant bacteriuria among symptomatic and ...

    African Journals Online (AJOL)

    Data were analyzed using the Statistical Package for Social Sciences (SPSS) version 16.0 (SPSS, Inc., Chicago, Ill). Results: A total of 100 consenting participants were recruited into the study. The mean age was 23.42 ± 8.31 years (range: 14-50 years). Only 9% (9/100) had significant bacteriuria, while 44.4% (4/9) ...

  7. Auditory Magnetoencephalographic Frequency-Tagged Responses Mirror the Ongoing Segmentation Processes Underlying Statistical Learning.

    Science.gov (United States)

    Farthouat, Juliane; Franco, Ana; Mary, Alison; Delpouve, Julie; Wens, Vincent; Op de Beeck, Marc; De Tiège, Xavier; Peigneux, Philippe

    2017-03-01

    Humans are highly sensitive to statistical regularities in their environment. This phenomenon, usually referred to as statistical learning, is most often assessed using post-learning behavioural measures that are limited by a lack of sensitivity and do not monitor the temporal dynamics of learning. In the present study, we used magnetoencephalographic frequency-tagged responses to investigate the neural sources and temporal development of the ongoing brain activity that supports the detection of regularities embedded in auditory streams. Participants passively listened to statistical streams in which tones were grouped as triplets, and to random streams in which tones were randomly presented. Results show that during exposure to statistical (vs. random) streams, tritone frequency-related responses reflecting the learning of regularities embedded in the stream increased in the left supplementary motor area and left posterior superior temporal sulcus (pSTS), whereas tone frequency-related responses decreased in the right angular gyrus and right pSTS. Tritone frequency-related responses developed rapidly, reaching significance after 3 min of exposure. These results suggest that the incidental extraction of novel regularities is subtended by a gradual shift from rhythmic activity reflecting individual tone succession toward rhythmic activity synchronised with triplet presentation, and that these rhythmic processes are subtended by distinct neural sources.

  8. Understanding and forecasting polar stratospheric variability with statistical models

    Directory of Open Access Journals (Sweden)

    C. Blume

    2012-07-01

    The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies, representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely the multi-layer perceptron (MLP); and support vector regression (SVR). These methods model time series by incorporating all significant external factors simultaneously, including ENSO, the QBO, the solar cycle and volcanoes, and then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. The period from 2005 to 2011 can be hindcasted to a certain extent, with MLP performing significantly better than the remaining models. However, some variability remains that cannot be statistically hindcasted within the current framework, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict a winter 2011/12 with warm and weak vortex conditions. A vortex breakdown is predicted for late January or early February 2012.

  9. Statistical data analysis using SAS intermediate statistical methods

    CERN Document Server

    Marasinghe, Mervyn G

    2018-01-01

    The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for statistical analysis of data for advanced undergraduate and graduate students in statistics, data science, and disciplines involving analyzing data. The book begins with an introduction beyond the basics of SAS, illustrated with non-trivial, real-world, worked examples. It proceeds to SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, analysis of variance with random and mixed effects models, and then takes the discussion beyond regression and analysis of variance to conclude. Pedagogically, the authors introduce theory and methodological basis topic by topic, present a problem as an application, followed by a SAS analysis of the data provided and a discussion of results. The text focuses on applied statistical problems and methods. Key features include: end of chapter exercises, downloadable SAS code and data sets, and advanced material suitab...

  10. Neuroendocrine Tumor: Statistics

    Science.gov (United States)

    ... Approved by the Cancer.Net Editorial Board , 01/ ... the body. It is important to remember that statistics on the survival rates for people with a ...

  11. The significance test controversy revisited the fiducial Bayesian alternative

    CERN Document Server

    Lecoutre, Bruno

    2014-01-01

    The purpose of this book is not only to revisit the “significance test controversy,”but also to provide a conceptually sounder alternative. As such, it presents a Bayesian framework for a new approach to analyzing and interpreting experimental data. It also prepares students and researchers for reporting on experimental results. Normative aspects: The main views of statistical tests are revisited and the philosophies of Fisher, Neyman-Pearson and Jeffrey are discussed in detail. Descriptive aspects: The misuses of Null Hypothesis Significance Tests are reconsidered in light of Jeffreys’ Bayesian conceptions concerning the role of statistical inference in experimental investigations. Prescriptive aspects: The current effect size and confidence interval reporting practices are presented and seriously questioned. Methodological aspects are carefully discussed and fiducial Bayesian methods are proposed as a more suitable alternative for reporting on experimental results. In closing, basic routine procedures...

  12. Chains, Shops and Networks: Official Statistics and the Creation of Public Value

    Directory of Open Access Journals (Sweden)

    Asle Rolland

    2015-06-01

    The paper concerns official statistics, particularly as produced by the NSIs. Their contribution to society is considered well captured by the concept of public value. Official statistics create value for democracy as a foundation for evidence-based politics. Democracies and autocracies alike need statistics to govern the public; unique to democracy is the need for statistics to govern the governors, for which the independence of the NSI is crucial. Three ways of creating public value are the value chain, the value shop and the value network. The chain is appropriate for the production, the shop for the interpretation and the network for the dissemination of statistics. Automation reduces the need to rely on the value chain as the core business model. Moreover, automation increases the statistical output, which in turn increases the need for shop and network activities. Replacing the chain with the shop as the core model will elevate the NSIs from commodity producers to a processing industry.

  13. The SACE Review Panel's Final Report: Significant Flaws in the Analysis of Statistical Data

    Science.gov (United States)

    Gregory, Kelvin

    2006-01-01

    The South Australian Certificate of Education (SACE) is a credential and formal qualification within the Australian Qualifications Framework. A recent review of the SACE outlined a number of recommendations for significant changes to this certificate. These recommendations were the result of a process that began with the review panel…

  14. Significantly increased risk of carotid atherosclerosis with arsenic exposure and polymorphisms in arsenic metabolism genes

    International Nuclear Information System (INIS)

    Hsieh, Yi-Chen; Lien, Li-Ming; Chung, Wen-Ting; Hsieh, Fang-I; Hsieh, Pei-Fan; Wu, Meei-Maan; Tseng, Hung-Pin; Chiou, Hung-Yi; Chen, Chien-Jen

    2011-01-01

    Individual susceptibility to arsenic-induced carotid atherosclerosis might be associated with genetic variations in arsenic metabolism. The purpose of this study is to explore the interaction effect on the risk of carotid atherosclerosis between arsenic exposure and risk genotypes of purine nucleoside phosphorylase (PNP), arsenic (+3) methyltransferase (As3MT), and glutathione S-transferase omega 1 (GSTO1) and omega 2 (GSTO2). A community-based case-control study was conducted in northeastern Taiwan to investigate the arsenic metabolism-related genetic susceptibility to carotid atherosclerosis. In total, 863 subjects, who had been genotyped and for whom the severity of carotid atherosclerosis had been determined, were included in the present study. Individual well water was collected and the arsenic concentration determined using hydride generation combined with flame atomic absorption spectrometry. The results showed a significant dose-response trend (P=0.04) of carotid atherosclerosis risk with increasing arsenic concentration. No significant association was observed between the genetic polymorphisms PNP Gly51Ser, Pro57Pro, As3MT Met287Thr, GSTO1 Ala140Asp, and GSTO2 A-183G and the risk of developing carotid atherosclerosis. However, a significant interaction effect on carotid atherosclerosis risk was found for arsenic exposure (>50 μg/l) and the haplotypes of PNP (p=0.0115). A markedly elevated risk of carotid atherosclerosis was observed in subjects with arsenic exposure of >50 μg/l in drinking water who carried the PNP A-T haplotype and at least one of the As3MT risk polymorphism or GSTO risk haplotypes (OR, 6.43; 95% CI, 1.79-23.19). In conclusion, the arsenic metabolic genes PNP, As3MT, and GSTO may exacerbate the formation of atherosclerosis in individuals with high arsenic concentrations in well water (>50 μg/l). - Highlights: → Arsenic metabolic genes might be associated with carotid atherosclerosis. → A case

  15. Significantly increased risk of carotid atherosclerosis with arsenic exposure and polymorphisms in arsenic metabolism genes

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, Yi-Chen [School of Public Health, College of Public Health and Nutrition, Taipei Medical University, 250 Wusing St., Taipei 11031, Taiwan (China); Lien, Li-Ming [Graduate Institute of Clinical Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan (China); School of Medicine, Taipei Medical University, Taipei, Taiwan (China); Department of Neurology, Shin Kong WHS Memorial Hospital, Taipei, Taiwan (China); Chung, Wen-Ting [Department of Neurology, Wanfang Hospital, Taipei Medical University, Taipei, Taiwan (China); Graduate Institute of Clinical Medicine, Taipei Medical University, Taipei, Taiwan (China); Hsieh, Fang-I; Hsieh, Pei-Fan [School of Public Health, College of Public Health and Nutrition, Taipei Medical University, 250 Wusing St., Taipei 11031, Taiwan (China); Wu, Meei-Maan [School of Public Health, College of Public Health and Nutrition, Taipei Medical University, 250 Wusing St., Taipei 11031, Taiwan (China); Graduate Institute of Basic Medicine, College of Medicine, Fu-Jen Catholic University, Taipei, Taiwan (China); Tseng, Hung-Pin [Department of Neurology, Lotung Poh-Ai Hospital, I-Lan, Taiwan (China); Chiou, Hung-Yi, E-mail: hychiou@tmu.edu.tw [School of Public Health, College of Public Health and Nutrition, Taipei Medical University, 250 Wusing St., Taipei 11031, Taiwan (China); Chen, Chien-Jen [Genomics Research Center, Academia Sinica, Taipei, Taiwan (China)

    2011-08-15

    Individual susceptibility to arsenic-induced carotid atherosclerosis might be associated with genetic variations in arsenic metabolism. The purpose of this study is to explore the interaction effect on the risk of carotid atherosclerosis between arsenic exposure and risk genotypes of purine nucleoside phosphorylase (PNP), arsenic (+3) methyltransferase (As3MT), and glutathione S-transferase omega 1 (GSTO1) and omega 2 (GSTO2). A community-based case-control study was conducted in northeastern Taiwan to investigate the arsenic metabolism-related genetic susceptibility to carotid atherosclerosis. In total, 863 subjects, who had been genotyped and for whom the severity of carotid atherosclerosis had been determined, were included in the present study. Individual well water was collected and the arsenic concentration determined using hydride generation combined with flame atomic absorption spectrometry. The results showed a significant dose-response trend (P=0.04) of carotid atherosclerosis risk with increasing arsenic concentration. No significant association was observed between the genetic polymorphisms PNP Gly51Ser, Pro57Pro, As3MT Met287Thr, GSTO1 Ala140Asp, and GSTO2 A-183G and the risk of developing carotid atherosclerosis. However, a significant interaction effect on carotid atherosclerosis risk was found for arsenic exposure (>50 μg/l) and the haplotypes of PNP (p=0.0115). A markedly elevated risk of carotid atherosclerosis was observed in subjects with arsenic exposure of >50 μg/l in drinking water who carried the PNP A-T haplotype and at least one of the As3MT risk polymorphism or GSTO risk haplotypes (OR, 6.43; 95% CI, 1.79-23.19). In conclusion, the arsenic metabolic genes PNP, As3MT, and GSTO may exacerbate the formation of atherosclerosis in individuals with high arsenic concentrations in well water (>50 μg/l). - Highlights: → Arsenic metabolic genes might be associated with carotid atherosclerosis. →

  16. Statistical methods for forecasting

    CERN Document Server

    Abraham, Bovas

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "This book, it must be said, lives up to the words on its advertising cover: 'Bridging the gap between introductory, descriptive approaches and highly advanced theoretical treatises, it provides a practical, intermediate level discussion of a variety of forecasting tools, and explains how they relate to one another, both in theory and practice.' It does just that!" -Journal of the Royal Statistical Society. "A well-written work that deals with statistical methods and models that can be used to produce short-term forecasts, this book has wide-ranging applications. It could be used in the context of a study of regression, forecasting, and time series ...

  17. Usage Statistics

    Science.gov (United States)

    ... this page: https://medlineplus.gov/usestatistics.html MedlinePlus Statistics ... By Quarter ... Quarterly User Statistics Quarter Page Views Unique Visitors Oct-Dec-98 ...

  18. Second Language Experience Facilitates Statistical Learning of Novel Linguistic Materials.

    Science.gov (United States)

    Potter, Christine E; Wang, Tianlin; Saffran, Jenny R

    2017-04-01

    Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, 6 months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, whereas both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli. Copyright © 2016 Cognitive Science Society, Inc.

  19. Clinical significance of plasma lysophosphatidic acid levels in the differential diagnosis of ovarian cancer

    Directory of Open Access Journals (Sweden)

    Yun-Jie Zhang

    2015-01-01

    Objective: To investigate the value of lysophosphatidic acid (LPA) in the diagnosis of ovarian cancer. Materials and Methods: We first performed a hospital-based, case-control study involving 123 ovarian cancer patients and 101 benign ovarian tumor patients, and then conducted a meta-analysis of 19 case-control studies to assess the correlation between ovarian cancer and plasma LPA levels. Results: The case-control study demonstrated that ovarian cancer patients have increased LPA and cancer antigen (CA)-125 levels compared to patients with benign ovarian tumors (LPA: 5.28 ± 1.52 vs. 1.82 ± 0.77 μmol/L; CA-125: 87.17 ± 45.81 vs. 14.03 ± 10.14 U/mL), both differences being statistically significant (P < 0.05). LPA exceeded CA-125 in sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy for ovarian cancer (both P < 0.05). The areas under the receiver operating characteristic (ROC) curve for the diagnosis of ovarian cancer (LPA: 0.983; CA-125: 0.910) were statistically significant compared with the reference (both P < 0.001), and the difference between the ROC areas of LPA and CA-125 was also statistically significant (P < 0.05). The meta-analysis suggested that plasma LPA levels were higher in ovarian cancer patients than in patients with benign disease (standardized mean difference (SMD) = 2.36, 95% confidence interval (CI): 1.61-3.11, P < 0.001) and in normal controls (SMD = 2.32, 95% CI: 1.77-2.87, P < 0.001). Conclusion: LPA shows greater value than CA-125 in the diagnosis of ovarian cancer and may be employed as a biological index to diagnose ovarian cancer.
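The ROC-area comparison above can be sketched with a toy calculation: the AUC equals the probability that a randomly chosen case scores above a randomly chosen control (the Mann-Whitney statistic). All marker values below are invented for illustration; they are not the study's data.

```python
# Toy comparison of two diagnostic markers by area under the ROC curve
# (AUC), computed directly from its rank (Mann-Whitney) interpretation.
# The marker values are invented for illustration.

def auc(cases, controls):
    """Empirical AUC: fraction of case/control pairs ranked correctly."""
    wins = sum((c > k) + 0.5 * (c == k) for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

lpa_cases, lpa_controls = [5.3, 4.8, 6.1, 5.0], [1.8, 2.2, 1.5, 2.6]
ca125_cases, ca125_controls = [87.0, 40.0, 12.0, 95.0], [14.0, 20.0, 9.0, 50.0]

print(auc(lpa_cases, lpa_controls))      # fully separated groups give 1.0
print(auc(ca125_cases, ca125_controls))  # overlapping groups give 0.75
```

A marker whose case and control distributions overlap less yields a higher AUC, which is the sense in which one marker "excels" another in the abstract.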

  20. Measuring radioactive half-lives via statistical sampling in practice

    Science.gov (United States)

    Lorusso, G.; Collins, S. M.; Jagan, K.; Hitt, G. W.; Sadek, A. M.; Aitken-Smith, P. M.; Bridi, D.; Keightley, J. D.

    2017-10-01

    The statistical sampling method for the measurement of radioactive decay half-lives exhibits intriguing features, such as the fact that the half-life is approximately the median of a distribution closely resembling a Cauchy distribution. Whilst initial theoretical considerations suggested that in certain cases the method could have significant advantages, accurate measurements by statistical sampling have proven difficult, for they require an exercise in non-standard statistical analysis. As a consequence, no half-life measurement using this method has yet been reported and no comparison with traditional methods has ever been made. We used a Monte Carlo approach to address these analysis difficulties, and present the first experimental measurement of a radioisotope half-life (²¹¹Pb) by statistical sampling, in good agreement with the literature recommended value. Our work also focused on the comparison between statistical sampling and exponential regression analysis, and concluded that exponential regression generally achieves the highest accuracy.
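The idea of summarising repeated half-life estimates by their median can be mimicked in a toy Monte Carlo. This is an illustrative sketch under assumed parameters (the half-life value and sample sizes are stand-ins), not the paper's actual statistical-sampling procedure:

```python
import math
import random

# Toy Monte Carlo: repeatedly estimate a half-life from simulated
# exponential decay times and summarise the estimates by their median,
# loosely echoing the median-based idea above. Parameters are assumptions.

random.seed(1)
TRUE_HALF_LIFE = 36.1  # minutes, roughly the accepted 211Pb value

def estimate_half_life(n_events):
    """Maximum-likelihood half-life from n exponential decay times."""
    rate = math.log(2) / TRUE_HALF_LIFE
    mean_t = sum(random.expovariate(rate) for _ in range(n_events)) / n_events
    return math.log(2) * mean_t  # half-life = ln 2 x mean lifetime

estimates = sorted(estimate_half_life(200) for _ in range(101))
median_estimate = estimates[len(estimates) // 2]
print(round(median_estimate, 1))  # close to the true 36.1
```

With a fixed seed the run is reproducible; the median of the 101 estimates lands near the true value even though individual estimates scatter.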

  1. Semiconductor statistics

    CERN Document Server

    Blakemore, J S

    1962-01-01

    Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co

  2. Statistical analogues of thermodynamic extremum principles

    Science.gov (United States)

    Ramshaw, John D.

    2018-05-01

    As shown by Jaynes, the canonical and grand canonical probability distributions of equilibrium statistical mechanics can be simply derived from the principle of maximum entropy, in which the statistical entropy S = -k_B Σ_i p_i log p_i is maximised subject to constraints on the mean values of the energy E and/or number of particles N in a system of fixed volume V. The Lagrange multipliers associated with those constraints are then found to be simply related to the temperature T and chemical potential μ. Here we show that the constrained maximisation of S is equivalent to, and can therefore be replaced by, the essentially unconstrained minimisation of the obvious statistical analogues of the Helmholtz free energy F = E − TS and the grand potential J = F − μN. Those minimisations are more easily performed than the maximisation of S because they formally eliminate the constraints on the mean values of E and N and their associated Lagrange multipliers. This procedure significantly simplifies the derivation of the canonical and grand canonical probability distributions, and shows that the well known extremum principles for the various thermodynamic potentials possess natural statistical analogues which are equivalent to the constrained maximisation of S.
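The claimed equivalence can be written compactly (a sketch in standard notation, consistent with the definitions in the abstract):

```latex
% Maximising S subject to mean-value constraints is equivalent to the
% unconstrained minimisation of the grand potential J = F - \mu N.
\begin{aligned}
  &\max_{\{p_i\}} \; S = -k_{\mathrm B}\sum_i p_i \ln p_i
    \quad\text{s.t.}\quad \sum_i p_i E_i = E, \;\; \sum_i p_i N_i = N \\
  &\Longleftrightarrow\quad
    \min_{\{p_i\}} \; J[p] = \sum_i p_i E_i - T\,S[p] - \mu \sum_i p_i N_i ,\\
  &\text{both yielding}\quad
    p_i = \frac{e^{-(E_i - \mu N_i)/k_{\mathrm B} T}}
               {\sum_j e^{-(E_j - \mu N_j)/k_{\mathrm B} T}} .
\end{aligned}
```

The multipliers conjugate to the E and N constraints become 1/T and −μ/T, which is why minimising J at fixed T and μ needs no explicit constraints beyond normalisation.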

  3. Statistics without Tears: Complex Statistics with Simple Arithmetic

    Science.gov (United States)

    Smith, Brian

    2011-01-01

    One of the often overlooked aspects of modern statistics is the analysis of time series data. Modern introductory statistics courses tend to rush to probabilistic applications involving risk and confidence. Rarely does the first level course linger on such useful and fascinating topics as time series decomposition, with its practical applications…

  4. Stochastic simulations for the time evolution of systems which obey generalized statistics: fractional exclusion statistics and Gentile's statistics

    International Nuclear Information System (INIS)

    Nemnes, G A; Anghel, D V

    2010-01-01

    We present a stochastic method for the simulation of the time evolution in systems which obey generalized statistics, namely fractional exclusion statistics and Gentile's statistics. The transition rates are derived in the framework of canonical ensembles. This approach introduces a tool for describing interacting fermionic and bosonic systems in non-equilibrium as ideal FES systems, in a computationally efficient manner. The two types of statistics are analyzed comparatively, indicating their intrinsic thermodynamic differences and revealing key aspects related to the species size

  5. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance

    OpenAIRE

    Kramer, Karen L.; Veile, Amanda; Otárola-Castillo, Erik

    2016-01-01

    Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger s...

  6. Detection of Clostridium difficile infection clusters, using the temporal scan statistic, in a community hospital in southern Ontario, Canada, 2006-2011.

    Science.gov (United States)

    Faires, Meredith C; Pearl, David L; Ciccotelli, William A; Berke, Olaf; Reid-Smith, Richard J; Weese, J Scott

    2014-05-12

    In hospitals, Clostridium difficile infection (CDI) surveillance relies on unvalidated guidelines or threshold criteria to identify outbreaks. This can result in false-positive and -negative cluster alarms. The application of statistical methods to identify and understand CDI clusters may be a useful alternative or complement to standard surveillance techniques. The objectives of this study were to investigate the utility of the temporal scan statistic for detecting CDI clusters and to determine if there are significant differences in the rate of CDI cases by month, season, and year in a community hospital. Bacteriology reports of patients identified with a CDI from August 2006 to February 2011 were collected. For patients detected with CDI from March 2010 to February 2011, stool specimens were obtained. Clostridium difficile isolates were characterized by ribotyping and investigated for the presence of toxin genes by PCR. CDI clusters were investigated using a retrospective temporal scan test statistic. Statistically significant clusters were compared to known CDI outbreaks within the hospital. A negative binomial regression model was used to identify associations between year, season, month and the rate of CDI cases. Overall, 86 CDI cases were identified. Eighteen specimens were analyzed and nine ribotypes were classified, with ribotype 027 (n = 6) the most prevalent. The temporal scan statistic identified significant CDI clusters at the hospital (n = 5), service (n = 6), and ward (n = 4) levels (P ≤ 0.05). Three clusters were concordant with the one C. difficile outbreak identified by hospital personnel. Two clusters were identified as potential outbreaks. The negative binomial model indicated that the years 2007-2010 had decreased CDI rates compared to 2006 (P ≤ 0.05) and that spring had an increased CDI rate compared to the fall (P = 0.023). Application of the temporal scan statistic identified several clusters, including potential outbreaks not detected by hospital
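The core of a purely temporal scan statistic can be sketched in a few lines: every candidate time window is scored by a Poisson likelihood ratio of the observed count against its expectation under a homogeneous baseline, and the highest-scoring window is the candidate cluster. The counts and window sizes below are invented; a real analysis would also compute Monte Carlo p-values for the maximum.

```python
import math

# Minimal sketch of a retrospective temporal scan statistic over monthly
# case counts. Each window is scored by a Poisson log-likelihood ratio
# relative to a homogeneous baseline; the best-scoring window is reported.
# Data and window sizes are invented for illustration.

def scan(counts, max_window=3):
    """Return (best log-likelihood ratio, (start, end)) over all windows."""
    total, n = sum(counts), len(counts)
    best_llr, best_window = 0.0, None
    for w in range(1, max_window + 1):
        for start in range(n - w + 1):
            c_in = sum(counts[start:start + w])
            c_out = total - c_in
            e_in = total * w / n          # expected count inside the window
            e_out = total - e_in
            if c_in <= e_in:              # scan only for excesses (clusters)
                continue
            llr = c_in * math.log(c_in / e_in)
            if c_out > 0:
                llr += c_out * math.log(c_out / e_out)
            if llr > best_llr:
                best_llr, best_window = llr, (start, start + w)
    return best_llr, best_window

llr, window = scan([2, 1, 3, 2, 9, 8, 2, 1])
print(window)  # → (4, 6): the two elevated months
```

Significance is then assessed by re-running the scan on many randomised datasets and ranking the observed maximum among the simulated ones.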

  7. Principles of Statistics: What the Sports Medicine Professional Needs to Know.

    Science.gov (United States)

    Riemann, Bryan L; Lininger, Monica R

    2018-07-01

    Understanding the results and statistics reported in original research remains a large challenge for many sports medicine practitioners and, in turn, may be one of the biggest barriers to integrating research into sports medicine practice. The purpose of this article is to provide the minimal essentials a sports medicine practitioner needs to know about interpreting statistics and research results to facilitate the incorporation of the latest evidence into practice. Topics covered include the difference between statistical significance and clinical meaningfulness; effect sizes and confidence intervals; reliability statistics, including the minimal detectable difference and minimal important difference; and statistical power. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Statistics for wildlifers: how much and what kind?

    Science.gov (United States)

    Johnson, D.H.; Shaffer, T.L.; Newton, W.E.

    2001-01-01

    Quantitative methods are playing increasingly important roles in wildlife ecology and, ultimately, management. This change poses a challenge for wildlife practitioners and students who are not well educated in mathematics and statistics. Here we give our opinions on what wildlife biologists should know about statistics, while recognizing that not everyone is inclined mathematically. For those who are, we recommend that they take mathematics coursework at least through calculus and linear algebra. They should take statistics courses that are focused conceptually, stressing the Why rather than the How of doing statistics. For less mathematically oriented wildlifers, introductory classes in statistical techniques will furnish some useful background in basic methods but may provide little appreciation of when the methods are appropriate. These wildlifers will have to rely much more on advice from statisticians. Far more important than knowing how to analyze data is an understanding of how to obtain and recognize good data. Regardless of the statistical education they receive, all wildlife biologists should appreciate the importance of controls, replication, and randomization in studies they conduct. Understanding these concepts requires little mathematical sophistication, but is critical to advancing the science of wildlife ecology.

  9. Validation of survey information on smoking and alcohol consumption against import statistics, Greenland 1993-2010.

    Science.gov (United States)

    Bjerregaard, Peter; Becker, Ulrik

    2013-01-01

    Questionnaires are widely used to obtain information on health-related behaviour, and they are more often than not the only method that can be used to assess the distribution of behaviour in subgroups of the population. No validation studies of reported consumption of tobacco or alcohol have been published from circumpolar indigenous communities. The purpose of the study is to compare information on the consumption of tobacco and alcohol obtained from 3 population surveys in Greenland with import statistics. Estimates of consumption of cigarettes and alcohol using several different survey instruments in cross-sectional population studies from 1993-1994, 1999-2001 and 2005-2010 were compared with import statistics from the same years. For cigarettes, survey results accounted for virtually the total import. Alcohol consumption was significantly under-reported with reporting completeness ranging from 40% to 51% for different estimates of habitual weekly consumption in the 3 study periods. Including an estimate of binge drinking increased the estimated total consumption to 78% of the import. Compared with import statistics, questionnaire-based population surveys capture the consumption of cigarettes well in Greenland. Consumption of alcohol is under-reported, but asking about binge episodes in addition to the usual intake considerably increased the reported intake in this population and made it more in agreement with import statistics. It is unknown to what extent these findings at the population level can be inferred to population subgroups.

  10. The Undergraduate Statistics Major--A Prelude to Actuarial Science Training.

    Science.gov (United States)

    Ratliff, Michael I.; Williams, Raymond E.

    Recently there has been increased interest in the Actuarial Science field. An actuary is a business professional who uses mathematical skills to define, analyze, and solve financial and social problems. This paper examines: (1) the interface between Statistical and Actuarial Science training; (2) statistical courses corresponding to…

  11. Reversible Statistics

    DEFF Research Database (Denmark)

    Tryggestad, Kjell

    2004-01-01

    The study aims to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work...... within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit...... in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved...

  12. How to choose the right statistical software?-a method increasing the post-purchase satisfaction.

    Science.gov (United States)

    Cavaliere, Roberto

    2015-12-01

    Nowadays, we live in the "data era", where the use of statistical or data analysis software is inevitable in any research field. This means that the choice of the right software tool or platform is a strategic issue for a research department. Nevertheless, in many cases decision makers do not pay enough attention to a comprehensive and appropriate evaluation of what the market offers. Indeed, the choice still depends on a few factors such as the researcher's personal inclination, e.g., which software has been used at the university or is already known. This is not wrong in principle, but in some cases it is not enough and might lead to a "dead end" situation, typically after months or years of investment in the wrong software. This article, far from being a full and complete guide to statistical software evaluation, aims to illustrate some key points of the decision process and to introduce an extended range of factors that can help in making the right choice. There is little literature on this often underestimated topic, both in the traditional literature and in the so-called "gray literature", even if some documents or short pages can be found online. In any case, there seems to be no common, well-known standpoint on the process of software evaluation from the final user's perspective. We suggest a multi-factor analysis leading to an evaluation matrix, intended as a flexible and customizable tool that provides a clearer picture of the available software alternatives, not in the abstract but in relation to the researcher's own context and needs. This method is the result of the author's roughly twenty years of experience in evaluating and using technical-computing software, and partly arises from research on these topics conducted as part of a project funded by the European Commission under the Lifelong Learning Programme 2011.

  13. Neurite outgrowth is significantly increased by the simultaneous presentation of Schwann cells and moderate exogenous electric fields

    Science.gov (United States)

    Koppes, Abigail N.; Seggio, Angela M.; Thompson, Deanna M.

    2011-08-01

    Axonal extension is influenced by a variety of external guidance cues; therefore, the development and optimization of a multi-faceted approach is probably necessary to address the intricacy of functional regeneration following nerve injury. In this study, primary dissociated neonatal rat dorsal root ganglia neurons and Schwann cells were examined in response to an 8 h dc electrical stimulation (0-100 mV mm⁻¹). Stimulated samples were then fixed immediately, immunostained, imaged and analyzed to determine Schwann cell orientation and characterize neurite outgrowth relative to electric field strength and direction. Results indicate that Schwann cells are viable following electrical stimulation with 10-100 mV mm⁻¹, and retain a normal morphology relative to unstimulated cells; however, no directional bias is observed. Neurite outgrowth was significantly enhanced by twofold following exposure to either a 50 mV mm⁻¹ electric field (EF) or co-culture with unstimulated Schwann cells by comparison to neurons cultured alone. Neurite outgrowth was further increased in the presence of simultaneously applied cues (Schwann cells + 50 mV mm⁻¹ dc EF), exhibiting a 3.2-fold increase over unstimulated control neurons, and a 1.2-fold increase over either neurons cultured with unstimulated Schwann cells or the electrical stimulus alone. These results indicate that dc electric stimulation in combination with Schwann cells may provide synergistic guidance cues for improved axonal growth relevant to nerve injuries in the peripheral nervous system.

  14. Statistical Physics

    CERN Document Server

    Wannier, Gregory Hugh

    1966-01-01

    Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to the independent study and research of statistical physics.Designed for

  15. Clinical significance of the VEGF level in urinary bladder carcinoma.

    Science.gov (United States)

    Sankhwar, Monica; Sankhwar, Satya Narayan; Abhishek, Amar; Rajender, Singh

    2015-01-01

    To investigate the correlation of Vascular Endothelial Growth Factor (VEGF) and micro-vessel density (MVD) with urinary bladder tumor and its stage. The study was conducted between January 2010 and December 2012. The study included screening of 122 patients at elevated risk for bladder cancer, of which 35 patients were finally enrolled in the study. Diagnosis was made on the basis of urine cytology, radiological investigation (ultrasound KUB, and CT-scan) and histopathology. Thirty-five normal cancer-free individuals were enrolled as controls. Human VEGF levels were measured using an enzyme-linked immunoassay and protein content (pg/mg protein) by the Lowry method. SPSS for Windows version 10.0.7 (SPSS, Chicago, IL, USA) was used for statistical analysis of the data. Mean urine VEGF level in the cases was significantly higher in comparison to the control group. There was a direct correlation between VEGF level and tumor stage. Mean urine VEGF values were minimum in the control group (22.75 ± 15.41 pg/mg creatinine) and maximum in stage IV patients (180.15 ± 75.93 pg/mg creatinine). Tissue VEGF levels also showed a similar trend of increase with increase in stage. Urine VEGF level also showed a correlation with tissue VEGF level. Similarly, MVD showed a significant increase with increase in tumor stage. The correlation of bladder cancer with MVD and VEGF suggests that the latter can serve as markers for therapeutic guidance. This is the first study from India on clinical and pathological correlation among urine VEGF, tumor tissue VEGF levels, and Micro Vessel Density (MVD) in urinary bladder cancer patients.

  16. Attitudes toward statistics in medical postgraduates: measuring, evaluating and monitoring.

    Science.gov (United States)

    Zhang, Yuhai; Shang, Lei; Wang, Rui; Zhao, Qinbo; Li, Chanjuan; Xu, Yongyong; Su, Haixia

    2012-11-23

    In medical training, statistics is considered a very difficult course to learn and teach. Current studies have found that students' attitudes toward statistics can influence their learning process. Measuring, evaluating and monitoring the changes of students' attitudes toward statistics are important. Few studies have focused on the attitudes of postgraduates, especially medical postgraduates. Our purpose was to understand current attitudes regarding statistics held by medical postgraduates and explore their effects on students' achievement. We also wanted to explore the influencing factors and the sources of these attitudes and monitor their changes after a systematic statistics course. A total of 539 medical postgraduates enrolled in a systematic statistics course completed the pre-form of the Survey of Attitudes Toward Statistics-28 scale, and 83 postgraduates were selected randomly from among them to complete the post-form scale after the course. Most medical postgraduates held positive attitudes toward statistics, but they thought statistics was a very difficult subject. The attitudes mainly came from experiences in a former statistical or mathematical class. Age, level of statistical education, research experience, specialty and mathematics background may influence postgraduate attitudes toward statistics. There were significant positive correlations between course achievement and attitudes toward statistics. In general, student attitudes showed negative changes after completing a statistics course. The importance of student attitudes toward statistics must be recognized in medical postgraduate training. To make sure all students have a positive learning environment, statistics teachers should measure their students' attitudes and monitor their change of status during a course. Some necessary assistance should be offered for those students who develop negative attitudes.

  17. Online neural monitoring of statistical learning.

    Science.gov (United States)

    Batterink, Laura J; Paller, Ken A

    2017-05-01

    The extraction of patterns in the environment plays a critical role in many types of human learning, from motor skills to language acquisition. This process is known as statistical learning. Here we propose that statistical learning has two dissociable components: (1) perceptual binding of individual stimulus units into integrated composites and (2) storing those integrated representations for later use. Statistical learning is typically assessed using post-learning tasks, such that the two components are conflated. Our goal was to characterize the online perceptual component of statistical learning. Participants were exposed to a structured stream of repeating trisyllabic nonsense words and a random syllable stream. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to that of individual syllables. Statistical learning was subsequently assessed using conventional measures in an explicit rating task and a reaction-time task. In the structured stream, neural entrainment to trisyllabic words was higher than in the random stream, increased as a function of exposure to track the progression of learning, and predicted performance on the reaction time (RT) task. These results demonstrate that monitoring this critical component of learning via rhythmic EEG entrainment reveals a gradual acquisition of knowledge whereby novel stimulus sequences are transformed into familiar composites. This online perceptual transformation is a critical component of learning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. On Quantum Statistical Inference, II

    OpenAIRE

    Barndorff-Nielsen, O. E.; Gill, R. D.; Jupp, P. E.

    2003-01-01

    Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, theoretical developments in the theory of quantum measurements have brought the basic mathematical framework for the probability calculations much closer to that of classical probability theory. The present paper reviews this field and proposes and inte...

  19. SIGNIFICANT INFLUENCES OF VIOLIN EXTRACURRICULAR ACHIEVEMENT TO EMOTIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    Nafik

    2014-06-01

    This research aims to find out (1) whether students' achievement in learning the violin influences their emotional intelligence, (2) whether there is a correlation between students' achievement in learning the violin and their emotional intelligence, and (3) how much students' achievement in learning the violin contributes to their emotional intelligence. It is a quantitative study, based on the positivist research method used to study a particular sample and population: the sample and population are drawn randomly, research instruments are used to collect data, and the data are analyzed statistically in order to test the stated hypotheses. The findings show a significant influence of students' achievement in learning the violin on their emotional intelligence, accounting for 76.1%, while the remaining 23.9% is explained by other factors not studied in this research. This indicates that learning the violin strongly influences students' emotional intelligence, and that emotional intelligence in turn helps to increase students' achievement. The data also show that most of the students participating in the violin extracurricular activity are able to increase their learning achievement.

  20. Quantifying spatial and temporal trends in beach-dune volumetric changes using spatial statistics

    Science.gov (United States)

    Eamer, Jordan B. R.; Walker, Ian J.

    2013-06-01

    Spatial statistics are generally underutilized in coastal geomorphology, despite offering great potential for identifying and quantifying spatial-temporal trends in landscape morphodynamics. In particular, local Moran's Ii provides a statistical framework for detecting clusters of significant change in an attribute (e.g., surface erosion or deposition) and quantifying how this changes over space and time. This study analyzes and interprets spatial-temporal patterns in sediment volume changes in a beach-foredune-transgressive dune complex following removal of invasive marram grass (Ammophila spp.). Results are derived by detecting significant changes in post-removal repeat DEMs derived from topographic surveys and airborne LiDAR. The study site was separated into discrete, linked geomorphic units (beach, foredune, transgressive dune complex) to facilitate sub-landscape scale analysis of volumetric change and sediment budget responses. Difference surfaces derived from a pixel-subtraction algorithm between interval DEMs and the LiDAR baseline DEM were filtered using the local Moran's Ii method and two different spatial weights (1.5 and 5 m) to detect statistically significant change. Moran's Ii results were compared with those derived from a more spatially uniform statistical method that uses a simpler Student's t distribution threshold for change detection. Morphodynamic patterns and volumetric estimates were similar between the uniform geostatistical method and Moran's Ii at a spatial weight of 5 m while the smaller spatial weight (1.5 m) consistently indicated volumetric changes of less magnitude. The larger 5 m spatial weight was most representative of broader site morphodynamics and spatial patterns while the smaller spatial weight provided volumetric changes consistent with field observations. All methods showed foredune deflation immediately following removal with increased sediment volumes into the spring via deposition at the crest and on lobes in the lee
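
    The local Moran's Ii filtering described above can be sketched in plain NumPy (a toy illustration with invented elevation-change values on a 1 m transect, not the study's data; a real analysis would add a conditional-permutation significance test, as in PySAL's esda package):

```python
import numpy as np

def local_morans_i(coords, x, d):
    """Local Moran's I_i with binary distance-band weights (neighbours within
    distance d), row-standardised. Strongly positive I_i flags a value sitting
    in a cluster of similar values (e.g. coherent erosion or deposition)."""
    n = len(x)
    # Pairwise distances, then a binary distance-band weight matrix.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = ((dist > 0) & (dist <= d)).astype(float)
    rs = w.sum(axis=1, keepdims=True)
    w = np.divide(w, rs, out=np.zeros_like(w), where=rs > 0)  # row-standardise
    z = x - x.mean()
    m2 = (z ** 2).sum() / n
    return (z / m2) * (w @ z)  # I_i = (z_i / m2) * spatial lag of z

# Toy transect: surface-change values (m) with a deposition patch at indices 4-6.
coords = np.array([[float(i), 0.0] for i in range(10)])
dz = np.array([0.0, 0.1, -0.1, 0.0, 1.0, 1.2, 1.1, 0.0, 0.1, -0.1])
I = local_morans_i(coords, dz, d=1.5)  # 1.5 m spatial weight, as in the study
print(I.round(2))  # the deposition patch shows strongly positive I_i
```

    Repeating the call with `d=5.0` reproduces the study's second, broader spatial weight; larger bands smooth the statistic over more neighbours.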

  1. Breakthroughs in statistics

    CERN Document Server

    Johnson, Norman

    This is the third volume of a collection of seminal papers in the statistical sciences written during the past 110 years. These papers have each had an outstanding influence on the development of statistical theory and practice over the last century. Each paper is preceded by an introduction written by an authority in the field providing background information and assessing its influence. Volume III concentrates on articles from the 1980's while including some earlier articles not included in Volumes I and II. Samuel Kotz is Professor of Statistics in the College of Business and Management at the University of Maryland. Norman L. Johnson is Professor Emeritus of Statistics at the University of North Carolina. Also available: Breakthroughs in Statistics Volume I: Foundations and Basic Theory Samuel Kotz and Norman L. Johnson, Editors 1993. 631 pp. Softcover. ISBN 0-387-94037-5 Breakthroughs in Statistics Volume II: Methodology and Distribution Samuel Kotz and Norman L. Johnson, Edi...

  2. A study on the use of Gumbel approximation with the Bernoulli spatial scan statistic.

    Science.gov (United States)

    Read, S; Bath, P A; Willett, P; Maheswaran, R

    2013-08-30

    The Bernoulli version of the spatial scan statistic is a well established method of detecting localised spatial clusters in binary labelled point data, a typical application being the epidemiological case-control study. A recent study suggests the inferential accuracy of several versions of the spatial scan statistic (principally the Poisson version) can be improved, at little computational cost, by using the Gumbel distribution, a method now available in SaTScan(TM) (www.satscan.org). We study in detail the effect of this technique when applied to the Bernoulli version and demonstrate that it is highly effective, albeit with some increase in false alarm rates at certain significance thresholds. We explain how this increase is due to the discrete nature of the Bernoulli spatial scan statistic and demonstrate that it can affect even small p-values. Despite this, we argue that the Gumbel method is actually preferable for very small p-values. Furthermore, we extend previous research by running benchmark trials on 12 000 synthetic datasets, thus demonstrating that the overall detection capability of the Bernoulli version (i.e. ratio of power to false alarm rate) is not noticeably affected by the use of the Gumbel method. We also provide an example application of the Gumbel method using data on hospital admissions for chronic obstructive pulmonary disease. Copyright © 2013 John Wiley & Sons, Ltd.
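
    The Gumbel technique evaluated in this paper can be sketched as follows (a hedged illustration: the simulated maxima merely stand in for a scan statistic's Monte Carlo null distribution, and the parameters loc=8.0, scale=1.5 and the observed value 25.0 are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the scan statistic's Monte Carlo null distribution: the maximum
# log-likelihood ratio over all scanning windows behaves like an extreme value,
# so 999 replicate maxima are modelled here as Gumbel draws.
null_maxima = rng.gumbel(loc=8.0, scale=1.5, size=999)

# Method-of-moments Gumbel fit: beta = s * sqrt(6) / pi, mu = mean - gamma * beta.
EULER_GAMMA = 0.5772156649
beta = null_maxima.std(ddof=1) * np.sqrt(6.0) / np.pi
mu = null_maxima.mean() - EULER_GAMMA * beta

def gumbel_pvalue(t):
    """Upper-tail P(T >= t) under the fitted Gumbel (expm1 for numerical stability)."""
    return -np.expm1(-np.exp(-(t - mu) / beta))

observed = 25.0  # hypothetical observed scan statistic, far out in the tail

p_gumbel = float(gumbel_pvalue(observed))
# The conventional rank-based p-value is floored at 1/(R+1) = 0.001 here;
# the fitted Gumbel can resolve p-values far below that floor.
p_rank = (1 + (null_maxima >= observed).sum()) / (len(null_maxima) + 1)
print(p_gumbel, p_rank)
```

    This is the sense in which the paper argues the Gumbel method is preferable for very small p-values: the rank-based estimate saturates at its Monte Carlo resolution, while the parametric fit extrapolates into the tail.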

  3. The significance of reporting to the thousandths place: Figuring out the laboratory limitations

    Directory of Open Access Journals (Sweden)

    Joely A. Straseski

    2017-04-01

    Objectives: A request to report laboratory values to a specific number of decimal places represents a delicate balance between clinical interpretation of a true analytical change versus laboratory understanding of analytical imprecision and significant figures. Prostate specific antigen (PSA) was used as an example to determine if an immunoassay routinely reported to the hundredths decimal place based on significant figure assessment in our laboratory was capable of providing analytically meaningful results when reported to the thousandths place when requested by clinicians. Design and methods: Results of imprecision studies of a representative PSA assay (Roche MODULAR E170) employing two methods of statistical analysis are reported. Sample pools were generated with target values of 0.01 and 0.20 μg/L PSA as determined by the E170. Intra-assay imprecision studies were conducted and the resultant data were analyzed using two independent statistical methods to evaluate reporting limits. Results: These statistical methods indicated reporting results to the thousandths place at the two assessed concentrations was an appropriate reflection of the measurement imprecision for the representative assay. This approach used two independent statistical tests to determine the ability of an analytical system to support a desired reporting level. Importantly, data were generated during a routine intra-assay imprecision study, thus this approach does not require extra data collection by the laboratory. Conclusions: Independent statistical analysis must be used to determine appropriate significant figure limitations for clinically relevant analytes. Establishing these limits is the responsibility of the laboratory and should be determined prior to providing clinical results. Keywords: Significant figures, Imprecision, Prostate cancer, Prostate specific antigen, PSA
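
    One pragmatic reading of the question the paper poses — which decimal place can the assay support? — is that the deepest digit worth reporting sits at the first significant figure of the intra-assay SD. This is a common convention, not necessarily either of the two statistical methods the authors used, and the replicate values below are invented:

```python
import math
import statistics

def reporting_decimals(replicates):
    """Decimal place of the first significant figure of the replicate SD:
    digits deeper than this vary mostly with measurement noise."""
    sd = statistics.stdev(replicates)
    return max(0, math.ceil(-math.log10(sd)))

# Hypothetical intra-assay replicates around a 0.20 ug/L PSA pool
# (SD here is a few thousandths of a ug/L):
pool = [0.198, 0.203, 0.201, 0.196, 0.204, 0.199, 0.202, 0.200]
print(reporting_decimals(pool))  # -> 3, i.e. the thousandths place
```

    As in the paper, the inputs are exactly the data a routine intra-assay imprecision study already produces, so no extra data collection is required.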

  4. The new statistics: why and how.

    Science.gov (United States)

    Cumming, Geoff

    2014-01-01

    We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.
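
    The estimation practices Cumming recommends — effect sizes with confidence intervals rather than bare p-values — can be sketched as follows (Cohen's d with the standard normal-approximation CI used in meta-analysis texts; the scores below are invented):

```python
import math
import statistics

def cohens_d_ci(a, b, z=1.96):
    """Cohen's d with an approximate confidence interval
    (normal approximation to the standard error of d)."""
    n1, n2 = len(a), len(b)
    s_pooled = math.sqrt(((n1 - 1) * statistics.variance(a)
                          + (n2 - 1) * statistics.variance(b)) / (n1 + n2 - 2))
    d = (statistics.mean(a) - statistics.mean(b)) / s_pooled
    se = math.sqrt((n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical treatment vs control scores:
treated = [24, 27, 31, 29, 26, 30, 28, 25]
control = [22, 24, 26, 23, 25, 21, 24, 23]
d, (lo, hi) = cohens_d_ci(treated, control)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

    Reporting the interval rather than a significance verdict conveys both the size of the effect and the precision of the estimate — the core of the "new statistics" strategy.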

  5. Potential errors and misuse of statistics in studies on leakage in endodontics.

    Science.gov (United States)

    Lucena, C; Lopez, J M; Pulgar, R; Abalos, C; Valderrama, M J

    2013-04-01

    To assess the quality of the statistical methodology used in studies of leakage in Endodontics, and to compare the results found using appropriate versus inappropriate inferential statistical methods. The search strategy used the descriptors 'root filling' 'microleakage', 'dye penetration', 'dye leakage', 'polymicrobial leakage' and 'fluid filtration' for the time interval 2001-2010 in journals within the categories 'Dentistry, Oral Surgery and Medicine' and 'Materials Science, Biomaterials' of the Journal Citation Report. All retrieved articles were reviewed to find potential pitfalls in statistical methodology that may be encountered during study design, data management or data analysis. The database included 209 papers. In all the studies reviewed, the statistical methods used were appropriate for the category attributed to the outcome variable, but in 41% of the cases the chi-square test or parametric methods were subsequently selected inappropriately. In 2% of the papers, no statistical test was used. In 99% of cases, a statistically 'significant' or 'not significant' effect was reported as a main finding, whilst only 1% also presented an estimation of the magnitude of the effect. When the appropriate statistical methods were applied in the studies with originally inappropriate data analysis, the conclusions changed in 19% of the cases. Statistical deficiencies in leakage studies may affect their results and interpretation and might be one of the reasons for the poor agreement amongst the reported findings. Therefore, more effort should be made to standardize statistical methodology. © 2012 International Endodontic Journal.

  6. Obtaining Application-based and Content-based Internet Traffic Statistics

    DEFF Research Database (Denmark)

    Bujlow, Tomasz; Pedersen, Jens Myrup

    2012-01-01

    the Volunteer-Based System for Research on the Internet, developed at Aalborg University, is capable of providing detailed statistics of Internet usage. Since an increasing amount of HTTP traffic has been observed during the last few years, the system also supports creating statistics of different kinds of HTTP...... traffic, like audio, video, file transfers, etc. All statistics can be obtained for individual users of the system, for groups of users, or for all users altogether. This paper presents results with real data collected from a limited number of real users over six months. We demonstrate that the system can...

  7. Sub-Poissonian photon statistics in time-dependent collective resonance fluorescence

    International Nuclear Information System (INIS)

    Buzek, V.; Tran Quang; Lan, L.H.

    1989-10-01

    We have discussed the photon statistics of the spectral components of N-atom time-dependent resonance fluorescence. It is shown that in contrast to the stationary limit, sub-Poissonian photon statistics in the sidebands occur for any number N of atoms including the case N >> 1. Reduction in Mandel's parameters Q±1 is found with increasing numbers of atoms. The typical time for the presence of sub-Poissonian statistics is proportional to 1/N. (author). 31 refs, 1 fig
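
    Sub-Poissonian statistics of the kind discussed here are conventionally quantified by Mandel's Q parameter, Q = (⟨Δn²⟩ − ⟨n⟩)/⟨n⟩, with Q < 0 indicating photon-number noise below the Poissonian level. A quick numerical sketch (simulated count records, not the paper's N-atom model; the binomial source is just a convenient sub-Poissonian stand-in):

```python
import numpy as np

def mandel_q(counts):
    """Mandel Q = (Var(n) - <n>) / <n>: zero for Poissonian light,
    negative for sub-Poissonian (reduced photon-number noise)."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    return (counts.var() - mean) / mean

rng = np.random.default_rng(1)
coherent = rng.poisson(5.0, size=100_000)             # Poissonian reference
sub_poissonian = rng.binomial(10, 0.5, size=100_000)  # variance < mean

q_coh = mandel_q(coherent)
q_sub = mandel_q(sub_poissonian)
print(round(q_coh, 2), round(q_sub, 2))  # Q ≈ 0 and Q ≈ -0.5, respectively
```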

  8. Statistical density of nuclear excited states

    Directory of Open Access Journals (Sweden)

    V. M. Kolomietz

    2015-10-01

    A semi-classical approximation is applied to the calculations of single-particle and statistical level densities in excited nuclei. Landau's conception of quasi-particles with the nucleon effective mass m* < m is used. The approach provides the correct description of the continuum contribution to the level density for realistic finite-depth potentials. It is shown that the continuum states do not significantly affect the thermodynamic calculations for sufficiently small temperatures T ≤ 1 MeV but strongly reduce the results for the excitation energy at high temperatures. By use of a standard Woods-Saxon potential and nucleon effective mass m* = 0.7m, the A-dependence of the statistical level density parameter K was evaluated, in good qualitative agreement with experimental data.

  9. Statistical monitoring of linear antenna arrays

    KAUST Repository

    Harrou, Fouzi

    2016-11-03

    The paper concerns the problem of monitoring linear antenna arrays using the generalized likelihood ratio (GLR) test. When an abnormal event (fault) affects an array of antenna elements, the radiation pattern changes and significant deviations from the desired design performance specifications can result. In this paper, the detection of faults is addressed from a statistical point of view as a fault detection problem. Specifically, a statistical method based on the GLR principle is used to detect potential faults in linear arrays. To assess the strength of the GLR-based monitoring scheme, three case studies involving different types of faults were performed. Simulation results clearly show the effectiveness of the GLR-based fault-detection method in monitoring the performance of linear antenna arrays.
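
    The GLR principle can be sketched in its simplest form — a mean-shift test on Gaussian residuals with known variance (the antenna-gain framing and the 0.6 bias below are invented for illustration; the paper's actual test statistics are not given in the abstract):

```python
import numpy as np

def glr_mean_shift(y, sigma):
    """GLR statistic for a mean shift in i.i.d. Gaussian residuals with known
    variance: maximising the log-likelihood ratio over the unknown shift gives
    N * ybar^2 / (2 * sigma^2). Under the no-fault hypothesis, 2*GLR follows a
    chi-squared(1) law, so GLR > 1.92 flags a fault at roughly 5% false-alarm rate."""
    y = np.asarray(y, dtype=float)
    return len(y) * y.mean() ** 2 / (2.0 * sigma ** 2)

rng = np.random.default_rng(2)
healthy = rng.normal(0.0, 1.0, size=200)  # element-gain residuals near nominal
faulty = rng.normal(0.6, 1.0, size=200)   # hypothetical bias introduced by a fault

g_healthy = glr_mean_shift(healthy, sigma=1.0)
g_faulty = glr_mean_shift(faulty, sigma=1.0)
print(g_healthy, g_faulty)  # the faulty array's statistic far exceeds the healthy one's
```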

  10. Paradigms and pragmatism: approaches to medical statistics.

    Science.gov (United States)

    Healy, M J

    2000-01-01

    Until recently, the dominant philosophy of science was that due to Karl Popper, with its doctrine that the proper task of science was the formulation of hypotheses followed by attempts at refuting them. In spite of the close analogy with significance testing, these ideas do not fit well with the practice of medical statistics. The same can be said of the later philosophy of Thomas Kuhn, who maintains that science proceeds by way of revolutionary upheavals separated by periods of relatively pedestrian research governed by what Kuhn refers to as paradigms. Though there have been paradigm shifts in the history of statistics, a degree of continuity can also be discerned. A current paradigm shift is embodied in the spread of Bayesian ideas. It may be that a future paradigm will emphasise the pragmatic approach to statistics that is associated with the name of Daniel Schwartz.

  11. Effects of increased vertebral number on carcass weight in PIC pigs.

    Science.gov (United States)

    Huang, Jieping; Zhang, Mingming; Ye, Runqing; Ma, Yun; Lei, Chuzhao

    2017-12-01

    Variation of the vertebral number is associated with carcass traits in pigs. However, results from different populations do not match well with each other, especially for carcass weight. Therefore, the effects of increased vertebral number on carcass weight were investigated by analyzing the relationship between two multi-vertebra causal loci (NR6A1 g.748 C > T and VRTN g.20311_20312ins291) and carcass weight in PIC pigs. Results from the association study between vertebral number and carcass weight showed that increased thoracic number had negative effects on carcass weight, but the results were not statistically significant. Further, the VRTN Ins/Ins genotype added more than one thoracic vertebra, on average, compared with the Wt/Wt genotype in this PIC population. Meanwhile, there was a significant negative effect of the VRTN Ins allele on carcass weight (P < 0.05), indicating a negative effect of increased vertebral number on carcass weight in PIC pigs. © 2017 Japanese Society of Animal Science.

  12. Statistical Anxiety and Attitudes Towards Statistics: Development of a Comprehensive Danish Instrument

    DEFF Research Database (Denmark)

    Nielsen, Tine; Kreiner, Svend

    Motivated by experience with students' psychological barriers to learning statistics, we modified and extended the Statistical Anxiety Rating Scale (STARS) to develop a contemporary Danish measure of attitudes and relationship to statistics for use with higher education students...... with evidence of DIF in all cases: One TCA-item functioned differentially relative to age, one WS-item functioned differentially relative to statistics course (first or second), and two IA-items functioned differentially relative to statistics course and academic discipline (sociology, public health...

  13. Ethics in Statistics

    Science.gov (United States)

    Lenard, Christopher; McCarthy, Sally; Mills, Terence

    2014-01-01

    There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…

  14. Statistics

    International Nuclear Information System (INIS)

    1999-01-01

    For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO 2 -emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  15. Statistics

    International Nuclear Information System (INIS)

    2001-01-01

    For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO 2 -emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  16. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g., Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO 2 -emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  17. Principles of statistics

    CERN Document Server

    Bulmer, M G

    1979-01-01

    There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo

  18. Industrial statistics with Minitab

    CERN Document Server

    Cintas, Pere Grima; Llabres, Xavier Tort-Martorell

    2012-01-01

    Industrial Statistics with MINITAB demonstrates the use of MINITAB as a tool for performing statistical analysis in an industrial context. This book covers introductory industrial statistics, exploring the most commonly used techniques alongside those that serve to give an overview of more complex issues. A plethora of examples in MINITAB are featured along with case studies for each of the statistical techniques presented. Industrial Statistics with MINITAB: Provides comprehensive coverage of user-friendly practical guidance to the essential statistical methods applied in industry. Explores

  19. Statistical uncertainties and unrecognized relationships

    International Nuclear Information System (INIS)

    Rankin, J.P.

    1985-01-01

    Hidden relationships in specific designs directly contribute to inaccuracies in reliability assessments. Uncertainty factors at the system level may sometimes be applied in attempts to compensate for the impact of such unrecognized relationships. Often uncertainty bands are used to relegate unknowns to a miscellaneous category of low-probability occurrences. However, experience and modern analytical methods indicate that perhaps the dominant, most probable and significant events are sometimes overlooked in statistical reliability assurances. The author discusses the utility of two unique methods of identifying the otherwise often unforeseeable system interdependencies for statistical evaluations. These methods are sneak circuit analysis and a checklist form of common cause failure analysis. Unless these techniques (or a suitable equivalent) are also employed along with the more widely-known assurance tools, high reliability of complex systems may not be adequately assured. This concern is indicated by specific illustrations. 8 references, 5 figures

  20. ASYMPTOTIC COMPARISONS OF U-STATISTICS, V-STATISTICS AND LIMITS OF BAYES ESTIMATES BY DEFICIENCIES

    OpenAIRE

    Toshifumi, Nomachi; Hajime, Yamato; Graduate School of Science and Engineering, Kagoshima University; Miyakonojo College of Technology; Faculty of Science, Kagoshima University

    2001-01-01

    As estimators of estimable parameters, we consider three statistics which are U-statistic, V-statistic and limit of Bayes estimate. This limit of Bayes estimate, called LB-statistic in this paper, is obtained from Bayes estimate of estimable parameter based on Dirichlet process, by letting its parameter tend to zero. For the estimable parameter with non-degenerate kernel, the asymptotic relative efficiencies of LB-statistic with respect to U-statistic and V-statistic and that of V-statistic w...
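    The U-statistic/V-statistic distinction described above can be illustrated with the variance kernel h(x, y) = (x − y)²/2: the U-statistic averages the kernel over distinct index pairs and is unbiased, while the V-statistic averages over all pairs (including i = j) and is biased. This sketch is only illustrative and is not the paper's efficiency computation:

    ```python
    from itertools import permutations, product

    def kernel(x, y):
        # symmetric kernel whose expectation is the population variance
        return 0.5 * (x - y) ** 2

    def u_statistic(sample):
        # average of the kernel over all ordered pairs of *distinct* indices
        pairs = list(permutations(range(len(sample)), 2))
        return sum(kernel(sample[i], sample[j]) for i, j in pairs) / len(pairs)

    def v_statistic(sample):
        # average of the kernel over *all* index pairs, including i == j
        n = len(sample)
        return sum(kernel(sample[i], sample[j])
                   for i, j in product(range(n), repeat=2)) / n ** 2

    data = [2.0, 4.0, 6.0]
    print(u_statistic(data))  # unbiased sample variance (denominator n-1): 4.0
    print(v_statistic(data))  # biased variance (denominator n): (n-1)/n * U ≈ 2.667
    ```

    For this kernel the two estimators coincide asymptotically, which is why comparisons such as the one above must be made at the finer level of deficiencies rather than first-order efficiency.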