WorldWideScience

Sample records for varied statistically significantly

  1. Statistically significant relational data mining :

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  2. Statistical Significance for Hierarchical Clustering

    Science.gov (United States)

    Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.

    2017-01-01

Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990

  3. Statistical significance versus clinical relevance.

    Science.gov (United States)

    van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G

    2017-04-01

In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
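    The ASA's point about what a P-value does and does not mean can be made concrete with a short simulation. The sketch below (plain Python with NumPy/SciPy; the sample sizes and the assumed effect are illustrative choices, not taken from the studies discussed) shows that under a true null roughly 5% of repeated studies come out "significant", and that a confidence interval conveys magnitude and imprecision where a bare P-value does not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Under a true null hypothesis (no difference between groups),
# a correctly used test is "significant" in ~5% of repeated studies.
n_studies, n_per_group = 10_000, 30
false_positives = 0
for _ in range(n_studies):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1
print(f"Fraction significant under the null: {false_positives / n_studies:.3f}")

# A 95% CI for one simulated study conveys magnitude and imprecision,
# which a bare P-value does not.
a = rng.normal(0.0, 1.0, n_per_group)
b = rng.normal(0.3, 1.0, n_per_group)  # assumed true effect of 0.3
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / n_per_group + b.var(ddof=1) / n_per_group)
print(f"Estimated difference {diff:.2f}, "
      f"95% CI [{diff - 1.96 * se:.2f}, {diff + 1.96 * se:.2f}]")
```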

  4. Statistical significance of cis-regulatory modules

    Directory of Open Access Journals (Sweden)

    Smith Andrew D

    2007-01-01

Abstract Background It is becoming increasingly important for researchers to be able to scan through large genomic regions for transcription factor binding sites or clusters of binding sites forming cis-regulatory modules. Correspondingly, there has been a push to develop algorithms for the rapid detection and assessment of cis-regulatory modules. While various algorithms for this purpose have been introduced, most are not well suited for rapid, genome scale scanning. Results We introduce methods designed for the detection and statistical evaluation of cis-regulatory modules, modeled as either clusters of individual binding sites or as combinations of sites with constrained organization. In order to determine the statistical significance of module sites, we first need a method to determine the statistical significance of single transcription factor binding site matches. We introduce a straightforward method of estimating the statistical significance of single site matches using a database of known promoters to produce data structures that can be used to estimate p-values for binding site matches. We next introduce a technique to calculate the statistical significance of the arrangement of binding sites within a module using a max-gap model. If the module being scanned for has defined organizational parameters, the probability of the module is corrected to account for organizational constraints. The statistical significance of single site matches and the architecture of sites within the module can be combined to provide an overall estimation of statistical significance of cis-regulatory module sites. Conclusion The methods introduced in this paper allow for the detection and statistical evaluation of single transcription factor binding sites and cis-regulatory modules. The features described are implemented in the Search Tool for Occurrences of Regulatory Motifs (STORM) and MODSTORM software.

  5. The thresholds for statistical and clinical significance

    DEFF Research Database (Denmark)

    Jakobsen, Janus Christian; Gluud, Christian; Winkel, Per

    2014-01-01

BACKGROUND: Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore … of the probability that a given trial result is compatible with a 'null' effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance …

  6. The insignificance of statistical significance testing

    Science.gov (United States)

    Johnson, Douglas H.

    1999-01-01

    Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.

  7. Swiss solar power statistics 2007 - Significant expansion

    International Nuclear Information System (INIS)

    Hostettler, T.

    2008-01-01

This article presents and discusses the 2007 statistics for solar power in Switzerland. A significant number of new installations is noted, as are the high production figures from newer installations. The basics behind the compilation of the Swiss solar power statistics are briefly reviewed and an overview for the period 1989 to 2007 is presented which includes figures on the number of photovoltaic plants in service and installed peak power. Typical production figures in kilowatt-hours (kWh) per installed kilowatt-peak power (kWp) are presented and discussed for installations of various sizes. Increased production after inverter replacement in older installations is noted. Finally, the general political situation in Switzerland as far as solar power is concerned is briefly discussed, as are international developments.

  8. Significant Statistics: Viewed with a Contextual Lens

    Science.gov (United States)

    Tait-McCutcheon, Sandi

    2010-01-01

    This paper examines the pedagogical and organisational changes three lead teachers made to their statistics teaching and learning programs. The lead teachers posed the research question: What would the effect of contextually integrating statistical investigations and literacies into other curriculum areas be on student achievement? By finding the…

  9. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance.

    Science.gov (United States)

    Kramer, Karen L; Veile, Amanda; Otárola-Castillo, Erik

    2016-01-01

    Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children's growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children's monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children's growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children's growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children's growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance.

  10. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance.

    Directory of Open Access Journals (Sweden)

    Karen L Kramer

Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children's growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children's monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children's growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children's growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children's growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance.

  11. Vector-field statistics for the analysis of time varying clinical gait data.

    Science.gov (United States)

    Donnelly, C J; Alexander, C; Pataky, T C; Stannage, K; Reid, S; Robinson, M A

    2017-01-01

In clinical settings, the time varying analysis of gait data relies heavily on the experience of the individual(s) assessing these biological signals. Though three dimensional kinematics are recognised as time varying waveforms (1D), exploratory statistical analysis of these data are commonly carried out with multiple discrete or 0D dependent variables. In the absence of an a priori 0D hypothesis, clinicians are at risk of making type I and II errors in their analysis of time varying gait signatures when statistics are used in concert with preferred subjective clinical assessment methods. The aim of this communication was to determine if vector field waveform statistics were capable of providing quantitative corroboration to practically significant differences in time varying gait signatures as determined by two clinically trained gait experts. The case study was a left hemiplegic Cerebral Palsy (GMFCS I) gait patient following a botulinum toxin (BoNT-A) injection to their left gastrocnemius muscle. When comparing subjective clinical gait assessments between two testers, they were in agreement with each other for 61% of the joint degrees of freedom and phases of motion analysed. Tester 1 and tester 2 were in agreement with the vector-field analysis for 78% and 53% of the kinematic variables analysed, respectively. When the subjective analyses of tester 1 and tester 2 were pooled together and then compared to the vector-field analysis, they were in agreement for 83% of the time varying kinematic variables analysed. These outcomes demonstrate that in principle, vector-field statistics corroborate what a team of clinical gait experts would classify as practically meaningful pre- versus post time varying kinematic differences. The potential for vector-field statistics to be used as a useful clinical tool for the objective analysis of time varying clinical gait data is established. Future research is recommended to assess the usefulness of vector-field analyses
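    Vector-field waveform statistics of the kind described here are usually run with dedicated statistical parametric mapping software; the sketch below is a minimal nonparametric stand-in rather than the authors' pipeline. It compares two groups of time-normalized gait curves with a pointwise t-statistic whose maximum is calibrated by permutation, controlling the family-wise error across the whole waveform. All curve shapes and group sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_a, n_b, n_time = 12, 12, 101  # curves time-normalized to 101 points (assumption)

# Synthetic "pre" and "post" kinematic waveforms (e.g., joint angle vs. % gait cycle).
t = np.linspace(0, 1, n_time)
group_a = np.sin(2 * np.pi * t) + rng.normal(0, 0.4, (n_a, n_time))
group_b = (np.sin(2 * np.pi * t)
           + 0.5 * np.exp(-((t - 0.6) ** 2) / 0.01)  # localized difference
           + rng.normal(0, 0.4, (n_b, n_time)))

def pointwise_t(a, b):
    """Two-sample t-statistic computed independently at every time point."""
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va / len(a) + vb / len(b))

observed = pointwise_t(group_a, group_b)

# Permutation null for the maximum |t| over the whole waveform controls
# the family-wise error across time points.
pooled = np.vstack([group_a, group_b])
max_null = []
for _ in range(2000):
    perm = rng.permutation(len(pooled))
    max_null.append(np.abs(pointwise_t(pooled[perm[:n_a]], pooled[perm[n_a:]])).max())
threshold = np.quantile(max_null, 0.95)

sig = np.abs(observed) > threshold
print(f"Critical |t| threshold: {threshold:.2f}")
print(f"Time points with a significant group difference: {sig.sum()} of {n_time}")
```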

  12. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    Science.gov (United States)

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  13. Significance levels for studies with correlated test statistics.

    Science.gov (United States)

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
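    Efron's observation cited above, that correlation among test statistics makes the shape of the null histogram vary from study to study, is easy to reproduce in simulation. The sketch below illustrates the phenomenon only; it is not the authors' conditional procedure. It draws null z-scores that are either independent or equicorrelated (a one-factor model, assumed here for illustration) and compares how the tail count of |z| > 2 fluctuates across "studies".

```python
import numpy as np

rng = np.random.default_rng(1)
n_tests, n_studies, rho = 1000, 200, 0.6

def tail_counts(correlated: bool) -> np.ndarray:
    counts = []
    for _ in range(n_studies):
        if correlated:
            # One-factor model: pairwise correlation rho among all null z-scores,
            # each still marginally N(0, 1).
            common = rng.normal()
            z = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.normal(size=n_tests)
        else:
            z = rng.normal(size=n_tests)
        counts.append(int((np.abs(z) > 2).sum()))
    return np.array(counts)

for label, c in [("independent", tail_counts(False)), ("correlated", tail_counts(True))]:
    print(f"{label:>11}: mean tail count {c.mean():5.1f}, SD {c.std(ddof=1):5.1f}")
# The means agree (~45.5 expected per 1000 tests), but the correlated case has a
# much larger SD: any single study's histogram can look misleadingly heavy- or
# light-tailed, which is why tail counts alone can mislead.
```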

  14. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

This article raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice … We argue that applying statistical significance tests and mechanically adhering to their results are highly problematic and detrimental to critical thinking. We claim that the use of such tests does not provide any advantages in relation to deciding whether differences between citation indicators …

  15. On detection and assessment of statistical significance of Genomic Islands

    Directory of Open Access Journals (Sweden)

    Chaudhuri Probal

    2008-04-01

Abstract Background Many of the available methods for detecting Genomic Islands (GIs) in prokaryotic genomes use markers such as transposons, proximal tRNAs, flanking repeats etc., or they use other supervised techniques requiring training datasets. Most of these methods are primarily based on the biases in GC content or codon and amino acid usage of the islands. However, these methods either do not use any formal statistical test of significance or use statistical tests for which the critical values and the P-values are not adequately justified. We propose a method, which is unsupervised in nature and uses Monte-Carlo statistical tests based on randomly selected segments of a chromosome. Such tests are supported by precise statistical distribution theory, and consequently, the resulting P-values are quite reliable for making the decision. Results Our algorithm (named Design-Island, an acronym for Detection of Statistically Significant Genomic Island) runs in two phases. Some 'putative GIs' are identified in the first phase, and those are refined into smaller segments containing horizontally acquired genes in the refinement phase. This method is applied to the Salmonella typhi CT18 genome leading to the discovery of several new pathogenicity, antibiotic resistance and metabolic islands that were missed by earlier methods. Many of these islands contain mobile genetic elements like phage-mediated genes, transposons, integrase and IS elements confirming their horizontal acquirement. Conclusion The proposed method is based on statistical tests supported by precise distribution theory and reliable P-values along with a technique for visualizing statistically significant islands. The performance of our method is better than many other well known methods in terms of their sensitivity and accuracy, and in terms of specificity, it is comparable to other methods.

  16. Your Chi-Square Test Is Statistically Significant: Now What?

    Science.gov (United States)

    Sharpe, Donald

    2015-01-01

    Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…

  17. Statistical Significance and Effect Size: Two Sides of a Coin.

    Science.gov (United States)

    Fan, Xitao

    This paper suggests that statistical significance testing and effect size are two sides of the same coin; they complement each other, but do not substitute for one another. Good research practice requires that both should be taken into consideration to make sound quantitative decisions. A Monte Carlo simulation experiment was conducted, and a…

  18. Reporting effect sizes as a supplement to statistical significance ...

    African Journals Online (AJOL)

    The purpose of the article is to review the statistical significance reporting practices in reading instruction studies and to provide guidelines for when to calculate and report effect sizes in educational research. A review of six readily accessible (online) and accredited journals publishing research on reading instruction ...

  19. Test for the statistical significance of differences between ROC curves

    International Nuclear Information System (INIS)

    Metz, C.E.; Kronman, H.B.

    1979-01-01

    A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions
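    The test statistic described in this abstract is a Wald-type quadratic form in the difference between the two fitted binormal parameter vectors. Assuming the maximum-likelihood estimates and their covariance matrices are already in hand (the numbers below are invented for illustration), the computation is brief:

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical binormal ROC fits: parameters (a, b) with covariance matrices,
# as produced by a maximum-likelihood fit to rating data for each curve.
theta1 = np.array([1.20, 0.85]); cov1 = np.array([[0.030, 0.004], [0.004, 0.020]])
theta2 = np.array([0.95, 0.90]); cov2 = np.array([[0.028, 0.003], [0.003, 0.022]])

# Wald statistic: d' (V1 + V2)^{-1} d is approximately chi-square with 2 df
# for independent datasets and large numbers of trials.
d = theta1 - theta2
stat = float(d @ np.linalg.inv(cov1 + cov2) @ d)
p = chi2.sf(stat, df=2)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
```

    As the abstract notes, the chi-square approximation is exact only in the large-sample limit, so with few trials the realized false-positive rate can drift from the nominal significance level.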

  20. Complication rates of ostomy surgery are high and vary significantly between hospitals.

    Science.gov (United States)

    Sheetz, Kyle H; Waits, Seth A; Krell, Robert W; Morris, Arden M; Englesbe, Michael J; Mullard, Andrew; Campbell, Darrell A; Hendren, Samantha

    2014-05-01

    Ostomy surgery is common and has traditionally been associated with high rates of morbidity and mortality, suggesting an important target for quality improvement. The purpose of this work was to evaluate the variation in outcomes after ostomy creation surgery within Michigan to identify targets for quality improvement. This was a retrospective cohort study. The study took place within the 34-hospital Michigan Surgical Quality Collaborative. Patients included were those undergoing ostomy creation surgery between 2006 and 2011. We evaluated hospital morbidity and mortality rates after risk adjustment (age, comorbidities, emergency vs elective, and procedure type). A total of 4250 patients underwent ostomy creation surgery; 3866 procedures (91.0%) were open and 384 (9.0%) were laparoscopic. Unadjusted morbidity and mortality rates were 43.9% and 10.7%. Unadjusted morbidity rates for specific procedures ranged from 32.7% for ostomy-creation-only procedures to 47.8% for Hartmann procedures. Risk-adjusted morbidity rates varied significantly between hospitals, ranging from 31.2% (95% CI, 18.4-43.9) to 60.8% (95% CI, 48.9-72.6). There were 5 statistically significant high-outlier hospitals and 3 statistically significant low-outlier hospitals for risk-adjusted morbidity. The pattern of complication types was similar between high- and low-outlier hospitals. Case volume, operative duration, and use of laparoscopic surgery did not explain the variation in morbidity rates across hospitals. This work was limited by its retrospective study design, by unmeasured variation in case severity, and by our inability to differentiate between colostomies and ileostomies because of the use of Current Procedural Terminology codes. Morbidity and mortality rates for modern ostomy surgery are high. Although this type of surgery has received little attention in healthcare policy, these data reveal that it is both common and uncommonly morbid. Variation in hospital performance provides an

  1. Increasing the statistical significance of entanglement detection in experiments.

    Science.gov (United States)

    Jungnitsch, Bastian; Niekamp, Sönke; Kleinmann, Matthias; Gühne, Otfried; Lu, He; Gao, Wei-Bo; Chen, Yu-Ao; Chen, Zeng-Bing; Pan, Jian-Wei

    2010-05-28

    Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. Experimentally, we observe this phenomenon in a four-photon experiment, testing the Mermin and Ardehali inequality for different levels of noise. Furthermore, we provide a way to develop entanglement tests with high statistical significance.

  2. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

  3. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper. PMID:25878958

  4. Statistical significance of epidemiological data. Seminar: Evaluation of epidemiological studies

    International Nuclear Information System (INIS)

    Weber, K.H.

    1993-01-01

In stochastic damages, the numbers of events, e.g. the persons who are affected by or have died of cancer, and thus the relative frequencies (incidence or mortality) are binomially distributed random variables. Their statistical fluctuations can be characterized by confidence intervals. For epidemiologic questions, especially for the analysis of stochastic damages in the low dose range, the following issues are interesting: - Is a sample (a group of persons) with a definite observed damage frequency part of the whole population? - Is an observed frequency difference between two groups of persons random or statistically significant? - Is an observed increase or decrease of the frequencies with increasing dose random or statistically significant, and how large is the regression coefficient (= risk coefficient) in this case? These problems can be solved by statistical tests. So-called distribution-free tests and tests which are not bound to the supposition of normal distribution are of particular interest, such as: - the χ²-independence test (test in contingency tables); - the Fisher-Yates test; - the trend test according to Cochran; - the rank correlation test given by Spearman. These tests are explained in terms of selected epidemiologic data, e.g. of leukaemia clusters, of the cancer mortality of the Japanese A-bomb survivors, especially in the low dose range, as well as on the sample of the cancer mortality in the high background area in Yangjiang (China). (orig.)
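    The distribution-free tests listed in this abstract are available in standard statistics libraries. A small illustration with invented counts (not data from the seminar) might look as follows, asking whether an observed frequency difference between two groups of persons is statistically significant and whether incidence trends with dose:

```python
from scipy.stats import chi2_contingency, fisher_exact, spearmanr

# Hypothetical 2x2 contingency table: cancer cases vs. non-cases in two groups.
table = [[18, 982],   # exposed group:   18 cases out of 1000
         [10, 990]]   # reference group: 10 cases out of 1000

chi2_stat, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square test: chi2 = {chi2_stat:.2f} (df={dof}), p = {p_chi2:.3f}")
print(f"Fisher exact:    OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")

# Spearman rank correlation as a distribution-free trend check:
# does incidence rise with dose category?
dose = [0, 1, 2, 3, 4]
incidence = [8, 10, 11, 15, 19]
rho, p_trend = spearmanr(dose, incidence)
print(f"Spearman trend:  rho = {rho:.2f}, p = {p_trend:.3f}")
```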

  5. Systematic reviews of anesthesiologic interventions reported as statistically significant

    DEFF Research Database (Denmark)

    Imberger, Georgina; Gluud, Christian; Boylan, John

    2015-01-01

… statistically significant meta-analyses of anesthesiologic interventions, we used TSA to estimate power and imprecision in the context of sparse data and repeated updates. METHODS: We conducted a search to identify all systematic reviews with meta-analyses that investigated an intervention that may … RESULTS: From 11,870 titles, we found 682 systematic reviews that investigated anesthesiologic interventions. In the 50 sampled meta-analyses, the median number of trials included was 8 (interquartile range [IQR], 5-14), the median number of participants was 964 (IQR, 523-1736), and the median number …

  6. Increasing the statistical significance of entanglement detection in experiments

    Energy Technology Data Exchange (ETDEWEB)

    Jungnitsch, Bastian; Niekamp, Soenke; Kleinmann, Matthias; Guehne, Otfried [Institut fuer Quantenoptik und Quanteninformation, Innsbruck (Austria); Lu, He; Gao, Wei-Bo; Chen, Zeng-Bing [Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei (China); Chen, Yu-Ao; Pan, Jian-Wei [Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei (China); Physikalisches Institut, Universitaet Heidelberg (Germany)

    2010-07-01

    Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. We show this to be the case for an error model in which the variance of an observable is interpreted as its error and for the standard error model in photonic experiments. Specifically, we demonstrate that the Mermin inequality yields a Bell test which is statistically more significant than the Ardehali inequality in the case of a photonic four-qubit state that is close to a GHZ state. Experimentally, we observe this phenomenon in a four-photon experiment, testing the above inequalities for different levels of noise.

  7. A tutorial on hunting statistical significance by chasing N

    Directory of Open Access Journals (Sweden)

    Denes Szucs

    2016-09-01

There is increasing concern about the replicability of studies in psychology and cognitive neuroscience. Hidden data dredging (also called p-hacking) is a major contributor to this crisis because it substantially increases Type I error, resulting in a much larger proportion of false positive findings than the usually expected 5%. In order to build better intuition to avoid, detect and criticise some typical problems, here I systematically illustrate the large impact that some easy-to-implement, and therefore perhaps frequent, data dredging techniques have on boosting false positive findings. I illustrate several forms of two special cases of data dredging. First, researchers may violate the data collection stopping rules of null hypothesis significance testing by repeatedly checking for statistical significance with various numbers of participants. Second, researchers may group participants post hoc along potential but unplanned independent grouping variables. The first approach 'hacks' the number of participants in studies, the second approach 'hacks' the number of variables in the analysis. I demonstrate the high number of false positive findings generated by these techniques with data from true null distributions. I also illustrate that it is extremely easy to introduce strong bias into data by very mild selection and re-testing. Similar, usually undocumented data dredging steps can easily lead to having 20-50% or more false positives.
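    The first form of data dredging described, repeatedly testing while adding participants and stopping at the first p < 0.05, is straightforward to simulate. In the sketch below the batch size and look schedule are arbitrary choices; under a true null the realized Type I error rate climbs far above the nominal 5%.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
n_sims, start_n, step, max_n = 2000, 10, 5, 100

def hacked_experiment() -> bool:
    """Collect data under a true null, testing after every batch of participants."""
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))
    while len(a) <= max_n:
        if ttest_ind(a, b).pvalue < 0.05:
            return True          # stop early and report "significance"
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))
    return False                 # never reached significance: file-drawered

fp_rate = np.mean([hacked_experiment() for _ in range(n_sims)])
print(f"False-positive rate with optional stopping: {fp_rate:.1%} (nominal 5%)")
# Typically lands well above 5% with this look schedule - data dredging in action.
```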

  8. Conducting tests for statistically significant differences using forest inventory data

    Science.gov (United States)

    James A. Westfall; Scott A. Pugh; John W. Coulston

    2013-01-01

    Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...

  9. Detecting Statistically Significant Communities of Triangle Motifs in Undirected Networks

    Science.gov (United States)

    2016-04-26

Systems, Statistics & Management Science, University of Alabama, USA. DISTRIBUTION A: approved for public release. [Only front matter and contents-list fragments are available for this record; the report includes a summary and an application to real networks, among them the 2012 FBS college football schedule network.]

  10. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant, and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.
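    The core point, that a "significant" and a "non-significant" result can be entirely consistent, can be shown in a few lines. The effect sizes and standard errors below are invented, not those of the fictitious studies used in the survey:

```python
from scipy import stats

# Two hypothetical studies estimating the same effect.
# Study A: effect 0.40, SE 0.18  -> p < 0.05 ("significant")
# Study B: effect 0.35, SE 0.20  -> p > 0.05 ("non-significant")
for name, effect, se in [("A", 0.40, 0.18), ("B", 0.35, 0.20)]:
    z = effect / se
    p = 2 * stats.norm.sf(abs(z))           # two-sided p-value
    lo, hi = effect - 1.96 * se, effect + 1.96 * se
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"Study {name}: effect {effect:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], "
          f"p = {p:.3f} ({verdict})")
# The two CIs overlap almost completely: the studies are mutually consistent,
# even though their "significance status" differs.
```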

  11. On-line statistical processing of radiation detector pulse trains with time-varying count rates

    International Nuclear Information System (INIS)

    Apostolopoulos, G.

    2008-01-01

    Statistical analysis is of primary importance for the correct interpretation of nuclear measurements, due to the inherent random nature of radioactive decay processes. This paper discusses the application of statistical signal processing techniques to the random pulse trains generated by radiation detectors. The aims of the presented algorithms are: (i) continuous, on-line estimation of the underlying time-varying count rate θ(t) and its first-order derivative dθ/dt; (ii) detection of abrupt changes in both of these quantities and estimation of their new value after the change point. Maximum-likelihood techniques, based on the Poisson probability distribution, are employed for the on-line estimation of θ and dθ/dt. Detection of abrupt changes is achieved on the basis of the generalized likelihood ratio statistical test. The properties of the proposed algorithms are evaluated by extensive simulations and possible applications for on-line radiation monitoring are discussed
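    For a homogeneous Poisson process, the maximum-likelihood rate estimate over a window is simply counts divided by observation time, and an abrupt rate change can be flagged with a generalized likelihood ratio (GLR) comparing a single-rate model against the best two-rate split. The sketch below illustrates that general idea on simulated counts; it is not the paper's exact algorithm, and the simulated rates, segment bounds, and binning are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated detector counts in 1-second bins: rate jumps from 5 to 9 cps at t = 300.
counts = np.concatenate([rng.poisson(5.0, 300), rng.poisson(9.0, 200)])

def glr_change(counts: np.ndarray) -> tuple[int, float]:
    """Generalized likelihood ratio for one change point in a Poisson rate.

    The Poisson log-likelihood of n events over t bins at the ML rate r = n/t
    is n*log(n/t) - n, dropping terms that cancel in the likelihood ratio.
    """
    n_total, t_total = counts.sum(), len(counts)

    def loglik(n, t):
        return n * np.log(n / t) - n if n > 0 else 0.0

    ll0 = loglik(n_total, t_total)               # single-rate model
    best_k, best_glr = -1, 0.0
    for k in range(10, t_total - 10):            # avoid tiny segments
        n1 = counts[:k].sum()
        ll1 = loglik(n1, k) + loglik(n_total - n1, t_total - k)
        if 2 * (ll1 - ll0) > best_glr:
            best_k, best_glr = k, 2 * (ll1 - ll0)
    return best_k, best_glr

k, glr = glr_change(counts)
print(f"ML rate estimate over the whole record: {counts.mean():.2f} cps")
print(f"GLR change point detected at t = {k} s (statistic {glr:.1f})")
```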

  12. After statistics reform : Should we still teach significance testing?

    NARCIS (Netherlands)

    A. Hak (Tony)

    2014-01-01

In the longer term null hypothesis significance testing (NHST) will disappear because p-values are not informative and not replicable. Should we, in the future, continue to teach the procedures of these then-abolished routines (i.e., NHST)? Three arguments are discussed for not teaching NHST in

  13. Distinguishing between statistical significance and practical/clinical meaningfulness using statistical inference.

    Science.gov (United States)

    Wilkinson, Michael

    2014-03-01

    Decisions about support for predictions of theories in light of data are made using statistical inference. The dominant approach in sport and exercise science is the Neyman-Pearson (N-P) significance-testing approach. When applied correctly it provides a reliable procedure for making dichotomous decisions for accepting or rejecting zero-effect null hypotheses with known and controlled long-run error rates. Type I and type II error rates must be specified in advance and the latter controlled by conducting an a priori sample size calculation. The N-P approach does not provide the probability of hypotheses or indicate the strength of support for hypotheses in light of data, yet many scientists believe it does. Outcomes of analyses allow conclusions only about the existence of non-zero effects, and provide no information about the likely size of true effects or their practical/clinical value. Bayesian inference can show how much support data provide for different hypotheses, and how personal convictions should be altered in light of data, but the approach is complicated by formulating probability distributions about prior subjective estimates of population effects. A pragmatic solution is magnitude-based inference, which allows scientists to estimate the true magnitude of population effects and how likely they are to exceed an effect magnitude of practical/clinical importance, thereby integrating elements of subjective Bayesian-style thinking. While this approach is gaining acceptance, progress might be hastened if scientists appreciate the shortcomings of traditional N-P null hypothesis significance testing.

  14. Positive Selection or Free to Vary? Assessing the Functional Significance of Sequence Change Using Molecular Dynamics.

    Directory of Open Access Journals (Sweden)

    Jane R Allison

Evolutionary arms races between pathogens and their hosts may be manifested as selection for rapid evolutionary change of key genes, and are sometimes detectable through sequence-level analyses. In the case of protein-coding genes, such analyses frequently predict that specific codons are under positive selection. However, detecting positive selection can be non-trivial, and false positive predictions are a common concern in such analyses. It is therefore helpful to place such predictions within a structural and functional context. Here, we focus on the p19 protein from tombusviruses. P19 is a homodimer that sequesters siRNAs, thereby preventing the host RNAi machinery from shutting down viral infection. Sequence analysis of the p19 gene is complicated by the fact that it is constrained at the sequence level by overprinting of a viral movement protein gene. Using homology modeling, in silico mutation and molecular dynamics simulations, we assess how non-synonymous changes to two residues involved in forming the dimer interface (one invariant, and one predicted to be under positive selection) impact molecular function. Interestingly, we find that both observed variation and potential variation (where a non-synonymous change to p19 would be synonymous for the overprinted movement protein) do not significantly impact protein structure or RNA binding. Consequently, while several methods identify residues at the dimer interface as being under positive selection, MD results suggest they are functionally indistinguishable from a site that is free to vary. Our analyses serve as a caveat to using sequence-level analyses in isolation to detect and assess positive selection, and emphasize the importance of also accounting for how non-synonymous changes impact structure and function.

  15. A statistical theory of cell killing by radiation of varying linear energy transfer

    International Nuclear Information System (INIS)

    Hawkins, R.B.

    1994-01-01

    A theory is presented that provides an explanation for the observed features of the survival of cultured cells after exposure to densely ionizing high-linear energy transfer (LET) radiation. It starts from a phenomenological postulate based on the linear-quadratic form of cell survival observed for low-LET radiation and uses principles of statistics and fluctuation theory to demonstrate that the effect of varying LET on cell survival can be attributed to random variation of dose to small volumes contained within the nucleus. A simple relation is presented for surviving fraction of cells after exposure to radiation of varying LET that depends on the α and β parameters for the same cells in the limit of low-LET radiation. This relation implies that the value of β is independent of LET. Agreement of the theory with selected observations of cell survival from the literature is demonstrated. A relation is presented that gives relative biological effectiveness (RBE) as a function of the α and β parameters for low-LET radiation. Measurements from microdosimetry are used to estimate the size of the subnuclear volume to which the fluctuation pertains. 11 refs., 4 figs., 2 tabs

  16. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Science.gov (United States)

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  17. Statistically derived factors of varied importance to audiologists when making a hearing aid brand preference decision.

    Science.gov (United States)

    Johnson, Earl E; Mueller, H Gustav; Ricketts, Todd A

    2009-01-01

To determine the amount of importance audiologists place on various items related to their selection of a preferred hearing aid brand manufacturer. Three hundred forty-three hearing aid-dispensing audiologists rated a total of 32 randomized items by survey methodology. Principal component analysis identified seven orthogonal statistical factors of importance. In rank order, these factors were Aptitude of the Brand, Image, Cost, Sales and Speed of Delivery, Exposure, Colleague Recommendations, and Contracts and Incentives. While it was hypothesized that differences among audiologists in the importance ratings of these factors would dictate their preference for a given brand, that was not our finding. Specifically, mean ratings for the six most important factors did not differ among audiologists preferring different brands. A statistically significant difference among audiologists preferring different brands was present, however, for one factor: Contracts and Incentives. Its assigned importance, though, was always lower than that for the other six factors. Although most audiologists have a preferred hearing aid brand, differences in the perceived importance of common factors attributed to brands do not largely determine preference for a particular brand.

  18. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  19. Controversy in the allometric application of fixed- versus varying-exponent models: a statistical and mathematical perspective.

    Science.gov (United States)

    Tang, Huadong; Hussain, Azher; Leal, Mauricio; Fluhler, Eric; Mayersohn, Michael

    2011-02-01

    This commentary is a reply to a recent article by Mahmood commenting on the authors' article on the use of fixed-exponent allometry in predicting human clearance. The commentary discusses eight issues that are related to criticisms made in Mahmood's article and examines the controversies (fixed-exponent vs. varying-exponent allometry) from the perspective of statistics and mathematics. The key conclusion is that any allometric method, which is to establish a power function based on a limited number of animal species and to extrapolate the resulting power function to human values (varying-exponent allometry), is infused with fundamental statistical errors. Copyright © 2010 Wiley-Liss, Inc.
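    Both camps fit the same power law, CL = a·BW^b; the dispute is over whether the exponent b is estimated from the animal data (varying-exponent allometry) or fixed in advance, e.g. at 0.75 (fixed-exponent allometry). A sketch of the two variants on invented animal clearance data, to make the distinction concrete:

```python
import numpy as np

# Hypothetical body weights (kg) and clearances (mL/min) for four species.
bw = np.array([0.02, 0.25, 2.5, 12.0])   # mouse, rat, rabbit, dog (assumed values)
cl = np.array([1.5, 12.0, 80.0, 300.0])

# Varying-exponent allometry: estimate both a and b by log-log regression.
b, log_a = np.polyfit(np.log(bw), np.log(cl), 1)
a = np.exp(log_a)

# Fixed-exponent allometry: force b = 0.75 and fit only the coefficient a.
b_fixed = 0.75
a_fixed = np.exp(np.mean(np.log(cl) - b_fixed * np.log(bw)))

human_bw = 70.0
print(f"Varying exponent: CL = {a:.1f} * BW^{b:.2f} "
      f"-> human {a * human_bw ** b:.0f} mL/min")
print(f"Fixed exponent:   CL = {a_fixed:.1f} * BW^{b_fixed} "
      f"-> human {a_fixed * human_bw ** b_fixed:.0f} mL/min")
# The statistical objection raised in the commentary: b estimated from a handful
# of species carries wide uncertainty that is amplified by extrapolation to 70 kg.
```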

  20. Application of a Statistical Linear Time-Varying System Model of High Grazing Angle Sea Clutter for Computing Interference Power

    Science.gov (United States)

    2017-12-08

APPLICATION OF A STATISTICAL LINEAR TIME-VARYING SYSTEM MODEL OF HIGH GRAZING ANGLE SEA CLUTTER FOR COMPUTING INTERFERENCE POWER. [Only fragments of the report body are available for this record: the derivation approximates one of the sinc factors with the Dirichlet kernel to facilitate computation of an integral, obtains the resultant autocorrelation by substitution, and notes that Python code was used to generate the report's figures.]

  1. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    Science.gov (United States)

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  2. Measuring individual significant change on the Beck Depression Inventory-II through IRT-based statistics.

    NARCIS (Netherlands)

    Brouwer, D.; Meijer, R.R.; Zevalkink, D.J.

    2013-01-01

Several researchers have emphasized that item response theory (IRT)-based methods should be preferred over classical approaches in measuring change for individual patients. In the present study we discuss and evaluate the use of IRT-based statistics to measure statistically significant individual change

  3. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
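    A minimal version of the proposed procedure, using the Euclidean distance the study ultimately recommends and synthetic stand-ins for the footprint data, pools the two samples, resamples summary histograms under the null of no difference, and reads off a bootstrap significance level:

```python
import numpy as np

rng = np.random.default_rng(11)
bins = np.linspace(0, 10, 21)

def summary_hist(values: np.ndarray) -> np.ndarray:
    """Normalized histogram playing the role of a summary over many footprints."""
    h, _ = np.histogram(values, bins=bins)
    return h / h.sum()

# Synthetic footprint measurements for two cloud-object size categories
# (distribution shapes are arbitrary assumptions for the demo).
sample_a = rng.gamma(shape=2.0, scale=1.5, size=4000)
sample_b = rng.gamma(shape=2.3, scale=1.5, size=2500)

observed = np.linalg.norm(summary_hist(sample_a) - summary_hist(sample_b))

# Bootstrap under the null: both summary histograms are resampled from the
# pooled data, making few assumptions about the underlying distribution.
pooled = np.concatenate([sample_a, sample_b])
null_dist = []
for _ in range(2000):
    ha = summary_hist(rng.choice(pooled, size=len(sample_a), replace=True))
    hb = summary_hist(rng.choice(pooled, size=len(sample_b), replace=True))
    null_dist.append(np.linalg.norm(ha - hb))

p = np.mean(np.array(null_dist) >= observed)
print(f"Euclidean distance = {observed:.4f}, bootstrap p = {p:.3f}")
```

    Note how the differing sample sizes enter the null resampling directly, which is the mechanism behind the paper's discussion of unequal footprint totals between size categories.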

  4. Health significance and statistical uncertainty. The value of P-value.

    Science.gov (United States)

    Consonni, Dario; Bertazzi, Pier Alberto

    2017-10-27

The P-value is widely used as a summary statistic of scientific results. Unfortunately, there is a widespread tendency to dichotomize its value into "P≤0.05" ("statistically significant") and "P>0.05" ("statistically not significant"), with the former implying a "positive" result and the latter a "negative" one. To show the unsuitability of such an approach when evaluating the effects of environmental and occupational risk factors. We provide examples of distorted use of P-value and of the negative consequences for science and public health of such a black-and-white vision. The rigid interpretation of P-value as a dichotomy favors the confusion between health relevance and statistical significance, discourages thoughtful thinking, and distracts attention from what really matters, the health significance. A much better way to express and communicate scientific results involves reporting effect estimates (e.g., risks, risk ratios or risk differences) and their confidence intervals (CI), which summarize and convey both health significance and statistical uncertainty. Unfortunately, many researchers do not usually consider the whole interval of CI but only examine if it includes the null-value, therefore degrading this procedure to the same P-value dichotomy (statistical significance or not). In reporting statistical results of scientific research, present effect estimates with their confidence intervals and do not qualify the P-value as "significant" or "not significant".

  5. Statistical vs. Economic Significance in Economics and Econometrics: Further comments on McCloskey & Ziliak

    DEFF Research Database (Denmark)

    Engsted, Tom

I comment on the controversy between McCloskey & Ziliak and Hoover & Siegler on statistical versus economic significance, in the March 2008 issue of the Journal of Economic Methodology. I argue that while McCloskey & Ziliak are right in emphasizing 'real error', i.e. non-sampling error that cannot be eliminated through specification testing, they fail to acknowledge those areas in economics, e.g. rational expectations macroeconomics and asset pricing, where researchers clearly distinguish between statistical and economic significance and where statistical testing plays a relatively minor role in model …

  6. Codon Deviation Coefficient: A novel measure for estimating codon usage bias and its statistical significance

    KAUST Repository

    Zhang, Zhang

    2012-03-22

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis.Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts the bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance.Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. 2012 Zhang et al; licensee BioMed Central Ltd.

  7. Codon Deviation Coefficient: a novel measure for estimating codon usage bias and its statistical significance

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2012-03-01

Abstract Background Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts the bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.

  8. Statistics Refresher for Molecular Imaging Technologists, Part 2: Accuracy of Interpretation, Significance, and Variance.

    Science.gov (United States)

    Farrell, Mary Beth

    2018-06-01

    This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic, the higher the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability (P) value. Calculation of a P value is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around a mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs. Confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being
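
    As a rough illustration of the Cohen κ computation mentioned above, a minimal sketch with invented ratings (κ = (p_o - p_e)/(1 - p_e), observed agreement corrected for chance agreement):

      from collections import Counter

      def cohens_kappa(reader_a, reader_b):
          # p_o: observed agreement; p_e: agreement expected by chance
          # from each reader's marginal rating frequencies.
          n = len(reader_a)
          p_o = sum(a == b for a, b in zip(reader_a, reader_b)) / n
          freq_a = Counter(reader_a)
          freq_b = Counter(reader_b)
          p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
          return (p_o - p_e) / (1 - p_e)

      # Hypothetical example: two readers rating 10 scans.
      a = ["pos", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg", "neg"]
      b = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "neg"]
      print(cohens_kappa(a, b))  # 1.0 would mean perfect agreement beyond chance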

  9. Statistical significance of trends in monthly heavy precipitation over the US

    KAUST Repository

    Mahajan, Salil

    2011-05-11

    Trends in monthly heavy precipitation, defined by a return period of one year, are assessed for statistical significance in observations and Global Climate Model (GCM) simulations over the contiguous United States using Monte Carlo non-parametric and parametric bootstrapping techniques. The results from the two Monte Carlo approaches are found to be similar to each other, and also to the traditional non-parametric Kendall's τ test, implying the robustness of the approach. Two different observational data-sets are employed to test for trends in monthly heavy precipitation and are found to exhibit consistent results. Both data-sets demonstrate upward trends, one of which is found to be statistically significant at the 95% confidence level. Upward trends similar to observations are observed in some climate model simulations of the twentieth century, but their statistical significance is marginal. For projections of the twenty-first century, a statistically significant upward trend is observed in most of the climate models analyzed. The change in the simulated precipitation variance appears to be more important in the twenty-first century projections than changes in the mean precipitation. Stochastic fluctuations of the climate system are found to dominate monthly heavy precipitation, as some GCM simulations show a downward trend even in the twenty-first century projections when the greenhouse gas forcings are strong. © 2011 Springer-Verlag.
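
    For reference, the traditional Kendall τ check mentioned above is a one-liner with scipy; the precipitation series here is synthetic, and the paper's Monte Carlo bootstrapping is a separate, more involved procedure:

      import numpy as np
      from scipy.stats import kendalltau

      rng = np.random.default_rng(42)
      years = np.arange(1950, 2010)
      # Hypothetical heavy-precipitation series with a weak upward trend.
      precip = 50 + 0.1 * (years - years[0]) + rng.normal(0, 5, years.size)

      tau, p_value = kendalltau(years, precip)
      print(f"tau={tau:.3f}, p={p_value:.4f}")  # small p suggests a monotone trend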

  10. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…
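
    The sample-size dependence of p values is easy to demonstrate numerically. A minimal sketch for a one-sample t-test with the standardized effect size held fixed (the design and numbers are illustrative, not taken from the paper):

      from scipy import stats

      d = 0.2  # fixed standardized effect size (Cohen's d)
      for n in (20, 100, 500):
          t = d * n ** 0.5                 # one-sample t statistic for effect d
          p = 2 * stats.t.sf(t, n - 1)     # two-sided p-value
          print(f"n={n:4d}  t={t:.2f}  p={p:.4f}")
      # Same effect size, very different p-values as n grows.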

  11. Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.

    Science.gov (United States)

    Deegear, James

    This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…

  12. Is statistical significance clinically important?--A guide to judge the clinical relevance of study findings

    NARCIS (Netherlands)

    Sierevelt, Inger N.; van Oldenrijk, Jakob; Poolman, Rudolf W.

    2007-01-01

    In this paper we describe several issues that influence the reporting of statistical significance in relation to clinical importance, since misinterpretation of p values is a common issue in orthopaedic literature. Orthopaedic research is tormented by the risks of false-positive (type I error) and

  13. Statistical Significance of the Contribution of Variables to the PCA Solution: An Alternative Permutation Strategy

    Science.gov (United States)

    Linting, Marielle; van Os, Bart Jan; Meulman, Jacqueline J.

    2011-01-01

    In this paper, the statistical significance of the contribution of variables to the principal components in principal components analysis (PCA) is assessed nonparametrically by the use of permutation tests. We compare a new strategy to a strategy used in previous research consisting of permuting the columns (variables) of a data matrix…
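
    A simplified sketch of the column-permutation idea, assuming the contribution of a variable is measured by its absolute loading on the first principal component (the data are synthetic, and this is not the authors' exact procedure):

      import numpy as np
      from sklearn.decomposition import PCA

      def loading_pvalue(X, var, n_perm=999, seed=0):
          # Permute one column, refit the PCA, and count how often the
          # permuted |loading| on PC1 reaches the observed one.
          rng = np.random.default_rng(seed)
          observed = abs(PCA(n_components=1).fit(X).components_[0, var])
          hits = 0
          for _ in range(n_perm):
              Xp = X.copy()
              Xp[:, var] = rng.permutation(Xp[:, var])
              if abs(PCA(n_components=1).fit(Xp).components_[0, var]) >= observed:
                  hits += 1
          return (hits + 1) / (n_perm + 1)

      rng = np.random.default_rng(1)
      X = rng.normal(size=(100, 5))
      X[:, 0] += X[:, 1]  # give variables 0 and 1 shared structure on PC1
      print(loading_pvalue(X, var=0))  # small p: variable 0 contributes to PC1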

  14. Statistical significance versus clinical importance: trials on exercise therapy for chronic low back pain as example.

    NARCIS (Netherlands)

    van Tulder, M.W.; Malmivaara, A.; Hayden, J.; Koes, B.

    2007-01-01

    STUDY DESIGN. Critical appraisal of the literature. OBJECIVES. The objective of this study was to assess if results of back pain trials are statistically significant and clinically important. SUMMARY OF BACKGROUND DATA. There seems to be a discrepancy between conclusions reported by authors and

  15. P-Value, a true test of statistical significance? a cautionary note ...

    African Journals Online (AJOL)

    While it's not the intention of the founders of significance testing and hypothesis testing to have the two ideas intertwined as if they are complementary, the inconvenient marriage of the two practices into one coherent, convenient, incontrovertible and misinterpreted practice has dotted our standard statistics textbooks and ...

  16. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power-analysis in estimating...... the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement to NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis...
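
    A minimal sketch of the advocated power analysis, assuming a two-sample t-test design and using statsmodels (the effect size and targets are illustrative):

      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()
      # Sample size per group needed to detect a medium effect (d = 0.5)
      # with 80% power at the conventional 5% significance level.
      n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
      print(round(n_per_group))  # roughly 64 per group

      # Conversely: the power actually achieved with 30 subjects per group.
      achieved = analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05)
      print(f"{achieved:.2f}")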

  17. Thresholds for statistical and clinical significance in systematic reviews with meta-analytic methods

    DEFF Research Database (Denmark)

    Jakobsen, Janus Christian; Wetterslev, Jorn; Winkel, Per

    2014-01-01

    BACKGROUND: Thresholds for statistical significance when assessing meta-analysis results are being insufficiently demonstrated by traditional 95% confidence intervals and P-values. Assessment of intervention effects in systematic reviews with meta-analysis deserves greater rigour. METHODS......: Methodologies for assessing statistical and clinical significance of intervention effects in systematic reviews were considered. Balancing simplicity and comprehensiveness, an operational procedure was developed, based mainly on The Cochrane Collaboration methodology and the Grading of Recommendations...... Assessment, Development, and Evaluation (GRADE) guidelines. RESULTS: We propose an eight-step procedure for better validation of meta-analytic results in systematic reviews: (1) obtain the 95% confidence intervals and the P-values from both fixed-effect and random-effects meta-analyses and report the most......

  18. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Full Text Available Abstract Background In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
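
    The Monte-Carlo Z-score idea can be sketched generically; score_fn below stands in for any pairwise scorer (e.g. a Smith-Waterman implementation), which is assumed rather than implemented here:

      import random
      import statistics

      def monte_carlo_z(score_fn, seq_a, seq_b, n_shuffles=100, seed=0):
          # Z-score of the real alignment score against the distribution of
          # scores obtained after shuffling one of the sequences.
          rng = random.Random(seed)
          real = score_fn(seq_a, seq_b)
          shuffled = []
          for _ in range(n_shuffles):
              s = list(seq_b)
              rng.shuffle(s)
              shuffled.append(score_fn(seq_a, "".join(s)))
          return (real - statistics.mean(shuffled)) / statistics.stdev(shuffled)

      # Example with a trivial scorer: count of matching positions.
      toy_score = lambda a, b: sum(x == y for x, y in zip(a, b))
      print(monte_carlo_z(toy_score, "ACDEFGHIKL", "ACDEFGHIKL"))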

  19. Statistical determination of significant curved I-girder bridge seismic response parameters

    Science.gov (United States)

    Seo, Junwon

    2013-06-01

    Curved steel bridges are commonly used at interchanges in transportation networks and more of these structures continue to be designed and built in the United States. Though the use of these bridges continues to increase in locations that experience high seismicity, the effects of curvature and other parameters on their seismic behaviors have been neglected in current risk assessment tools. These tools can evaluate the seismic vulnerability of a transportation network using fragility curves. One critical component of fragility curve development for curved steel bridges is the completion of sensitivity analyses that help identify influential parameters related to their seismic response. In this study, an accessible inventory of existing curved steel girder bridges located primarily in the Mid-Atlantic United States (MAUS) was used to establish statistical characteristics used as inputs for a seismic sensitivity study. Critical seismic response quantities were captured using 3D nonlinear finite element models. Influential parameters from these quantities were identified using statistical tools that incorporate experimental Plackett-Burman Design (PBD), which included Pareto optimal plots and prediction profiler techniques. The findings revealed that the influential parameters included the number of spans, radius of curvature, maximum span length, girder spacing, and cross-frame spacing. These parameters showed varying levels of influence on the critical bridge response.

  20. Intensive inpatient treatment for bulimia nervosa: Statistical and clinical significance of symptom changes.

    Science.gov (United States)

    Diedrich, Alice; Schlegl, Sandra; Greetfeld, Martin; Fumi, Markus; Voderholzer, Ulrich

    2018-03-01

    This study examines the statistical and clinical significance of symptom changes during an intensive inpatient treatment program with a strong psychotherapeutic focus for individuals with severe bulimia nervosa. 295 consecutively admitted bulimic patients were administered the Structured Interview for Anorexic and Bulimic Syndromes-Self-Rating (SIAB-S), the Eating Disorder Inventory-2 (EDI-2), the Brief Symptom Inventory (BSI), and the Beck Depression Inventory-II (BDI-II) at treatment intake and discharge. Results indicated statistically significant symptom reductions with large effect sizes regarding severity of binge eating and compensatory behavior (SIAB-S), overall eating disorder symptom severity (EDI-2), overall psychopathology (BSI), and depressive symptom severity (BDI-II) even when controlling for antidepressant medication. The majority of patients showed either reliable (EDI-2: 33.7%, BSI: 34.8%, BDI-II: 18.1%) or even clinically significant symptom changes (EDI-2: 43.2%, BSI: 33.9%, BDI-II: 56.9%). Patients with clinically significant improvement were less distressed at intake and less likely to suffer from a comorbid borderline personality disorder when compared with those who did not improve to a clinically significant extent. Findings indicate that intensive psychotherapeutic inpatient treatment may be effective in about 75% of severely affected bulimic patients. For the remaining non-responding patients, inpatient treatment might be improved through an even stronger focus on the reduction of comorbid borderline personality traits.

  1. Cloud-based solution to identify statistically significant MS peaks differentiating sample categories.

    Science.gov (United States)

    Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B

    2013-03-23

    Mass spectrometry (MS) has evolved to become the primary high-throughput tool for proteomics-based biomarker discovery. To date, multiple challenges in protein MS data analysis remain: large-scale and complex data set management; MS peak identification and indexing; and high-dimensional differential analysis of peaks with false discovery rate (FDR) control for the concurrent statistical tests. "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets and identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution, which provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. The presented web application supports online uploading and analysis of large-scale MS data through a simple user interface. This bioinformatic tool will facilitate the discovery of potential protein biomarkers using MS.

  2. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    Science.gov (United States)

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  3. Statistical significance estimation of a signal within the GooFit framework on GPUs

    Directory of Open Access Journals (Sweden)

    Cristella Leonardo

    2017-01-01

    Full Text Available In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented in both the ROOT/RooFit and GooFit frameworks with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open data-analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on nVidia GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-up performance with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service and a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which the Wilks Theorem may or may not apply, depending on whether its regularity conditions are satisfied.
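
    Where the Wilks regularity conditions do hold, turning a likelihood-ratio statistic into a significance is straightforward; a sketch with a hypothetical log-likelihood difference and parameter count:

      from scipy.stats import chi2, norm

      delta_lnL = 18.5   # hypothetical log-likelihood gain of the signal model
      extra_params = 3   # extra free parameters of the signal model

      lr = 2.0 * delta_lnL                # likelihood-ratio test statistic
      p = chi2.sf(lr, df=extra_params)    # Wilks: LR ~ chi2(extra_params) under H0
      z = norm.isf(p)                     # one-sided Gaussian significance
      print(f"p={p:.2e}, significance={z:.1f} sigma")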

  4. Statistical significance of theoretical predictions: A new dimension in nuclear structure theories (I)

    International Nuclear Information System (INIS)

    DUDEK, J; SZPAK, B; FORNAL, B; PORQUET, M-G

    2011-01-01

    In this and the follow-up article we briefly discuss what we believe represents one of the most serious problems in contemporary nuclear structure: the question of statistical significance of parametrizations of nuclear microscopic Hamiltonians and the implied predictive power of the underlying theories. In the present Part I, we introduce the main lines of reasoning of the so-called Inverse Problem Theory, an important sub-field in the contemporary Applied Mathematics, here illustrated on the example of the Nuclear Mean-Field Approach.

  5. The cag PAI is intact and functional but HP0521 varies significantly in Helicobacter pylori isolates from Malaysia and Singapore.

    Science.gov (United States)

    Schmidt, H-M A; Andres, S; Nilsson, C; Kovach, Z; Kaakoush, N O; Engstrand, L; Goh, K-L; Fock, K M; Forman, D; Mitchell, H

    2010-04-01

    Helicobacter pylori-related disease is at least partially attributable to the genotype of the infecting strain, particularly the presence of specific virulence factors. We investigated the prevalence of a novel combination of H. pylori virulence factors, including the cag pathogenicity island (PAI), and their association with severe disease in isolates from the three major ethnicities in Malaysia and Singapore, and evaluated whether the cag PAI was intact and functional in vitro. Polymerase chain reaction (PCR) was used to detect dupA, cagA, cagE, cagT, cagL and babA, and to type vacA, the EPIYA motifs, HP0521 alleles and oipA ON status in 159 H. pylori clinical isolates. Twenty-two strains were investigated for IL-8 induction and CagA translocation in vitro. The prevalence of cagA, cagE, cagL, cagT, babA, oipA ON and vacA s1 and i1 was >85%, irrespective of the disease state or ethnicity. The prevalence of dupA and the predominant HP0521 allele and EPIYA motif varied significantly with ethnicity (p < 0.05). A high prevalence of an intact cag PAI was found in all ethnic groups; however, no association was observed between any virulence factor and disease state. The novel association between the HP0521 alleles, EPIYA motifs and host ethnicity indicates that further studies to determine the function of this gene are important.

  6. Examining reproducibility in psychology : A hybrid method for combining a statistically significant original study and a replication

    NARCIS (Netherlands)

    Van Aert, R.C.M.; Van Assen, M.A.L.M.

    2018-01-01

    The unrealistically high rate of positive results within psychology has increased the attention to replication research. However, researchers who conduct a replication and want to statistically combine the results of their replication with a statistically significant original study encounter

  7. A Note on Comparing the Power of Test Statistics at Low Significance Levels.

    Science.gov (United States)

    Morris, Nathan; Elston, Robert

    2011-01-01

    It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
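
    The point can be probed numerically with noncentral chi-square power calculations; a sketch comparing a 1-df and a 2-df test that share the same (arbitrarily chosen) noncentrality:

      from scipy.stats import chi2, ncx2

      nc = 10.0  # noncentrality parameter shared by both candidate tests
      for alpha in (0.05, 5e-8):
          for df in (1, 2):
              crit = chi2.ppf(1 - alpha, df)  # rejection threshold at this alpha
              power = ncx2.sf(crit, df, nc)   # P(reject H0 | noncentrality nc)
              print(f"alpha={alpha:<8g} df={df}  power={power:.3f}")
      # Comparing the two df rows within each alpha shows how the relative
      # cost of the extra degree of freedom changes with the alpha level.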

  8. Statistically significant faunal differences among Middle Ordovician age, Chickamauga Group bryozoan bioherms, central Alabama

    Energy Technology Data Exchange (ETDEWEB)

    Crow, C.J.

    1985-01-01

    Middle Ordovician age Chickamauga Group carbonates crop out along the Birmingham and Murphrees Valley anticlines in central Alabama. The macrofossil contents on exposed surfaces of seven bioherms have been counted to determine their various paleontologic characteristics. Twelve groups of organisms are present in these bioherms. Dominant organisms include bryozoans, algae, brachiopods, sponges, pelmatozoans, stromatoporoids and corals. Minor accessory fauna include predators, scavengers and grazers such as gastropods, ostracods, trilobites, cephalopods and pelecypods. Vertical and horizontal niche zonation has been detected for some of the bioherm dwelling fauna. No one bioherm of those studied exhibits all 12 groups of organisms; rather, individual bioherms display various subsets of the total diversity. Statistical treatment (G-test) of the diversity data indicates a lack of statistical homogeneity of the bioherms, both within and between localities. Between-locality population heterogeneity can be ascribed to differences in biologic responses to such gross environmental factors as water depth and clarity, and energy levels. At any one locality, gross aspects of the paleoenvironments are assumed to have been more uniform. Significant differences among bioherms at any one locality may have resulted from patchy distribution of species populations, differential preservation and other factors.
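
    The G-test used above is available in scipy as the log-likelihood-ratio variant of the contingency-table test; the counts below are hypothetical, not taken from the study:

      import numpy as np
      from scipy.stats import chi2_contingency

      # Rows: organism groups; columns: bioherms (hypothetical counts).
      counts = np.array([[30, 12, 25],
                         [10, 28, 15]])
      g, p, dof, expected = chi2_contingency(counts, lambda_="log-likelihood")
      print(f"G = {g:.2f}, dof = {dof}, p = {p:.4g}")  # small p: not homogeneous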

  9. Statistical significant changes in ground thermal conditions of alpine Austria during the last decade

    Science.gov (United States)

    Kellerer-Pirklbauer, Andreas

    2016-04-01

    Longer data series (e.g. >10 a) of ground temperatures in alpine regions are helpful to improve the understanding regarding the effects of present climate change on distribution and thermal characteristics of seasonal frost- and permafrost-affected areas. Beginning in 2004 - and more intensively since 2006 - a permafrost and seasonal frost monitoring network was established in Central and Eastern Austria by the University of Graz. This network consists of c.60 ground temperature (surface and near-surface) monitoring sites which are located at 1922-3002 m a.s.l., at latitude 46°55'-47°22'N and at longitude 12°44'-14°41'E. These data allow conclusions about general ground thermal conditions, potential permafrost occurrence, trend during the observation period, and regional pattern of changes. Calculations and analyses of several different temperature-related parameters were accomplished. At an annual scale a region-wide statistically significant warming during the observation period was revealed by e.g. an increase in mean annual temperature values (mean, maximum) or the significant lowering of the surface frost number (F+). At a seasonal scale no significant trend of any temperature-related parameter was in most cases revealed for spring (MAM) and autumn (SON). Winter (DJF) shows only a weak warming. In contrast, the summer (JJA) season reveals in general a significant warming as confirmed by several different temperature-related parameters such as e.g. mean seasonal temperature, number of thawing degree days, number of freezing degree days, or days without night frost. On a monthly basis August shows the statistically most robust and strongest warming of all months, although regional differences occur. Despite the fact that the general ground temperature warming during the last decade is confirmed by the field data in the study region, complications in trend analyses arise from temperature anomalies (e.g. warm winter 2006/07) or substantial variations in the winter

  10. Estimates of statistical significance for comparison of individual positions in multiple sequence alignments

    Directory of Open Access Journals (Sweden)

    Sadreyev Ruslan I

    2004-08-01

    Full Text Available Abstract Background Profile-based analysis of multiple sequence alignments (MSA) allows for accurate comparison of protein families. Here, we address the problems of detecting statistically confident dissimilarities between (1) an MSA position and a set of predicted residue frequencies, and (2) between two MSA positions. These problems are important for (i) evaluation and optimization of methods predicting residue occurrence at protein positions; (ii) detection of potentially misaligned regions in automatically produced alignments and their further refinement; and (iii) detection of sites that determine functional or structural specificity in two related families. Results For problems (1) and (2), we propose analytical estimates of P-value and apply them to the detection of significant positional dissimilarities in various experimental situations. (a) We compare structure-based predictions of residue propensities at a protein position to the actual residue frequencies in the MSA of homologs. (b) We evaluate our method by the ability to detect erroneous position matches produced by an automatic sequence aligner. (c) We compare MSA positions that correspond to residues aligned by automatic structure aligners. (d) We compare MSA positions that are aligned by high-quality manual superposition of structures. Detected dissimilarities reveal shortcomings of the automatic methods for residue frequency prediction and alignment construction. For the high-quality structural alignments, the dissimilarities suggest sites of potential functional or structural importance. Conclusion The proposed computational method is of significant potential value for the analysis of protein families.

  11. Determining coding CpG islands by identifying regions significant for pattern statistics on Markov chains.

    Science.gov (United States)

    Singer, Meromit; Engström, Alexander; Schönhuth, Alexander; Pachter, Lior

    2011-09-23

    Recent experimental and computational work confirms that CpGs can be unmethylated inside coding exons, thereby showing that codons may be subjected to both genomic and epigenomic constraint. It is therefore of interest to identify coding CpG islands (CCGIs) that are regions inside exons enriched for CpGs. The difficulty in identifying such islands is that coding exons exhibit sequence biases determined by codon usage and constraints that must be taken into account. We present a method for finding CCGIs that showcases a novel approach we have developed for identifying regions of interest that are significant (with respect to a Markov chain) for the counts of any pattern. Our method begins with the exact computation of tail probabilities for the number of CpGs in all regions contained in coding exons, and then applies a greedy algorithm for selecting islands from among the regions. We show that the greedy algorithm provably optimizes a biologically motivated criterion for selecting islands while controlling the false discovery rate. We applied this approach to the human genome (hg18) and annotated CpG islands in coding exons. The statistical criterion we apply to evaluating islands reduces the number of false positives in existing annotations, while our approach to defining islands reveals significant numbers of undiscovered CCGIs in coding exons. Many of these appear to be examples of functional epigenetic specialization in coding exons.

  12. Publication of statistically significant research findings in prosthodontics & implant dentistry in the context of other dental specialties.

    Science.gov (United States)

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-10-01

    To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with similar studies in dentistry with random-effects meta-analyses. Of the 2323 included studies, 71% reported statistically significant results, with the significant results ranging from 47% to 86%. Multivariable modeling identified that geographical area and involvement of a statistician were predictors of statistically significant results. Compared to interventional studies, the odds that in vitro and observational studies would report statistically significant results were increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared to various study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower compared to other dental specialties, but this result did not reach statistical significance (P>0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Indirectional statistics and the significance of an asymmetry discovered by Birch

    International Nuclear Information System (INIS)

    Kendall, D.G.; Young, G.A.

    1984-01-01

    Birch (1982, Nature, 298, 451) reported an apparent 'statistical asymmetry of the Universe'. The authors here develop 'indirectional analysis' as a technique for investigating statistical effects of this kind and conclude that the reported effect (whatever may be its origin) is strongly supported by the observations. The estimated pole of the asymmetry is at RA 13h 30m, Dec. -37°. The angular error in its estimation is unlikely to exceed 20-30°. (author)

  14. Confounding and Statistical Significance of Indirect Effects: Childhood Adversity, Education, Smoking, and Anxious and Depressive Symptomatology

    Directory of Open Access Journals (Sweden)

    Mashhood Ahmed Sheikh

    2017-08-01

    mediate the association between childhood adversity and ADS in adulthood. However, when education was excluded as a mediator-response confounding variable, the indirect effect of childhood adversity on ADS in adulthood was statistically significant (p < 0.05). This study shows that a careful inclusion of potential confounding variables is important when assessing mediation.

  15. Evaluation of significantly modified water bodies in Vojvodina by using multivariate statistical techniques

    Directory of Open Access Journals (Sweden)

    Vujović Svetlana R.

    2013-01-01

    Full Text Available This paper illustrates the utility of multivariate statistical techniques for analysis and interpretation of water quality data sets and identification of pollution sources/factors with a view to get better information about the water quality and design of monitoring network for effective management of water resources. Multivariate statistical techniques, such as factor analysis (FA)/principal component analysis (PCA) and cluster analysis (CA), were applied for the evaluation of variations and for the interpretation of a water quality data set of the natural water bodies obtained during the 2010 monitoring of 13 parameters at 33 different sites. FA/PCA attempts to explain the correlations between the observations in terms of the underlying factors, which are not directly observable. Factor analysis is applied to physico-chemical parameters of natural water bodies with the aim of classification and data summation as well as segmentation of heterogeneous data sets into smaller homogeneous subsets. Factor loadings were categorized as strong and moderate corresponding to the absolute loading values of >0.75 and 0.75-0.50, respectively. Four principal factors were obtained with Eigenvalues >1 summing more than 78% of the total variance in the water data sets, which is adequate to give good prior information regarding data structure. Each factor that is significantly related to specific variables represents a different dimension of water quality. The first factor F1, accounting for 28% of the total variance, represents the hydrochemical dimension of water quality. The second factor F2, accounting for 18% of the total variance, may be taken as a factor of water eutrophication. The third factor F3, accounting for 17% of the total variance, represents the influence of point sources of pollution on water quality. The fourth factor F4, accounting for 13% of the total variance, may be taken as an ecological dimension of water quality. Cluster analysis (CA) is an

  16. Accelerator driven reactors, - the significance of the energy distribution of spallation neutrons on the neutron statistics

    Energy Technology Data Exchange (ETDEWEB)

    Fhager, V

    2000-01-01

    In order to make correct predictions of the second moment of statistical nuclear variables, such as the number of fissions and the number of thermalized neutrons, the dependence of the energy distribution of the source particles on their number should be considered. It has been pointed out recently that neglecting this number dependence in accelerator driven systems might result in bad estimates of the second moment, and this paper contains qualitative and quantitative estimates of the size of these effects. We walk towards the requested results in two steps. First, models of the number dependent energy distributions of the neutrons that are ejected in the spallation reactions are constructed, both by simple assumptions and by extracting energy distributions of spallation neutrons from a high-energy particle transport code. Then, the second moment of nuclear variables in a sub-critical reactor, into which spallation neutrons are injected, is calculated. The results from second moment calculations using number dependent energy distributions for the source neutrons are compared to those where only the average energy distribution is used. Two physical models are employed to simulate the neutron transport in the reactor. One is analytical, treating only slowing down of neutrons by elastic scattering in the core material. For this model, equations are written down and solved for the second moment of thermalized neutrons that include the distribution of energy of the spallation neutrons. The other model utilizes Monte Carlo methods for tracking the source neutrons as they travel inside the reactor material. Fast and thermal fission reactions are considered, as well as neutron capture and elastic scattering, and the second moment of the number of fissions, the number of neutrons that leaked out of the system, etc. are calculated. Both models use a cylindrical core with a homogeneous mixture of core material. Our results indicate that the number dependence of the energy

  17. Accelerator driven reactors, - the significance of the energy distribution of spallation neutrons on the neutron statistics

    International Nuclear Information System (INIS)

    Fhager, V.

    2000-01-01

    In order to make correct predictions of the second moment of statistical nuclear variables, such as the number of fissions and the number of thermalized neutrons, the dependence of the energy distribution of the source particles on their number should be considered. It has been pointed out recently that neglecting this number dependence in accelerator driven systems might result in bad estimates of the second moment, and this paper contains qualitative and quantitative estimates of the size of these effects. We walk towards the requested results in two steps. First, models of the number dependent energy distributions of the neutrons that are ejected in the spallation reactions are constructed, both by simple assumptions and by extracting energy distributions of spallation neutrons from a high-energy particle transport code. Then, the second moment of nuclear variables in a sub-critical reactor, into which spallation neutrons are injected, is calculated. The results from second moment calculations using number dependent energy distributions for the source neutrons are compared to those where only the average energy distribution is used. Two physical models are employed to simulate the neutron transport in the reactor. One is analytical, treating only slowing down of neutrons by elastic scattering in the core material. For this model, equations are written down and solved for the second moment of thermalized neutrons that include the distribution of energy of the spallation neutrons. The other model utilizes Monte Carlo methods for tracking the source neutrons as they travel inside the reactor material. Fast and thermal fission reactions are considered, as well as neutron capture and elastic scattering, and the second moment of the number of fissions, the number of neutrons that leaked out of the system, etc. are calculated. Both models use a cylindrical core with a homogeneous mixture of core material. Our results indicate that the number dependence of the energy

  18. Robust statistical methods for significance evaluation and applications in cancer driver detection and biomarker discovery

    DEFF Research Database (Denmark)

    Madsen, Tobias

    2017-01-01

    In the present thesis I develop, implement and apply statistical methods for detecting genomic elements implicated in cancer development and progression. This is done in two separate bodies of work. The first uses the somatic mutation burden to distinguish cancer driver mutations from passenger m...

  19. The varied contribution of significant others to Complementary and Alternative Medicine (CAM) uptake by men with cancer: a qualitative analysis.

    Science.gov (United States)

    Klafke, Nadja; Eliott, Jaklin A; Olver, Ian N; Wittert, Gary A

    2014-06-01

    To explore how men's Significant Others (SOs), including family members and close friends, contribute to the uptake and maintenance of specific CAM therapies. This study was the second, qualitative phase of a mixed-methods project investigating the use of CAM in an Australian male cancer population. Male participants were purposefully selected from a pool of 403 patients who answered a survey in the first quantitative phase (94% response rate and 86% consent rate for follow-up interview). Then semi-structured interviews among 26 men with a variety of cancers and 24 SOs were conducted. All 43 interviews were recorded, transcribed, and analysed thematically. Men used CAM/Natural products to cope with physical concerns, and this was actively supported by men's SOs who contributed to the uptake and maintenance of these CAMs. The shared CAM preparation and consumption functioned to strengthen the bond between men and their SOs, and also helped men's SOs to cope with uncertainty and regain control. In contrast, men practiced CAM/Mind-body medicine to receive emotional benefits, and only rarely shared this practice with their SOs, indicating a need for coping with emotions in a private way. Men's CAM use is a multifaceted process that can be better understood by considering CAM categories separately. CAM/Natural products help men to cope with physical concerns, while CAM/Mind-body medicine assist men to cope with their emotions in a private way. Oncology professionals can use this information to better promote and implement integrative cancer care services. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Statistical Analysis and Evaluation of the Depth of the Ruts on Lithuanian State Significance Roads

    Directory of Open Access Journals (Sweden)

    Erinijus Getautis

    2011-04-01

    Full Text Available The aim of this work is to gather information about rut depth on national flexible pavement roads, to determine its statistical dispersion indices, and to assess its compliance with the applicable requirements. An analysis of scientific work on rut appearance in asphalt and its influence on driving is presented, as are dynamic models of rutting in asphalt. Experimental data on the dispersion of rut depth on the Lithuanian national highway Vilnius-Kaunas are presented. Conclusions are formulated and presented. Article in Lithuanian

  1. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    Full Text Available For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles demonstrated that this method was problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect and probabilistic undetermination). The demonstration includes a complete example.
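
    The three-value logic proposed above reduces to comparing a confidence interval against a pre-declared minimal substantial effect; a sketch with hypothetical interval endpoints and threshold:

      def three_value_verdict(ci_low, ci_high, delta):
          # Compare a 95% CI for an effect against the smallest effect size
          # considered substantial (delta).
          if ci_low >= delta:
              return "probable presence of a substantial effect"
          if ci_high < delta:
              return "probable absence of a substantial effect"
          return "probabilistic undetermination"

      print(three_value_verdict(0.31, 0.58, delta=0.20))   # presence
      print(three_value_verdict(-0.05, 0.12, delta=0.20))  # absence
      print(three_value_verdict(0.10, 0.45, delta=0.20))   # undetermined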

  2. The SACE Review Panel's Final Report: Significant Flaws in the Analysis of Statistical Data

    Science.gov (United States)

    Gregory, Kelvin

    2006-01-01

    The South Australian Certificate of Education (SACE) is a credential and formal qualification within the Australian Qualifications Framework. A recent review of the SACE outlined a number of recommendations for significant changes to this certificate. These recommendations were the result of a process that began with the review panel…

  3. Childhood-compared to adolescent-onset bipolar disorder has more statistically significant clinical correlates.

    Science.gov (United States)

    Holtzman, Jessica N; Miller, Shefali; Hooshmand, Farnaz; Wang, Po W; Chang, Kiki D; Hill, Shelley J; Rasgon, Natalie L; Ketter, Terence A

    2015-07-01

    The strengths and limitations of considering childhood- and adolescent-onset bipolar disorder (BD) separately versus together remain to be established. We assessed this issue. BD patients referred to the Stanford Bipolar Disorder Clinic during 2000-2011 were assessed with the Systematic Treatment Enhancement Program for BD Affective Disorders Evaluation. Patients with childhood- and adolescent-onset were compared to those with adult-onset for 7 unfavorable bipolar illness characteristics with replicated associations with early-onset patients. Among 502 BD outpatients, those with childhood- (<13 years, N=110) and adolescent- (13-18 years, N=218) onset had significantly higher rates for 4/7 unfavorable illness characteristics, including lifetime comorbid anxiety disorder, at least ten lifetime mood episodes, lifetime alcohol use disorder, and prior suicide attempt, than those with adult-onset (>18 years, N=174). Childhood- but not adolescent-onset BD patients also had significantly higher rates of first-degree relative with mood disorder, lifetime substance use disorder, and rapid cycling in the prior year. Patients with pooled childhood/adolescent-onset compared to adult-onset had significantly higher rates for 5/7 of these unfavorable illness characteristics, while patients with childhood- compared to adolescent-onset had significantly higher rates for 4/7 of these unfavorable illness characteristics. Caucasian, insured, suburban, low substance abuse, American specialty clinic-referred sample limits generalizability. Onset age is based on retrospective recall. Childhood- compared to adolescent-onset BD was more robustly related to unfavorable bipolar illness characteristics, so pooling these groups attenuated such relationships. Further study is warranted to determine the extent to which adolescent-onset BD represents an intermediate phenotype between childhood- and adult-onset BD. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. The statistical significance of error probability as determined from decoding simulations for long codes

    Science.gov (United States)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
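
    The standard exact (Clopper-Pearson) binomial interval makes the point concrete: with only two observed errors, the upper confidence bound on the error probability sits far above the naive estimate. A sketch (the trial count is illustrative):

      from scipy.stats import beta

      def clopper_pearson(k, n, conf=0.95):
          # Exact two-sided confidence interval for a binomial proportion.
          a = (1 - conf) / 2
          lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
          hi = beta.ppf(1 - a, k + 1, n - k) if k < n else 1.0
          return lo, hi

      # Two decoding errors observed in ten million trials.
      lo, hi = clopper_pearson(2, 10_000_000)
      print(f"point estimate 2e-7, 95% CI: [{lo:.1e}, {hi:.1e}]")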

  5. Time-varying analysis of CO2 emissions, energy consumption, and economic growth nexus: Statistical experience in Next 11 countries

    International Nuclear Information System (INIS)

    Shahbaz, Muhammad; Mahalik, Mantu Kumar; Shah, Syed Hasanat; Sato, João Ricardo

    2016-01-01

    This paper detects the direction of causality among carbon dioxide (CO2) emissions, energy consumption, and economic growth in Next 11 countries for the period 1972-2013. Changes in economic, energy, and environmental policies as well as regulatory and technological advancement over time cause changes in the relationship among the variables. We use a novel approach, i.e. time-varying Granger causality, and find that economic growth is the cause of CO2 emissions in Bangladesh and Egypt. Economic growth causes energy consumption in the Philippines, Turkey, and Vietnam but the feedback effect exists between energy consumption and economic growth in South Korea. In the cases of Indonesia and Turkey, we find unidirectional time-varying Granger causality running from economic growth to CO2 emissions, which validates the existence of the Environmental Kuznets Curve hypothesis and indicates that economic growth is achievable at minimal cost to the environment. The paper gives new insights for policy makers to attain sustainable economic growth while maintaining long-run environmental quality.
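
    The paper's time-varying test is a specific novel procedure; as a crude stand-in, the standard Granger test can be applied over rolling windows. A sketch with synthetic annual series (statsmodels; the data and window length are invented):

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(0)
      n = 42  # e.g. annual data, 1972-2013
      growth = rng.normal(3.0, 1.0, n)
      energy = 0.6 * np.roll(growth, 1) + rng.normal(0.0, 1.0, n)  # growth leads

      data = pd.DataFrame({"energy": energy, "growth": growth})
      window = 20
      for start in range(0, n - window + 1, 5):
          # Tests whether the second column Granger-causes the first.
          res = grangercausalitytests(data.iloc[start:start + window],
                                      maxlag=1, verbose=False)
          f_stat, p_value = res[1][0]["ssr_ftest"][:2]
          print(f"window {start:2d}-{start + window:2d}: p = {p_value:.3f}")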

  6. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  7. Statistical Significance of the Maximum Hardness Principle Applied to Some Selected Chemical Reactions.

    Science.gov (United States)

    Saha, Ranajit; Pan, Sudip; Chattaraj, Pratim K

    2016-11-05

    The validity of the maximum hardness principle (MHP) is tested in the cases of 50 chemical reactions, most of which are organic in nature and exhibit anomeric effect. To explore the effect of the level of theory on the validity of MHP in an exothermic reaction, B3LYP/6-311++G(2df,3pd) and LC-BLYP/6-311++G(2df,3pd) (def2-QZVP for iodine and mercury) levels are employed. Different approximations like the geometric mean of hardness and combined hardness are considered in case there are multiple reactants and/or products. It is observed that, based on the geometric mean of hardness, while 82% of the studied reactions obey the MHP at the B3LYP level, 84% of the reactions follow this rule at the LC-BLYP level. Most of the reactions possess the hardest species on the product side. A 50% null hypothesis is rejected at a 1% level of significance.
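
    The quoted rejection of the 50% null can be reproduced with an exact binomial test, assuming the B3LYP figure of 82% corresponds to 41 of the 50 reactions:

      from scipy.stats import binomtest

      # H0: a reaction obeys the MHP with probability 0.5 (a coin flip).
      result = binomtest(41, n=50, p=0.5, alternative="greater")
      print(f"p = {result.pvalue:.2e}")  # well below the 1% level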

  8. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  9. Sigsearch: a new term for post hoc unplanned search for statistically significant relationships with the intent to create publishable findings.

    Science.gov (United States)

    Hashim, Muhammad Jawad

    2010-09-01

    Post-hoc secondary data analysis with no prespecified hypotheses has been discouraged by textbook authors and journal editors alike. Unfortunately no single term describes this phenomenon succinctly. I would like to coin the term "sigsearch" to define this practice and bring it within the teaching lexicon of statistics courses. Sigsearch would include any unplanned, post-hoc search for statistical significance using multiple comparisons of subgroups. It would also include data analysis with outcomes other than the prespecified primary outcome measure of a study as well as secondary data analyses of earlier research.

  10. Second-order statistics of colour codes modulate transformations that effectuate varying degrees of scene invariance and illumination invariance.

    Science.gov (United States)

    Mausfeld, Rainer; Andres, Johannes

    2002-01-01

    We argue, from an ethology-inspired perspective, that the internal concepts 'surface colours' and 'illumination colours' are part of the data format of two different representational primitives. Thus, the internal concept of 'colour' is not a unitary one but rather refers to two different types of 'data structure', each with its own proprietary types of parameters and relations. The relation of these representational structures is modulated by a class of parameterised transformations whose effects are mirrored in the idealised computational achievements of illumination invariance of colour codes, on the one hand, and scene invariance, on the other hand. Because the same characteristics of a light array reaching the eye can be physically produced in many different ways, the visual system, then, has to make an 'inference' whether a chromatic deviation of the space-averaged colour codes from the neutral point is due to a 'non-normal', ie chromatic, illumination or due to an imbalanced spectral reflectance composition. We provide evidence that the visual system uses second-order statistics of chromatic codes of a single view of a scene in order to modulate corresponding transformations. In our experiments we used centre surround configurations with inhomogeneous surrounds given by a random structure of overlapping circles, referred to as Seurat configurations. Each family of surrounds has a fixed space-average of colour codes, but differs with respect to the covariance matrix of colour codes of pixels that defines the chromatic variance along some chromatic axis and the covariance between luminance and chromatic channels. We found that dominant wavelengths of red-green equilibrium settings of the infield exhibited a stable and strong dependence on the chromatic variance of the surround. High variances resulted in a tendency towards 'scene invariance', low variances in a tendency towards 'illumination invariance' of the infield.

  11. ClusterSignificance: A bioconductor package facilitating statistical analysis of class cluster separations in dimensionality reduced data

    DEFF Research Database (Denmark)

    Serviss, Jason T.; Gådin, Jesper R.; Eriksson, Per

    2017-01-01

    , e.g. genes in a specific pathway, alone can separate samples into these established classes. Despite this, the evaluation of class separations is often subjective and performed via visualization. Here we present the ClusterSignificance package: a set of tools designed to assess the statistical...... significance of class separations downstream of dimensionality reduction algorithms. In addition, we demonstrate the design and utility of the ClusterSignificance package and utilize it to determine the importance of long non-coding RNA expression in the identity of multiple hematological malignancies.

  12. Statistics

    International Nuclear Information System (INIS)

    2005-01-01

    For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time-series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004.) The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees

  13. Statistics

    International Nuclear Information System (INIS)

    2001-01-01

    For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  14. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g., Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  15. Statistics

    International Nuclear Information System (INIS)

    1999-01-01

    For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  16. Assessing Statistically Significant Heavy-Metal Concentrations in Abandoned Mine Areas via Hot Spot Analysis of Portable XRF Data.

    Science.gov (United States)

    Kim, Sung-Min; Choi, Yosoon

    2017-06-18

    To develop appropriate measures to prevent soil contamination in abandoned mining areas, an understanding of the spatial variation of the potentially toxic trace elements (PTEs) in the soil is necessary. For the purpose of effective soil sampling, this study uses hot spot analysis, which calculates a z-score based on the Getis-Ord Gi* statistic to identify a statistically significant hot spot sample. To constitute a statistically significant hot spot, a feature with a high value should also be surrounded by other features with high values. Using relatively cost- and time-effective portable X-ray fluorescence (PXRF) analysis, sufficient input data are acquired from the Busan abandoned mine and used for hot spot analysis. To calibrate the PXRF data, which have a relatively low accuracy, the PXRF analysis data are transformed using the inductively coupled plasma atomic emission spectrometry (ICP-AES) data. The transformed PXRF data of the Busan abandoned mine are classified into four groups according to their normalized content and z-scores: high content with a high z-score (HH), high content with a low z-score (HL), low content with a high z-score (LH), and low content with a low z-score (LL). The HL and LH cases may be due to measurement errors. Additional or complementary surveys are required for the areas surrounding these suspect samples or for significant hot spot areas. The soil sampling is conducted according to a four-phase procedure in which the hot spot analysis and proposed group classification method are employed to support the development of a sampling plan for the following phase. Overall, 30, 50, 80, and 100 samples are investigated and analyzed in phases 1-4, respectively. The method implemented in this case study may be utilized in the field for the assessment of statistically significant soil contamination and the identification of areas for which an additional survey is required.
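
    The Gi* statistic named above has a closed form, so a hand-rolled sketch is straightforward (this illustrates the statistic, not the study's GIS workflow; the cut-off distance and data below are invented):

```python
import numpy as np

# Getis-Ord Gi* z-scores; GIS packages such as ArcGIS or PySAL provide
# production implementations of the same statistic.
def getis_ord_gi_star(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """x: (n,) measured concentrations; w: (n, n) spatial weights, w[i, i] > 0."""
    n = len(x)
    x_bar = x.mean()
    s = np.sqrt((x ** 2).mean() - x_bar ** 2)
    wi = w.sum(axis=1)                    # sum of weights per location
    s1 = (w ** 2).sum(axis=1)             # sum of squared weights per location
    numerator = w @ x - x_bar * wi
    denominator = s * np.sqrt((n * s1 - wi ** 2) / (n - 1))
    return numerator / denominator        # z-scores; |z| > 1.96 ~ 95% hot/cold spot

# Toy usage: locations within a cut-off distance get weight 1 (self included).
coords = np.random.default_rng(1).uniform(0, 100, size=(50, 2))
conc = np.random.default_rng(2).lognormal(3.0, 0.5, size=50)
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
weights = (dist <= 20).astype(float)      # binary weights, diagonal included
z = getis_ord_gi_star(conc, weights)
```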

  17. Assessing Statistically Significant Heavy-Metal Concentrations in Abandoned Mine Areas via Hot Spot Analysis of Portable XRF Data

    Directory of Open Access Journals (Sweden)

    Sung-Min Kim

    2017-06-01

    Full Text Available To develop appropriate measures to prevent soil contamination in abandoned mining areas, an understanding of the spatial variation of the potentially toxic trace elements (PTEs) in the soil is necessary. For the purpose of effective soil sampling, this study uses hot spot analysis, which calculates a z-score based on the Getis-Ord Gi* statistic to identify a statistically significant hot spot sample. To constitute a statistically significant hot spot, a feature with a high value should also be surrounded by other features with high values. Using relatively cost- and time-effective portable X-ray fluorescence (PXRF) analysis, sufficient input data are acquired from the Busan abandoned mine and used for hot spot analysis. To calibrate the PXRF data, which have a relatively low accuracy, the PXRF analysis data are transformed using the inductively coupled plasma atomic emission spectrometry (ICP-AES) data. The transformed PXRF data of the Busan abandoned mine are classified into four groups according to their normalized content and z-scores: high content with a high z-score (HH), high content with a low z-score (HL), low content with a high z-score (LH), and low content with a low z-score (LL). The HL and LH cases may be due to measurement errors. Additional or complementary surveys are required for the areas surrounding these suspect samples or for significant hot spot areas. The soil sampling is conducted according to a four-phase procedure in which the hot spot analysis and proposed group classification method are employed to support the development of a sampling plan for the following phase. Overall, 30, 50, 80, and 100 samples are investigated and analyzed in phases 1–4, respectively. The method implemented in this case study may be utilized in the field for the assessment of statistically significant soil contamination and the identification of areas for which an additional survey is required.

  18. Statistics

    International Nuclear Information System (INIS)

    2003-01-01

    For the year 2002, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees on energy products

  19. Statistics

    International Nuclear Information System (INIS)

    2004-01-01

    For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees

  20. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g., Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  1. Intelligent system for statistically significant expertise knowledge on the basis of the model of self-organizing nonequilibrium dissipative system

    Directory of Open Access Journals (Sweden)

    E. A. Tatokchin

    2017-01-01

    Full Text Available The development of modern educational technologies, driven by the broad introduction of computer testing and of distance forms of education, makes a revision of the methods of examining pupils necessary. The paper shows the need for a transition to mathematical criteria for the examination of knowledge that are free of subjectivity, reviews the problems arising in this task, and proposes approaches to their solution. The greatest attention is paid to the problem of objectively transforming the expert's rated estimates onto the scale of student estimates. The discussion concludes that the solution to this problem lies in the creation of specialized intelligent systems. The basis for constructing such an intelligent system is a mathematical model of a self-organizing nonequilibrium dissipative system, here a group of students. The article assumes that the dissipative character of the system is maintained by the constant influx of new test items from the expert, and the nonequilibrium by the individual psychological characteristics of the students in the group. As a result, the system should self-organize into stable patterns. These patterns make it possible, relying on large amounts of data, to obtain a statistically significant assessment of a student. To justify the proposed approach, the paper presents a statistical analysis of the results of testing a large sample of students (>90). The conclusions from this analysis allowed the development of an intelligent system for statistically significant examination of student performance. It is based on a data clustering algorithm (k-means) over three key parameters. It is shown that this approach allows a dynamic and objective expert evaluation.
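
    A minimal sketch of the clustering step described above, assuming scikit-learn and three hypothetical per-student parameters (the article does not name them):

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch: k-means over three key per-student parameters; the feature names
# and data here are hypothetical placeholders.
rng = np.random.default_rng(0)
students = rng.normal(size=(90, 3))   # e.g. [ability, speed, consistency] per student

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(students)
clusters = kmeans.labels_             # stable pattern = cluster assignment
centers = kmeans.cluster_centers_     # centroids the assessment is graded against
```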

  2. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry.

    Science.gov (United States)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo

    2018-06-05

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  3. The distribution of P-values in medical research articles suggested selective reporting associated with statistical significance.

    Science.gov (United States)

    Perneger, Thomas V; Combescure, Christophe

    2017-07-01

    Published P-values provide a window into the global enterprise of medical research. The aim of this study was to use the distribution of published P-values to estimate the relative frequencies of null and alternative hypotheses and to seek irregularities suggestive of publication bias. This cross-sectional study included P-values published in 120 medical research articles in 2016 (30 each from the BMJ, JAMA, Lancet, and New England Journal of Medicine). The observed distribution of P-values was compared with expected distributions under the null hypothesis (i.e., uniform between 0 and 1) and the alternative hypothesis (strictly decreasing from 0 to 1). P-values were categorized according to conventional levels of statistical significance and in one-percent intervals. Among 4,158 recorded P-values, 26.1% were highly significant (P < 0.001); the irregularities observed included an excess of P-values equal to 1, and about twice as many P-values less than 0.05 as above 0.05. The latter finding was seen in both randomized trials and observational studies, and in most types of analyses, excepting heterogeneity tests and interaction tests. Under plausible assumptions, we estimate that about half of the tested hypotheses were null and the other half were alternative. This analysis suggests that statistical tests published in medical journals are not a random sample of null and alternative hypotheses but that selective reporting is prevalent. In particular, significant results are about twice as likely to be reported as nonsignificant results. Copyright © 2017 Elsevier Inc. All rights reserved.
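
    The comparison logic is easy to reproduce in a simulation. The sketch below (an illustration, not the study's code; the effect size and per-test sample size are assumptions) draws P-values from an equal mixture of null and alternative tests and reports the fractions below and above 0.05, the benchmark against which selective reporting would show up:

```python
import numpy as np
from scipy import stats

# Null tests give Uniform(0, 1) P-values; alternative tests pile up near 0.
rng = np.random.default_rng(0)
n_tests, effect = 10_000, 0.3          # assumed half null / half alternative
z_null = rng.normal(0.0, 1.0, n_tests // 2)
z_alt = rng.normal(effect * np.sqrt(100), 1.0, n_tests // 2)  # n = 100 per test
p = 2 * stats.norm.sf(np.abs(np.concatenate([z_null, z_alt])))

print("P < 0.05:", (p < 0.05).mean(), " P >= 0.05:", (p >= 0.05).mean())
# Selective reporting would inflate the first fraction relative to this benchmark.
```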

  4. Statistically significant dependence of the Xaa-Pro peptide bond conformation on secondary structure and amino acid sequence

    Directory of Open Access Journals (Sweden)

    Leitner Dietmar

    2005-04-01

    Full Text Available Abstract Background A reliable prediction of the Xaa-Pro peptide bond conformation would be a useful tool for many protein structure calculation methods. We have analyzed the Protein Data Bank and show that the combined use of sequential and structural information has a predictive value for the assessment of the cis versus trans peptide bond conformation of Xaa-Pro within proteins. For the analysis of the data sets different statistical methods such as the calculation of the Chou-Fasman parameters and occurrence matrices were used. Furthermore we analyzed the relationship between the relative solvent accessibility and the relative occurrence of prolines in the cis and in the trans conformation. Results One of the main results of the statistical investigations is the ranking of the secondary structure and sequence information with respect to the prediction of the Xaa-Pro peptide bond conformation. We observed a significant impact of secondary structure information on the occurrence of the Xaa-Pro peptide bond conformation, while the sequence information of amino acids neighboring proline is of little predictive value for the conformation of this bond. Conclusion In this work, we present an extensive analysis of the occurrence of the cis and trans proline conformation in proteins. Based on the data set, we derived patterns and rules for a possible prediction of the proline conformation. Upon adoption of the Chou-Fasman parameters, we are able to derive statistically relevant correlations between the secondary structure of amino acid fragments and the Xaa-Pro peptide bond conformation.
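
    The Chou-Fasman-style analysis described above reduces to a propensity ratio: the frequency of the cis conformation within a secondary-structure class divided by its overall frequency. A minimal sketch, with hypothetical records in place of the Protein Data Bank data:

```python
from collections import Counter

# Each record: (secondary structure of the fragment, Xaa-Pro bond conformation).
# The tuples below are hypothetical stand-ins for PDB-derived observations.
records = [("helix", "trans"), ("coil", "cis"), ("coil", "trans"), ("sheet", "trans")]

total = Counter(conf for _, conf in records)
f_cis_all = total["cis"] / len(records)          # overall cis frequency

by_ss = Counter(ss for ss, _ in records)
cis_by_ss = Counter(ss for ss, conf in records if conf == "cis")
propensity = {ss: (cis_by_ss[ss] / by_ss[ss]) / f_cis_all for ss in by_ss}
# propensity > 1: cis Xaa-Pro is enriched in that secondary-structure class.
```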

  5. Testing earthquake prediction algorithms: Statistically significant advance prediction of the largest earthquakes in the Circum-Pacific, 1992-1997

    Science.gov (United States)

    Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.

    1999-01-01

    Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
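
    The significance figures quoted above follow from a standard binomial argument: if alarms cover a fraction tau of the space-time volume, a random guess hits each target with probability tau. A sketch of that argument (an illustration, not the authors' code):

```python
from scipy import stats

# Under random guessing, hits ~ Binomial(N, tau), where tau is the fraction of
# the space-time volume occupied by alarms.
def alarm_significance(n_events: int, n_hits: int, tau: float) -> float:
    """One-sided p-value of predicting n_hits of n_events by chance."""
    return stats.binom.sf(n_hits - 1, n_events, tau)

# Numbers quoted above for M8 at magnitude 8+: 5 of 5 events, 36% volume.
p = alarm_significance(5, 5, 0.36)     # ~0.006, i.e. significance beyond 99%
```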

  6. Macro-indicators of citation impacts of six prolific countries: InCites data and the statistical significance of trends.

    Directory of Open Access Journals (Sweden)

    Lutz Bornmann

    Full Text Available Using the InCites tool of Thomson Reuters, this study compares normalized citation impact values calculated for China, Japan, France, Germany, United States, and the UK throughout the time period from 1981 to 2010. InCites offers a unique opportunity to study the normalized citation impacts of countries using (i) a long publication window (1981 to 2010), (ii) a differentiation in (broad or more narrow) subject areas, and (iii) allowing for the use of statistical procedures in order to obtain an insightful investigation of national citation trends across the years. Using four broad categories, our results show significantly increasing trends in citation impact values for France, the UK, and especially Germany across the last thirty years in all areas. The citation impact of papers from China is still at a relatively low level (mostly below the world average), but the country follows an increasing trend line. The USA exhibits a stable pattern of high citation impact values across the years. With small impact differences between the publication years, the US trend is increasing in engineering and technology but decreasing in medical and health sciences as well as in agricultural sciences. Similar to the USA, Japan follows increasing as well as decreasing trends in different subject areas, but the variability across the years is small. In most of the years, papers from Japan perform below or approximately at the world average in each subject area.

  7. Development of free statistical software enabling researchers to calculate confidence levels, clinical significance curves and risk-benefit contours

    International Nuclear Information System (INIS)

    Shakespeare, T.P.; Mukherjee, R.K.; Gebski, V.J.

    2003-01-01

    Confidence levels, clinical significance curves, and risk-benefit contours are tools improving analysis of clinical studies and minimizing misinterpretation of published results; however, no software has been available for their calculation. The objective was to develop software to help clinicians utilize these tools. Excel 2000 spreadsheets were designed using only built-in functions, without macros. The workbook was protected and encrypted so that users can modify only input cells. The workbook has 4 spreadsheets for use in studies comparing two patient groups. Sheet 1 comprises instructions and graphic examples for use. Sheet 2 allows the user to input the main study results (e.g. survival rates) into a 2-by-2 table. Confidence intervals (95%), p-value and the confidence level for Treatment A being better than Treatment B are automatically generated. An additional input cell allows the user to determine the confidence associated with a specified level of benefit. For example, if the user wishes to know the confidence that Treatment A is at least 10% better than B, 10% is entered. Sheet 2 automatically displays clinical significance curves, graphically illustrating confidence levels for all possible benefits of one treatment over the other. Sheet 3 allows input of toxicity data, and calculates the confidence that one treatment is more toxic than the other. It also determines the confidence that the relative toxicity of the most effective arm does not exceed user-defined tolerability. Sheet 4 automatically calculates risk-benefit contours, displaying the confidence associated with a specified scenario of minimum benefit and maximum risk of one treatment arm over the other. The spreadsheet is freely downloadable at www.ontumor.com/professional/statistics.htm. A simple, self-explanatory, freely available spreadsheet calculator was developed using Excel 2000. The incorporated decision-making tools can be used for data analysis and improve the reporting of results of any clinical study.
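
    The central quantity the spreadsheet tabulates, the confidence that one treatment is at least some margin better than the other, can be sketched with a normal approximation for the difference of two proportions (an illustration, not the spreadsheet's exact formulas; the counts are invented):

```python
import numpy as np
from scipy import stats

# Confidence that treatment A beats B by at least `delta`, from a 2-by-2 table
# (successes and totals per arm), via a normal approximation.
def confidence_of_benefit(sA, nA, sB, nB, delta=0.0):
    pA, pB = sA / nA, sB / nB
    se = np.sqrt(pA * (1 - pA) / nA + pB * (1 - pB) / nB)
    return stats.norm.cdf((pA - pB - delta) / se)

# Clinical significance curve: confidence as a function of the required benefit.
deltas = np.linspace(0.0, 0.20, 21)
curve = [confidence_of_benefit(60, 100, 45, 100, d) for d in deltas]
```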

  8. Multivariate statistical process control (MSPC) using Raman spectroscopy for in-line culture cell monitoring considering time-varying batches synchronized with correlation optimized warping (COW).

    Science.gov (United States)

    Liu, Ya-Juan; André, Silvère; Saint Cristau, Lydia; Lagresle, Sylvain; Hannas, Zahia; Calvosa, Éric; Devos, Olivier; Duponchel, Ludovic

    2017-02-01

    Multivariate statistical process control (MSPC) is increasingly popular as a response to the challenge posed by the large multivariate datasets that analytical instruments such as Raman spectroscopy produce when monitoring complex cell cultures in the biopharmaceutical industry. However, Raman spectroscopy for in-line monitoring often produces unsynchronized data sets, resulting in time-varying batches. Moreover, unsynchronized data sets are common for cell culture monitoring because spectroscopic measurements are generally recorded in an alternate way, with more than one optical probe connected in parallel to the same spectrometer. Synchronized batches are a prerequisite for the application of multivariate analysis such as multi-way principal component analysis (MPCA) for MSPC monitoring. Correlation optimized warping (COW) is a popular method for data alignment with satisfactory performance; however, it had never before been applied to synchronize the acquisition times of spectroscopic datasets in an MSPC application. In this paper we propose, for the first time, to use the method of COW to synchronize batches with varying durations analyzed with Raman spectroscopy. In a second step, we developed MPCA models at different time intervals based on the normal operation condition (NOC) batches synchronized by COW. New batches are finally projected onto the corresponding MPCA model. We monitored the evolution of the batches using two multivariate control charts based on Hotelling's T² and Q. As illustrated by the results, the MSPC model was able to identify abnormal operation conditions, including contaminated batches, which is of prime importance in cell culture monitoring. We proved that Raman-based MSPC monitoring can be used to diagnose batches deviating from the normal condition, with higher efficacy than traditional diagnosis, which would save time and money in the biopharmaceutical industry. Copyright © 2016 Elsevier B.V. All rights reserved.
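
    A minimal sketch of the two control statistics named above, computed from an ordinary PCA model of normal-operation data (a simplification: MPCA additionally unfolds the batch-by-variable-by-time array, and the COW alignment step is omitted; the data are random stand-ins for Raman spectra):

```python
import numpy as np
from sklearn.decomposition import PCA

def t2_and_q(pca: PCA, X: np.ndarray):
    scores = pca.transform(X)
    t2 = np.sum(scores ** 2 / pca.explained_variance_, axis=1)  # Hotelling's T2
    residual = X - pca.inverse_transform(scores)
    q = np.sum(residual ** 2, axis=1)        # Q (squared prediction error)
    return t2, q

rng = np.random.default_rng(0)
X_noc = rng.normal(size=(50, 200))           # stand-in for NOC Raman spectra
pca = PCA(n_components=3).fit(X_noc)
t2, q = t2_and_q(pca, X_noc)
t2_limit = np.quantile(t2, 0.99)             # empirical 99% control limits;
q_limit = np.quantile(q, 0.99)               # new batches beyond these are flagged
```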

  9. Comparative estimation of inevitable endogenous ileal flow of amino acids in Pekin ducks under varying dietary or physiological conditions and their significance to nutritional requirements for amino acids.

    Science.gov (United States)

    Akinde, D O

    2017-10-01

    In two experiments in Pekin ducks, the inevitable endogenous ileal flow (IEIF) of AA was estimated under changing intake and source of crude fiber (CF) or soybean oil (SO) level. The roles of dry matter intake (DMI) and BW or age, as well as the proportion of IEIF in the dietary requirement for AA, were also studied. In experiment 1, three basal CP (20, 60, or 100 g/kg) diets were formulated containing a low CF (LCF, 30 g/kg) or high (HCF, 80 g/kg) level, achieved with cellulose supplementation. All diets were similar in every other respect, including an SO content of 40 g/kg. Four floor pens of eight 85-day-old ducks were randomly allocated to each diet. Similar diets were mixed in experiment 2, but corn cob meal replaced cellulose as the fiber source. A high SO (HSO) series was also formed by increasing the SO level from 40 g/kg in the basal series to 100 g/kg. Thus the LCF series was concurrently classified as a low SO (LSO) series to control for the SO effect. Each of the eventual 9 diets was fed to 5 floor pens of ten 65-day-old ducks. Ileal AA flow was measured after a 5 day feeding period in both experiments. Linear regression was calculated between ileal flow and dietary intake of individual AA. The IEIF, interpreted as the y-intercept of each linear function, responded neither to elevated ingestion of each CF type nor to SO level. Age and DMI had no effect on IEIF computed in relation to BW, but wide discrepancies resulted when related to DMI. Overall, IEIF of AA varied between 14.3 and 129.8 mg/kg BW per day. These flows were established in model computations to account for 10 to 64% of the recommended intake of limiting AA. In conclusion, the inevitable ileal flow is constant within the dietary/age conditions investigated. However, it is modulated by feed intake and accounts for a significant portion of the total amino acid requirement. © 2017 Poultry Science Association Inc.
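
    The estimation step described above is simply the intercept of a straight-line fit of ileal flow on dietary intake. A sketch with hypothetical numbers (the units follow the abstract):

```python
import numpy as np

# IEIF read off as the y-intercept of ileal AA flow regressed on dietary AA
# intake; the data points below are invented for illustration.
intake = np.array([2.0, 6.0, 10.0, 14.0])    # dietary AA intake, g/kg DM
flow = np.array([95.0, 210.0, 330.0, 450.0]) # ileal AA flow, mg/kg BW per day

slope, intercept = np.polyfit(intake, flow, 1)
print(f"IEIF estimate (intercept): {intercept:.1f} mg/kg BW per day")
```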

  10. New scanning technique using Adaptive Statistical Iterative Reconstruction (ASIR) significantly reduced the radiation dose of cardiac CT

    International Nuclear Information System (INIS)

    Tumur, Odgerel; Soon, Kean; Brown, Fraser; Mykytowycz, Marcus

    2013-01-01

    The aims of our study were to evaluate the effect of application of the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm on the radiation dose of coronary computed tomography angiography (CCTA) and its effects on image quality of CCTA, and to evaluate the effects of various patient and CT scanning factors on the radiation dose of CCTA. This was a retrospective study that included 347 consecutive patients who underwent CCTA at a tertiary university teaching hospital between 1 July 2009 and 20 September 2011. Analysis was performed comparing patient demographics, scan characteristics, radiation dose and image quality in two groups of patients in whom conventional Filtered Back Projection (FBP) or ASIR was used for image reconstruction. There were 238 patients in the FBP group and 109 patients in the ASIR group. There was no difference between the groups in the use of prospective gating, scan length or tube voltage. In the ASIR group, significantly lower tube current was used compared with the FBP group, 550 mA (450–600) vs. 650 mA (500–711.25) (median (interquartile range)), respectively, P<0.001. There was a 27% effective radiation dose reduction in the ASIR group compared with the FBP group, 4.29 mSv (2.84–6.02) vs. 5.84 mSv (3.88–8.39) (median (interquartile range)), respectively, P<0.001. Although ASIR was associated with increased image noise compared with FBP (39.93 ± 10.22 vs. 37.63 ± 18.79 (mean ± standard deviation), respectively, P<0.001), it did not affect the signal intensity, signal-to-noise ratio, contrast-to-noise ratio or the diagnostic quality of CCTA. Application of ASIR reduces the radiation dose of CCTA without affecting the image quality.

  11. New scanning technique using Adaptive Statistical Iterative Reconstruction (ASIR) significantly reduced the radiation dose of cardiac CT.

    Science.gov (United States)

    Tumur, Odgerel; Soon, Kean; Brown, Fraser; Mykytowycz, Marcus

    2013-06-01

    The aims of our study were to evaluate the effect of application of the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm on the radiation dose of coronary computed tomography angiography (CCTA) and its effects on image quality of CCTA, and to evaluate the effects of various patient and CT scanning factors on the radiation dose of CCTA. This was a retrospective study that included 347 consecutive patients who underwent CCTA at a tertiary university teaching hospital between 1 July 2009 and 20 September 2011. Analysis was performed comparing patient demographics, scan characteristics, radiation dose and image quality in two groups of patients in whom conventional Filtered Back Projection (FBP) or ASIR was used for image reconstruction. There were 238 patients in the FBP group and 109 patients in the ASIR group. There was no difference between the groups in the use of prospective gating, scan length or tube voltage. In the ASIR group, significantly lower tube current was used compared with the FBP group, 550 mA (450-600) vs. 650 mA (500-711.25) (median (interquartile range)), respectively, P < 0.001. There was a 27% effective radiation dose reduction in the ASIR group compared with the FBP group, 4.29 mSv (2.84-6.02) vs. 5.84 mSv (3.88-8.39) (median (interquartile range)), respectively, P < 0.001. Although ASIR was associated with increased image noise compared with FBP (39.93 ± 10.22 vs. 37.63 ± 18.79 (mean ± standard deviation), respectively, P < 0.001), it did not affect the signal intensity, signal-to-noise ratio, contrast-to-noise ratio or the diagnostic quality of CCTA. Application of ASIR reduces the radiation dose of CCTA without affecting the image quality. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.

  12. Statistical versus Musical Significance: Commentary on Leigh VanHandel's 'National Metrical Types in Nineteenth Century Art Song'

    Directory of Open Access Journals (Sweden)

    Justin London

    2010-01-01

    Full Text Available In “National Metrical Types in Nineteenth Century Art Song” Leigh Van Handel gives a sympathetic critique of William Rothstein’s claim that in western classical music of the late 18th and 19th centuries there are discernable differences in the phrasing and metrical practice of German versus French and Italian composers. This commentary (a) examines just what Rothstein means in terms of his proposed metrical typology, (b) questions Van Handel on how she has applied it to a purely melodic framework, (c) amplifies Van Handel’s critique of Rothstein, and then (d) concludes with a rumination on the reach of quantitative (i.e., statistically driven) versus qualitative claims regarding such things as “national metrical types.”

  13. Microbial Community and Functional Structure Significantly Varied among Distinct Types of Paddy Soils But Responded Differently along Gradients of Soil Depth Layers

    Directory of Open Access Journals (Sweden)

    Ren Bai

    2017-05-01

    Full Text Available Paddy rice fields occupy broad agricultural area in China and cover diverse soil types. Microbial community in paddy soils is of great interest since many microorganisms are involved in soil functional processes. In the present study, Illumina Mi-Seq sequencing and functional gene array (GeoChip 4.2) techniques were combined to investigate soil microbial communities and functional gene patterns across the three soil types including an Inceptisol (Binhai), an Oxisol (Leizhou), and an Ultisol (Taoyuan) along four profile depths (up to 70 cm in depth) in mesocosm incubation columns. Detrended correspondence analysis revealed that distinct differentiation in microbial community existed among soil types and profile depths, while the manifest variance in functional structure was only observed among soil types and two rice growth stages, but not across profile depths. Along the profile depth within each soil type, Acidobacteria, Chloroflexi, and Firmicutes increased whereas Cyanobacteria, β-proteobacteria, and Verrucomicrobia declined, suggesting their specific ecophysiological properties. Compared to the bacterial community, the archaeal community showed a more contrasting pattern, with the predominant groups within the phyla Euryarchaeota, Thaumarchaeota, and Crenarchaeota largely varying among soil types and depths. Phylogenetic molecular ecological network (pMEN) analysis further indicated that the pattern of bacterial and archaeal community interactions changed with soil depth and the highest modularity of microbial community occurred in top soils, implying a relatively higher system resistance to environmental change compared to communities in deeper soil layers. Meanwhile, microbial communities had higher connectivity in deeper soils in comparison with upper soils, suggesting less microbial interaction in surface soils. Structure equation models were developed and the models indicated that pH was the most representative characteristic of soil type and

  14. Microbial Community and Functional Structure Significantly Varied among Distinct Types of Paddy Soils But Responded Differently along Gradients of Soil Depth Layers.

    Science.gov (United States)

    Bai, Ren; Wang, Jun-Tao; Deng, Ye; He, Ji-Zheng; Feng, Kai; Zhang, Li-Mei

    2017-01-01

    Paddy rice fields occupy broad agricultural area in China and cover diverse soil types. Microbial community in paddy soils is of great interest since many microorganisms are involved in soil functional processes. In the present study, Illumina Mi-Seq sequencing and functional gene array (GeoChip 4.2) techniques were combined to investigate soil microbial communities and functional gene patterns across the three soil types including an Inceptisol (Binhai), an Oxisol (Leizhou), and an Ultisol (Taoyuan) along four profile depths (up to 70 cm in depth) in mesocosm incubation columns. Detrended correspondence analysis revealed that distinct differentiation in microbial community existed among soil types and profile depths, while the manifest variance in functional structure was only observed among soil types and two rice growth stages, but not across profile depths. Along the profile depth within each soil type, Acidobacteria, Chloroflexi, and Firmicutes increased whereas Cyanobacteria, β-proteobacteria, and Verrucomicrobia declined, suggesting their specific ecophysiological properties. Compared to the bacterial community, the archaeal community showed a more contrasting pattern, with the predominant groups within the phyla Euryarchaeota, Thaumarchaeota, and Crenarchaeota largely varying among soil types and depths. Phylogenetic molecular ecological network (pMEN) analysis further indicated that the pattern of bacterial and archaeal community interactions changed with soil depth and the highest modularity of microbial community occurred in top soils, implying a relatively higher system resistance to environmental change compared to communities in deeper soil layers. Meanwhile, microbial communities had higher connectivity in deeper soils in comparison with upper soils, suggesting less microbial interaction in surface soils. Structure equation models were developed and the models indicated that pH was the most representative characteristic of soil type and

  15. TNFα blockade for inflammatory rheumatic diseases is associated with a significant gain in android fat mass and has varying effects on adipokines: a 2-year prospective study.

    Science.gov (United States)

    Toussirot, Éric; Mourot, Laurent; Dehecq, Barbara; Wendling, Daniel; Grandclément, Émilie; Dumoulin, Gilles

    2014-04-01

    To evaluate the long-term consequences of TNFα inhibitors on body composition and fat distribution, as well as changes in serum adipokines in patients with rheumatoid arthritis (RA) or ankylosing spondylitis (AS). Eight patients with RA and twelve with AS requiring a TNFα inhibitor were prospectively followed for 2 years. Body composition was evaluated by dual X-ray absorptiometry and included measurements of total fat mass, lean mass, fat in the gynoid and android regions, and visceral fat. Serum leptin, total and high molecular weight (HMW) adiponectin, resistin, and ghrelin were also assessed. There was a significant gain in body mass index (p = 0.05) and a tendency for weight (p = 0.07), android fat (p = 0.07), and visceral fat (p = 0.059) increase in patients with RA, while in AS, total fat mass significantly increased (p = 0.02) with a parallel weight gain (p = 0.07). When examining the whole population of patients, we observed after 2 years a significant increase in body weight (+1.9%; p = 0.003), body mass index (+2.5%; p = 0.004), total fat mass (+11.1%; p = 0.007), and fat in the android region (+18.3%; p = 0.02). There was a substantial, albeit nonsignificant gain in visceral fat (+24.3%; p = 0.088). Lean mass and gynoid fat were not modified. No major changes were observed for serum leptin, total adiponectin, and ghrelin, while HMW adiponectin and the HMW/total adiponectin ratio tended to decrease (-15.2%, p = 0.057 and -9.3%, p = 0.067, respectively). Resistin decreased significantly (-22.4%, p = 0.01). Long-term TNFα inhibition in RA and AS is associated with a significant gain in fat mass, with a shift to the android (visceral) region. This fat redistribution raises questions about its influence on the cardiovascular profile of patients receiving these treatments.

  16. CXCR4 expression varies significantly among different subtypes of glioblastoma multiforme (GBM) and its low expression or hypermethylation might predict favorable overall survival.

    Science.gov (United States)

    Ma, Xinlong; Shang, Feng; Zhu, Weidong; Lin, Qingtang

    2017-09-01

    CXCR4 is an oncogene in glioblastoma multiforme (GBM), but the mechanism of its dysregulation and its prognostic value in GBM have not been fully understood. Bioinformatic analysis was performed by using R2 and the UCSC Xena browser based on data from GSE16011 in the GEO datasets and the GBM cohort in the TCGA database (TCGA-GBM). Kaplan-Meier curves of overall survival (OS) were generated to assess the association between CXCR4 expression/methylation and OS in patients with GBM. GBM patients with high CXCR4 expression had significantly worse 5- and 10-year OS. Across GBM subtypes, there was an inverse relationship between overall DNA methylation and CXCR4 expression. CXCR4 expression was significantly lower in the CpG island methylation phenotype (CIMP) group than in the non-CIMP group. Log-rank test results showed that patients with high CXCR4 methylation (first tertile) had significantly better 5-year OS (p = 0.038). CXCR4 expression is regulated by DNA methylation in GBM, and its low expression or hypermethylation might indicate favorable OS in GBM patients.
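
    The survival comparison described above is a standard Kaplan-Meier/log-rank analysis. A sketch using the lifelines library with invented data (not the study's dataset):

```python
import numpy as np
from lifelines.statistics import logrank_test

# Log-rank test of overall survival between high- and low-methylation groups;
# all times and censoring flags below are hypothetical.
rng = np.random.default_rng(0)
t_high = rng.exponential(60, 100)            # survival times, months
t_low = rng.exponential(40, 100)
e_high = rng.random(100) < 0.8               # True = death observed, False = censored
e_low = rng.random(100) < 0.8

result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(result.p_value)                        # analogous to the quoted p = 0.038
```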

  17. Crackle pitch and rate do not vary significantly during a single automated-auscultation session in patients with pneumonia, congestive heart failure, or interstitial pulmonary fibrosis.

    Science.gov (United States)

    Vyshedskiy, Andrey; Ishikawa, Sadamu; Murphy, Raymond L H

    2011-06-01

    To determine the variability of crackle pitch and crackle rate during a single automated-auscultation session with a computerized 16-channel lung-sound analyzer. Forty-nine patients with pneumonia, 52 with congestive heart failure (CHF), and 18 with interstitial pulmonary fibrosis (IPF) performed breathing maneuvers in the following sequence: normal breathing, deep breathing, coughing several times, deep breathing, vital-capacity maneuver, and deep breathing. From the auscultation recordings we measured the crackle pitch and crackle rate. Crackle pitch variability, expressed as a percentage of the average crackle pitch, was small in all patients and in all maneuvers: pneumonia 11%, CHF 11%, pulmonary fibrosis 7%. Crackle rate variability was also small: pneumonia 31%, CHF 32%, IPF 24%. Compared to the first deep-breathing maneuver (100%), the average crackle pitch did not significantly change following coughing (pneumonia 100%, CHF 103%, IPF 100%), the vital-capacity maneuver (pneumonia 100%, CHF 92%, IPF 104%), or during quiet breathing (pneumonia 97%, CHF 100%, IPF 104%). Similarly, the average crackle rate did not change significantly following coughing (pneumonia 105%, CHF 110%, IPF 90%) or the vital-capacity maneuver (pneumonia 102%, CHF 101%, IPF 99%). However, during normal breathing the crackle rate was significantly lower in the patients with pneumonia (74%). The low variability of crackle pitch and crackle rate during a single automated-auscultation session suggests that crackle rate can be used to follow the course of cardiopulmonary illnesses such as pneumonia, IPF, and CHF.

  18. Statistical and molecular analyses of evolutionary significance of red-green color vision and color blindness in vertebrates.

    Science.gov (United States)

    Yokoyama, Shozo; Takenaka, Naomi

    2005-04-01

    Red-green color vision is strongly suspected to enhance the survival of its possessors. Despite being red-green color blind, however, many species have successfully competed in nature, which brings into question the evolutionary advantage of achieving red-green color vision. Here, we propose a new method of identifying positive selection at individual amino acid sites with the premise that if positive Darwinian selection has driven the evolution of the protein under consideration, then it should be found mostly at the branches in the phylogenetic tree where its function had changed. The statistical and molecular methods have been applied to 29 visual pigments with the wavelengths of maximal absorption at approximately 510-540 nm (green- or middle wavelength-sensitive [MWS] pigments) and at approximately 560 nm (red- or long wavelength-sensitive [LWS] pigments), which are sampled from a diverse range of vertebrate species. The results show that the MWS pigments are positively selected through amino acid replacements S180A, Y277F, and T285A and that the LWS pigments have been subjected to strong evolutionary conservation. The fact that these positively selected M/LWS pigments are found not only in animals with red-green color vision but also in those with red-green color blindness strongly suggests that both red-green color vision and color blindness have undergone adaptive evolution independently in different species.

  19. Flux-based Enrichment Ratios of Throughfall and Stemflow Found to Vary Significantly within Urban Fragments and Along an Urban-to-Rural Gradient

    Science.gov (United States)

    Dowtin, A. L.; Levia, D. F., Jr.

    2017-12-01

    Throughfall and stemflow are important inputs of water and solutes to forest soils in both rural and urban forests. In metropolitan wooded ecosystems, a number of factors can affect flux-based enrichment ratios, including combustion of fossil fuels and proximity to industry. Use of flux-based enrichment ratios provides a means by which this modification of net precipitation chemistry can be quantified for both throughfall and stemflow, and allows for a characterization of the relative contributions of stemflow and throughfall in the delivery of nutrients and pollutants to forest soils. This study utilizes five mixed deciduous forest stands along an urban-to-rural gradient (3 urban fragments, 1 suburban fragment, and a portion of 1 contiguous rural forest) within a medium-sized metropolitan region of the United States' Northeast megalopolis, to determine how the size, shape, structure, and geographic context of remnant forest fragments determine hydrologic and solute fluxes within them. In situ observations of throughfall and stemflow (the latter of which is limited to Quercus rubra and Quercus alba) within each study plot allow for an identification and characterization of the spatial variability in solute fluxes within and between the respective sites. Preliminary observations indicate significant intra-site variability in solute concentrations as observed in both throughfall and stemflow, with higher concentrations along the respective windward edges of the study plots than at greater depths into their interiors. Higher flux-based stemflow enrichment ratios, for both Q. rubra and Q. alba, were also evident for certain ions (i.e., S2-, NO3-) in the urban forest fragments, with significantly lower ratios observed at the suburban and rural sites. Findings from this research are intended to aid in quantifying the spatial variability of the hydrologic and hydrochemical ecosystem service provisions of remnant metropolitan forest fragments. This research is supported in

  20. Soil biogeochemistry properties vary between two boreal forest ecosystems in Quebec: significant differences in soil carbon, available nutrients and iron and aluminium crystallinity

    Science.gov (United States)

    Bastianelli, Carole; Ali, Adam A.; Beguin, Julien; Bergeron, Yves; Grondin, Pierre; Hély, Christelle; Paré, David

    2017-04-01

    At the northernmost extent of the managed forest in Quebec, the boreal forest is currently undergoing an ecological transition from closed-canopy black spruce-moss forests towards open-canopy lichen woodlands, which are spreading southward. Our aim was to determine whether this shift could impact soil properties on top of its repercussions on forest productivity and carbon storage. We studied the soil biogeochemical composition of three pedological layers in moss forests (MF) and lichen woodlands (LW) north of the Manicouagan crater in Quebec. The humus layer (FH horizons) was significantly thicker and held more carbon, nitrogen and exchangeable Ca and Mg in MF plots than in LW plots. When considering mineral horizons, we found that the deep C horizon had a very close composition in both ecosystem plots, suggesting that the parent material was of similar geochemical nature. This was expected as all selected sites developed from glacial deposits. Multivariate analysis of the surficial mineral B horizon showed however that the LW B horizon displayed higher concentrations of Al and Fe oxides than the MF B horizon, particularly for inorganic amorphous forms. Conversely, main exchangeable base cations (Ca, Mg) were higher in the B horizon of MF than in that of LW. Ecosystem types explained much of the variations in the B horizon geochemical composition. We thus suggest that the differences observed in the geochemical composition of the B horizon have a biological origin rather than a mineralogical origin. We also showed that total net stocks of carbon stored in MF soils were three times higher than in LW soils (FH + B horizons, roots apart). Altogether, we suggest that variations in soil properties between MF and LW are linked to a cascade of events involving the impacts of natural disturbances such as wildfires on forest regeneration, which determines vegetation structure (stand density) and composition (ground cover type), and their subsequent consequences on soil environmental

  1. Prognostic significance of P2RY8-CRLF2 and CRLF2 overexpression may vary across risk subgroups of childhood B-cell acute lymphoblastic leukemia.

    Science.gov (United States)

    Dou, Hu; Chen, Xi; Huang, Yi; Su, Yongchun; Lu, Ling; Yu, Jie; Yin, Yibing; Bao, Liming

    2017-02-01

    The cytokine receptor-like factor 2 (CRLF2) gene plays an important role in early B-cell development. Aberrations in CRLF2 activate the JAK-STAT signaling pathway that contributes to B-cell acute lymphoblastic leukemia (B-ALL). The prognostic significance of CRLF2 overexpression and P2RY8-CRLF2 fusion in various B-ALL risk subgroups has not been well established. Two hundred seventy-one patients with newly diagnosed childhood B-ALL were enrolled from a Chinese population. The prevalence of CRLF2 overexpression, P2RY8-CRLF2 fusion, CRLF2 F232C mutation, and JAK2 and IL7R mutational status were analyzed, and the prognostic impact of CRLF2 overexpression and P2RY8-CRLF2 on B-ALL was evaluated by assessing their influence on overall survival and event-free survival. CRLF2 overexpression and P2RY8-CRLF2 were found in 19% and 10%, respectively, in the whole cohort. No correlation between CRLF2 overexpression and P2RY8-CRLF2 was observed. CRLF2 F232C and IL7R mutations were not detected in B-ALL cases overexpressing CRLF2, and no JAK2 mutations were found in the whole cohort either. The results showed that CRLF2 overexpression and P2RY8-CRLF2 were associated with a poor outcome in unselected B-ALL. Moreover, in an intermediate-risk B-ALL subgroup P2RY8-CRLF2 was correlated with worse survival, whereas in high- and low-risk subgroups, CRLF2 overexpression predicted a poor outcome. Our findings suggest that P2RY8-CRLF2 is an independent prognostic indicator in intermediate-risk B-ALL, while CRLF2 overexpression is correlated with an inferior outcome in high- or low-risk B-ALL. Our study demonstrates that the impact of P2RY8-CRLF2 and CRLF2 overexpression on B-ALL survival may differ across risk subgroups. © 2016 Wiley Periodicals, Inc.

  2. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    NARCIS (Netherlands)

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, Fisher’s statistic is clearly more sensitive to smaller p-values than to larger ones, and a single small p-value may overrule the other p-values.
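
    For reference, Fisher's statistic and the sensitivity the authors criticise are easy to demonstrate (a minimal sketch; scipy.stats.combine_pvalues offers the same test ready-made):

```python
import numpy as np
from scipy import stats

# Fisher's combined probability test: X = -2 * sum(ln p_i) follows a chi-square
# distribution with 2k degrees of freedom under the joint null of k independent tests.
def fisher_combined(p_values):
    p = np.asarray(p_values, dtype=float)
    statistic = -2.0 * np.log(p).sum()
    return statistic, stats.chi2.sf(statistic, df=2 * len(p))

# One very small p-value dominates several unremarkable ones:
print(fisher_combined([1e-6, 0.6, 0.7, 0.8, 0.9]))   # overall p ~ 0.001, still significant
```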

  3. The Importance of Integrating Clinical Relevance and Statistical Significance in the Assessment of Quality of Care--Illustrated Using the Swedish Stroke Register.

    Directory of Open Access Journals (Sweden)

    Anita Lindmark

    Full Text Available When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. The study emphasizes the importance of combining clinical relevance and level of statistical confidence.

  4. The Importance of Integrating Clinical Relevance and Statistical Significance in the Assessment of Quality of Care--Illustrated Using the Swedish Stroke Register.

    Science.gov (United States)

    Lindmark, Anita; van Rompaye, Bart; Goetghebeur, Els; Glader, Eva-Lotta; Eriksson, Marie

    2016-01-01

    When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations, while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. The study emphasizes the importance of combining clinical relevance and level of statistical confidence when
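
    The decision rule described in these two records is simple enough to prototype. Below is a minimal Python sketch (not the authors' code: the hospital counts, population risk, and `benchmark_ratio` parameter are invented for illustration, and a one-sided exact binomial test stands in for the paper's case-mix adjusted comparison):

```python
from scipy import stats

def outlying(events, n, population_risk, benchmark_ratio=1.25, confidence=0.95):
    """Flag a hospital whose risk credibly exceeds the benchmark, which is
    expressed relative to the population risk (benchmark_ratio encodes the
    clinically relevant deviation to be detected)."""
    benchmark = benchmark_ratio * population_risk
    # One-sided test of H0: hospital risk <= benchmark
    p = stats.binomtest(events, n, benchmark, alternative="greater").pvalue
    return p < 1 - confidence

# Hypothetical hospitals: (patients with death/dependency, total patients)
population_risk = 0.25
for events, n in [(30, 100), (290, 1000), (14, 40)]:
    print(f"risk {events / n:.2f} (n={n}): outlier = "
          f"{outlying(events, n, population_risk)}")
```

    Lowering the `confidence` argument reproduces the trade-off reported above: sensitivity rises at a comparatively small cost in specificity.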

  5. A novel complete-case analysis to determine statistical significance between treatments in an intention-to-treat population of randomized clinical trials involving missing data.

    Science.gov (United States)

    Liu, Wei; Ding, Jinhui

    2018-04-01

    The application of the intention-to-treat (ITT) principle to the analysis of clinical trials is challenged in the presence of missing outcome data. The consequences of stopping an assigned treatment in a withdrawn subject are unknown. It is difficult to make a single assumption about missing-data mechanisms for all clinical trials, because the human body reacts to drugs through complex biological networks, so data may be missing randomly or non-randomly. Currently there is no statistical method that can tell whether a difference between two treatments in the ITT population of a randomized clinical trial with missing data is significant at a pre-specified level. Making no assumptions about the missing-data mechanisms, we propose a generalized complete-case (GCC) analysis based on the data of completers. An evaluation of the impact of missing data on the ITT analysis reveals that a statistically significant GCC result implies a significant treatment effect in the ITT population at a pre-specified significance level unless, relative to the comparator, the test drug is toxic to the non-completers as documented in their medical records. Applications of the GCC analysis are illustrated using literature data, and its properties and limitations are discussed.

  6. Performance studies of GooFit on GPUs vs RooFit on CPUs while estimating the statistical significance of a new physical signal

    Science.gov (United States)

    Di Florio, Adriano

    2017-10-01

    In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented in both the ROOT/RooFit and GooFit frameworks, with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B + → J/ψϕK +. GooFit is an open-source data analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on nVidia GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-ups with respect to the RooFit application parallelized on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service with a RooFit/PROOF-Lite process using multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which Wilks' theorem may or may not apply because its regularity conditions are not satisfied.
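
    The toy Monte Carlo logic is framework-independent. The following NumPy/SciPy sketch (an illustration, not GooFit or RooFit code) builds the null distribution of a profile likelihood ratio statistic from background-only pseudo-experiments for a single-bin counting experiment and compares the resulting p-value with the Wilks (asymptotic) approximation; the background expectation and observed count are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
b, n_obs = 25.0, 40   # hypothetical background expectation and observed count

def q0(n, b):
    """Likelihood-ratio statistic for signal strength s = 0 in a single-bin
    Poisson counting experiment, with the best-fit signal clipped at 0."""
    return 0.0 if n <= b else 2.0 * (n * np.log(n / b) - (n - b))

q0_obs = q0(n_obs, b)

# Null distribution from background-only pseudo-experiments ("toys")
q0_toys = np.array([q0(n, b) for n in rng.poisson(b, size=200_000)])
p_toy = np.mean(q0_toys >= q0_obs)

# Wilks' theorem approximation: q0 ~ (1/2)*chi2(1 dof) under the null
p_wilks = 0.5 * stats.chi2.sf(q0_obs, df=1)

print(f"q0 = {q0_obs:.2f}")
print(f"toy-MC p = {p_toy:.2e} ({stats.norm.isf(p_toy):.2f} sigma)")
print(f"Wilks  p = {p_wilks:.2e} ({stats.norm.isf(p_wilks):.2f} sigma)")
```

    When the regularity conditions of Wilks' theorem fail (for example, when a parameter is defined only under the alternative), the two p-values diverge and only the toy distribution remains trustworthy, which is what makes the GPU speed-up valuable.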

  7. The significance of Good Chair as part of children’s school and home environment in the preventive treatment of body statics distortions

    OpenAIRE

    Mirosław Mrozkowiak; Hanna Żukowska

    2015-01-01

    Mrozkowiak Mirosław, Żukowska Hanna. Znaczenie Dobrego Krzesła, jako elementu szkolnego i domowego środowiska ucznia, w profilaktyce zaburzeń statyki postawy ciała = The significance of Good Chair as part of children’s school and home environment in the preventive treatment of body statistics distortions. Journal of Education, Health and Sport. 2015;5(7):179-215. ISSN 2391-8306. DOI 10.5281/zenodo.19832 http://ojs.ukw.edu.pl/index.php/johs/article/view/2015%3B5%287%29%3A179-215 https:...

  8. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    OpenAIRE

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

    Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is obvious that Fisher’s statistic is more sensitive to small p-values than to large ones, and a single small p-value may overrule the other p-values and decide the test result. This is, in some cases, viewed as a flaw. In order to overcome this flaw and improve the power of the test, the joint tail probability of a set of p-values is proposed as a ...
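
    For reference, Fisher's statistic is -2 Σ ln p_i, which is chi-squared with 2k degrees of freedom under the global null. The short SciPy sketch below (not the authors' implementation; the p-value lists are invented) illustrates the flaw discussed: one tiny p-value dominates the combination even when the remaining p-values carry no evidence:

```python
from scipy import stats

# One extreme p-value among otherwise unremarkable ones...
dominated = [1e-6, 0.60, 0.70, 0.80, 0.90]
# ...versus uniformly moderate evidence across all tests
moderate = [0.08, 0.09, 0.10, 0.11, 0.12]

for ps in (dominated, moderate):
    stat, p = stats.combine_pvalues(ps, method="fisher")
    print(f"{ps} -> Fisher statistic {stat:.1f}, combined p = {p:.4f}")
```

    The ordered-p-value approach proposed here instead uses the joint tail probability of all order statistics, so no single component can decide the outcome by itself.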

  9. Theoretical basis and significance of the variance of discharge as a bidimensional variable for the design of lateral lines of micro-irrigation

    Directory of Open Access Journals (Sweden)

    Euro Roberto Detomini

    2009-08-01

    In order to support the theoretical basis and contribute to improving educational capability on issues relating to irrigation system design, this point of view presents an alternative deduction of the variance of emitter discharge as a bidimensional, independent random variable, followed by a brief application of an existing model for the statistical design of laterals in micro-irrigation. Better manufacturing precision of emitters allows a lateral to be lengthened for a given soil slope, although this does not necessarily mean that the emission uniformity designed along the lateral will be more homogeneous.

  10. Statistical significance approximation in local trend analysis of high-throughput time-series data using the theory of Markov chains.

    Science.gov (United States)

    Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu

    2015-09-21

    Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in the dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its application to high-throughput time series data analysis, e.g., data from studies based on next-generation sequencing technology. By extending the theory for the tail probability of the range of sums of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with a delay of at most three time steps), in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large-scale all-versus-all comparisons possible. We also propose a hybrid approach that integrates the approximation and permutations to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data, where we found interesting organism co-occurrence dynamic patterns. The software tool is integrated into the eLSA software package that now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.
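
    The shape of the method can be conveyed in a few lines. Here is a simplified Python sketch (not the eLSA code: it uses a no-delay trend score, a Kadane-style scan for the best co-trending stretch, and the plain permutation p-value that the Markov-chain approximation is designed to replace; the series are simulated):

```python
import numpy as np

rng = np.random.default_rng(1)

def trend(x):
    """Local trend (shape) series: +1 up, -1 down, 0 flat."""
    return np.sign(np.diff(x))

def local_trend_score(x, y):
    """Max over contiguous stretches of |sum of products of trend signs|,
    normalized by series length (simplified, no time delay)."""
    w = trend(x) * trend(y)
    best = hi = lo = 0.0
    for v in w:
        hi = max(v, hi + v)
        lo = min(v, lo + v)
        best = max(best, hi, -lo)
    return best / len(w)

def perm_pvalue(x, y, n_perm=2000):
    s0 = local_trend_score(x, y)
    hits = sum(local_trend_score(rng.permutation(x), y) >= s0
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

x = np.cumsum(rng.normal(size=60))        # hypothetical series
y = x + rng.normal(scale=0.5, size=60)    # co-changing partner series
print(local_trend_score(x, y), perm_pvalue(x, y))
```

    The permutation loop is what becomes prohibitive in all-versus-all comparisons of thousands of series, which is exactly the bottleneck the proposed approximation removes.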

  11. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: an SPSS method to analyze univariate data.

    Science.gov (United States)

    Maric, Marija; de Haan, Else; Hogendoorn, Sanne M; Wolters, Lidewij H; Huizenga, Hilde M

    2015-03-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty of applying existing statistical procedures. In this article, we describe a data-analytic method to analyze univariate (i.e., one symptom) single-case data using the common package SPSS. This method can help the clinical researcher to investigate whether an intervention works as compared with a baseline period or another intervention type, and to determine whether symptom improvement is clinically significant. First, we describe the statistical method in a conceptual way and show how it can be implemented in SPSS. Simulation studies were performed to determine the number of observation points required per intervention phase. Second, to illustrate this method and its implications, we present a case study of an adolescent with anxiety disorders treated with cognitive-behavioral therapy techniques in an outpatient psychotherapy clinic, whose symptoms were regularly assessed before each session. We provide a description of the data analyses and results of this case study. Finally, we discuss the advantages and shortcomings of the proposed method. Copyright © 2014. Published by Elsevier Ltd.
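
    Although the article implements the analysis in SPSS, the underlying phase-comparison regression can be sketched in a few lines of Python (an analogous illustration, not the authors' SPSS procedure; the symptom ratings are invented and serial dependence is ignored for brevity):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical weekly anxiety ratings: baseline (A) then intervention (B)
baseline = [7, 8, 7, 9, 8, 8]
intervention = [7, 6, 5, 5, 4, 3, 3, 2]
y = np.array(baseline + intervention, dtype=float)
phase = np.array([0] * len(baseline) + [1] * len(intervention))
time = np.arange(len(y))

# OLS with a phase dummy; the time trend helps separate the intervention
# effect from gradual drift across the whole observation period
X = sm.add_constant(np.column_stack([phase, time]))
fit = sm.OLS(y, X).fit()
print(fit.params)       # intercept, phase effect, time trend
print(fit.pvalues[1])   # significance of the phase (intervention) effect
```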

  12. Spectral and cross-spectral analysis of uneven time series with the smoothed Lomb-Scargle periodogram and Monte Carlo evaluation of statistical significance

    Science.gov (United States)

    Pardo-Igúzquiza, Eulogio; Rodríguez-Tovar, Francisco J.

    2012-12-01

    Many spectral analysis techniques have been designed assuming sequences taken with a constant sampling interval. However, there are empirical time series in the geosciences (sediment cores, fossil abundance data, isotope analysis, …) that do not follow regular sampling because of missing data, gapped data, random sampling or incomplete sequences, among other reasons. In general, interpolating an uneven series in order to obtain a succession with a constant sampling interval alters the spectral content of the series. In such cases it is preferable to follow an approach that works with the uneven data directly, avoiding the need for an explicit interpolation step. The Lomb-Scargle periodogram is a popular choice in such circumstances, as there are programs available in the public domain for its computation. One new computer program for spectral analysis improves the standard Lomb-Scargle periodogram approach in two ways: (1) it explicitly adjusts the statistical significance for any bias introduced by variance-reduction smoothing, and (2) it uses a permutation test to evaluate confidence levels, which is better suited than parametric methods when neighbouring frequencies are highly correlated. Another novel program for cross-spectral analysis offers the advantage of estimating the Lomb-Scargle cross-periodogram of two uneven time series defined on the same interval, and it evaluates the confidence levels of the estimated cross-spectra by a non-parametric, computationally intensive permutation test. Thus, the cross-spectrum, the squared coherence spectrum, the phase spectrum, and the Monte Carlo statistical significance of the cross-spectrum and the squared-coherence spectrum can be obtained. Both of the programs are written in ANSI Fortran 77, in view of its simplicity and compatibility. The program code is in the public domain, provided on the website of the journal (http://www.iamg.org/index.php/publisher/articleview/frmArticleID/112/). Different examples (with simulated and
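
    A minimal version of this logic can be reproduced with SciPy (a sketch, not the authors' Fortran 77 programs: it uses the plain, unsmoothed Lomb-Scargle periodogram and a permutation criterion on the maximum peak; the uneven sampling and the 23-unit cycle are simulated):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)

# Hypothetical unevenly sampled record containing a 23-unit cycle
t = np.sort(rng.uniform(0, 500, size=120))
y = np.sin(2 * np.pi * t / 23) + rng.normal(scale=0.8, size=t.size)
y -= y.mean()

freqs = 2 * np.pi * np.linspace(0.005, 0.5, 400)   # angular frequencies
pgram = lombscargle(t, y, freqs)

# Permutation test: shuffle values over the fixed (uneven) sampling times
null_max = np.array([lombscargle(t, rng.permutation(y), freqs).max()
                     for _ in range(500)])
conf95 = np.quantile(null_max, 0.95)   # 95% confidence level for a peak

f_peak = freqs[pgram.argmax()] / (2 * np.pi)
print(f"peak period {1 / f_peak:.1f}, significant: {pgram.max() > conf95}")
```

    Using the maximum over all frequencies of each shuffled periodogram, rather than a per-frequency threshold, is one simple way to account for searching many correlated frequencies at once.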

  13. Detection by voxel-wise statistical analysis of significant changes in regional cerebral glucose uptake in an APP/PS1 transgenic mouse model of Alzheimer's disease.

    Science.gov (United States)

    Dubois, Albertine; Hérard, Anne-Sophie; Delatour, Benoît; Hantraye, Philippe; Bonvento, Gilles; Dhenain, Marc; Delzescaux, Thierry

    2010-06-01

    Biomarkers and technologies similar to those used in humans are essential for the follow-up of Alzheimer's disease (AD) animal models, particularly for the clarification of mechanisms and the screening and validation of new candidate treatments. In humans, changes in brain metabolism can be detected by 2-deoxy-2-[(18)F]fluoro-D-glucose PET (FDG-PET) and assessed in a user-independent manner with dedicated software, such as Statistical Parametric Mapping (SPM). FDG-PET can be carried out in small animals, but its resolution is low compared to the size of rodent brain structures. In mouse models of AD, changes in cerebral glucose utilization are usually detected by [(14)C]-2-deoxyglucose (2DG) autoradiography, but this requires prior manual outlining of regions of interest (ROIs) on selected sections. Here, we evaluate the feasibility of applying the SPM method to 3D autoradiographic data sets mapping brain metabolic activity in a transgenic mouse model of AD. We report the preliminary results obtained with 4 APP/PS1 (64+/-1 weeks) and 3 PS1 (65+/-2 weeks) mice. We also describe new procedures for the acquisition and use of "blockface" photographs and provide the first demonstration of their value for the 3D reconstruction and spatial normalization of post mortem mouse brain volumes. Despite this limited sample size, our results appear to be meaningful, consistent, and more comprehensive than findings from previously published studies based on conventional ROI-based methods. The establishment of statistical significance at the voxel level, rather than within a user-defined ROI, makes it possible to detect subtle differences more reliably in geometrically complex regions, such as the hippocampus. Our approach is generic and could easily be applied to other biomarkers and extended to other species and applications. Copyright 2010 Elsevier Inc. All rights reserved.
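
    The voxel-wise idea itself is compact, as the Python sketch below shows (an illustrative stand-in, not the SPM pipeline: a two-sample t-test per voxel on simulated uptake maps, followed by Benjamini-Hochberg FDR control instead of SPM's random-field correction; group sizes mirror the 4 APP/PS1 vs 3 PS1 design but all data are synthetic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

shape = (20, 20, 20)                                # toy voxel grid
appps1 = rng.normal(1.0, 0.1, size=(4, *shape))     # "transgenic" maps
ps1 = rng.normal(1.0, 0.1, size=(3, *shape))        # "control" maps
appps1[:, 8:12, 8:12, 8:12] -= 0.15                 # simulated focal deficit

# Voxel-wise two-sample t-test across the group axis
tmap, pmap = stats.ttest_ind(appps1, ps1, axis=0)

# Benjamini-Hochberg FDR over all voxels (alpha = 0.05)
p = np.sort(pmap.ravel())
m = p.size
k = np.nonzero(p <= 0.05 * np.arange(1, m + 1) / m)[0]
threshold = p[k.max()] if k.size else 0.0
print("significant voxels:", np.sum(pmap <= threshold), "of", m)
```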

  14. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: An SPSS method to analyze univariate data

    NARCIS (Netherlands)

    Maric, M.; de Haan, M.; Hogendoorn, S.M.; Wolters, L.H.; Huizenga, H.M.

    2015-01-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a

  15. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: an SPSS method to analyze univariate data

    NARCIS (Netherlands)

    Maric, Marija; de Haan, Else; Hogendoorn, Sanne M.; Wolters, Lidewij H.; Huizenga, Hilde M.

    2015-01-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a

  16. Statistical behavior and geological significance of the geochemical distribution of trace elements in the Cretaceous volcanics Cordoba and San Luis, Argentina

    International Nuclear Information System (INIS)

    Daziano, C.

    2010-01-01

    Statistical analysis of trace elements in these volcanic rocks allowed two independent populations within the same geochemical environment to be distinguished. Each component has a variable homogeneity index, resulting in dissimilar average values that reveal intratelluric geochemical phenomena. On the other hand, the inhomogeneities observed in these rocks, as reflected in their petrochemical characters, may be accentuated by the remote and dispersed locations of their outcrops, their relations with the enclosing rocks, the ranges of compositional variation, and differences in relative ages.

  17. Statistical comparison of leaching behavior of incineration bottom ash using seawater and deionized water: Significant findings based on several leaching methods.

    Science.gov (United States)

    Yin, Ke; Dou, Xiaomin; Ren, Fei; Chan, Wei-Ping; Chang, Victor Wei-Chung

    2018-02-15

    Bottom ashes generated from municipal solid waste incineration have gained increasing popularity as alternative construction materials; however, they contain elevated heavy metals, posing a challenge to their unrestricted use. Different leaching methods have been developed to quantify the leaching potential of incineration bottom ash (IBA) and to guide its environmentally friendly application. Yet IBA applications are diverse and the in situ environment is always complicated, which challenges regulation. In this study, leaching tests were conducted using batch and column leaching methods with seawater, as opposed to deionized water, to unveil the metal leaching potential of IBA subjected to a salty environment, which is commonly encountered when using IBA in land reclamation yet is not well understood. Statistical analysis for the different leaching methods suggested disparate performance between seawater and deionized water, primarily ascribed to ionic strength. The impact of the leachant is metal-specific, depends on the leaching method, and is a function of the intrinsic characteristics of the bottom ash. Leaching performances were further compared from additional perspectives, e.g. leaching approach and liquid-to-solid ratio, indicating sophisticated leaching potentials dominated by the combined geochemistry. It is necessary to develop application-oriented leaching methods with corresponding leaching criteria to preclude discrimination between different applications, e.g., terrestrial applications vs. land reclamation. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. An evaluation of the statistical significance of the association between northward turnings of the interplanetary magnetic field and substorm expansion onsets

    Science.gov (United States)

    Hsu, Tung-Shin; McPherron, R. L.

    2002-11-01

    An outstanding problem in magnetospheric physics is deciding whether substorms are always triggered by external changes in the interplanetary magnetic field (IMF) or solar wind plasma, or whether they sometimes occur spontaneously. Over the past decade, arguments have been made on both sides of this issue. In fact, there is considerable evidence that some substorms are triggered. However, equally persuasive examples of substorms with no obvious trigger have been found. Because of conflicting views on this subject, further work is required to determine whether there is a physical relation between IMF triggers and substorm onset. In the work reported here, a list of substorm onsets was created using two independent substorm signatures: sudden changes in the slope of the AL index and the start of a Pi 2 pulsation burst. Possible IMF triggers were determined from ISEE-2 observations. With the ISEE spacecraft near local noon immediately upstream of the bow shock, there can be little question about the propagation delay to the magnetopause or whether a particular IMF feature hits the subsolar magnetopause. This eliminates the objections that the calculated arrival time is subject to a large error or that the solar wind monitor missed a potential trigger incident at the subsolar point. Using a less familiar technique, the statistics of point processes, we find that the time delays between substorm onsets and the propagated arrival times of IMF triggers are clustered around zero. We estimate for independent processes that the probability of this clustering arising by chance alone is about 10^-11. If we take into account the requirement that the IMF must have been southward prior to the onset, then the probability of clustering is higher, ~10^-5, but still extremely unlikely. Thus it is not possible to ascribe the apparent relation between IMF northward turnings and substorm onset to coincidence.
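
    The flavor of the chance-clustering estimate can be reproduced with a simple binomial stand-in (illustrative only: the counts, search window, and clustering band below are invented, not the study's actual point-process statistics):

```python
from scipy import stats

n_onsets = 100    # hypothetical number of substorm onsets examined
window = 60.0     # minutes searched around each onset for an IMF trigger
band = 5.0        # "clustered" means a delay within +/- 5 min of zero
k_obs = 40        # hypothetical onsets whose delay falls inside the band

# If triggers were unrelated to onsets, a delay would land in the band with
# probability p0; the chance of k_obs or more such delays is then binomial.
p0 = 2 * band / window
p_chance = stats.binom.sf(k_obs - 1, n_onsets, p0)
print(f"P(>= {k_obs} near-zero delays by chance) = {p_chance:.2e}")
```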

  19. Research Pearls: The Significance of Statistics and Perils of Pooling. Part 3: Pearls and Pitfalls of Meta-analyses and Systematic Reviews.

    Science.gov (United States)

    Harris, Joshua D; Brand, Jefferson C; Cote, Mark P; Dhawan, Aman

    2017-08-01

    Within the health care environment, there has been a recent and appropriate trend towards emphasizing the value of care provision. Reduced cost and higher quality improve the value of care. Quality is a challenging, heterogeneous, variably defined concept. At the core of quality is the patient's outcome, quantified by a vast assortment of subjective and objective outcome measures. There has been a recent evolution towards evidence-based medicine in health care, clearly elucidating the role of high-quality evidence across groups of patients and studies. Synthetic studies, such as systematic reviews and meta-analyses, are at the top of the evidence-based medicine hierarchy. Thus, these investigations may be the best potential source of guiding diagnostic, therapeutic, prognostic, and economic medical decision making. Systematic reviews critically appraise and synthesize the best available evidence to provide a conclusion statement (a "take-home point") in response to a specific answerable clinical question. A meta-analysis uses statistical methods to quantitatively combine data from single studies. Meta-analyses should be performed on homogeneous studies of high methodological quality (Level I or II evidence, i.e., randomized studies) to minimize confounding variable bias. When it is known that the literature is inadequate, or a recent systematic review has already been performed with a demonstration of insufficient data, then a new systematic review does not add anything meaningful to the literature. PROSPERO registration and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines assist authors in the design and conduct of systematic reviews and should always be used. Complete transparency of the conduct of the review permits reproducibility and improves fidelity of the conclusions. Pooling of data from overly dissimilar investigations should be avoided. This particularly applies to Level IV evidence, that is, noncomparative investigations

  20. Clinical progress of human papillomavirus genotypes and their persistent infection in subjects with atypical squamous cells of undetermined significance cytology: Statistical and latent Dirichlet allocation analysis

    Science.gov (United States)

    Kim, Yee Suk; Lee, Sungin; Zong, Nansu; Kahng, Jimin

    2017-01-01

    The present study aimed to investigate differences in prognosis based on human papillomavirus (HPV) infection, persistent infection and genotype variations for patients exhibiting atypical squamous cells of undetermined significance (ASCUS) in their initial Papanicolaou (PAP) test results. A latent Dirichlet allocation (LDA)-based tool was developed that may offer a facilitated means of communication to be employed during patient-doctor consultations. The present study assessed 491 patients (139 HPV-positive and 352 HPV-negative cases) with a PAP test result of ASCUS and a follow-up period ≥2 years. Patients underwent PAP and HPV DNA chip tests between January 2006 and January 2009. The HPV-positive subjects were followed up with at least 2 instances of PAP and HPV DNA chip tests. The most common genotypes observed were HPV-16 (25.9%, 36/139), HPV-52 (14.4%, 20/139), HPV-58 (13.7%, 19/139), HPV-56 (11.5%, 16/139), HPV-51 (9.4%, 13/139) and HPV-18 (8.6%, 12/139). A total of 33.3% (12/36) of patients positive for HPV-16 had cervical intraepithelial neoplasia (CIN) 2 or a worse result, which was significantly higher than the prevalence of 1.8% (8/455) in patients negative for HPV-16. Persistent infection was more frequent in patients aged ≥51 years (38.7%) than in those aged ≤50 years (20.4%; P=0.036). Progression from persistent infection to CIN2 or worse (19/34, 55.9%) was more frequent than after clearance (0/105, 0.0%). LDA analysis associated older age and a long infection period with clinical progression to CIN2 or worse. Therefore, LDA results may be presented as explanatory evidence during time-constrained patient-doctor consultations in order to deliver information regarding the patient's status. PMID:28587376

  1. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    Science.gov (United States)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
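
    The quoted probabilities follow directly from the Poisson model. A few lines of Python verify them (the decadal rates for VEI >= 5 and VEI >= 6 are back-solved to match the stated probabilities, so they are illustrative):

```python
import math

# Expected eruptions per decade: ~7 for VEI >= 4 (from the moving average);
# the smaller rates are chosen to reproduce the quoted probabilities.
for label, lam in [("VEI >= 4", 7.0), ("VEI >= 5", 0.67), ("VEI >= 6", 0.20)]:
    p_any = 1 - math.exp(-lam)   # Poisson: P(at least one) = 1 - e^(-lambda)
    print(f"{label}: P(>= 1 eruption in 2000-2009) = {p_any:.2f}")
```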

  2. Do Time-Varying Covariances, Volatility Comovement and Spillover Matter?

    OpenAIRE

    Lakshmi Balasubramanyan

    2005-01-01

    Financial markets and their respective assets are so intertwined that analyzing any single market in isolation ignores important information. We investigate whether time-varying volatility comovement and spillover impact the true variance-covariance matrix under a time-varying correlation set-up. Statistically significant volatility spillover and comovement between the US, UK and Japan are found. To demonstrate the importance of modelling volatility comovement and spillover, we look at a simple portfo...

  3. Caregiver Statistics: Demographics

    Science.gov (United States)

    Selected long-term care statistics. Because needs and services are wide-ranging and complex, statistics may vary from study to study.

  4. Understanding Statistics - Cancer Statistics

    Science.gov (United States)

    Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

  5. Comparing identified and statistically significant lipids and polar metabolites in 15-year old serum and dried blood spot samples for longitudinal studies: Comparing lipids and metabolites in serum and DBS samples

    Energy Technology Data Exchange (ETDEWEB)

    Kyle, Jennifer E. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Casey, Cameron P. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Stratton, Kelly G. [National Security Directorate, Pacific Northwest National Laboratory, Richland WA USA; Zink, Erika M. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Kim, Young-Mo [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Zheng, Xueyun [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Monroe, Matthew E. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Weitz, Karl K. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Bloodsworth, Kent J. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Orton, Daniel J. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Ibrahim, Yehia M. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Moore, Ronald J. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Lee, Christine G. [Department of Medicine, Bone and Mineral Unit, Oregon Health and Science University, Portland OR USA; Research Service, Portland Veterans Affairs Medical Center, Portland OR USA; Pedersen, Catherine [Department of Medicine, Bone and Mineral Unit, Oregon Health and Science University, Portland OR USA; Orwoll, Eric [Department of Medicine, Bone and Mineral Unit, Oregon Health and Science University, Portland OR USA; Smith, Richard D. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Burnum-Johnson, Kristin E. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Baker, Erin S. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA

    2017-02-05

    The use of dried blood spots (DBS) has many advantages over traditional plasma and serum samples, such as the smaller blood volume required, storage at room temperature, and the ability to sample in remote locations. However, understanding the robustness of different analytes in DBS samples is essential, especially in older samples collected for longitudinal studies. Here we analyzed DBS samples collected in 2000-2001 and stored at room temperature, and compared them to matched serum samples stored at -80°C, to determine if they could be effectively used as specific time points in a longitudinal study following metabolic disease. Four hundred small molecules were identified in both the serum and DBS samples using gas chromatography-mass spectrometry (GC-MS), liquid chromatography-MS (LC-MS) and LC-ion mobility spectrometry-MS (LC-IMS-MS). The identified polar metabolites overlapped well between the sample types, though only one statistically significant polar metabolite in a case-control study was conserved, indicating that degradation in the DBS samples affects quantitation. Differences in the lipid identifications indicated that some oxidation occurs in the DBS samples. However, thirty-six statistically significant lipids correlated in both sample types, indicating that lipid quantitation was more stable across the sample types.

  6. Tests of statistical significance in three biomedical journals: a critical review

    Directory of Open Access Journals (Sweden)

    Madelaine Sarria Castro

    2004-05-01

    OBJECTIVES: To characterize the use of conventional tests of statistical significance and the current trends shown by their use in three biomedical journals from Spanish-speaking countries. METHODS: All descriptive or explanatory original articles published in the five-year period 1996-2000 were reviewed in three journals: Revista Cubana de Medicina General Integral, Revista Panamericana de Salud Pública/Pan American Journal of Public Health, and Medicina Clínica. RESULTS: In the three journals examined, various objectionable practices were detected in the use of hypothesis tests based on "P-values", together with a scant presence of the newer approaches proposed in their place: confidence intervals (CIs) and Bayesian inference. The main findings were: minimal presence of CIs, whether as a complement to significance tests or as the sole statistical resource; mention of sample size as a possible explanation of the results; a predominance of rigid alpha values; lack of uniformity in the presentation of results; and improper allusion in research conclusions to the results of hypothesis tests. CONCLUSIONS: The results reflect the failure of authors and editors to comply with accepted standards regarding the use of tests of statistical significance, and suggest that the routine use of these tests continues to occupy an important place in the biomedical literature of Spanish-speaking countries.

  7. Statistical thermodynamics

    International Nuclear Information System (INIS)

    Lim, Gyeong Hui

    2008-03-01

    This book consists of 15 chapters covering: the basic concepts and meaning of statistical thermodynamics; Maxwell-Boltzmann statistics; ensembles; thermodynamic functions and fluctuations; statistical dynamics of independent-particle systems; ideal molecular systems; chemical equilibrium and chemical reaction rates in ideal gas mixtures; classical statistical thermodynamics; the ideal lattice model; lattice statistics and non-ideal lattice models; imperfect gas theory and liquids; the theory of solutions; the statistical thermodynamics of interfaces; the statistical thermodynamics of macromolecular systems; and quantum statistics.

  8. Statistical Traffic Anomaly Detection in Time-Varying Communication Networks

    Science.gov (United States)

    2015-02-01

    PLs can be generated using t_a^d and (7). Otherwise, the network is periodic according to feature a, and a family of candidate PLs can be generated using t_a^d, t_a^p, and (8). In addition, in case some prior knowledge of t^d and t^p is available, the family of candidate PLs can include the PLs

  9. Statistical Traffic Anomaly Detection in Time Varying Communication Networks

    Science.gov (United States)

    2015-02-01

    PLs can be generated using t_a^d and (7). Otherwise, the network is periodic according to feature a, and a family of candidate PLs can be generated using t_a^d, t_a^p, and (8). In addition, in case some prior knowledge of t^d and t^p is available, the family of candidate PLs can include the PLs

  10. Time-varying BRDFs.

    Science.gov (United States)

    Sun, Bo; Sunkavalli, Kalyan; Ramamoorthi, Ravi; Belhumeur, Peter N; Nayar, Shree K

    2007-01-01

    The properties of virtually all real-world materials change with time, causing their bidirectional reflectance distribution functions (BRDFs) to be time varying. However, none of the existing BRDF models and databases take time variation into consideration; they represent the appearance of a material at a single time instance. In this paper, we address the acquisition, analysis, modeling, and rendering of a wide range of time-varying BRDFs (TVBRDFs). We have developed an acquisition system that is capable of sampling a material's BRDF at multiple time instances, with each time sample acquired within 36 sec. We have used this acquisition system to measure the BRDFs of a wide range of time-varying phenomena, which include the drying of various types of paints (watercolor, spray, and oil), the drying of wet rough surfaces (cement, plaster, and fabrics), the accumulation of dusts (household and joint compound) on surfaces, and the melting of materials (chocolate). Analytic BRDF functions are fit to these measurements and the model parameters' variations with time are analyzed. Each category exhibits interesting and sometimes nonintuitive parameter trends. These parameter trends are then used to develop analytic TVBRDF models. The analytic TVBRDF models enable us to apply effects such as paint drying and dust accumulation to arbitrary surfaces and novel materials.

  11. Cancer Statistics

    Science.gov (United States)

    Cancer has a major impact on society in ... success of efforts to control and manage cancer. Statistics at a Glance: The Burden of Cancer in ...

  12. Conversion factors and oil statistics

    International Nuclear Information System (INIS)

    Karbuz, Sohbet

    2004-01-01

    World oil statistics, in scope and accuracy, are often far from perfect. They can easily lead to misguided conclusions regarding the state of market fundamentals. Without proper attention to statistical caveats, the ensuing interpretation of oil market data opens the door to unnecessary volatility, and can distort perception of market fundamentals. Among the numerous caveats associated with the compilation of oil statistics, conversion factors, used to produce aggregated data, play a significant role. Interestingly enough, little attention is paid to conversion factors, i.e. to the relation between different units of measurement for oil. Additionally, the underlying information regarding the choice of a specific factor when trying to produce measurements of aggregated data remains scant. The aim of this paper is to shed some light on the impact of conversion factors for two commonly encountered issues: mass-to-volume equivalencies (barrels to tonnes) and the broad energy measures encountered in world oil statistics. This paper seeks to demonstrate how inappropriate and misused conversion factors can yield wildly varying results and ultimately distort oil statistics. Examples will show that while discrepancies in commonly used conversion factors may seem trivial, their impact on the assessment of a world oil balance is far from negligible. A unified and harmonised convention for conversion factors is necessary to achieve accurate comparisons and aggregate oil statistics for the benefit of both end-users and policy makers.
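
    A toy calculation makes the point concrete. In the Python snippet below, the barrels-per-tonne factors are typical illustrative values (actual factors vary by crude stream and data source), yet the choice alone moves a 10-million-barrel aggregate by roughly a tenth:

```python
barrels = 10_000_000   # an aggregate volume reported in barrels

# Illustrative conversion factors; real values depend on crude density
for label, bbl_per_tonne in [("light crude", 7.6),
                             ("typical average", 7.33),
                             ("heavy crude", 6.8)]:
    tonnes = barrels / bbl_per_tonne
    print(f"{label:15s}: {tonnes / 1e6:.2f} million tonnes")
```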

  13. Statistical characterization of variables used to test a planter under direct and conventional sowing systems

    Directory of Open Access Journals (Sweden)

    Geraldo do Amaral Gravina

    2009-10-01

    The objective of this work was to statistically characterize the variables wheel slippage, distance traveled per plot, worked plot area, and theoretical and effective field capacity of a planter in direct sowing (DS) and conventional sowing (CS) systems, based on verifying the fit of a series of data to a statistical distribution, in order to indicate the best form of representation and the values to be adopted when these variables are used in agricultural practice. The CS experiment was carried out at a speed of 1.5 m s-1, with 190 repetitions, during maize sowing on a soil classified as a Cambisol; the DS experiment at 1.8 m s-1, with 58 repetitions, during sorghum sowing on a soil classified as a Red-Yellow Latosol. It was concluded that no discrepant values were detected and that the variables under study can be represented by the normal probability density function (Gaussian distribution), whose parameters can be used for their representation.

  14. Functional abilities and cognitive decline in adult and aging intellectual disabilities. Psychometric validation of an Italian version of the Alzheimer's Functional Assessment Tool (AFAST): analysis of its clinical significance with linear statistics and artificial neural networks.

    Science.gov (United States)

    De Vreese, L P; Gomiero, T; Uberti, M; De Bastiani, E; Weger, E; Mantesso, U; Marangoni, A

    2015-04-01

    (a) A psychometric validation of an Italian version of the Alzheimer's Functional Assessment Tool scale (AFAST-I), designed for informant-based assessment of the degree of impairment and of assistance required in seven basic daily activities in adult/elderly people with intellectual disabilities (ID) and (suspected) dementia; (b) a pilot analysis of its clinical significance with traditional statistical procedures and with an artificial neural network. AFAST-I was administered to the professional caregivers of 61 adults/seniors with ID with a mean age (± SD) of 53.4 (± 7.7) years (36% with Down syndrome). Internal consistency (Cronbach's α coefficient), inter/intra-rater reliabilities (intra-class coefficients, ICC) and concurrent, convergent and discriminant validity (Pearson's r coefficients) were computed. Clinical significance was probed by analysing the relationships among AFAST-I scores and the Sum of Cognitive Scores (SCS) and the Sum of Social Scores (SOS) of the Dementia Questionnaire for Persons with Intellectual Disabilities (DMR-I) after standardisation of their raw scores in equivalent scores (ES). An adaptive artificial system (AutoContractive Maps, AutoCM) was applied to all the variables recorded in the study sample, aimed at uncovering which variable occupies a central position and supports the entire network made up of the remaining variables interconnected among themselves with different weights. AFAST-I shows a high level of internal homogeneity with a Cronbach's α coefficient of 0.92. Inter-rater and intra-rater reliabilities were also excellent with ICC correlations of 0.96 and 0.93, respectively. The results of the analyses of the different AFAST-I validities all go in the expected direction: concurrent validity (r=-0.87 with ADL); convergent validity (r=0.63 with SCS; r=0.61 with SOS); discriminant validity (r=0.21 with the frequency of occurrence of dementia-related Behavioral Excesses of the Assessment for Adults with Developmental

  15. Usage Statistics

    Science.gov (United States)

    MedlinePlus usage statistics (https://medlineplus.gov/usestatistics.html), reported quarterly: page views and unique visitors, with data beginning Oct-Dec 1998.

  16. Mathematical statistics

    CERN Document Server

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  17. Frog Statistics

    Science.gov (United States)

    Web-access statistics for the Whole Frog Project and Virtual Frog Dissection pages (wwwstats output), filtered to exclude duplicate or extraneous accesses; note that the figures under-represent the total bytes requested.

  18. varying elastic parameters distributions

    KAUST Repository

    Moussawi, Ali

    2014-12-01

    The experimental identification of mechanical properties is crucial in mechanics for understanding material behavior and for the development of numerical models. Classical identification procedures employ standard shaped specimens, assume that the mechanical fields in the object are homogeneous, and recover global properties. Thus, multiple tests are required for full characterization of a heterogeneous object, leading to a time consuming and costly process. The development of non-contact, full-field measurement techniques from which complex kinematic fields can be recorded has opened the door to a new way of thinking. From the identification point of view, suitable methods can be used to process these complex kinematic fields in order to recover multiple spatially varying parameters through one test or a few tests. The requirement is the development of identification techniques that can process these complex experimental data. This thesis introduces a novel identification technique called the constitutive compatibility method. The key idea is to define stresses as compatible with the observed kinematic field through the chosen class of constitutive equation, making possible the uncoupling of the identification of stress from the identification of the material parameters. This uncoupling leads to parametrized solutions in cases where the solution is non-unique (due to unknown traction boundary conditions) as demonstrated on 2D numerical examples. First the theory is outlined and the method is demonstrated in 2D applications. Second, the method is implemented within a domain decomposition framework in order to reduce the cost for processing very large problems. Finally, it is extended to 3D numerical examples. Promising results are shown for 2D and 3D problems.

  19. Statistical physics

    CERN Document Server

    Sadovskii, Michael V

    2012-01-01

    This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity and the modern theory of critical phenomena. Beyond that attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.

  20. Statistical optics

    CERN Document Server

    Goodman, Joseph W

    2015-01-01

    This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems. This book covers a variety of statistical problems in optics, including both theory and applications. The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced i

  1. Harmonic statistics

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    2017-05-15

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.

  2. Harmonic statistics

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2017-01-01

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
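
    A harmonic Poisson process is easy to simulate, because the substitution u = log t turns the intensity c/t into a homogeneous rate c. The sketch below (an illustration based on that standard time-change, not code from the paper; the constants are arbitrary) also exhibits the scale invariance mentioned above, with roughly equal counts per decade:

```python
import numpy as np

rng = np.random.default_rng(4)

def harmonic_poisson(c, a, b):
    """Sample a Poisson process on [a, b) with harmonic intensity c/t:
    the expected count is c*log(b/a) and, given the count, the points are
    log-uniform (the process is homogeneous after u = log t)."""
    n = rng.poisson(c * np.log(b / a))
    return np.sort(np.exp(rng.uniform(np.log(a), np.log(b), size=n)))

pts = harmonic_poisson(c=3.0, a=1.0, b=1e6)
for lo, hi in [(1, 10), (10, 100), (1e3, 1e4)]:
    count = np.sum((pts >= lo) & (pts < hi))
    print(f"[{lo:g}, {hi:g}): {count} points "
          f"(expected {3.0 * np.log(10):.1f})")
```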

  3. Statistical methods

    CERN Document Server

    Szulc, Stefan

    1965-01-01

    Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then

  4. Histoplasmosis Statistics

    Science.gov (United States)


  5. Statistical Diversions

    Science.gov (United States)

    Petocz, Peter; Sowey, Eric

    2012-01-01

    The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…

  6. Statistical Diversions

    Science.gov (United States)

    Petocz, Peter; Sowey, Eric

    2008-01-01

    In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…

  7. Scan Statistics

    CERN Document Server

    Glaz, Joseph

    2009-01-01

    Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.

  8. Practical Statistics

    CERN Document Server

    Lyons, L.

    2016-01-01

    Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.

  9. Descriptive statistics.

    Science.gov (United States)

    Nick, Todd G

    2007-01-01

    Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
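
    As a minimal illustration of the chapter's core quantities (the sample values are invented):

```python
import numpy as np

# Hypothetical quantitative variable, e.g. systolic blood pressure (mmHg)
x = np.array([118, 121, 125, 130, 132, 136, 141, 158, 171], dtype=float)

def skew(v):
    """Crude moment-based skewness, to show the effect of a transformation."""
    return ((v - v.mean()) ** 3).mean() / v.std() ** 3

print("mean:", x.mean(), " median:", np.median(x))                  # location
print("SD:", x.std(ddof=1),
      " IQR:", np.percentile(x, 75) - np.percentile(x, 25))         # spread
print("skewness raw vs log-transformed:", skew(x), skew(np.log(x)))
```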

  10. Statistical theory of dynamo

    Science.gov (United States)

    Kim, E.; Newton, A. P.

    2012-04-01

    One major problem in dynamo theory is the multi-scale nature of the MHD turbulence, which requires statistical theory in terms of probability distribution functions. In this contribution, we present the statistical theory of magnetic fields in a simplified mean-field α-Ω dynamo model by varying the statistical properties of alpha, including marginal stability and intermittency, and then utilize observational data of solar activity to fine-tune the mean-field dynamo model. Specifically, we first present a comprehensive investigation into the effect of the stochastic parameters in a simplified α-Ω dynamo model. Through considering the manifold of marginal stability (the region of parameter space where the mean growth rate is zero), we show that stochastic fluctuations are conducive to dynamo action. Furthermore, by considering the cases of fluctuating alpha that are periodic and Gaussian coloured random noise with identical characteristic time-scales and fluctuating amplitudes, we show that the transition to dynamo is significantly facilitated for stochastic alpha with random noise. Furthermore, we show that probability density functions (PDFs) of the growth rate, magnetic field and magnetic energy can provide a wealth of useful information regarding the dynamo behaviour/intermittency. Finally, the precise statistical properties of the dynamo, such as temporal correlation and fluctuating amplitude, are found to depend on the distribution of the fluctuations of the stochastic parameters. We then use observations of solar activity to constrain parameters relating to the α effect in stochastic α-Ω nonlinear dynamo models. This is achieved through performing a comprehensive statistical comparison by computing PDFs of solar activity from observations and from our simulation of the mean-field dynamo model. The observational data that are used are the time history of solar activity inferred from C14 data over the past 11000 years on a long time scale and direct observations of the sunspot

  11. Semiconductor statistics

    CERN Document Server

    Blakemore, J S

    1962-01-01

    Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co

  12. Statistical Physics

    CERN Document Server

    Wannier, Gregory Hugh

    1966-01-01

    Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to the independent study and research of statistical physics.Designed for

  13. Statistics Clinic

    Science.gov (United States)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  14. Stupid statistics!

    Science.gov (United States)

    Tellinghuisen, Joel

    2008-01-01

    The method of least squares is probably the most powerful data analysis tool available to scientists. Toward a fuller appreciation of that power, this work begins with an elementary review of statistics fundamentals, and then progressively increases in sophistication as the coverage is extended to the theory and practice of linear and nonlinear least squares. The results are illustrated in application to data analysis problems important in the life sciences. The review of fundamentals includes the role of sampling and its connection to probability distributions, the Central Limit Theorem, and the importance of finite variance. Linear least squares are presented using matrix notation, and the significance of the key probability distributions-Gaussian, chi-square, and t-is illustrated with Monte Carlo calculations. The meaning of correlation is discussed, including its role in the propagation of error. When the data themselves are correlated, special methods are needed for the fitting, as they are also when fitting with constraints. Nonlinear fitting gives rise to nonnormal parameter distributions, but the 10% Rule of Thumb suggests that such problems will be insignificant when the parameter is sufficiently well determined. Illustrations include calibration with linear and nonlinear response functions, the dangers inherent in fitting inverted data (e.g., Lineweaver-Burk equation), an analysis of the reliability of the van't Hoff analysis, the problem of correlated data in the Guggenheim method, and the optimization of isothermal titration calorimetry procedures using the variance-covariance matrix for experiment design. The work concludes with illustrations on assessing and presenting results.
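
    The core of the linear case can be sketched in a few lines (an illustration under my own assumed straight-line model, not the author's code): solve the least-squares problem in matrix notation and verify by Monte Carlo that the fitted slope scatters as theory predicts.

      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 10.0, 25)
      X = np.column_stack([np.ones_like(x), x])            # design matrix for y = a + b*x
      a_true, b_true, sigma = 1.0, 0.5, 0.3

      slopes = []
      for _ in range(5000):                                # Monte Carlo over noise realizations
          y = a_true + b_true * x + rng.normal(0.0, sigma, x.size)
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # minimizes ||X beta - y||^2
          slopes.append(beta[1])

      # Theory: Var(b_hat) = sigma^2 * [(X^T X)^{-1}]_{11}
      sd_theory = sigma * np.sqrt(np.linalg.inv(X.T @ X)[1, 1])
      print(f"Monte Carlo slope sd: {np.std(slopes, ddof=1):.4f}, theory: {sd_theory:.4f}")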

  15. Image Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, Laura Jean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-08

    In large datasets, it is time consuming or even impossible to pick out interesting images. Our proposed solution is to find statistics to quantify the information in each image and use those to identify and pick out images of interest.
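
    The idea can be sketched as follows (a hypothetical illustration; the chosen statistics and synthetic images are mine, not those in the report): reduce each image to a few numbers and rank by them.

      import numpy as np

      rng = np.random.default_rng(2)
      # Stand-ins for a large dataset: three synthetic "images" of varying activity
      images = [rng.normal(0.0, s, size=(64, 64)) for s in (0.1, 0.5, 2.0)]

      def summarize(img):
          gy, gx = np.gradient(img)                       # simple information proxies
          return img.var(), (gx**2 + gy**2).mean()

      scores = [summarize(im)[0] for im in images]        # here: rank by variance
      ranking = sorted(range(len(images)), key=lambda i: -scores[i])
      print("images ranked by variance, most 'interesting' first:", ranking)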

  16. Accident Statistics

    Data.gov (United States)

    Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...

  17. CMS Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...

  18. WPRDC Statistics

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Data about the usage of the WPRDC site and its various datasets, obtained by combining Google Analytics statistics with information from the WPRDC's data portal.

  19. Multiparametric statistics

    CERN Document Server

    Serdobolskii, Vadim Ivanovich

    2007-01-01

    This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size but possibly much larger. In this sense, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach, in which the sample size increases along with the number of unknown parameters. This theory opens a way to the solution of central problems of multivariate statistics, which up until now have not been solved. Traditional statistical methods based on the idea of infinite sampling often break down in the solution of real problems and, depending on the data, can be inefficient, unstable and even inapplicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope that they will find a satisfactory solution. The mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...

  20. Gonorrhea Statistics

    Science.gov (United States)


  1. Reversible Statistics

    DEFF Research Database (Denmark)

    Tryggestad, Kjell

    2004-01-01

    The study aims to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work...... within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit...... in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved...

  2. Vital statistics

    CERN Document Server

    MacKenzie, Dana

    2004-01-01

    The drawbacks of using 19th-century mathematics in physics and astronomy are illustrated. To continue with the expansion of knowledge about the cosmos, scientists will have to come to terms with modern statistics. Some researchers have deliberately started importing techniques that are used in medical research. However, physicists need to identify the brand of statistics that will be suitable for them, and make a choice between the Bayesian and the frequentist approach. (Edited abstract).

  3. Funding source and primary outcome changes in clinical trials registered on ClinicalTrials.gov are associated with the reporting of a statistically significant primary outcome: a cross-sectional study [v2; ref status: indexed, http://f1000r.es/5bj

    Directory of Open Access Journals (Sweden)

    Sreeram V Ramagopalan

    2015-04-01

    Full Text Available Background: We and others have shown that a significant proportion of interventional trials registered on ClinicalTrials.gov have their primary outcomes altered after the listed study start and completion dates. The objectives of this study were to investigate whether changes made to primary outcomes are associated with the likelihood of reporting a statistically significant primary outcome on ClinicalTrials.gov. Methods: A cross-sectional analysis of all interventional clinical trials registered on ClinicalTrials.gov as of 20 November 2014 was performed. The main outcome was any change made to the initially listed primary outcome and the time of the change in relation to the trial start and end date. Findings: 13,238 completed interventional trials were registered with ClinicalTrials.gov that also had study results posted on the website. 2555 (19.3%) had one or more statistically significant primary outcomes. Statistical analysis showed that registration year, funding source and primary outcome change after trial completion were associated with reporting a statistically significant primary outcome. Conclusions: Funding source and primary outcome change after trial completion are associated with a statistically significant primary outcome report on ClinicalTrials.gov.

  4. Statistical optics

    Science.gov (United States)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.

  5. Statistical mechanics

    CERN Document Server

    Schwabl, Franz

    2006-01-01

    The completely revised new edition of the classical book on Statistical Mechanics covers the basic concepts of equilibrium and non-equilibrium statistical physics. In addition to a deductive approach to equilibrium statistics and thermodynamics based on a single hypothesis - the form of the microcanonical density matrix - this book treats the most important elements of non-equilibrium phenomena. Intermediate calculations are presented in complete detail. Problems at the end of each chapter help students to consolidate their understanding of the material. Beyond the fundamentals, this text demonstrates the breadth of the field and its great variety of applications. Modern areas such as renormalization group theory, percolation, stochastic equations of motion and their applications to critical dynamics, kinetic theories, as well as fundamental considerations of irreversibility, are discussed. The text will be useful for advanced students of physics and other natural sciences; a basic knowledge of quantum mechan...

  6. Statistical mechanics

    CERN Document Server

    Jana, Madhusudan

    2015-01-01

    Statistical Mechanics is self-sufficient and written in a lucid manner, keeping in mind the examination system of the universities. The need to study this subject and its relation to thermodynamics is discussed in detail. Starting from the Liouville theorem, statistical mechanics is gradually and thoroughly developed. All three types of statistical distribution functions are derived separately, with their range of applications and limitations. The non-interacting ideal Bose gas and Fermi gas are discussed thoroughly. Properties of liquid He-II and the corresponding models are depicted. White dwarfs and condensed-matter physics, and transport phenomena - thermal and electrical conductivity, the Hall effect, magnetoresistance, viscosity, diffusion, etc. - are discussed. A basic understanding of the Ising model is given to explain the phase transition. The book ends with detailed coverage of the method of ensembles (namely microcanonical, canonical and grand canonical) and their applications. Various numerical and conceptual problems ar...

  7. Statistical physics

    CERN Document Server

    Guénault, Tony

    2007-01-01

    In this revised and enlarged second edition of an established text Tony Guénault provides a clear and refreshingly readable introduction to statistical physics, an essential component of any first degree in physics. The treatment itself is self-contained and concentrates on an understanding of the physical ideas, without requiring a high level of mathematical sophistication. A straightforward quantum approach to statistical averaging is adopted from the outset (easier, the author believes, than the classical approach). The initial part of the book is geared towards explaining the equilibrium properties of a simple isolated assembly of particles. Thus, several important topics, for example an ideal spin-½ solid, can be discussed at an early stage. The treatment of gases gives full coverage to Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein statistics. Towards the end of the book the student is introduced to a wider viewpoint and new chapters are included on chemical thermodynamics, interactions in, for exam...

  8. Statistical Physics

    CERN Document Server

    Mandl, Franz

    1988-01-01

    The Manchester Physics Series General Editors: D. J. Sandiford; F. Mandl; A. C. Phillips Department of Physics and Astronomy, University of Manchester Properties of Matter B. H. Flowers and E. Mendoza Optics Second Edition F. G. Smith and J. H. Thomson Statistical Physics Second Edition F. Mandl Electromagnetism Second Edition I. S. Grant and W. R. Phillips Statistics R. J. Barlow Solid State Physics Second Edition J. R. Hook and H. E. Hall Quantum Mechanics F. Mandl Particle Physics Second Edition B. R. Martin and G. Shaw The Physics of Stars Second Edition A. C. Phillips Computing for Scient

  9. Statistical inference

    CERN Document Server

    Rohatgi, Vijay K

    2003-01-01

    Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth

  10. AP statistics

    CERN Document Server

    Levine-Wissing, Robin

    2012-01-01

    All Access for the AP® Statistics Exam Book + Web + Mobile Everything you need to prepare for the Advanced Placement® exam, in a study system built around you! There are many different ways to prepare for an Advanced Placement® exam. What's best for you depends on how much time you have to study and how comfortable you are with the subject matter. To score your highest, you need a system that can be customized to fit you: your schedule, your learning style, and your current level of knowledge. This book, and the online tools that come with it, will help you personalize your AP® Statistics prep

  11. Statistical mechanics

    CERN Document Server

    Davidson, Norman

    2003-01-01

    Clear and readable, this fine text assists students in achieving a grasp of the techniques and limitations of statistical mechanics. The treatment follows a logical progression from elementary to advanced theories, with careful attention to detail and mathematical development, and is sufficiently rigorous for introductory or intermediate graduate courses.Beginning with a study of the statistical mechanics of ideal gases and other systems of non-interacting particles, the text develops the theory in detail and applies it to the study of chemical equilibrium and the calculation of the thermody

  12. Statistical Computing

    Indian Academy of Sciences (India)

    inference and finite population sampling. Sudhakar Kunte. Elements of statistical computing are discussed in this series. ... which captain gets an option to decide whether to field first or bat first ... may of course not be fair, in the sense that the team which wins ... describe two methods of drawing a random number between 0 and 1.

  13. Statistical thermodynamics

    CERN Document Server

    Schrödinger, Erwin

    1952-01-01

    Nobel Laureate's brilliant attempt to develop a simple, unified standard method of dealing with all cases of statistical thermodynamics - classical, quantum, Bose-Einstein, Fermi-Dirac, and more.The work also includes discussions of Nernst theorem, Planck's oscillator, fluctuations, the n-particle problem, problem of radiation, much more.

  14. Statistics: a Bayesian perspective

    National Research Council Canada - National Science Library

    Berry, Donald A

    1996-01-01

    ...: it is the only introductory textbook based on Bayesian ideas, it combines concepts and methods, it presents statistics as a means of integrating data into the significant process, it develops ideas...

  15. Energy Statistics

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    For the years 1992 and 1993, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period. The tables and figures shown in this publication are: Changes in the volume of GNP and energy consumption; Coal consumption; Natural gas consumption; Peat consumption; Domestic oil deliveries; Import prices of oil; Price development of principal oil products; Fuel prices for power production; Total energy consumption by source; Electricity supply; Energy imports by country of origin in 1993; Energy exports by recipient country in 1993; Consumer prices of liquid fuels; Consumer prices of hard coal and natural gas, prices of indigenous fuels; Average electricity price by type of consumer; Price of district heating by type of consumer and Excise taxes and turnover taxes included in consumer prices of some energy sources

  16. Statistical Optics

    Science.gov (United States)

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I RIchard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

  17. Statistical utilitarianism

    OpenAIRE

    Pivato, Marcus

    2013-01-01

    We show that, in a sufficiently large population satisfying certain statistical regularities, it is often possible to accurately estimate the utilitarian social welfare function, even if we only have very noisy data about individual utility functions and interpersonal utility comparisons. In particular, we show that it is often possible to identify an optimal or close-to-optimal utilitarian social choice using voting rules such as the Borda rule, approval voting, relative utilitarianism, or a...

  18. Experimental statistics

    CERN Document Server

    Natrella, Mary Gibbons

    1963-01-01

    Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations

  19. Intervention for Maltreating Fathers: Statistically and Clinically Significant Change

    Science.gov (United States)

    Scott, Katreena L.; Lishak, Vicky

    2012-01-01

    Objective: Fathers are seldom the focus of efforts to address child maltreatment and little is currently known about the effectiveness of intervention for this population. To address this gap, we examined the efficacy of a community-based group treatment program for fathers who had abused or neglected their children or exposed their children to…

  20. The questioned p value: clinical, practical and statistical significance

    Directory of Open Access Journals (Sweden)

    Rosa Jiménez-Paneque

    2016-09-01

    Full Text Available Abstract: The use of the p-value and statistical significance has been called into question from the early 1980s up to the present day. Much has been discussed on the subject in the field of statistics and its applications, in particular in epidemiology and public health. The p-value and its equivalent, statistical significance, are moreover concepts that are difficult to assimilate for the many health professionals involved in one way or another in research applied to their areas of work. Nevertheless, their meaning should be clear in intuitive terms even though it rests on theoretical concepts from the field of mathematical statistics. This article attempts to present the p-value as a concept that applies to daily life, and is therefore intuitively simple, but whose proper use cannot be separated from theoretical and methodological elements of intrinsic complexity. The reasons behind the criticisms that the p-value and its isolated use have received are also explained intuitively, chiefly the need to separate statistical significance from clinical significance, and some of the remedies proposed for these problems are mentioned. The article ends by alluding to the current tendency to vindicate its use by appealing to its convenience in certain situations, and to the recent statement of the American Statistical Association on the matter.
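
    The everyday intuition the article appeals to can be made concrete with a toy computation (my example, not the article's): if a coin assumed fair lands heads 9 times in 10 tosses, the p-value answers how rare so extreme a result would be under that assumption.

      from scipy import stats

      # Two-sided exact binomial test of fairness after 9 heads in 10 tosses
      result = stats.binomtest(9, n=10, p=0.5)
      print(f"p = {result.pvalue:.4f}")   # about 0.021: data this extreme are rare IF the coin is fair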

  1. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance

    OpenAIRE

    Kramer, Karen L.; Veile, Amanda; Otárola-Castillo, Erik

    2016-01-01

    Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger s...

  2. Energy statistics

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

    World data from the United Nation's latest Energy Statistics Yearbook, first published in our last issue, are completed here. The 1984-86 data were revised and 1987 data added for world commercial energy production and consumption, world natural gas plant liquids production, world LP-gas production, imports, exports, and consumption, world residual fuel oil production, imports, exports, and consumption, world lignite production, imports, exports, and consumption, world peat production and consumption, world electricity production, imports, exports, and consumption (Table 80), and world nuclear electric power production

  3. Statistics 101 for Radiologists.

    Science.gov (United States)

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
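
    A minimal sketch of the diagnostic-test measures reviewed above, computed from a hypothetical 2x2 table (the counts are invented for illustration):

      # Rows: disease present/absent; columns: test positive/negative
      tp, fp, fn, tn = 90, 30, 10, 170
      total = tp + fp + fn + tn

      sensitivity = tp / (tp + fn)                 # P(test+ | disease)
      specificity = tn / (tn + fp)                 # P(test- | no disease)
      accuracy = (tp + tn) / total
      lr_pos = sensitivity / (1 - specificity)     # positive likelihood ratio

      # Cohen's kappa, treating test and reference standard as two "raters"
      pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2
      kappa = (accuracy - pe) / (1 - pe)
      print(sensitivity, specificity, round(lr_pos, 1), round(kappa, 2))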

  4. Time-varying Crash Risk

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Feunoua, Bruno; Jeon, Yoontae

    We estimate a continuous-time model with stochastic volatility and dynamic crash probability for the S&P 500 index and find that market illiquidity dominates other factors in explaining the stock market crash risk. While the crash probability is time-varying, its dynamics depend only weakly on re...

  5. Eestlased Karlovy Varys / J. R.

    Index Scriptorium Estoniae

    J. R.

    2007-01-01

    Ilmar Raag's feature film "Klass" competes in the "East of the West" competition programme of the 42nd Karlovy Vary International Film Festival, and Asko Kase's short feature "Zen läbi prügi" has been selected for the festival's side programme "Forum of Independents".

  6. Esmaklassiline Karlovy Vary / Jaanus Noormets

    Index Scriptorium Estoniae

    Noormets, Jaanus

    2007-01-01

    Ilmar Raag's feature film "Klass" won two awards at the 42nd Karlovy Vary International Film Festival: the "Special Mention" of the official side competition programme "East of the West" and the prize of the European arthouse cinema network Europa Cinemas. Also on the screening of Asko Kase's short film "Zen läbi prügi" and on the other award winners and participants.

  7. Optimistlik Karlovy Vary / Jaan Ruus

    Index Scriptorium Estoniae

    Ruus, Jaan, 1938-2017

    2007-01-01

    On the films awarded at the 42nd Karlovy Vary International Film Festival (jury chairman Peter Bart). The Crystal Globe went to the Icelandic-German "Jar City" (directed by Baltasar Kormákur); the Norwegian Bård Breien ("The Art of Negative Thinking") was named best director. The Australian Michael James Rowland's "Lucky Miles" received the jury's special prize.

  8. National Statistical Commission and Indian Official Statistics*

    Indian Academy of Sciences (India)

    IAS Admin

    a good collection of official statistics of that time. With more ... statistical agencies and institutions to provide details of statistical activities ... organising several training programmes ... successful completion of Indian Statistical Service examinations, the ...

  9. Basics of statistical physics

    CERN Document Server

    Müller-Kirsten, Harald J W

    2013-01-01

    Statistics links microscopic and macroscopic phenomena, and requires for this reason a large number of microscopic elements like atoms. The results are values of maximum probability or of averaging. This introduction to statistical physics concentrates on the basic principles, and attempts to explain these in simple terms supplemented by numerous examples. These basic principles include the difference between classical and quantum statistics, a priori probabilities as related to degeneracies, the vital aspect of indistinguishability as compared with distinguishability in classical physics, the differences between conserved and non-conserved elements, the different ways of counting arrangements in the three statistics (Maxwell-Boltzmann, Fermi-Dirac, Bose-Einstein), the difference between maximization of the number of arrangements of elements, and averaging in the Darwin-Fowler method. Significant applications to solids, radiation and electrons in metals are treated in separate chapters, as well as Bose-Eins...
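
    The three distributions named above differ only in a ±1 term in the mean occupation number; a compact sketch (units with k_B = 1 and illustrative parameters of my choosing):

      import numpy as np

      def occupation(E, T, mu=0.0, kind="MB"):
          """Mean occupation of a state at energy E and temperature T."""
          x = (E - mu) / T
          if kind == "MB":                    # Maxwell-Boltzmann
              return np.exp(-x)
          if kind == "FD":                    # Fermi-Dirac
              return 1.0 / (np.exp(x) + 1.0)
          if kind == "BE":                    # Bose-Einstein (requires E > mu)
              return 1.0 / (np.exp(x) - 1.0)
          raise ValueError(kind)

      E = np.array([0.5, 1.0, 2.0])
      for kind in ("MB", "FD", "BE"):
          print(kind, np.round(occupation(E, T=1.0, kind=kind), 3))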

  10. Testing Significance Testing

    Directory of Open Access Journals (Sweden)

    Joachim I. Krueger

    2018-04-01

    Full Text Available The practice of Significance Testing (ST) remains widespread in psychological science despite continual criticism of its flaws and abuses. Using simulation experiments, we address four concerns about ST, and for two of these we compare ST's performance with prominent alternatives. We find the following: First, the 'p' values delivered by ST predict the posterior probability of the tested hypothesis well under many research conditions. Second, low 'p' values support inductive inferences because they are most likely to occur when the tested hypothesis is false. Third, 'p' values track likelihood ratios without raising the uncertainties of relative inference. Fourth, 'p' values predict the replicability of research findings better than confidence intervals do. Given these results, we conclude that 'p' values may be used judiciously as a heuristic tool for inductive inference. Yet, 'p' values cannot bear the full burden of inference. We encourage researchers to be flexible in their selection and use of statistical methods.
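
    The flavor of these simulation experiments can be reproduced in miniature (my own toy version, with an arbitrary effect size and base rate): draw many "studies", half with a true null, and check what a small p-value implies about the hypothesis.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      n_obs, n_studies = 30, 4000
      null_true = rng.random(n_studies) < 0.5            # half the nulls are true
      effects = np.where(null_true, 0.0, 0.5)

      pvals = np.array([
          stats.ttest_1samp(rng.normal(effects[i], 1.0, n_obs), 0.0).pvalue
          for i in range(n_studies)
      ])

      low = pvals < 0.05
      # Low p-values occur mostly when the tested hypothesis is false
      print("P(null false | p < 0.05) =", round((~null_true & low).sum() / low.sum(), 3))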

  11. Genetic polymorphisms in varied environments.

    Science.gov (United States)

    Powell, J R

    1971-12-03

    Thirteen experimental populations of Drosophila willistoni were maintained in cages, in some of which the environments were relatively constant and in others varied. After 45 weeks, the populations were assayed by gel electrophoresis for polymorphisms at 22 protein loci. The average heterozygosity per individual and the average number of alleles per locus were higher in populations maintained in heterogeneous environments than in populations in more constant environments.

  12. Can a significance test be genuinely Bayesian?

    OpenAIRE

    Pereira, Carlos A. de B.; Stern, Julio Michael; Wechsler, Sergio

    2008-01-01

    The Full Bayesian Significance Test, FBST, is extensively reviewed. Its test statistic, a genuine Bayesian measure of evidence, is discussed in detail. Its behavior in some problems of statistical inference like testing for independence in contingency tables is discussed.

  13. The impact of reorienting cone-beam computed tomographic images in varied head positions on the coordinates of anatomical landmarks

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Hun; Jeong, Ho Gul; Hwang, Jae Joon; Lee, Jung Hee; Han, Sang Sun [Dept. of Oral and Maxillofacial Radiology, Yonsei University, College of Dentistry, Seoul (Korea, Republic of)

    2016-06-15

    The aim of this study was to compare the coordinates of anatomical landmarks on cone-beam computed tomographic (CBCT) images in varied head positions before and after reorientation using image analysis software. CBCT images were taken in a normal position and four varied head positions using a dry skull marked with 3 points where gutta percha was fixed. In each of the five radiographic images, reference points were set, 20 anatomical landmarks were identified, and each set of coordinates was calculated. Coordinates in the images from the normally positioned head were compared with those in the images obtained from varied head positions using statistical methods. Post-reorientation coordinates calculated using a three-dimensional image analysis program were also compared to the reference coordinates. In the original images, statistically significant differences were found between coordinates in the normal-position and varied-position images. However, post-reorientation, no statistically significant differences were found between coordinates in the normal-position and varied-position images. The changes in head position impacted the coordinates of the anatomical landmarks in three-dimensional images. However, reorientation using image analysis software allowed accurate superimposition onto the reference positions.

  14. Whither Statistics Education Research?

    Science.gov (United States)

    Watson, Jane

    2016-01-01

    This year marks the 25th anniversary of the publication of a "National Statement on Mathematics for Australian Schools", which was the first curriculum statement this country had including "Chance and Data" as a significant component. It is hence an opportune time to survey the history of the related statistics education…

  15. Varying Constants, Gravitation and Cosmology

    Directory of Open Access Journals (Sweden)

    Jean-Philippe Uzan

    2011-03-01

    Full Text Available Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.

  16. Tumor significant dose

    International Nuclear Information System (INIS)

    Supe, S.J.; Nagalaxmi, K.V.; Meenakshi, L.

    1983-01-01

    In the practice of radiotherapy, various concepts like NSD, CRE, TDF, and BIR are used to evaluate the biological effectiveness of treatment schedules on normal tissues. This has been accepted because the tolerance of normal tissue is the limiting factor in the treatment of cancers. At present, when various schedules are tried, attention is therefore paid only to the biological damage to the normal tissues, and it is expected that the damage to the cancerous tissue will be extensive enough to control the cancer. An attempt is made in the present work to evaluate the concept of tumor significant dose (TSD), which represents the damage to the cancerous tissue. Strandquist, in the analysis of a large number of cases of squamous cell carcinoma, found that for 5-fraction/week treatment, the total dose required to bring about the same damage to the cancerous tissue is proportional to T^(-0.22), where T is the overall time over which the dose is delivered. Using this finding, the TSD was defined as D × N^(-p) × T^(-q), where D is the total dose, N the number of fractions, T the overall time, and p and q exponents to be suitably chosen. The values of p and q are adjusted such that p + q ≤ 0.24, with p varying from 0.0 to 0.24 and q from 0.0 to 0.22. Cases of cancer of the cervix uteri treated between 1978 and 1980 in the V. N. Cancer Centre, Kuppuswamy Naidu Memorial Hospital, Coimbatore, India were analyzed on the basis of these formulations. These data, coupled with clinical experience, were used to choose a formula for the TSD. Further, the dose schedules used in the British Institute of Radiology fractionation studies were also used to propose that the tumor significant dose is represented by D × N^(-0.18) × T^(-0.06)
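
    For a worked example of the proposed formula TSD = D × N^(-0.18) × T^(-0.06), with schedule numbers invented for illustration:

      def tsd(D, N, T, p=0.18, q=0.06):
          """Tumor significant dose with the exponents proposed above."""
          return D * N**(-p) * T**(-q)

      # Two schedules delivering 60 Gy: 30 fractions over 42 days
      # versus 20 fractions over 28 days
      print(round(tsd(60.0, 30, 42), 1))   # ~26.0
      print(round(tsd(60.0, 20, 28), 1))   # ~28.7 -- the shorter schedule is "hotter"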

  17. Learning Predictive Statistics: Strategies and Brain Mechanisms.

    Science.gov (United States)

    Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe

    2017-08-30

    When immersed in a new environment, we are challenged to decipher initially incomprehensible streams of sensory information. However, quite rapidly, the brain finds structure and meaning in these incoming signals, helping us to predict and prepare ourselves for future actions. This skill relies on extracting the statistics of event streams in the environment that contain regularities of variable complexity from simple repetitive patterns to complex probabilistic combinations. Here, we test the brain mechanisms that mediate our ability to adapt to the environment's statistics and predict upcoming events. By combining behavioral training and multisession fMRI in human participants (male and female), we track the corticostriatal mechanisms that mediate learning of temporal sequences as they change in structure complexity. We show that learning of predictive structures relates to individual decision strategy; that is, selecting the most probable outcome in a given context (maximizing) versus matching the exact sequence statistics. These strategies engage distinct human brain regions: maximizing engages dorsolateral prefrontal, cingulate, sensory-motor regions, and basal ganglia (dorsal caudate, putamen), whereas matching engages occipitotemporal regions (including the hippocampus) and basal ganglia (ventral caudate). Our findings provide evidence for distinct corticostriatal mechanisms that facilitate our ability to extract behaviorally relevant statistics to make predictions. SIGNIFICANCE STATEMENT Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. Past work has studied how humans identify repetitive patterns and associative pairings. However, the natural environment contains regularities that vary in complexity from simple repetition to complex probabilistic combinations. Here, we combine behavior and multisession fMRI to track the brain mechanisms that mediate our ability to adapt to
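
    The two strategies are easy to contrast in simulation (a sketch with my own parameters, not the study's task): for a binary stream where one outcome occurs 80% of the time, maximizing beats matching in expected accuracy.

      import numpy as np

      rng = np.random.default_rng(4)
      p_a = 0.8
      outcomes = rng.random(10_000) < p_a                 # True = outcome A

      maximizing = np.ones_like(outcomes)                 # always predict A
      matching = rng.random(outcomes.size) < p_a          # predict A 80% of the time

      print("maximizing:", (maximizing == outcomes).mean())   # ~0.80 = p
      print("matching  :", (matching == outcomes).mean())     # ~0.68 = p^2 + (1-p)^2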

  18. Regionally-varying and regionally-uniform electricity pricing policies compared across four usage categories

    International Nuclear Information System (INIS)

    Cho, Seong-Hoon; Kim, Taeyoung; Kim, Hyun Jae; Park, Kihyun; Roberts, Roland K.

    2015-01-01

    The objective of our research is to predict how electricity demand varies spatially between status quo regionally-uniform electricity pricing and hypothetical regionally-varying electricity pricing across usage categories. We summarize the empirical results of a case study of electricity demand in South Korea with three key findings and their related implications. First, the price elasticities of electricity demand differ across usage categories. Specifically, electricity demands for manufacturing and retail uses are price inelastic and close to unit elastic, respectively, while those for agricultural and residential uses are not statistically significant. This information is important in designing energy policy, because higher electricity prices could reduce electricity demands for manufacturing and retail uses, resulting in slower growth in those sectors. Second, spatial spillovers in electricity demand vary across uses. Understanding the spatial structure of electricity demand provides useful information to energy policy makers for anticipating changes in demand across regions via regionally-varying electricity pricing for different uses. Third, simulation results suggest that spatial variations among electricity demands by usage category under a regionally-varying electricity-pricing policy differ from those under a regionally-uniform electricity-pricing policy. Differences in spatial changes between the policies provide information for developing a realistic regionally-varying electricity-pricing policy according to usage category. - Highlights: • We compare regionally-varying and regionally-uniform electricity pricing policies. • We summarize empirical results of a case study of electricity demand in South Korea. • We confirm that spatial spillovers in electricity demands vary across different uses. • We find positive spatial spillovers in the manufacturing and residential sectors. • Our methods help policy makers evaluate regionally-varying pricing
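
    Price elasticities of the kind reported above are typically read off a log-log demand regression; a minimal sketch with synthetic data (mine, not the study's model, which also includes spatial terms):

      import numpy as np

      rng = np.random.default_rng(6)
      price = rng.uniform(50, 150, 200)
      true_elasticity = -0.7                                # inelastic demand
      demand = 1e4 * price**true_elasticity * rng.lognormal(0.0, 0.05, 200)

      # The slope of ln(demand) on ln(price) estimates the elasticity
      slope, intercept = np.polyfit(np.log(price), np.log(demand), 1)
      print(f"estimated price elasticity: {slope:.2f}")     # close to -0.7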

  19. Varying prior information in Bayesian inversion

    International Nuclear Information System (INIS)

    Walker, Matthew; Curtis, Andrew

    2014-01-01

    Bayes' rule is used to combine likelihood and prior probability distributions. The former represents knowledge derived from new data, the latter represents pre-existing knowledge; the Bayesian combination is the so-called posterior distribution, representing the resultant new state of knowledge. While varying the likelihood due to differing data observations is common, there are also situations where the prior distribution must be changed or replaced repeatedly. For example, in mixture density neural network (MDN) inversion, using current methods the neural network employed for inversion needs to be retrained every time prior information changes. We develop a method of prior replacement to vary the prior without re-training the network. Thus the efficiency of MDN inversions can be increased, typically by orders of magnitude when applied to geophysical problems. We demonstrate this for the inversion of seismic attributes in a synthetic subsurface geological reservoir model. We also present results which suggest that prior replacement can be used to control the statistical properties (such as variance) of the final estimate of the posterior in more general (e.g., Monte Carlo based) inverse problem solutions. (paper)
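
    On a discrete grid the prior-replacement idea reduces to reusing a fixed likelihood while swapping priors (a loose, minimal analogue of my own; the paper's MDN machinery is not reproduced here):

      import numpy as np
      from scipy import stats

      theta = np.linspace(-3.0, 3.0, 601)                      # parameter grid
      likelihood = stats.norm.pdf(1.2, loc=theta, scale=0.8)   # computed once from the data

      for prior_sd in (0.5, 1.0, 2.0):                         # prior information varies
          prior = stats.norm.pdf(theta, 0.0, prior_sd)
          post = likelihood * prior
          post /= np.trapz(post, theta)                        # normalized posterior
          print(f"prior sd {prior_sd}: posterior mean {np.trapz(theta * post, theta):.3f}")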

  20. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

    A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L^p extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L^p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.

  1. Estrelas variáveis

    OpenAIRE

    Viana, Sérgio Manuel de Oliveira

    2001-01-01

    Observing the night sky is a practice that dates back to Antiquity. From then on, and for a long time, the stars were thought to keep a constant brightness. So it remained until the 16th century, when David Fabricius observed a star whose brightness varied periodically. Two centuries later, John Goodricke discovered a second such star, and with the development of observing instruments this set has been greatly enlarged and today includes the Sun. The variation in the brightness of variable stars allows...

  2. Worry, Intolerance of Uncertainty, and Statistics Anxiety

    Science.gov (United States)

    Williams, Amanda S.

    2013-01-01

    Statistics anxiety is a problem for most graduate students. This study investigates the relationship between intolerance of uncertainty, worry, and statistics anxiety. Intolerance of uncertainty was significantly related to worry, and worry was significantly related to three types of statistics anxiety. Six types of statistics anxiety were…

  3. Childhood Cancer Statistics

    Science.gov (United States)


  4. Statistics for experimentalists

    CERN Document Server

    Cooper, B E

    2014-01-01

    Statistics for Experimentalists aims to provide experimental scientists with a working knowledge of statistical methods and search approaches to the analysis of data. The book first elaborates on probability and continuous probability distributions. Discussions focus on properties of continuous random variables and normal variables, independence of two random variables, central moments of a continuous distribution, prediction from a normal distribution, binomial probabilities, and multiplication of probabilities and independence. The text then examines estimation and tests of significance. Topics include estimators and estimates, expected values, minimum variance linear unbiased estimators, sufficient estimators, methods of maximum likelihood and least squares, and the test of significance method. The manuscript ponders on distribution-free tests, Poisson process and counting problems, correlation and function fitting, balanced incomplete randomized block designs and the analysis of covariance, and experiment...

  5. Mutagenicity potential of commercial broth cubes at varying concentrations

    International Nuclear Information System (INIS)

    De Torres, Nelson Velasquez; Talain, Augusto Nicolas.

    1997-01-01

    Today, there is growing concern about the mutagenicity potential of environmental chemical systems. Environmental chemicals such as pesticides, food additives, synthetic drugs, and water and atmospheric pollutants are possible causes of mutagenic activity. Meat products and some meat flavorings were also reported to exhibit mutagenic activity. Since these products are a normal part of the daily human diet, there is a need for extensive studies regarding the possible mutagenic activity associated with them. This study aimed to evaluate the mutagenicity potential of commercial broth cubes at varying concentrations. The researchers sought to answer the following questions: 1. Do beef, pork and chicken broth cubes exhibit mutagenic activity? 2. Are there significant differences in the mutagenic activity among the three samples? 3. Are there significant differences between the mutagenic activity exhibited by each of the samples and that of Mitomycin-C (the positive control)? 4. Which of the samples at each specific concentration exhibits the greatest mutagenic activity? Three specific concentrations of beef, pork and chicken broth cubes were prepared and their mutagenicity potential was evaluated using the micronucleus test. The formation of micronucleated polychromatic and micronucleated normochromatic erythrocytes in bone marrow cells of mice treated with these samples was detected using a Carl Zeiss photomicroscope. The statistical tool used to test the validity of the null hypothesis was analysis of variance with a randomized complete block design, together with an independent t-test. (author)
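
    The reported comparison can be sketched with invented counts (the study's actual data are not reproduced here, and a one-way ANOVA stands in for the randomized complete block design):

      from scipy import stats

      # Hypothetical micronucleated-cell counts per animal, by sample
      beef    = [4, 6, 5, 7, 5]
      pork    = [5, 4, 6, 5, 6]
      chicken = [9, 8, 10, 7, 9]

      f_stat, p_value = stats.f_oneway(beef, pork, chicken)
      print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # small p: sample means differ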

  6. Mutagenicity potential of commercial broth cubes at varying concentrations

    Energy Technology Data Exchange (ETDEWEB)

    De Torres, Nelson Velasquez; Talain, Augusto Nicolas

    1998-12-31

    Today, there is growing concern about the mutagenicity potential of environmental chemical systems. Environmental chemicals such as pesticides, food additives, synthetic drugs, and water and atmospheric pollutants are possible causes of mutagenic activity. Meat products and some meat flavorings were also reported to exhibit mutagenic activity. Since these products are a normal part of the daily human diet, there is a need for extensive studies regarding the possible mutagenic activity associated with them. This study aimed to evaluate the mutagenicity potential of commercial broth cubes at varying concentrations. The researchers sought to answer the following questions: 1. Do beef, pork and chicken broth cubes exhibit mutagenic activity? 2. Are there significant differences in the mutagenic activity among the three samples? 3. Are there significant differences between the mutagenic activity exhibited by each of the samples and that of Mitomycin-C (the positive control)? 4. Which of the samples at each specific concentration exhibits the greatest mutagenic activity? Three specific concentrations of beef, pork and chicken broth cubes were prepared and their mutagenicity potential was evaluated using the micronucleus test. The formation of micronucleated polychromatic and micronucleated normochromatic erythrocytes in bone marrow cells of mice treated with these samples was detected using a Carl Zeiss photomicroscope. The statistical tool used to test the validity of the null hypothesis was analysis of variance with a randomized complete block design, together with an independent t-test. (author). 28 refs., 9 figs., 26 tabs.

  7. Statistical modeling of Earth's plasmasphere

    Science.gov (United States)

    Veibell, Victoir

    The behavior of plasma near Earth's geosynchronous orbit is of vital importance to both satellite operators and magnetosphere modelers because it also has a significant influence on energy transport, ion composition, and induced currents. The system is highly complex in both time and space, making the forecasting of extreme space weather events difficult. This dissertation examines the behavior and statistical properties of plasma mass density near geosynchronous orbit by using both linear and nonlinear models, as well as epoch analyses, in an attempt to better understand the physical processes that precipitate and drive its variations. It is shown that while equatorial mass density does vary significantly on an hourly timescale when a drop in the disturbance storm time index (Dst) was observed, it does not vary significantly between the day of a Dst event onset and the day immediately following. It is also shown that increases in equatorial mass density were not, on average, preceded or followed by any significant change in the examined solar wind or geomagnetic variables, including Dst, despite prior results that considered a few selected events and found a notable influence. It is verified that equatorial mass density and solar activity, via the F10.7 index, have a strong correlation, which is stronger over longer timescales such as 27 days than it is over an hourly timescale. It is then shown that this connection seems to affect the behavior of equatorial mass density most during periods of strong solar activity, leading to large mass density reactions to Dst drops for high values of F10.7. It is also shown that equatorial mass density behaves differently before and after events based on the value of F10.7 at the onset of an equatorial mass density event or a Dst event, and that a southward interplanetary magnetic field at onset leads to slowed mass density growth after event onset. These behavioral differences provide insight into how solar and geomagnetic

  8. MQSA National Statistics

    Science.gov (United States)

    ... Standards Act and Program MQSA Insights MQSA National Statistics ... but should level off with time. Archived Scorecard Statistics 2018 Scorecard Statistics 2017 Scorecard Statistics 2016 Scorecard ...

  9. State Transportation Statistics 2014

    Science.gov (United States)

    2014-12-15

    The Bureau of Transportation Statistics (BTS) presents State Transportation Statistics 2014, a statistical profile of transportation in the 50 states and the District of Columbia. This is the 12th annual edition of State Transportation Statistics, a ...

  10. Significance evaluation in factor graphs

    DEFF Research Database (Denmark)

    Madsen, Tobias; Hobolth, Asger; Jensen, Jens Ledet

    2017-01-01

    In genomics, and with the multiple-testing issues accompanying large-scale analyses, accurate significance evaluation is of great importance. We here address the problem of evaluating the statistical significance of observations from factor graph models. Results: Two novel numerical approximations for the evaluation of statistical significance are presented: first, a method using importance sampling; second, a method based on a saddlepoint approximation. We develop algorithms to efficiently compute the approximations and compare them to naive sampling and the normal approximation. The individual merits of the methods are analysed. Conclusions: The applicability of saddlepoint approximation and importance sampling is demonstrated on known models in the factor graph framework. Using the two methods we can substantially reduce computational cost without compromising accuracy. This contribution allows analyses of large datasets.
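
    As a generic illustration of the first idea (importance sampling for significance evaluation, not the authors' factor-graph algorithm), a small tail probability can be estimated by sampling from a proposal shifted into the tail and reweighting:

        # Importance-sampling estimate of p = P(X >= t) for X ~ N(0,1), compared with
        # naive Monte Carlo. A generic sketch of the technique named in the record above.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        t, n = 5.0, 100_000

        x = rng.standard_normal(n)
        print("naive MC:", np.mean(x >= t))        # almost always 0 for t = 5

        y = rng.normal(t, 1.0, n)                  # proposal N(t, 1) centered in the tail
        w = norm.pdf(y) / norm.pdf(y, loc=t)       # importance weights p(y) / q(y)
        print("importance sampling:", np.mean(w * (y >= t)))
        print("exact:", norm.sf(t))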

  11. Truths, lies, and statistics.

    Science.gov (United States)

    Thiese, Matthew S; Walker, Skyler; Lindsey, Jenna

    2017-10-01

    Distribution of valuable research discoveries is needed for the continual advancement of patient care. Publication of, and subsequent reliance on, false study results would be detrimental to patient care. Unfortunately, research misconduct may originate from many sources. While there is evidence of ongoing research misconduct in all its forms, it is challenging to identify the actual occurrence of research misconduct, and this is especially true for misconduct in clinical trials. Research misconduct is challenging to measure and there are few studies reporting the prevalence or underlying causes of research misconduct among biomedical researchers. Reported prevalence estimates of misconduct are probably underestimates, and range from 0.3% to 4.9%. There have been efforts to measure the prevalence of research misconduct; however, the relatively few published studies are not freely comparable because of varying characterizations of research misconduct and the methods used for data collection. There are some signs which may point to an increased possibility of research misconduct; however, there is a need for continued self-policing by biomedical researchers. There are existing resources to assist in ensuring appropriate statistical methods and preventing other types of research fraud. These include the "Statistical Analyses and Methods in the Published Literature" (SAMPL) guidelines, which help scientists determine the appropriate way of reporting various statistical methods; the "Strengthening Analytical Thinking for Observational Studies" (STRATOS) initiative, which emphasizes the execution and interpretation of results; and the Committee on Publication Ethics (COPE), which was created in 1997 to deliver guidance about publication ethics. COPE has a sequence of views and strategies grounded in the values of honesty and accuracy.

  12. Renyi statistics in equilibrium statistical mechanics

    International Nuclear Information System (INIS)

    Parvan, A.S.; Biro, T.S.

    2010-01-01

    Renyi statistics in the canonical and microcanonical ensembles is examined, both in general and in particular for the ideal gas. In the microcanonical ensemble, Renyi statistics is equivalent to Boltzmann-Gibbs statistics. Using exact analytical results for the ideal gas, it is shown that in the canonical ensemble, taking the thermodynamic limit, Renyi statistics is also equivalent to Boltzmann-Gibbs statistics. Furthermore, it satisfies the requirements of equilibrium thermodynamics, i.e., the thermodynamic potential of the statistical ensemble is a first-degree homogeneous function of its extensive variables of state. We conclude that Renyi statistics arrives at the same thermodynamical relations as those stemming from Boltzmann-Gibbs statistics in this limit.
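
    For reference, and as a standard definition rather than a result of the paper, the Renyi entropy of order q recovers the Boltzmann-Gibbs-Shannon entropy in the limit q → 1:

        S_q = \frac{1}{1-q} \ln \sum_i p_i^q , \qquad \lim_{q \to 1} S_q = - \sum_i p_i \ln p_i .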

  13. Sampling, Probability Models and Statistical Reasoning Statistical

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 5. Sampling, Probability Models and Statistical Reasoning Statistical Inference. Mohan Delampady V R Padmawar. General Article Volume 1 Issue 5 May 1996 pp 49-58 ...

  14. Statistics Using Just One Formula

    Science.gov (United States)

    Rosenthal, Jeffrey S.

    2018-01-01

    This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc.) from that one formula. It is argued that this approach will…
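
    The record does not reproduce the formula itself; one standard margin-of-error expression of the kind such a course would build on, for a sample proportion \hat{p} from n observations, is

        \mathrm{MOE} = z^* \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \;\le\; \frac{z^*}{2\sqrt{n}} \approx \frac{1}{\sqrt{n}} \qquad (z^* \approx 1.96),

    from which confidence intervals \hat{p} \pm \mathrm{MOE} and simple significance tests follow.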

  15. Statistics Anxiety among Postgraduate Students

    Science.gov (United States)

    Koh, Denise; Zawi, Mohd Khairi

    2014-01-01

    Most postgraduate programmes that have research components require students to take at least one course in research statistics. Not all postgraduate programmes are science-based; there are a significant number of postgraduate students from the social sciences who will be taking statistics courses as they try to complete their…

  16. Bose and his statistics

    International Nuclear Information System (INIS)

    Venkataraman, G.

    1992-01-01

    Treating the radiation gas as a classical gas, Einstein derived Planck's law of radiation by considering the dynamic equilibrium between atoms and radiation. Dissatisfied with this treatment, S.N. Bose derived Planck's law in another, original way. He treated the problem in full generality: he counted how many cells were available for the photon gas in phase space and distributed the photons into these cells. In this manner of distribution, there were three radically new ideas: the indistinguishability of particles, the spin of the photon (with only two possible orientations) and the non-conservation of photon number. This gave rise to the new discipline of quantum statistical mechanics. The physics underlying Bose's discovery, its significance and its role in the development of the concept of the ideal gas, the spin-statistics theorem and spin particles are described. The book has been written in simple and direct language, in an informal style, aiming to stimulate the curiosity of the reader. (M.G.B.)

  17. Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm

    KAUST Repository

    Jin, Ick Hoon

    2013-10-01

    The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and to model degeneracy. Existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, while SAMCMC succeeds; for nondegenerate ERGMs, SAMCMC works as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
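
    A minimal sketch of the stochastic-approximation idea underlying such algorithms, using an edge-count-only ERGM (where dyads are independent and the exact MLE is known, so convergence can be checked) rather than the article's full varying truncation SAMCMC:

        # Robbins-Monro stochastic approximation for a toy ERGM whose only statistic
        # is the edge count. The update follows the log-likelihood gradient
        # s(x_obs) - E_theta[s(X)], estimated by simulating from the current model.
        import numpy as np

        rng = np.random.default_rng(2)
        n_nodes = 30
        n_dyads = n_nodes * (n_nodes - 1) // 2
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        s_obs = rng.binomial(n_dyads, sigmoid(-1.0))   # "observed" edge count, theta = -1

        theta = 0.0
        for t in range(1, 2001):
            s_sim = rng.binomial(n_dyads, sigmoid(theta))      # simulate at current theta
            theta += (5.0 / t) * (s_obs - s_sim) / n_dyads     # gain a_t = 5/t (toy choice)
        print("SA estimate:", theta)
        print("exact MLE:  ", np.log(s_obs / (n_dyads - s_obs)))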

  18. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    Directory of Open Access Journals (Sweden)

    Mu Zhou

    2014-01-01

    This paper studies the statistical errors of fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve efficient and reliable location-based services (LBSs) as well as ubiquitous context-awareness in Wi-Fi environments, much attention has to be paid to highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are discussed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions for the statistical errors of RADAR neighbor matching localization can be an effective tool for exploring alternative deployments of fingerprint-based neighbor matching localization systems in the future.
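
    The linear logarithmic strength model and the neighbor matching step analysed above can be sketched as follows. The geometry, path-loss parameters, and noise level are hypothetical, and the paper's closed-form error expressions are not reproduced here.

        # Fingerprint localization under a log-distance RSS model:
        #   RSS(d) = P0 - 10 * n * log10(d / d0) + noise.
        import numpy as np

        rng = np.random.default_rng(3)
        P0, n_exp, d0 = -40.0, 3.0, 1.0                       # dBm at d0, path-loss exponent
        aps = np.array([[0, 0], [10, 0], [0, 10], [10, 10]])  # access point positions (m)

        def rss(pos, sigma=0.0):
            d = np.maximum(np.linalg.norm(aps - pos, axis=1), d0)
            return P0 - 10 * n_exp * np.log10(d / d0) + rng.normal(0, sigma, len(aps))

        # Offline phase: noise-free fingerprints at linearly spaced reference points.
        rps = np.array([[x, y] for x in range(0, 11, 2) for y in range(0, 11, 2)])
        db = np.array([rss(rp) for rp in rps])

        # Online phase: match a noisy measurement to its nearest neighbor in RSS space.
        target = np.array([4.3, 6.7])
        best = np.argmin(np.linalg.norm(db - rss(target, sigma=2.0), axis=1))
        print("estimated position:", rps[best], " true position:", target)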

  19. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    Science.gov (United States)

    Tian, Zengshan; Xu, Kunjie; Yu, Xiang

    2014-01-01

    This paper studies the statistical errors of fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve efficient and reliable location-based services (LBSs) as well as ubiquitous context-awareness in Wi-Fi environments, much attention has to be paid to highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are discussed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions for the statistical errors of RADAR neighbor matching localization can be an effective tool for exploring alternative deployments of fingerprint-based neighbor matching localization systems in the future. PMID:24683349

  20. Atmospheric particle formation in spatially and temporally varying conditions

    Energy Technology Data Exchange (ETDEWEB)

    Lauros, J.

    2011-07-01

    Atmospheric particles affect the radiation balance of the Earth and thus the climate. New particle formation from nucleation has been observed in diverse atmospheric conditions, but the actual formation path is still unknown. The prevailing conditions can be exploited to evaluate proposed formation mechanisms. This study aims to improve our understanding of new particle formation from the viewpoint of atmospheric conditions. The role of atmospheric conditions in particle formation was studied by atmospheric measurements, theoretical model simulations and simulations based on observations. Two separate column models were further developed for aerosol and chemical simulations. Model simulations allowed us to expand the study from local conditions to varying conditions in the atmospheric boundary layer, while the long-term measurements described especially the characteristic mean conditions associated with new particle formation. The observations show a statistically significant difference in meteorological and background aerosol conditions between observed event and non-event days. New particle formation above boreal forest is associated with strong convective activity, low humidity and a low condensation sink. The probability of a particle formation event is predicted by an equation formulated for upper boundary layer conditions. The model simulations call into question whether kinetic sulphuric acid induced nucleation is the primary particle formation mechanism in the presence of organic vapours. At the same time, the simulations show that ignoring spatial and temporal variation in new particle formation studies may lead to faulty conclusions. On the other hand, the theoretical simulations indicate that short-scale variations in temperature and humidity are unlikely to have a significant effect on the mean binary water sulphuric acid nucleation rate. The study emphasizes the significance of mixing and fluxes in particle formation studies, especially in the atmospheric boundary layer. The further

  1. Statistical and theoretical research

    International Nuclear Information System (INIS)

    Anon.

    1983-01-01

    Significant accomplishments include the creation of field designs to detect population impacts, new census procedures for small mammals, and methods for designing studies to determine where and how much of a contaminant is present over certain landscapes. A book describing these statistical methods is currently being written and will apply to a variety of environmental contaminants, including radionuclides. PNL scientists also have devised an analytical method for predicting the success of field experiments on wild populations. Two highlights of current research are the discoveries that populations of free-roaming horse herds can double in four years and that grizzly bear populations may be substantially smaller than once thought. As stray horses become a public nuisance at DOE and other large Federal sites, it is important to determine their number. Similar statistical theory can be readily applied to other situations where wild animals are a problem of concern to other government agencies. Another book, on statistical aspects of radionuclide studies, is written specifically for researchers in radioecology

  2. The foundations of statistics

    CERN Document Server

    Savage, Leonard J

    1972-01-01

    Classic analysis of the foundations of statistics and development of personal probability, one of the greatest controversies in modern statistical thought. Revised edition. Calculus, probability, statistics, and Boolean algebra are recommended.

  3. State Transportation Statistics 2010

    Science.gov (United States)

    2011-09-14

    The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2010, a statistical profile of transportation in the 50 states and the District of Col...

  4. State Transportation Statistics 2012

    Science.gov (United States)

    2013-08-15

    The Bureau of Transportation Statistics (BTS), a part of the U.S. Department of Transportation's (USDOT) Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2012, a statistical profile of transportation ...

  5. Adrenal Gland Tumors: Statistics

    Science.gov (United States)

    ... Adrenal Gland Tumor: Statistics, approved by the Cancer.Net Editorial Board, 03/ ... primary adrenal gland tumor is very uncommon. Exact statistics are not available for this type of tumor ...

  6. State transportation statistics 2009

    Science.gov (United States)

    2009-01-01

    The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2009, a statistical profile of transportation in the 50 states and the District ...

  7. State Transportation Statistics 2011

    Science.gov (United States)

    2012-08-08

    The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2011, a statistical profile of transportation in the 50 states and the District of Col...

  8. Neuroendocrine Tumor: Statistics

    Science.gov (United States)

    ... Neuroendocrine Tumor: Statistics, approved by the Cancer.Net Editorial Board, 01/ ... the body. It is important to remember that statistics on the survival rates for people with a ...

  9. State Transportation Statistics 2013

    Science.gov (United States)

    2014-09-19

    The Bureau of Transportation Statistics (BTS), a part of the U.S. Department of Transportation's (USDOT) Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2013, a statistical profile of transportatio...

  10. BTS statistical standards manual

    Science.gov (United States)

    2005-10-01

    The Bureau of Transportation Statistics (BTS), like other federal statistical agencies, establishes professional standards to guide the methods and procedures for the collection, processing, storage, and presentation of statistical data. Standards an...

  11. A varying-α brane world cosmology

    International Nuclear Information System (INIS)

    Youm, Donam

    2001-08-01

    We study brane world cosmology in the RS2 model, where the electric charge varies with time in the manner described by the varying fine-structure constant theory of Bekenstein. We map such a varying electric charge cosmology to the dual variable-speed-of-light cosmology by changing the system of units. We comment on the cosmological implications of such models. (author)

  12. [Comment on] Statistical discrimination

    Science.gov (United States)

    Chinn, Douglas

    In the December 8, 1981, issue of Eos, a news item reported the conclusion of a National Research Council study that sexual discrimination against women with Ph.D.'s exists in the field of geophysics. Basically, the item reported that even when allowances are made for motherhood the percentage of female Ph.D.'s holding high university and corporate positions is significantly lower than the percentage of male Ph.D.'s holding the same types of positions. The sexual discrimination conclusion, based only on these statistics, assumes that there are no basic psychological differences between men and women that might cause different populations in the employment group studied. Therefore, the reasoning goes, after taking into account possible effects from differences related to anatomy, such as women stopping their careers in order to bear and raise children, the statistical distributions of positions held by male and female Ph.D.'s ought to be very similar to one another. Any significant differences between the distributions must be caused primarily by sexual discrimination.

  13. Clinical significance of neonatal menstruation.

    Science.gov (United States)

    Brosens, Ivo; Benagiano, Giuseppe

    2016-01-01

    Past studies have clearly shown the existence of a spectrum of endometrial progesterone responses in neonatal endometrium, varying from proliferation to full decidualization with menstrual-like shedding. The bleeding represents, as in adult menstruation, a progesterone withdrawal bleeding. Today, this bleeding is largely neglected and considered an uneventful episode of no clinical significance. Yet clinical studies have linked the risk of bleeding to a series of events indicating fetal distress. The potential link between the progesterone response and major adolescent disorders needs to be investigated by prospective studies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  15. Statistics in Schools

    Science.gov (United States)

    Statistics in Schools: Educate your students about the value and everyday use of statistics. The Statistics in Schools program provides resources for teaching and learning with real life data. Explore the site for standards-aligned, classroom-ready activities.

  16. Transport Statistics - Transport - UNECE

    Science.gov (United States)

    UNECE Transport Areas of Work: Transport Statistics. Working Party on Transport Statistics (WP.6

  17. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and the spatial configuration of the environment. The distribution of cues encountered naturally, and their dependence on the physical properties of an auditory scene, have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and that IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.

  18. Statistics of natural binaural sounds.

    Science.gov (United States)

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and the spatial configuration of the environment. The distribution of cues encountered naturally, and their dependence on the physical properties of an auditory scene, have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and that IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
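
    The ICA step described in both records can be reproduced in outline with scikit-learn's FastICA. The two synthetic sources below stand in for the binaural recordings, so this is a generic sketch rather than the authors' analysis.

        # Mix two independent, non-Gaussian sources into two channels ("left/right ear"),
        # then unmix them with FastICA.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(4)
        t = np.linspace(0, 1, 8000)
        s1 = np.sign(np.sin(2 * np.pi * 7 * t))   # square-wave source
        s2 = rng.laplace(size=t.size)             # noise-like source
        S = np.c_[s1, s2]

        A = np.array([[0.8, 0.3], [0.4, 0.7]])    # mixing matrix
        X = S @ A.T                               # two-channel "recording"

        ica = FastICA(n_components=2, random_state=0)
        S_hat = ica.fit_transform(X)              # sources recovered up to order/scale
        print("recovered sources:", S_hat.shape)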

  19. Generalized quantum statistics

    International Nuclear Information System (INIS)

    Chou, C.

    1992-01-01

    In the paper, a non-anyonic generalization of quantum statistics is presented, in which Fermi-Dirac statistics (FDS) and Bose-Einstein statistics (BES) appear as two special cases. The new quantum statistics, which is characterized by the dimension of its single particle Fock space, contains three consistent parts, namely the generalized bilinear quantization, the generalized quantum mechanical description and the corresponding statistical mechanics

  20. Statistical analysis and data management

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    This report provides an overview of the history of the WIPP Biology Program. The recommendations of the American Institute of Biological Sciences (AIBS) for the WIPP biology program are summarized. The data sets available for statistical analyses, and the problems associated with these data sets, are also summarized. Biological studies base maps are presented. A statistical model is presented to evaluate any correlation between climatological data and small mammal captures. No statistically significant relationship was found between the variance in small mammal captures on Dr. Gennaro's 90m x 90m grid and precipitation records from the Duval Potash Mine

  1. Statistical Evidence for the Preference of Frailty Distributions with Regularly-Varying-at-Zero Densities

    DEFF Research Database (Denmark)

    Missov, Trifon I.; Schöley, Jonas

    According to this criterion, admissible distributions are, for example, the gamma, the beta, the truncated normal, the log-logistic and the Weibull, while distributions like the log-normal and the inverse Gaussian do not satisfy this condition. In this article we show that models with admissible frailty distributions and a Gompertz baseline provide a better fit to adult human mortality data than the corresponding models with non-admissible frailty distributions. We implement estimation procedures for mixture models with a Gompertz baseline and frailty that follows a gamma, truncated normal, log-normal, or inverse Gaussian distribution.

  2. Classical model of intermediate statistics

    International Nuclear Information System (INIS)

    Kaniadakis, G.

    1994-01-01

    In this work we present a classical kinetic model of intermediate statistics. In the case of Brownian particles we show that the Fermi-Dirac (FD) and Bose-Einstein (BE) distributions can be obtained, just as the Maxwell-Boltzmann (MB) distribution, as steady states of a classical kinetic equation that intrinsically takes into account an exclusion-inclusion principle. In our model the intermediate statistics are obtained as steady states of a system of coupled nonlinear kinetic equations, where the coupling constants are the transmutational potentials η_κκ'. We show that, besides the FD-BE intermediate statistics extensively studied from the quantum point of view, we can also study the MB-FD and MB-BE ones. Moreover, our model allows us to treat the three-state mixing FD-MB-BE intermediate statistics. For boson and fermion mixing in a D-dimensional space, we obtain a family of FD-BE intermediate statistics by varying the transmutational potential η_BF. This family contains, as a particular case when η_BF = 0, the quantum statistics recently proposed by L. Wu, Z. Wu, and J. Sun [Phys. Lett. A 170, 280 (1992)]. When we consider the two-dimensional FD-BE statistics, we derive an analytic expression for the fraction of fermions. When the temperature T → ∞, the system is composed of an equal number of bosons and fermions, regardless of the value of η_BF. On the contrary, when T = 0, η_BF becomes important and, according to its value, the system can be completely bosonic or fermionic, or composed of both bosons and fermions
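
    For orientation, and as a standard result rather than the paper's kinetic derivation, the three limiting cases the model interpolates between can be written in a single occupation-number form:

        \langle n(E) \rangle = \frac{1}{e^{(E-\mu)/k_B T} + \kappa}, \qquad
        \kappa = +1 \ \text{(FD)}, \quad \kappa = 0 \ \text{(MB)}, \quad \kappa = -1 \ \text{(BE)},

    with the intermediate statistics of the model filling in behavior between these cases.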

  3. Detecting Novelty and Significance

    Science.gov (United States)

    Ferrari, Vera; Bradley, Margaret M.; Codispoti, Maurizio; Lang, Peter J.

    2013-01-01

    Studies of cognition often use an “oddball” paradigm to study effects of stimulus novelty and significance on information processing. However, an oddball tends to be perceptually more novel than the standard, repeated stimulus as well as more relevant to the ongoing task, making it difficult to disentangle effects due to perceptual novelty and stimulus significance. In the current study, effects of perceptual novelty and significance on ERPs were assessed in a passive viewing context by presenting repeated and novel pictures (natural scenes) that either signaled significant information regarding the current context or not. A fronto-central N2 component was primarily affected by perceptual novelty, whereas a centro-parietal P3 component was modulated by both stimulus significance and novelty. The data support an interpretation that the N2 reflects perceptual fluency and is attenuated when a current stimulus matches an active memory representation and that the amplitude of the P3 reflects stimulus meaning and significance. PMID:19400680

  4. Asymptomatic proteinuria. Clinical significance.

    Science.gov (United States)

    Papper, S

    1977-09-01

    Patients with asymptomatic proteinuria have varied reasons for the proteinuria and travel diverse courses. In the individual with normal renal function and no systemic cause, ie, idiopathic asymptomatic proteinuria, the outlook is generally favorable. Microscopic hematuria probably raises some degree of question about prognosis. The kidney shows normal glomeruli, subtle changes, or an identifiable lesion. The initial approach includes a clinical and laboratory search for systemic disease, repeated urinalyses, quantitative measurements of proteinuria, determination of creatinine clearance, protein electrophoresis where indicated, and intravenous pyelography. The need for regularly scheduled follow-up evaluation is emphasized. Although the initial approach need not include renal biopsy, a decline in creatinine clearance, an increase in proteinuria, or both are indications for biopsy and consideration of drug therapy.

  5. Significant NRC Enforcement Actions

    Data.gov (United States)

    Nuclear Regulatory Commission — This dataset provides a list of significant enforcement actions issued by the Nuclear Regulatory Commission (NRC). These actions, referred to as "escalated", are issued by...

  6. National Statistical Commission and Indian Official Statistics

    Indian Academy of Sciences (India)

    Author Affiliations. T J Rao, C. R. Rao Advanced Institute of Mathematics, Statistics and Computer Science (AIMSCS), University of Hyderabad Campus, Central University Post Office, Prof. C. R. Rao Road, Hyderabad 500 046, AP, India.

  7. Statistics For Dummies

    CERN Document Server

    Rumsey, Deborah

    2011-01-01

    The fun and easy way to get down to business with statistics. Stymied by statistics? No fear: this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tracks to a typical first semester statistics cou

  8. Industrial statistics with Minitab

    CERN Document Server

    Cintas, Pere Grima; Llabres, Xavier Tort-Martorell

    2012-01-01

    Industrial Statistics with MINITAB demonstrates the use of MINITAB as a tool for performing statistical analysis in an industrial context. This book covers introductory industrial statistics, exploring the most commonly used techniques alongside those that serve to give an overview of more complex issues. A plethora of examples in MINITAB are featured along with case studies for each of the statistical techniques presented. Industrial Statistics with MINITAB: Provides comprehensive coverage of user-friendly practical guidance to the essential statistical methods applied in industry. Explores

  9. Statistical crack mechanics

    International Nuclear Information System (INIS)

    Dienes, J.K.

    1993-01-01

    Although it is possible to simulate the ground blast from a single explosive shot with a simple computer algorithm and appropriate constants, the most commonly used modelling methods do not account for major changes in geology or shot energy because mechanical features such as tectonic stresses, fault structure, microcracking, brittle-ductile transition, and water content are not represented in significant detail. An alternative approach for modelling called Statistical Crack Mechanics is presented in this paper. This method, developed in the seventies as a part of the oil shale program, accounts for crack opening, shear, growth, and coalescence. Numerous photographs and micrographs show that shocked materials tend to involve arrays of planar cracks. The approach described here provides a way to account for microstructure and give a representation of the physical behavior of a material at the microscopic level that can account for phenomena such as permeability, fragmentation, shear banding, and hot-spot formation in explosives

  10. Graphene Statistical Mechanics

    Science.gov (United States)

    Bowick, Mark; Kosmrlj, Andrej; Nelson, David; Sknepnek, Rastko

    2015-03-01

    Graphene provides an ideal system to test the statistical mechanics of thermally fluctuating elastic membranes. The high Young's modulus of graphene means that thermal fluctuations over even small length scales significantly stiffen the renormalized bending rigidity. We study the effect of thermal fluctuations on graphene ribbons of width W and length L, pinned at one end, via coarse-grained Molecular Dynamics simulations and compare with analytic predictions of the scaling of width-averaged root-mean-squared height fluctuations as a function of distance along the ribbon. Scaling collapse as a function of W and L also allows us to extract the scaling exponent eta governing the long-wavelength stiffening of the bending rigidity. A full understanding of the geometry-dependent mechanical properties of graphene, including arrays of cuts, may allow the design of a variety of modular elements with desired mechanical properties starting from pure graphene alone. Supported by NSF grant DMR-1435794

  11. Modelling tourists arrival using time varying parameter

    Science.gov (United States)

    Suciptawati, P.; Sukarsa, K. G.; Kencana, Eka N.

    2017-06-01

    The importance of tourism and its related sectors for supporting economic development and reducing poverty in many countries has increased researchers' attention to studying and modelling tourist arrivals. This work demonstrates the time varying parameter (TVP) technique by modelling the arrival of Korean tourists in Bali. The monthly number of Korean tourists visiting Bali from January 2010 to December 2015 was used as the dependent variable (KOR). The predictors are the exchange rate of the Won to the IDR (WON), the inflation rate in Korea (INFKR), and the inflation rate in Indonesia (INFID). Since tourist visits to Bali tend to fluctuate by nationality, the model was built by applying TVP, and its parameters were approximated using the Kalman filter algorithm. The results showed that all predictor variables (WON, INFKR, INFID) significantly affect KOR. For in-sample and out-of-sample forecasts with ARIMA-forecasted values for the predictors, the TVP model gave mean absolute percentage errors (MAPE) of 11.24 percent and 12.86 percent, respectively.
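
    A minimal version of the TVP estimation described here treats the coefficients as a random walk filtered by the Kalman recursions. The data below are synthetic; the ordering [1, WON, INFKR, INFID] follows the abstract, but all values and noise settings are placeholders.

        # Time-varying parameter regression via the Kalman filter:
        #   y_t = x_t' beta_t + v_t,   beta_t = beta_{t-1} + w_t.
        import numpy as np

        rng = np.random.default_rng(5)
        T, k = 72, 4                                        # 72 months, 4 coefficients
        X = np.c_[np.ones(T), rng.normal(size=(T, 3))]      # [1, WON, INFKR, INFID]
        beta_true = np.cumsum(rng.normal(0, 0.05, (T, k)), axis=0) + [5, 1, -0.5, 0.3]
        y = np.einsum("tk,tk->t", X, beta_true) + rng.normal(0, 0.5, T)

        R, Q = 0.5**2, 0.05**2 * np.eye(k)                  # noise variances (assumed known)
        beta, P = np.zeros(k), 10.0 * np.eye(k)             # diffuse-ish prior
        for t in range(T):
            P = P + Q                                       # predict (random-walk state)
            x = X[t]
            K = P @ x / (x @ P @ x + R)                     # Kalman gain
            beta = beta + K * (y[t] - x @ beta)             # measurement update
            P = P - np.outer(K, x @ P)
        print("filtered beta_T:", beta)
        print("true beta_T:    ", beta_true[-1])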

  12. Spacetime-varying couplings and Lorentz violation

    International Nuclear Information System (INIS)

    Kostelecky, V. Alan; Lehnert, Ralf; Perry, Malcolm J.

    2003-01-01

    Spacetime-varying coupling constants can be associated with violations of local Lorentz invariance and CPT symmetry. An analytical supergravity cosmology with a time-varying fine-structure constant provides an explicit example. Estimates are made for some experimental constraints

  13. Detection of dynamically varying interaural time differences

    DEFF Research Database (Denmark)

    Kohlrausch, Armin; Le Goff, Nicolas; Breebaart, Jeroen

    2010-01-01

    of fringes surrounding the probe is equal to the addition of the effects of the individual fringes. In this contribution, we present behavioral data for the same experimental condition, called dynamically varying ITD detection, but for a wider range of probe and fringe durations. Probe durations varied...

  14. [Big data in official statistics].

    Science.gov (United States)

    Zwick, Markus

    2015-08-01

    The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.

  15. Recreational Boating Statistics 2012

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  16. Recreational Boating Statistics 2013

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  17. Statistical data analysis handbook

    National Research Council Canada - National Science Library

    Wall, Francis J

    1986-01-01

    It must be emphasized that this is not a text book on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...

  18. CMS Program Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Office of Enterprise Data and Analytics has developed CMS Program Statistics, which includes detailed summary statistics on national health care, Medicare...

  19. Recreational Boating Statistics 2011

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  20. Uterine Cancer Statistics

    Science.gov (United States)

    ... Uterine Cancer Statistics ... the most commonly diagnosed gynecologic cancer. U.S. Cancer Statistics Data Visualizations Tool: The Data Visualizations tool makes ...

  1. Tuberculosis Data and Statistics

    Science.gov (United States)

    ... Advisory Groups Federal TB Task Force Data and Statistics ... Set) Morbidity and Mortality Weekly Reports Data and Statistics: Decrease in Reported Tuberculosis Cases, MMWR 2010; 59 ( ...

  2. National transportation statistics 2011

    Science.gov (United States)

    2011-04-01

    Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety reco...

  3. National Transportation Statistics 2008

    Science.gov (United States)

    2009-01-08

    Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety record...

  4. Mental Illness Statistics

    Science.gov (United States)

    ... Research shows that mental illnesses are common in ... of mental illnesses, such as suicide and disability. Statistics Topics: Mental Illness; Any Anxiety Disorder ...

  5. School Violence: Data & Statistics

    Science.gov (United States)

    ... Injury Center School Violence: Data & Statistics. The first ... Vehicle Safety; Traumatic Brain Injury; Injury Response; Data & Statistics (WISQARS); Funded Programs ...

  6. Aortic Aneurysm Statistics

    Science.gov (United States)

    ... Coverdell Program 2012-2015 State Summaries; Data & Statistics; Heart Disease and Stroke Fact Sheets ... Roadmap for State Planning; Other Data Resources ...

  7. Alcohol Facts and Statistics

    Science.gov (United States)

    ... What Is a Standard Drink? Drinking Levels Defined. Alcohol Facts and Statistics: Alcohol Use in the United States: ... 1238–1245, 2004. PMID: 15010446. National Center for Statistics and Analysis. 2014 Crash Data Key Findings (Traffic ...

  8. National Transportation Statistics 2009

    Science.gov (United States)

    2010-01-21

    Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety record, ...

  9. National transportation statistics 2010

    Science.gov (United States)

    2010-01-01

    National Transportation Statistics presents statistics on the U.S. transportation system, including its physical components, safety record, economic performance, the human and natural environment, and national security. This is a large online documen...

  10. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...

  11. Principles of applied statistics

    National Research Council Canada - National Science Library

    Cox, D. R; Donnelly, Christl A

    2011-01-01

    .... David Cox and Christl Donnelly distil decades of scientific experience into usable principles for the successful application of statistics, showing how good statistical strategy shapes every stage of an investigation...

  12. Applying contemporary statistical techniques

    CERN Document Server

    Wilcox, Rand R

    2003-01-01

    Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible.* Assumes no previous training in statistics * Explains how and why modern statistical methods provide more accurate results than conventional methods* Covers the latest developments on multiple comparisons * Includes recent advanc

  13. Interactive statistics with ILLMO

    NARCIS (Netherlands)

    Martens, J.B.O.S.

    2014-01-01

    Progress in empirical research relies on adequate statistical analysis and reporting. This article proposes an alternative approach to statistical modeling that is based on an old but mostly forgotten idea, namely Thurstone modeling. Traditional statistical methods assume that either the measured

  14. Ethics in Statistics

    Science.gov (United States)

    Lenard, Christopher; McCarthy, Sally; Mills, Terence

    2014-01-01

    There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…

  15. Youth Sports Safety Statistics

    Science.gov (United States)

    ... 6):794-799. 31 American Heart Association. CPR statistics. www.heart.org/HEARTORG/CPRAndECC/WhatisCPR/CPRFactsandStats/CPR%20Statistics_ ... Mental Health Services Administration, Center for Behavioral Health Statistics and Quality. (January 10, 2013). The DAWN Report: ...

  16. On two methods of statistical image analysis

    NARCIS (Netherlands)

    Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, K.L.

    1999-01-01

    The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition,

  17. Statistics for Research

    CERN Document Server

    Dowdy, Shirley; Chilko, Daniel

    2011-01-01

    Praise for the Second Edition "Statistics for Research has other fine qualities besides superior organization. The examples and the statistical methods are laid out with unusual clarity by the simple device of using special formats for each. The book was written with great care and is extremely user-friendly."-The UMAP Journal Although the goals and procedures of statistical research have changed little since the Second Edition of Statistics for Research was published, the almost universal availability of personal computers and statistical computing application packages have made it possible f

  18. Statistics in a nutshell

    CERN Document Server

    Boslaugh, Sarah

    2013-01-01

    Need to learn statistics for your job? Want help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference for anyone new to the subject. Thoroughly revised and expanded, this edition helps you gain a solid understanding of statistics without the numbing complexity of many college texts. Each chapter presents easy-to-follow descriptions, along with graphics, formulas, solved examples, and hands-on exercises. If you want to perform common statistical analyses and learn a wide range of techniques without getting in over your head, this is your book.

  19. Statistics & probaility for dummies

    CERN Document Server

    Rumsey, Deborah J

    2013-01-01

    Two complete eBooks for one low price! Created and compiled by the publisher, this Statistics I and Statistics II bundle brings together two math titles in one, e-only bundle. With this special bundle, you'll get the complete text of the following two titles: Statistics For Dummies, 2nd Edition  Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tra

  20. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2010-01-01

    Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference.-Eugenia Stoimenova, Journal of Applied Statistics, June 2012… one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students.-Biometrics, 67, September 2011This excellently presente

  1. Business statistics for dummies

    CERN Document Server

    Anderson, Alan

    2013-01-01

    Score higher in your business statistics course? Easy. Business statistics is a common course for business majors and MBA candidates. It examines common data sets and the proper way to use such information when conducting research and producing informational reports such as profit and loss statements, customer satisfaction surveys, and peer comparisons. Business Statistics For Dummies tracks to a typical business statistics course offered at the undergraduate and graduate levels and provides clear, practical explanations of business statistical ideas, techniques, formulas, and calculations, w

  2. Head First Statistics

    CERN Document Server

    Griffiths, Dawn

    2009-01-01

    Wouldn't it be great if there were a statistics book that made histograms, probability distributions, and chi square analysis more enjoyable than going to the dentist? Head First Statistics brings this typically dry subject to life, teaching you everything you want and need to know about statistics through engaging, interactive, and thought-provoking material, full of puzzles, stories, quizzes, visual aids, and real-world examples. Whether you're a student, a professional, or just curious about statistical analysis, Head First's brain-friendly formula helps you get a firm grasp of statistics

  3. Lectures on algebraic statistics

    CERN Document Server

    Drton, Mathias; Sullivant, Seth

    2009-01-01

    How does an algebraic geometer studying secant varieties further the understanding of hypothesis tests in statistics? Why would a statistician working on factor analysis raise open problems about determinantal varieties? Connections of this type are at the heart of the new field of "algebraic statistics". In this field, mathematicians and statisticians come together to solve statistical inference problems using concepts from algebraic geometry as well as related computational and combinatorial techniques. The goal of these lectures is to introduce newcomers from the different camps to algebraic statistics. The introduction will be centered around the following three observations: many important statistical models correspond to algebraic or semi-algebraic sets of parameters; the geometry of these parameter spaces determines the behaviour of widely used statistical inference procedures; computational algebraic geometry can be used to study parameter spaces and other features of statistical models.

  4. Statistics for economics

    CERN Document Server

    Naghshpour, Shahdad

    2012-01-01

    Statistics is the branch of mathematics that deals with real-life problems. As such, it is an essential tool for economists. Unfortunately, the way you and many other economists learn the concept of statistics is not compatible with the way economists think and learn. The problem is worsened by the use of mathematical jargon and complex derivations. Here's a book that proves none of this is necessary. All the examples and exercises in this book are constructed within the field of economics, thus eliminating the difficulty of learning statistics with examples from fields that have no relation to business, politics, or policy. Statistics is, in fact, not more difficult than economics. Anyone who can comprehend economics can understand and use statistics successfully within this field, including you! This book utilizes Microsoft Excel to obtain statistical results, as well as to perform additional necessary computations. Microsoft Excel is not the software of choice for performing sophisticated statistical analy...

  5. Baseline Statistics of Linked Statistical Data

    NARCIS (Netherlands)

    Scharnhorst, Andrea; Meroño-Peñuela, Albert; Guéret, Christophe

    2014-01-01

    We are surrounded by an ever increasing ocean of information, everybody will agree to that. We build sophisticated strategies to govern this information: design data models, develop infrastructures for data sharing, building tool for data analysis. Statistical datasets curated by National

  6. Time-Varying Value of Energy Efficiency in Michigan

    Energy Technology Data Exchange (ETDEWEB)

    Mims, Natalie; Eckman, Tom; Schwartz, Lisa C.

    2018-04-02

    Quantifying the time-varying value of energy efficiency is necessary to properly account for all of its benefits and costs and to identify and implement efficiency resources that contribute to a low-cost, reliable electric system. Historically, most quantification of the benefits of efficiency has focused largely on the economic value of annual energy reduction. Due to the lack of statistically representative metered end-use load shape data in Michigan (i.e., the hourly or seasonal timing of electricity savings), the ability to confidently characterize the time-varying value of energy efficiency savings in the state, especially for weather-sensitive measures such as central air conditioning, is limited. Still, electric utilities in Michigan can take advantage of opportunities to incorporate the time-varying value of efficiency into their planning. For example, end-use load research and hourly valuation of efficiency savings can be used for a variety of electricity planning functions, including load forecasting, demand-side management and evaluation, capacity planning, long-term resource planning, renewable energy integration, assessing potential grid modernization investments, establishing rates and pricing, and customer service (KEMA 2012). In addition, accurately calculating the time-varying value of efficiency may help energy efficiency program administrators prioritize existing offerings, set incentive or rebate levels that reflect the full value of efficiency, and design new programs.

  7. Estonian film competes at Karlovy Vary

    Index Scriptorium Estoniae

    2008-01-01

    Rene Vilbre's youth film "Mina olin siin" ("I Was Here") premieres at the Karlovy Vary film festival on 8 July. It is based on Sass Henno's novel "Mina olin siin. Esimene arest", with a screenplay written by Ilmar Raag. The film competes in the "East of the West" competition programme.

  8. Matching Value Propositions with Varied Customer Needs

    DEFF Research Database (Denmark)

    Heikka, Eija-Liisa; Frandsen, Thomas; Hsuan, Juliana

    2018-01-01

    Organizations seek to manage varied customer segments using varied value propositions. The ability of a knowledge-intensive business service (KIBS) provider to formulate value propositions into attractive offerings to varied customers becomes a competitive advantage. In this specific business based...... on often highly abstract service offerings, this requires the provider to have a clear overview of its knowledge and resources and how these can be configured to obtain the desired customization of services. Hence, the purpose of this paper is to investigate how a KIBS provider can match value propositions...... with varied customer needs utilizing service modularity. To accomplish this purpose, a qualitative multiple case study is organized around 5 projects allowing within-case and cross-case comparisons. Our findings describe how through the configuration of knowledge and resources a sustainable competitive...

  9. Compilation of Instantaneous Source Functions for Varying ...

    African Journals Online (AJOL)

    Compilation of Instantaneous Source Functions for Varying Architecture of a Layered Reservoir with Mixed Boundaries and Horizontal Well Completion Part III: B-Shaped Architecture with Vertical Well in the Upper Layer.

  10. Compilation of Instantaneous Source Functions for Varying ...

    African Journals Online (AJOL)

    Compilation of Instantaneous Source Functions for Varying Architecture of a Layered Reservoir with Mixed Boundaries and Horizontal Well Completion Part IV: Normal and Inverted Letter 'h' and 'H' Architecture.

  11. Reducing statistics anxiety and enhancing statistics learning achievement: effectiveness of a one-minute strategy.

    Science.gov (United States)

    Chiou, Chei-Chang; Wang, Yu-Min; Lee, Li-Tze

    2014-08-01

    Statistical knowledge is widely used in academia; however, statistics teachers struggle with the issue of how to reduce students' statistics anxiety and enhance students' statistics learning. This study assesses the effectiveness of a "one-minute paper strategy" in reducing students' statistics-related anxiety and in improving students' statistics-related achievement. Participants were 77 undergraduates from two classes enrolled in applied statistics courses. An experiment was implemented according to a pretest/posttest comparison group design. The quasi-experimental design showed that the one-minute paper strategy significantly reduced students' statistics anxiety and improved students' statistics learning achievement. The strategy was a better instructional tool than the textbook exercise for reducing students' statistics anxiety and improving students' statistics achievement.

  12. Significant Tsunami Events

    Science.gov (United States)

    Dunbar, P. K.; Furtney, M.; McLean, S. J.; Sweeney, A. D.

    2014-12-01

    Tsunamis have inflicted death and destruction on the coastlines of the world throughout history. The occurrence of tsunamis and the resulting effects have been collected and studied as far back as the second millennium B.C. The knowledge gained from cataloging and examining these events has led to significant changes in our understanding of tsunamis, tsunami sources, and methods to mitigate the effects of tsunamis. The most significant, not surprisingly, are often the most devastating, such as the 2011 Tohoku, Japan earthquake and tsunami. The goal of this poster is to give a brief overview of the occurrence of tsunamis and then focus specifically on several significant tsunamis. There are various criteria to determine the most significant tsunamis: the number of deaths, the amount of damage, the maximum runup height, whether the event had a major impact on tsunami science or policy, etc. As a result, descriptions will include some of the most costly (2011 Tohoku, Japan), the most deadly (2004 Sumatra, 1883 Krakatau), and the highest runup ever observed (1958 Lituya Bay, Alaska). The discovery of the Cascadia subduction zone as the source of the 1700 Japanese "Orphan" tsunami, and a future tsunami threat to the U.S. northwest coast, contributed to the decision to form the U.S. National Tsunami Hazard Mitigation Program. The great Lisbon earthquake of 1755 marked the beginning of the modern era of seismology. Knowledge gained from the 1964 Alaska earthquake and tsunami helped confirm the theory of plate tectonics. The 1946 Alaska, 1952 Kuril Islands, 1960 Chile, 1964 Alaska, and 2004 Banda Aceh tsunamis all resulted in warning centers or systems being established. The data descriptions on this poster were extracted from NOAA's National Geophysical Data Center (NGDC) global historical tsunami database. Additional information about these tsunamis, as well as water level data, can be found by accessing the NGDC website www.ngdc.noaa.gov/hazard/

  13. Endogenous time-varying risk aversion and asset returns.

    Science.gov (United States)

    Berardi, Michele

    2016-01-01

    Stylized facts about the statistical properties of short-horizon returns in financial markets have been identified in the literature, but a satisfactory understanding of their manifestation is yet to be achieved. In this work, we show that a simple asset pricing model with a representative agent is able to generate time series of returns that replicate such stylized facts if the risk aversion coefficient is allowed to change endogenously over time in response to unexpected excess returns under evolutionary forces. The same model, under constant risk aversion, would instead generate returns that are essentially Gaussian. We conclude that an endogenous time-varying risk aversion represents a very parsimonious way to make the model match real data on key statistical properties, and therefore deserves careful consideration from economists and practitioners alike.
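
    A minimal sketch of the mechanism this record describes (the dynamics and all parameter values below are illustrative assumptions, not the paper's model): letting the risk aversion coefficient drift in response to unexpected excess returns makes return volatility time-varying, which already yields fat-tailed, non-Gaussian returns.

        import numpy as np
        from scipy.stats import kurtosis

        rng = np.random.default_rng(42)
        T = 100_000
        gamma = 2.0        # initial risk aversion (hypothetical value)
        eta = 0.02         # adaptation speed (hypothetical value)
        returns = np.empty(T)

        for t in range(T):
            expected = 0.0005 * gamma     # required excess return rises with gamma
            returns[t] = expected + rng.normal(0.0, 0.01 * np.sqrt(gamma))
            surprise = returns[t] - expected
            # evolutionary update: positive surprises embolden agents (lower gamma)
            gamma = float(np.clip(gamma * (1.0 - eta * np.sign(surprise)), 0.5, 8.0))

        # a constant gamma would give excess kurtosis near 0 (Gaussian returns)
        print("excess kurtosis of simulated returns:", round(float(kurtosis(returns)), 2))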

  14. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed that uses a statistical face shape model and represents the pose parameters by trigonometric functions. The statistical face shape model is first built by analyzing the face shapes of different people under varying poses; shape alignment is vital in the process of building this statistical model. Six trigonometric functions are then employed to represent the face pose parameters. Lastly, a mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.
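
    As a rough sketch of the shape-model step (a common PCA formulation of a statistical shape model; the alignment procedure, the trigonometric pose representation, and the paper's data are not reproduced, and everything below is a stand-in):

        import numpy as np

        rng = np.random.default_rng(0)
        n_faces, n_landmarks = 200, 68
        # stand-in for aligned training shapes: stacked (x, y) landmark coordinates
        shapes = rng.standard_normal((n_faces, 2 * n_landmarks))

        mean_shape = shapes.mean(axis=0)
        # principal modes of shape variation via SVD of the centered shape matrix
        U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
        modes = Vt[:10]                                    # top-10 shape modes
        b = rng.standard_normal(10) * (s[:10] / np.sqrt(n_faces))
        new_shape = mean_shape + b @ modes                 # synthesize a plausible shape
        print("synthesized shape vector length:", new_shape.size)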

  15. Statistical time lags in ac discharges

    International Nuclear Information System (INIS)

    Sobota, A; Kanters, J H M; Van Veldhuizen, E M; Haverlag, M; Manders, F

    2011-01-01

    The paper presents statistical time lags measured for breakdown events in near-atmospheric pressure argon and xenon. Ac voltage at 100, 400 and 800 kHz was used to drive the breakdown processes, and the voltage amplitude slope was varied between 10 and 1280 V ms⁻¹. The values obtained for the statistical time lags are roughly between 1 and 150 ms. It is shown that the statistical time lags in ac-driven discharges follow the same general trends as in discharges driven by a voltage of monotonic slope. In addition, the validity of the Cobine-Easton expression is tested for an alternating voltage form.

  16. Statistical time lags in ac discharges

    Energy Technology Data Exchange (ETDEWEB)

    Sobota, A; Kanters, J H M; Van Veldhuizen, E M; Haverlag, M [Eindhoven University of Technology, Department of Applied Physics, Postbus 513, 5600MB Eindhoven (Netherlands); Manders, F, E-mail: a.sobota@tue.nl [Philips Lighting, LightLabs, Mathildelaan 1, 5600JM Eindhoven (Netherlands)

    2011-04-06

    The paper presents statistical time lags measured for breakdown events in near-atmospheric pressure argon and xenon. Ac voltage at 100, 400 and 800 kHz was used to drive the breakdown processes, and the voltage amplitude slope was varied between 10 and 1280 V ms⁻¹. The values obtained for the statistical time lags are roughly between 1 and 150 ms. It is shown that the statistical time lags in ac-driven discharges follow the same general trends as in discharges driven by a voltage of monotonic slope. In addition, the validity of the Cobine-Easton expression is tested for an alternating voltage form.
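
    A minimal sketch of how such a mean statistical time lag can be recovered from measured breakdown delays (the classic Laue analysis, assuming exponentially distributed lags; the data below are simulated, not the paper's measurements):

        import numpy as np

        rng = np.random.default_rng(1)
        tau_true = 50e-3                        # 50 ms mean statistical time lag (assumed)
        lags = rng.exponential(tau_true, 500)   # simulated breakdown delays [s]

        # Laue plot: ln(N(t)/N0), the fraction of breakdowns not yet observed
        # at time t, is linear in t with slope -1/tau for a Poissonian process
        t = np.sort(lags)
        survival = 1.0 - np.arange(1, t.size + 1) / t.size
        mask = survival > 0                     # drop the final log(0) point
        slope, _ = np.polyfit(t[mask], np.log(survival[mask]), 1)
        print(f"estimated tau = {-1e3 / slope:.1f} ms (true value 50.0 ms)")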

  17. Replicability of time-varying connectivity patterns in large resting state fMRI samples.

    Science.gov (United States)

    Abrol, Anees; Damaraju, Eswar; Miller, Robyn L; Stephen, Julia M; Claus, Eric D; Mayer, Andrew R; Calhoun, Vince D

    2017-12-01

    The past few years have seen an emergence of approaches that leverage temporal changes in whole-brain patterns of functional connectivity (the chronnectome). In this chronnectome study, we investigate the replicability of the human brain's inter-regional coupling dynamics during rest by evaluating two different dynamic functional network connectivity (dFNC) analysis frameworks using 7500 functional magnetic resonance imaging (fMRI) datasets. To quantify the extent to which the emergent functional connectivity (FC) patterns are reproducible, we characterize the temporal dynamics by deriving several summary measures across multiple large, independent age-matched samples. Reproducibility was demonstrated through the existence of basic connectivity patterns (FC states) amidst an ensemble of inter-regional connections. Furthermore, application of the methods to conservatively configured (statistically stationary, linear and Gaussian) surrogate datasets revealed that some of the studied state summary measures were indeed statistically significant and also suggested that this class of null model did not explain the fMRI data fully. This extensive testing of reproducibility of similarity statistics also suggests that the estimated FC states are robust against variation in data quality, analysis, grouping, and decomposition methods. We conclude that future investigations probing the functional and neurophysiological relevance of time-varying connectivity assume critical importance. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
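
    For orientation, a schematic of one common dFNC pipeline (window length, step and cluster count here are illustrative assumptions, not the study's settings): sliding-window correlations between component time courses are vectorized and clustered into recurring FC states.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        n_t, n_comp, win, step = 400, 10, 44, 4
        ts = rng.standard_normal((n_t, n_comp))       # stand-in component time courses

        windows = []
        for start in range(0, n_t - win + 1, step):
            c = np.corrcoef(ts[start:start + win].T)  # windowed FC matrix
            iu = np.triu_indices(n_comp, k=1)
            windows.append(c[iu])                     # vectorized upper triangle
        X = np.asarray(windows)

        # k-means over the windowed FC vectors yields the recurring "FC states"
        states = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
        print("windows assigned to each FC state:", np.bincount(states.labels_))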

  18. An estimation of crude oil import demand in Turkey: Evidence from time-varying parameters approach

    International Nuclear Information System (INIS)

    Ozturk, Ilhan; Arisoy, Ibrahim

    2016-01-01

    The aim of this study is to model crude oil import demand and estimate the price and income elasticities of imported crude oil in Turkey based on a time-varying parameters (TVP) approach, in order to obtain accurate and more robust estimates of price and income elasticities. This study employs annual time series data of domestic oil consumption, real GDP, and oil price for the period 1966–2012. The empirical results indicate that both the income and price elasticities are in line with the theoretical expectations. However, the income elasticity is statistically significant while the price elasticity is statistically insignificant. The relatively high value of the income elasticity (1.182) from this study suggests that crude oil import in Turkey is more responsive to changes in income level. This result indicates that imported crude oil is a normal good and rising income levels will foster higher consumption of oil-based equipment, vehicles and services by economic agents. The estimated income elasticity of 1.182 suggests that imported crude oil consumption grows at a higher rate than income. This in turn reduces oil intensity over time. Therefore, crude oil import during the estimation period is substantially driven by income. - Highlights: • We estimated the price and income elasticities of imported crude oil in Turkey. • Income elasticity is statistically significant and it is 1.182. • The price elasticity is statistically insignificant. • Crude oil import in Turkey is more responsive to changes in income level. • Crude oil import during the estimation period is substantially driven by income.
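
    One standard way to implement such a time-varying parameters (TVP) regression is to let the coefficients follow random walks and estimate them with a Kalman filter; the sketch below uses that form on synthetic data (the authors' exact specification, priors and data are not reproduced here).

        import numpy as np

        def tvp_kalman(y, X, state_var=1e-4, obs_var=1e-2):
            """Kalman filter for y[t] = X[t] @ beta[t] + noise, beta a random walk."""
            n, k = X.shape
            beta, P = np.zeros(k), np.eye(k)
            path = np.empty((n, k))
            for t in range(n):
                P = P + state_var * np.eye(k)         # predict: coefficients drift
                x = X[t]
                S = x @ P @ x + obs_var               # innovation variance
                K = P @ x / S                         # Kalman gain
                beta = beta + K * (y[t] - x @ beta)   # update with observation t
                P = P - np.outer(K, x @ P)
                path[t] = beta
            return path

        # synthetic check: the "income elasticity" drifts from 0.8 to 1.2
        rng = np.random.default_rng(0)
        n = 200
        X = np.column_stack([np.ones(n), rng.standard_normal(n), rng.standard_normal(n)])
        true_income = np.linspace(0.8, 1.2, n)
        y = 0.1 * X[:, 0] + true_income * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.1, n)
        print("final income-elasticity estimate:", tvp_kalman(y, X)[-1, 1].round(2))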

  19. Statistical Physics An Introduction

    CERN Document Server

    Yoshioka, Daijiro

    2007-01-01

    This book provides a comprehensive presentation of the basics of statistical physics. The first part explains the essence of statistical physics and how it provides a bridge between microscopic and macroscopic phenomena, allowing one to derive quantities such as entropy. Here the author avoids going into details such as Liouville's theorem or the ergodic theorem, which are difficult for beginners and unnecessary for the actual application of statistical mechanics. In the second part, statistical mechanics is applied to various systems which, although they look different, share the same mathematical structure. In this way readers can deepen their understanding of statistical physics. The book also features applications to quantum dynamics, thermodynamics, the Ising model and the statistical dynamics of free spins.

  20. Statistical symmetries in physics

    International Nuclear Information System (INIS)

    Green, H.S.; Adelaide Univ., SA

    1994-01-01

    Every law of physics is invariant under some group of transformations and is therefore the expression of some type of symmetry. Symmetries are classified as geometrical, dynamical or statistical. At the most fundamental level, statistical symmetries are expressed in the field theories of the elementary particles. This paper traces some of the developments from the discovery of Bose statistics, one of the two fundamental symmetries of physics. A series of generalizations of Bose statistics is described. A supersymmetric generalization accommodates fermions as well as bosons, and further generalizations, including parastatistics, modular statistics and graded statistics, accommodate particles with properties such as 'colour'. A factorization of elements of gl(n_b, n_f) can be used to define truncated boson operators. A general construction is given for q-deformed boson operators, and explicit constructions of the same type are given for various 'deformed' algebras. A summary is given of some of the applications and potential applications. 39 refs., 2 figs
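
    For orientation, one widely used convention for the q-deformed boson operators mentioned here is the Biedenharn–Macfarlane oscillator (a textbook form, not necessarily the construction used in this paper):

        \[
          a\,a^{\dagger} - q\,a^{\dagger}a = q^{-N},
          \qquad [N, a^{\dagger}] = a^{\dagger}, \qquad [N, a] = -a,
        \]

    which reduces to the ordinary boson commutation relation \(aa^{\dagger} - a^{\dagger}a = 1\) in the limit \(q \to 1\).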

  1. The statistical stability phenomenon

    CERN Document Server

    Gorban, Igor I

    2017-01-01

    This monograph investigates violations of statistical stability of physical events, variables, and processes and develops a new physical-mathematical theory taking into consideration such violations – the theory of hyper-random phenomena. There are five parts. The first describes the phenomenon of statistical stability and its features, and develops methods for detecting violations of statistical stability, in particular when data is limited. The second part presents several examples of real processes of different physical nature and demonstrates the violation of statistical stability over broad observation intervals. The third part outlines the mathematical foundations of the theory of hyper-random phenomena, while the fourth develops the foundations of the mathematical analysis of divergent and many-valued functions. The fifth part contains theoretical and experimental studies of statistical laws where there is violation of statistical stability. The monograph should be of particular interest to engineers...

  2. Equilibrium statistical mechanics

    CERN Document Server

    Jackson, E Atlee

    2000-01-01

    Ideal as an elementary introduction to equilibrium statistical mechanics, this volume covers both classical and quantum methodology for open and closed systems. Introductory chapters familiarize readers with probability and microscopic models of systems, while additional chapters describe the general derivation of the fundamental statistical mechanics relationships. The final chapter contains 16 sections, each dealing with a different application, ordered according to complexity, from classical through degenerate quantum statistical mechanics. Key features include an elementary introduction t

  3. Applied statistics for economists

    CERN Document Server

    Lewis, Margaret

    2012-01-01

    This book is an undergraduate text that introduces students to commonly-used statistical methods in economics. Using examples based on contemporary economic issues and readily-available data, it not only explains the mechanics of the various methods, it also guides students to connect statistical results to detailed economic interpretations. Because the goal is for students to be able to apply the statistical methods presented, online sources for economic data and directions for performing each task in Excel are also included.

  4. Mineral industry statistics 1975

    Energy Technology Data Exchange (ETDEWEB)

    1978-01-01

    Production, consumption and marketing statistics are given for solid fuels (coal, peat), liquid fuels and gases (oil, natural gas), iron ore, bauxite and other minerals quarried in France in 1975. Accident statistics are also included. Production statistics are presented for the Overseas Departments and territories (French Guiana, New Caledonia, New Hebrides). An account of modifications in the mining field in 1975 is given. Concessions, exploitation permits, and permits solely for prospecting for mineral products are discussed. (In French)

  5. Lectures on statistical mechanics

    CERN Document Server

    Bowler, M G

    1982-01-01

    Anyone dissatisfied with the almost ritual dullness of many 'standard' texts in statistical mechanics will be grateful for the lucid explanation and generally reassuring tone. Aimed at securing firm foundations for equilibrium statistical mechanics, topics of great subtlety are presented transparently and enthusiastically. Very little mathematical preparation is required beyond elementary calculus and prerequisites in physics are limited to some elementary classical thermodynamics. Suitable as a basis for a first course in statistical mechanics, the book is an ideal supplement to more convent

  6. Introduction to Statistics

    Directory of Open Access Journals (Sweden)

    Mirjam Nielen

    2017-01-01

    Always wondered why research papers often present rather complicated statistical analyses? Or wondered how to properly analyse the results of a pragmatic trial from your own practice? This talk will give an overview of basic statistical principles and focus on the why of statistics, rather than on the how. This is a podcast of Mirjam's talk at the Veterinary Evidence Today conference, Edinburgh, November 2, 2016.

  7. Equilibrium statistical mechanics

    CERN Document Server

    Mayer, J E

    1968-01-01

    The International Encyclopedia of Physical Chemistry and Chemical Physics, Volume 1: Equilibrium Statistical Mechanics covers the fundamental principles and the development of theoretical aspects of equilibrium statistical mechanics. Statistical mechanics is the study of the connection between the macroscopic behavior of bulk matter and the microscopic properties of its constituent atoms and molecules. This book contains eight chapters, and begins with a presentation of the master equation used for the calculation of the fundamental thermodynamic functions. The succeeding chapters highlight t

  8. Contributions to statistics

    CERN Document Server

    Mahalanobis, P C

    1965-01-01

    Contributions to Statistics focuses on the processes, methodologies, and approaches involved in statistics. The book is presented to Professor P. C. Mahalanobis on the occasion of his 70th birthday. The selection first offers information on the recovery of ancillary information and combinatorial properties of partially balanced designs and association schemes. Discussions focus on combinatorial applications of the algebra of association matrices, sample size analogy, association matrices and the algebra of association schemes, and conceptual statistical experiments. The book then examines latt

  9. Safety significance evaluation system

    International Nuclear Information System (INIS)

    Lew, B.S.; Yee, D.; Brewer, W.K.; Quattro, P.J.; Kirby, K.D.

    1991-01-01

    This paper reports that the Pacific Gas and Electric Company (PG&E), in cooperation with ABZ, Incorporated and Science Applications International Corporation (SAIC), investigated the use of artificial intelligence-based programming techniques to assist utility personnel in regulatory compliance problems. The result of this investigation is that artificial intelligence-based programming techniques can successfully be applied to this problem. To demonstrate this, a general methodology was developed and several prototype systems based on this methodology were developed. The prototypes address U.S. Nuclear Regulatory Commission (NRC) event reportability requirements, technical specification compliance based on plant equipment status, and quality assurance assistance. This collection of prototype modules is named the safety significance evaluation system

  10. Predicting significant torso trauma.

    Science.gov (United States)

    Nirula, Ram; Talmor, Daniel; Brasel, Karen

    2005-07-01

    Identification of motor vehicle crash (MVC) characteristics associated with thoracoabdominal injury would advance the development of automatic crash notification systems (ACNS) by improving triage and response times. Our objective was to determine the relationships between MVC characteristics and thoracoabdominal trauma to develop a torso injury probability model. Drivers involved in crashes from 1993 to 2001 within the National Automotive Sampling System were reviewed. Relationships between torso injury and MVC characteristics were assessed using multivariate logistic regression. Receiver operating characteristic curves were used to compare the model to current ACNS models. There were a total of 56,466 drivers. Age, ejection, braking, avoidance, velocity, restraints, passenger-side impact, rollover, and vehicle weight and type were associated with injury (p < 0.05). The area under the receiver operating characteristic curve (83.9) was significantly greater than current ACNS models. We have developed a thoracoabdominal injury probability model that may improve patient triage when used with ACNS.
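
    Illustrative only: a logistic injury-probability model of the kind described, fit on synthetic stand-ins for crash characteristics (the NASS variables, sample and coefficients are not reproduced here).

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 5_000
        X = np.column_stack([
            rng.normal(40, 15, n),       # driver age (stand-in)
            rng.normal(35, 12, n),       # crash delta-velocity (stand-in)
            rng.integers(0, 2, n),       # restraint use, 0/1 (stand-in)
        ])
        # simulate torso injury from an assumed true logistic relationship
        logit = -6.0 + 0.02 * X[:, 0] + 0.12 * X[:, 1] - 0.8 * X[:, 2]
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

        model = LogisticRegression(max_iter=1000).fit(X, y)
        auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
        print(f"in-sample area under the ROC curve: {auc:.3f}")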

  11. Gas revenue increasingly significant

    International Nuclear Information System (INIS)

    Megill, R.E.

    1991-01-01

    This paper briefly describes the wellhead prices of natural gas compared to crude oil over the past 70 years. Although natural gas prices have never reached price parity with crude oil, the relative value of a gas BTU has been increasing. It is one of the reasons that the total amount of money coming from natural gas wells is becoming more significant. From 1920 to 1955 the revenue at the wellhead for natural gas was only about 10% of the money received by producers. Most of the money needed for exploration, development, and production came from crude oil. At present, however, over 40% of the money from the upstream portion of the petroleum industry is from natural gas. As a result, in a few short years natural gas may become 50% of the money revenues generated from wellhead production facilities

  12. Statistics in a Nutshell

    CERN Document Server

    Boslaugh, Sarah

    2008-01-01

    Need to learn statistics as part of your job, or want some help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference that's perfect for anyone with no previous background in the subject. This book gives you a solid understanding of statistics without being too simple, yet without the numbing complexity of most college texts. You get a firm grasp of the fundamentals and a hands-on understanding of how to apply them before moving on to the more advanced material that follows. Each chapter presents you with easy-to-follow descriptions illustrat

  13. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  14. Annual Statistical Supplement, 2002

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2002 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  15. Annual Statistical Supplement, 2010

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2010 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  16. Annual Statistical Supplement, 2007

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2007 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  17. Annual Statistical Supplement, 2001

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2001 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  18. Annual Statistical Supplement, 2016

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2016 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  19. Annual Statistical Supplement, 2011

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2011 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  20. Annual Statistical Supplement, 2005

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2005 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  1. Annual Statistical Supplement, 2015

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2015 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  2. Annual Statistical Supplement, 2003

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2003 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  3. Annual Statistical Supplement, 2017

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2017 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  4. Annual Statistical Supplement, 2008

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2008 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  5. Annual Statistical Supplement, 2014

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2014 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  6. Annual Statistical Supplement, 2004

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2004 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  7. Annual Statistical Supplement, 2000

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2000 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  8. Annual Statistical Supplement, 2009

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2009 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  9. Annual Statistical Supplement, 2006

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2006 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  10. Principles of statistics

    CERN Document Server

    Bulmer, M G

    1979-01-01

    There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo

  11. 100 statistical tests

    CERN Document Server

    Kanji, Gopal K

    2006-01-01

    This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.

  12. Statistical distribution sampling

    Science.gov (United States)

    Johnson, E. S.

    1975-01-01

    Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.

  13. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment.With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the

  14. Uranium chemistry: significant advances

    International Nuclear Information System (INIS)

    Mazzanti, M.

    2011-01-01

    The author reviews recent progress in uranium chemistry achieved in CEA laboratories. Like its neighbors in the Mendeleev chart, uranium undergoes hydrolysis, oxidation and disproportionation reactions which make the chemistry of these species in water highly complex. The study of the chemistry of uranium in an anhydrous medium has made it possible to correlate the structural and electronic differences observed in the interaction of uranium(III) and the lanthanides(III) with nitrogen or sulfur molecules with the effectiveness of these molecules in An(III)/Ln(III) separation via liquid-liquid extraction. Recent work on the redox reactivity of trivalent uranium U(III) in an organic medium with molecules such as water or an azide ion (N₃⁻) in stoichiometric quantities led to extremely interesting uranium aggregates, in particular those involved in actinide migration in the environment or in aggregation problems in the fuel processing cycle. Another significant advance was the discovery of a compound containing the uranyl ion with a degree of oxidation (V), UO₂⁺, obtained by oxidation of uranium(III). Recently chemists have succeeded in blocking the disproportionation reaction of uranyl(V) and in stabilizing polymetallic complexes of uranyl(V), opening the way to a systematic study of the reactivity and the electronic and magnetic properties of uranyl(V) compounds. (A.C.)

  15. Meaning and significance of

    Directory of Open Access Journals (Sweden)

    Ph D Student Roman Mihaela

    2011-05-01

    The concept of "public accountability" is a challenge for political science as a new concept in this area, still under debate and development, both in theory and practice. This paper is a theoretical approach displaying some definitions, relevant meanings and significance of the concept in political science. The importance of this concept is that although it was originally used as a tool to improve the effectiveness and efficiency of public governance, it has gradually become a purpose in itself. "Accountability" has become an image of good governance, first in the United States of America, then in the European Union. Nevertheless, the concept is vaguely defined and provides ambiguous images of good governance. This paper begins with the presentation of some general meanings of the concept as they emerge from specialized dictionaries and encyclopaedias and continues with the meanings developed in political science. The concept of "public accountability" is rooted in the economics and management literature, becoming increasingly relevant in today's political science, both in theory and discourse as well as in practice in formulating and evaluating public policies. A first conclusion that emerges from the analysis of the evolution of this term is that it requires conceptual clarification in political science. A clear definition will then enable an appropriate model of the system of public accountability in formulating and assessing public policies, in order to implement a system of assessment and monitoring thereof.

  16. Varying constants, black holes, and quantum gravity

    International Nuclear Information System (INIS)

    Carlip, S.

    2003-01-01

    Tentative observations and theoretical considerations have recently led to renewed interest in models of fundamental physics in which certain 'constants' vary in time. Assuming fixed black hole mass and the standard form of the Bekenstein-Hawking entropy, Davies, Davis and Lineweaver have argued that the laws of black hole thermodynamics disfavor models in which the fundamental electric charge e changes. I show that with these assumptions, similar considerations severely constrain 'varying speed of light' models, unless we are prepared to abandon cherished assumptions about quantum gravity. Relaxation of these assumptions permits sensible theories of quantum gravity with 'varying constants', but also eliminates the thermodynamic constraints, though the black hole mass spectrum may still provide some restrictions on the range of allowable models

  17. Significant Radionuclides Determination

    Energy Technology Data Exchange (ETDEWEB)

    Jo A. Ziegler

    2001-07-31

    The purpose of this calculation is to identify radionuclides that are significant to offsite doses from potential preclosure events for spent nuclear fuel (SNF) and high-level radioactive waste expected to be received at the potential Monitored Geologic Repository (MGR). In this calculation, high-level radioactive waste is included in references to DOE SNF. A previous document, "DOE SNF DBE Offsite Dose Calculations" (CRWMS M&O 1999b), calculated the source terms and offsite doses for Department of Energy (DOE) and Naval SNF for use in design basis event analyses. This calculation reproduces only DOE SNF work (i.e., no naval SNF work is included in this calculation) created in "DOE SNF DBE Offsite Dose Calculations" and expands the calculation to include DOE SNF expected to produce a high dose consequence (even though the quantity of the SNF is expected to be small) and SNF owned by commercial nuclear power producers. The calculation does not address any specific off-normal/DBE event scenarios for receiving, handling, or packaging of SNF. The results of this calculation are developed for comparative analysis to establish the important radionuclides and do not represent the final source terms to be used for license application. This calculation will be used as input to preclosure safety analyses and is performed in accordance with procedure AP-3.12Q, "Calculations", and is subject to the requirements of DOE/RW-0333P, "Quality Assurance Requirements and Description" (DOE 2000) as determined by the activity evaluation contained in "Technical Work Plan for: Preclosure Safety Analysis, TWP-MGR-SE-000010" (CRWMS M&O 2000b) in accordance with procedure AP-2.21Q, "Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities".

  18. Study of selected phenotype switching strategies in time varying environment

    Energy Technology Data Exchange (ETDEWEB)

    Horvath, Denis, E-mail: horvath.denis@gmail.com [Centre of Interdisciplinary Biosciences, Institute of Physics, Faculty of Science, P.J. Šafárik University in Košice, Jesenná 5, 040 01 Košice (Slovakia); Brutovsky, Branislav, E-mail: branislav.brutovsky@upjs.sk [Department of Biophysics, Institute of Physics, P.J. Šafárik University in Košice, Jesenná 5, 040 01 Košice (Slovakia)

    2016-03-22

    Population heterogeneity plays an important role across many research and real-world problems. Population heterogeneity relates to the ability of a population to cope with an environment change (or uncertainty), preventing its extinction. However, this ability is not always desirable, as can be exemplified by an intratumor heterogeneity which positively correlates with the development of resistance to therapy. The causation of population heterogeneity is therefore an intensively studied topic in biology and medicine. In this paper the evolution of a specific strategy of population diversification, phenotype switching, is studied at a conceptual level. The presented simulation model studies the evolution of a large population of asexual organisms in a time-varying environment represented by a stochastic Markov process. Each organism is equipped with a stochastic or nonlinear deterministic switching strategy realized by discrete-time models with evolvable parameters. We demonstrate that under rapidly varying exogenous conditions organisms operate in the vicinity of the bet-hedging strategy, while the deterministic patterns become relevant as the environmental variations are less frequent. Statistical characterization of the steady state regimes of the populations is done using the Hellinger and Kullback–Leibler functional distances and the Hamming distance. - Highlights: • Relation between phenotype switching and environment is studied. • The Markov chain Monte Carlo based model is developed. • Stochastic and deterministic strategies of phenotype switching are utilized. • Statistical measures of the dynamic heterogeneity reveal universal properties. • The results extend to higher lattice dimensions.

  19. Study of selected phenotype switching strategies in time varying environment

    International Nuclear Information System (INIS)

    Horvath, Denis; Brutovsky, Branislav

    2016-01-01

    Population heterogeneity plays an important role across many research and real-world problems. Population heterogeneity relates to the ability of a population to cope with an environment change (or uncertainty), preventing its extinction. However, this ability is not always desirable, as can be exemplified by an intratumor heterogeneity which positively correlates with the development of resistance to therapy. The causation of population heterogeneity is therefore an intensively studied topic in biology and medicine. In this paper the evolution of a specific strategy of population diversification, phenotype switching, is studied at a conceptual level. The presented simulation model studies the evolution of a large population of asexual organisms in a time-varying environment represented by a stochastic Markov process. Each organism is equipped with a stochastic or nonlinear deterministic switching strategy realized by discrete-time models with evolvable parameters. We demonstrate that under rapidly varying exogenous conditions organisms operate in the vicinity of the bet-hedging strategy, while the deterministic patterns become relevant as the environmental variations are less frequent. Statistical characterization of the steady state regimes of the populations is done using the Hellinger and Kullback–Leibler functional distances and the Hamming distance. - Highlights: • Relation between phenotype switching and environment is studied. • The Markov chain Monte Carlo based model is developed. • Stochastic and deterministic strategies of phenotype switching are utilized. • Statistical measures of the dynamic heterogeneity reveal universal properties. • The results extend to higher lattice dimensions.
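
    A conceptual sketch of the trade-off studied in these records (a two-state Markov environment and two phenotypes; all parameter values are hypothetical): the long-run log growth rate of a population as a function of its phenotype-switching probability.

        import numpy as np

        def log_growth(p_switch, p_flip=0.1, fit=1.5, unfit=0.5, T=20_000, seed=0):
            """Long-run log growth of a population with per-generation switching."""
            rng = np.random.default_rng(seed)
            env, frac_a, g = 0, 0.5, 0.0    # environment, fraction of phenotype A
            for _ in range(T):
                if rng.random() < p_flip:   # environment occasionally flips
                    env = 1 - env
                wa, wb = (fit, unfit) if env == 0 else (unfit, fit)
                mean_w = frac_a * wa + (1 - frac_a) * wb
                g += np.log(mean_w)                          # population growth
                frac_a = frac_a * wa / mean_w                # selection
                frac_a = frac_a * (1 - p_switch) + (1 - frac_a) * p_switch  # switching
            return g / T

        for p in (0.0, 0.01, 0.1, 0.5):
            print(f"switching probability {p:4.2f}: log growth {log_growth(p):+.4f}")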

  20. Fermi–Dirac Statistics

    Indian Academy of Sciences (India)

    IAS Admin

    Fermi–Dirac Statistics: Derivation and Consequences, by S Chaturvedi and Shyamal Biswas. Keywords: Pauli exclusion principle, Fermi–Dirac statistics, identical and indistinguishable particles, Fermi gas. Subhash Chaturvedi is at the University of Hyderabad; his current research interests include phase space descriptions.
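
    For reference, the distribution derived in this article is the standard Fermi–Dirac occupation of a single-particle level of energy \(\varepsilon\) (a textbook statement, not a quotation from the article):

        \[
          \langle n(\varepsilon) \rangle = \frac{1}{e^{(\varepsilon - \mu)/k_B T} + 1},
        \]

    so that \(0 \le \langle n(\varepsilon) \rangle \le 1\), as the Pauli exclusion principle requires.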

  1. Generalized interpolative quantum statistics

    International Nuclear Information System (INIS)

    Ramanathan, R.

    1992-01-01

    A generalized interpolative quantum statistics is presented by conjecturing a certain reordering of phase space due to the presence of possible exotic objects other than bosons and fermions. Such an interpolation, achieved through a Bose-counting strategy, predicts the existence of an infinite quantum Boltzmann-Gibbs statistics akin to the one recently discovered by Greenberg

  2. Handbook of Spatial Statistics

    CERN Document Server

    Gelfand, Alan E

    2010-01-01

    Offers an introduction detailing the evolution of the field of spatial statistics. This title focuses on the three main branches of spatial statistics: continuous spatial variation (point referenced data); discrete spatial variation, including lattice and areal unit data; and, spatial point patterns.

  3. Statistical tables 2003

    International Nuclear Information System (INIS)

    2003-01-01

    The energy statistical tables are a selection of statistical data for energies and countries from 1997 to 2002. They cover petroleum, natural gas, coal and electric power: production, external trade, consumption per sector, the 2002 energy balance, and graphs of long-term forecasts. (A.L.B.)

  4. Bayesian statistical inference

    Directory of Open Access Journals (Sweden)

    Bruno De Finetti

    2017-04-01

    This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993. Bayesian Statistical Inference is one of the last fundamental philosophical papers in which we can find De Finetti's essential approach to statistical inference.

  5. Practical statistics for educators

    CERN Document Server

    Ravid, Ruth

    2014-01-01

    Practical Statistics for Educators, Fifth Edition, is a clear and easy-to-follow text written specifically for education students in introductory statistics courses and in action research courses. It is also a valuable resource and guidebook for educational practitioners who wish to study their own settings.

  6. Thiele. Pioneer in statistics

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt

    This book studies the brilliant Danish 19th century astronomer T.N. Thiele, who made important contributions to statistics, actuarial science, astronomy and mathematics. The most important of these contributions in statistics are translated into English for the first time, and the text includes

  7. Applied Statistics with SPSS

    Science.gov (United States)

    Huizingh, Eelko K. R. E.

    2007-01-01

    Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…

  8. Cancer Statistics Animator

    Science.gov (United States)

    This tool allows users to animate cancer trends over time by cancer site and cause of death, race, and sex. Provides access to incidence, mortality, and survival. Select the type of statistic, variables, format, and then extract the statistics in a delimited format for further analyses.

  9. Energy statistics yearbook 2002

    International Nuclear Information System (INIS)

    2005-01-01

    The Energy Statistics Yearbook 2002 is a comprehensive collection of international energy statistics prepared by the United Nations Statistics Division. It is the forty-sixth in a series of annual compilations which commenced under the title World Energy Supplies in Selected Years, 1929-1950. It updates the statistical series shown in the previous issue. Supplementary series of monthly and quarterly data on production of energy may be found in the Monthly Bulletin of Statistics. The principal objective of the Yearbook is to provide a global framework of comparable data on long-term trends in the supply of mainly commercial primary and secondary forms of energy. Data for each type of fuel and aggregate data for the total mix of commercial fuels are shown for individual countries and areas and are summarized into regional and world totals. The data are compiled primarily from the annual energy questionnaire distributed by the United Nations Statistics Division and supplemented by official national statistical publications. Where official data are not available or are inconsistent, estimates are made by the Statistics Division based on governmental, professional or commercial materials. Estimates include, but are not limited to, extrapolated data based on partial year information, use of annual trends, trade data based on partner country reports, breakdowns of aggregated data as well as analysis of current energy events and activities

  10. Advances in statistics

    Science.gov (United States)

    Howard Stauffer; Nadav Nur

    2005-01-01

    The papers included in the Advances in Statistics section of the Partners in Flight (PIF) 2002 Proceedings represent a small sample of statistical topics of current importance to Partners In Flight research scientists: hierarchical modeling, estimation of detection probabilities, and Bayesian applications. Sauer et al. (this volume) examines a hierarchical model...

  11. Energy statistics yearbook 2001

    International Nuclear Information System (INIS)

    2004-01-01

    The Energy Statistics Yearbook 2001 is a comprehensive collection of international energy statistics prepared by the United Nations Statistics Division. It is the forty-fifth in a series of annual compilations which commenced under the title World Energy Supplies in Selected Years, 1929-1950. It updates the statistical series shown in the previous issue. Supplementary series of monthly and quarterly data on production of energy may be found in the Monthly Bulletin of Statistics. The principal objective of the Yearbook is to provide a global framework of comparable data on long-term trends in the supply of mainly commercial primary and secondary forms of energy. Data for each type of fuel and aggregate data for the total mix of commercial fuels are shown for individual countries and areas and are summarized into regional and world totals. The data are compiled primarily from the annual energy questionnaire distributed by the United Nations Statistics Division and supplemented by official national statistical publications. Where official data are not available or are inconsistent, estimates are made by the Statistics Division based on governmental, professional or commercial materials. Estimates include, but are not limited to, extrapolated data based on partial year information, use of annual trends, trade data based on partner country reports, breakdowns of aggregated data as well as analysis of current energy events and activities

  12. Energy statistics yearbook 2000

    International Nuclear Information System (INIS)

    2002-01-01

    The Energy Statistics Yearbook 2000 is a comprehensive collection of international energy statistics prepared by the United Nations Statistics Division. It is the forty-third in a series of annual compilations which commenced under the title World Energy Supplies in Selected Years, 1929-1950. It updates the statistical series shown in the previous issue. Supplementary series of monthly and quarterly data on production of energy may be found in the Monthly Bulletin of Statistics. The principal objective of the Yearbook is to provide a global framework of comparable data on long-term trends in the supply of mainly commercial primary and secondary forms of energy. Data for each type of fuel and aggregate data for the total mix of commercial fuels are shown for individual countries and areas and are summarized into regional and world totals. The data are compiled primarily from the annual energy questionnaire distributed by the United Nations Statistics Division and supplemented by official national statistical publications. Where official data are not available or are inconsistent, estimates are made by the Statistics Division based on governmental, professional or commercial materials. Estimates include, but are not limited to, extrapolated data based on partial year information, use of annual trends, trade data based on partner country reports, breakdowns of aggregated data as well as analysis of current energy events and activities

  13. Temperature dependent anomalous statistics

    International Nuclear Information System (INIS)

    Das, A.; Panda, S.

    1991-07-01

    We show that the anomalous statistics which arise in 2 + 1 dimensional Chern-Simons gauge theories can become temperature dependent in the most natural way. We analyze and show that a statistics-changing phase transition can happen in these theories only as T → ∞. (author). 14 refs

  14. Introduction to Bayesian statistics

    CERN Document Server

    Bolstad, William M

    2017-01-01

    There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for Multivariate Normal mean vector; Bayesian inference for Multiple Linear Regression Model; and Computati...

  15. Understanding advanced statistical methods

    CERN Document Server

    Westfall, Peter

    2013-01-01

    Introduction: Probability, Statistics, and Science; Reality, Nature, Science, and Models; Statistical Processes: Nature, Design and Measurement, and Data; Models; Deterministic Models; Variability; Parameters; Purely Probabilistic Statistical Models; Statistical Models with Both Deterministic and Probabilistic Components; Statistical Inference; Good and Bad Models; Uses of Probability Models; Random Variables and Their Probability Distributions; Introduction; Types of Random Variables: Nominal, Ordinal, and Continuous; Discrete Probability Distribution Functions; Continuous Probability Distribution Functions; Some Calculus-Derivatives and Least Squares; More Calculus-Integrals and Cumulative Distribution Functions; Probability Calculation and Simulation; Introduction; Analytic Calculations, Discrete and Continuous Cases; Simulation-Based Approximation; Generating Random Numbers; Identifying Distributions; Introduction; Identifying Distributions from Theory Alone; Using Data: Estimating Distributions via the Histogram; Quantiles: Theoretical and Data-Based Estimate...

  16. Time-varying surface electromyography topography as a prognostic tool for chronic low back pain rehabilitation.

    Science.gov (United States)

    Hu, Yong; Kwok, Jerry Weilun; Tse, Jessica Yuk-Hang; Luk, Keith Dip-Kei

    2014-06-01

    responding and nonresponding groups. The relative area (RA) and relative width (RW) of RMSD at flexion and extension in the responding group were significantly lower than those in the nonresponding group (p<.05). The areas under the ROC curve of RA and RW of RMSD at flexion and extension were greater than 0.7 and were statistically significant. The quantitative time-varying analysis of sEMG topography showed a significant difference between the healthy and LBP groups. The discrepancies in quantitative dynamic sEMG topography between the LBP group and the normal group, in terms of RA and RW of RMSD at flexion and extension, were able to identify those LBP subjects who would respond to a conservative rehabilitation program focused on functional restoration of the lumbar muscle. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Reduced risk of breast cancer associated with recreational physical activity varies by HER2 status

    International Nuclear Information System (INIS)

    Ma, Huiyan; Xu, Xinxin; Ursin, Giske; Simon, Michael S; Marchbanks, Polly A; Malone, Kathleen E; Lu, Yani; McDonald, Jill A; Folger, Suzanne G; Weiss, Linda K; Sullivan-Halley, Jane; Deapen, Dennis M; Press, Michael F; Bernstein, Leslie

    2015-01-01

    Convincing epidemiologic evidence indicates that physical activity is inversely associated with breast cancer risk. Whether this association varies by the tumor protein expression status of the estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), or p53 is unclear. We evaluated the effects of recreational physical activity on risk of invasive breast cancer classified by the four biomarkers, fitting multivariable unconditional logistic regression models to data from 1195 case and 2012 control participants in the population-based Women's Contraceptive and Reproductive Experiences Study. Self-reported recreational physical activity at different life periods was measured as average annual metabolic equivalents of energy expenditure [MET]-hours per week. Our biomarker-specific analyses showed that lifetime recreational physical activity was negatively associated with the risks of ER-positive (ER+) and of HER2-negative (HER2−) subtypes (both P-trend ≤ 0.04), but not with other subtypes (all P-trend > 0.10). Analyses using combinations of biomarkers indicated that risk of invasive breast cancer varied only by HER2 status. Risk of HER2− breast cancer decreased with increasing number of MET-hours of recreational physical activity in each specific life period examined, although some trend tests were only marginally statistically significant (all P-trend ≤ 0.06). The test for homogeneity of trends (HER2− vs. HER2+) reached statistical significance only when evaluating physical activity during the first 10 years after menarche (P-homogeneity = 0.03). Our data suggest that physical activity reduces risk of invasive breast cancers that lack HER2 overexpression, increasing our understanding of the biological mechanisms by which physical activity acts.

  18. "Mina olin siin" esilinastub Karlovy Varys

    Index Scriptorium Estoniae

    2008-01-01

    Rene Vilbre's youth film "Mina olin siin" ("I Was Here"), based on Sass Henno's novel "Mina olin siin. Esimene arest" with a screenplay by Ilmar Raag, premieres at the Karlovy Vary film festival. The film competes in the "East of the West" competition programme. R. Vilbre, R. Sildos, R. Kaljujärv, and T. Tuisk will travel to present it

  19. Tracking time-varying coefficient-functions

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Joensen, Alfred K.

    2000-01-01

    is a combination of recursive least squares with exponential forgetting and local polynomial regression. It is argued that it is appropriate to let the forgetting factor vary with the value of the external signal which is the argument of the coefficient functions. Some of the key properties of the modified method ... are studied by simulation ...
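
    The core recursion named in this abstract, recursive least squares with exponential forgetting, can be sketched in a few lines. The signal-dependent forgetting factor lam(u) below is a made-up illustration of the idea of letting forgetting vary with the external signal; it is not the authors' actual choice.

        # Sketch: RLS with a (signal-dependent) exponential forgetting factor.
        import numpy as np

        def rls_step(theta, P, x, y, lam):
            """One recursive least squares update with forgetting factor lam."""
            Px = P @ x
            k = Px / (lam + x @ Px)              # gain vector
            theta = theta + k * (y - x @ theta)  # coefficient update
            P = (P - np.outer(k, Px)) / lam      # covariance update
            return theta, P

        rng = np.random.default_rng(2)
        theta, P = np.zeros(2), 1e3 * np.eye(2)
        for t in range(500):
            u = np.sin(0.01 * t)                           # external signal
            x = np.array([1.0, u])                         # regressors
            y = 0.3 + (1.0 + 0.5 * np.sin(0.005 * t)) * u + 0.05 * rng.normal()
            lam = 0.95 + 0.04 * abs(u)                     # assumed lam(u)
            theta, P = rls_step(theta, P, x, y, lam)
        print(theta)   # tracks the slowly varying intercept and slope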

  20. The film buff's water of life flows in Karlovy Vary / Margit Tõnson

    Index Scriptorium Estoniae

    Tõnson, Margit, 1978-

    2010-01-01

    On the Karlovy Vary international film festival and the films "Mr. Nobody" (dir. Jaco Van Dormael), "Kasside ema Teresa" ("Mother Teresa of Cats", dir. Pawel Sala) and "The Arbor" (dir. Clio Barnard). Includes a list of the winning works and of the Estonian feature films shown at the festival in recent years

  1. Ellipsometry with randomly varying polarization states

    NARCIS (Netherlands)

    Liu, F.; Lee, C. J.; Chen, J. Q.; E. Louis,; van der Slot, P. J. M.; Boller, K. J.; F. Bijkerk,

    2012-01-01

    We show that, under the right conditions, one can make highly accurate polarization-based measurements without knowing the absolute polarization state of the probing light field. It is shown that light, passed through a randomly varying birefringent material, has a well-defined orbit on the Poincaré sphere ...

  2. Õunpuu successful in Karlovy Vary

    Index Scriptorium Estoniae

    2010-01-01

    At the 45th Karlovy Vary film festival, Veiko Õunpuu's film "Püha Tõnu kiusamine" ("The Temptation of St. Tony") received a special mention in the "East of the West" competition programme. The main prize went to the Romanian Cristi Puiu's film "Aurora". The grand prix went to Agustí Vila's film "La mosquitera". Other prize winners are also listed

  3. Time varying determinants of bond flows to emerging markets

    Directory of Open Access Journals (Sweden)

    Yasemin Erduman

    2016-06-01

    Full Text Available This paper investigates the time varying nature of the determinants of bond flows with a focus on the global financial crisis period. We estimate a time varying regression model using Bayesian estimation methods, where the posterior distribution is approximated by a Gibbs sampling algorithm. Our findings suggest that the interest rate differential is the most significant pull factor of portfolio bond flows, along with the inflation rate, while the growth rate does not play a significant role. Among the push factors, global liquidity is the most important driver of bond flows. It mattered most when unconventional monetary easing policies were first announced; its importance as a determinant of portfolio bond flows decreases over time, starting with the Eurozone crisis, and diminishes with the tapering talk. Global risk appetite and the risk perception towards the emerging countries also have relatively small and stable significant effects on bond flows.
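
    The paper's estimator is a time-varying-parameter regression fitted by Gibbs sampling. The sketch below shows the Gibbs machinery on a deliberately simplified static-coefficient model (conjugate Bayesian linear regression); the actual model additionally lets the coefficients drift over time. All data and priors are hypothetical.

        # Simplified Gibbs sampler: alternate draws of coefficients and noise variance.
        import numpy as np

        rng = np.random.default_rng(3)
        n, p = 200, 2
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        y = X @ np.array([0.5, 1.5]) + 0.7 * rng.normal(size=n)

        a0, b0, tau2 = 2.0, 1.0, 10.0        # assumed prior hyperparameters
        beta, sig2, draws = np.zeros(p), 1.0, []
        for it in range(2000):
            # beta | sig2, y ~ Normal
            cov = np.linalg.inv(X.T @ X / sig2 + np.eye(p) / tau2)
            beta = rng.multivariate_normal(cov @ X.T @ y / sig2, cov)
            # sig2 | beta, y ~ Inverse-Gamma
            resid = y - X @ beta
            sig2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + resid @ resid / 2))
            draws.append(beta)
        print(np.mean(draws[500:], axis=0))  # posterior mean after burn-in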

  4. Statistics at a glance.

    Science.gov (United States)

    Ector, Hugo

    2010-12-01

    I still remember my first book on statistics: "Elementary Statistics with Applications in Medicine and the Biological Sciences" by Frederick E. Croxton. For me, it was the start of pursuing an understanding of statistics in daily life and in medical practice. It was the first volume in a long line of books. In his introduction, Croxton claims that "nearly everyone involved in any aspect of medicine needs to have some knowledge of statistics". The reality is that for many clinicians, statistics are limited to a "P ... statistical methods. They have never had the opportunity to learn concise and clear descriptions of the key features. I have experienced how some authors can describe difficult methods in well understandable language. Others fail completely. As a teacher, I tell my students that life is impossible without a basic knowledge of statistics. This feeling has resulted in an annual seminar of 90 minutes. This tutorial is the summary of that seminar. It is a summary and a transcription of the best pages I have detected.

  5. Statistical mechanics in JINR

    International Nuclear Information System (INIS)

    Tonchev, N.; Shumovskij, A.S.

    1986-01-01

    The history of investigations conducted at the JINR in the field of statistical mechanics is presented, beginning with the fundamental works by N.N. Bogolyubov on the microscopic theory of superconductivity. The ideas and methods introduced in these works have largely determined the development of statistical mechanics at the JINR, and the Hartree-Fock-Bogolyubov variational principle has become an important method of modern nuclear theory. A brief review of the main achievements connected with the development of statistical mechanics methods and their application in different fields of physical science is given.

  6. The nature of statistics

    CERN Document Server

    Wallis, W Allen

    2014-01-01

    Focusing on everyday applications as well as those of scientific research, this classic of modern statistical methods requires little to no mathematical background. Readers develop basic skills for evaluating and using statistical data. Lively, relevant examples include applications to business, government, social and physical sciences, genetics, medicine, and public health. "W. Allen Wallis and Harry V. Roberts have made statistics fascinating." - The New York Times "The authors have set out, with considerable success, to write a text which would be of interest and value to the student who,

  7. AP statistics crash course

    CERN Document Server

    D'Alessio, Michael

    2012-01-01

    AP Statistics Crash Course - Gets You a Higher Advanced Placement Score in Less Time Crash Course is perfect for the time-crunched student, the last-minute studier, or anyone who wants a refresher on the subject. AP Statistics Crash Course gives you: Targeted, Focused Review - Study Only What You Need to Know Crash Course is based on an in-depth analysis of the AP Statistics course description outline and actual Advanced Placement test questions. It covers only the information tested on the exam, so you can make the most of your valuable study time. Our easy-to-read format covers: exploring da

  8. Statistical deception at work

    CERN Document Server

    Mauro, John

    2013-01-01

    Written to reveal statistical deceptions often thrust upon unsuspecting journalists, this book views the use of numbers from a public perspective. Illustrating how the statistical naivete of journalists often nourishes quantitative misinformation, the author's intent is to make journalists more critical appraisers of numerical data so that in reporting them they do not deceive the public. The book frequently uses actual reported examples of misused statistical data reported by mass media and describes how journalists can avoid being taken in by them. Because reports of survey findings seldom g

  9. Statistical Group Comparison

    CERN Document Server

    Liao, Tim Futing

    2011-01-01

    An incomparably useful examination of statistical methods for comparisonThe nature of doing science, be it natural or social, inevitably calls for comparison. Statistical methods are at the heart of such comparison, for they not only help us gain understanding of the world around us but often define how our research is to be carried out. The need to compare between groups is best exemplified by experiments, which have clearly defined statistical methods. However, true experiments are not always possible. What complicates the matter more is a great deal of diversity in factors that are not inde

  10. Statistical Pattern Recognition

    CERN Document Server

    Webb, Andrew R

    2011-01-01

    Statistical pattern recognition relates to the use of statistical techniques for analysing data measurements in order to extract information and make justified decisions.  It is a very active area of study and research, which has seen many advances in recent years. Applications such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition, all require robust and efficient pattern recognition techniques. This third edition provides an introduction to statistical pattern theory and techniques, with material drawn from a wide range of fields,

  11. Mineral statistics yearbook 1994

    International Nuclear Information System (INIS)

    1994-01-01

    A summary of mineral production in Saskatchewan was compiled and presented as a reference manual. Statistical information on fuel minerals such as crude oil, natural gas, liquefied petroleum gas and coal, and on industrial and metallic minerals, such as potash, sodium sulphate, salt and uranium, was provided in every conceivable variety of tables. Production statistics and the disposition and value of sales of industrial and metallic minerals were also made available. Statistical data on the drilling of oil and gas reservoirs and crown land disposition were also included. figs., tabs

  12. Evolutionary Statistical Procedures

    CERN Document Server

    Baragona, Roberto; Poli, Irene

    2011-01-01

    This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions a

  13. Methods of statistical physics

    CERN Document Server

    Akhiezer, Aleksandr I

    1981-01-01

    Methods of Statistical Physics is an exposition of the tools of statistical mechanics, which evaluates the kinetic equations of classical and quantized systems. The book also analyzes the equations of macroscopic physics, such as the equations of hydrodynamics for normal and superfluid liquids and macroscopic electrodynamics. The text gives particular attention to the study of quantum systems. This study begins with a discussion of problems of quantum statistics with a detailed description of the basics of quantum mechanics along with the theory of measurement. An analysis of the asymptotic be

  14. Cancer Data and Statistics Tools

    Science.gov (United States)

    Cancer Statistics Tools; United States Cancer Statistics: Data Visualizations ...

  15. Do effects of common case-mix adjusters on patient experiences vary across patient groups?

    Science.gov (United States)

    de Boer, Dolf; van der Hoek, Lucas; Rademakers, Jany; Delnoij, Diana; van den Berg, Michael

    2017-11-22

    Many survey studies in health care adjust for demographic characteristics such as age, gender, educational attainment and general health when performing statistical analyses. Whether the effects of these demographic characteristics are consistent between patient groups remains to be determined. This is important as the rationale for adjustment is often that demographic sub-groups differ in their so-called 'response tendency'. This rationale may be less convincing if the effects of response tendencies vary across patient groups. The present paper examines whether the impact of these characteristics on patients' global rating of care varies across patient groups. Secondary analyses using multi-level regression models were performed on a dataset including 32 different patient groups and 145,578 observations. For each demographic variable, the 95% expected range of case-mix coefficients across patient groups is presented. In addition, we report whether the variance of coefficients for demographic variables across patient groups is significant. Overall, men, elderly, lower educated people and people in good health tend to give higher global ratings. However, these effects varied significantly across patient groups and included the possibility of no effect or an opposite effect in some patient groups. The response tendency attributed to demographic characteristics - such as older respondents being milder, or higher educated respondents being more critical - is not general or universal. As such, the mechanism linking demographic characteristics to survey results on patient experiences with quality of care is more complicated than a general response tendency. It is possible that the response tendency interacts with patient group, but it is also possible that other mechanisms are at play.
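
    The study's key summary, the 95% expected range of a case-mix coefficient across patient groups, can be approximated by fitting the same model in each group and taking the mean +/- 1.96 SD of the group-specific coefficients. The sketch below does this for a single simulated "age" effect; the 32 groups, effect sizes and outcome scale are all invented.

        # Hypothetical 95% expected range of a case-mix coefficient across groups.
        import numpy as np

        rng = np.random.default_rng(4)
        coefs = []
        for g in range(32):                      # 32 patient groups, as in the study
            n = 400
            age = rng.normal(60, 12, n)
            slope = rng.normal(0.02, 0.01)       # assumed group-specific age effect
            rating = 7 + slope * (age - 60) + rng.normal(0, 1, n)
            X = np.column_stack([np.ones(n), age])
            coefs.append(np.linalg.lstsq(X, rating, rcond=None)[0][1])

        coefs = np.array(coefs)
        lo, hi = coefs.mean() - 1.96 * coefs.std(), coefs.mean() + 1.96 * coefs.std()
        print(f"95% expected range of the age coefficient: [{lo:.3f}, {hi:.3f}]")
        # If this range straddles zero, the group-level variation admits
        # no effect or an opposite effect in some groups, as the paper found.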

  16. Spatially varying predictors of teenage birth rates among counties in the United States

    Directory of Open Access Journals (Sweden)

    Carla Shoff

    2012-09-01

    Full Text Available BACKGROUND Limited information is available about teenage pregnancy and childbearing in rural areas, even though approximately 20 percent of the nation's youth live in rural areas. Identifying whether there are differences in the teenage birth rate (TBR across metropolitan and nonmetropolitan areas is important because these differences may reflect modifiable ecological-level influences such as education, employment, laws, healthcare infrastructure, and policies that could potentially reduce the TBR. OBJECTIVE The goals of this study are to investigate whether there are spatially varying relationships between the TBR and the independent variables, and if so, whether these associations differ between metropolitan and nonmetropolitan counties. METHODS We explore the heterogeneity within metropolitan/nonmetropolitan county groups separately using geographically weighted regression (GWR, and investigate the difference between metropolitan/nonmetropolitan counties using spatial regime models with spatial errors. These analyses were applied to county-level data from the National Center for Health Statistics and the US Census Bureau. RESULTS GWR results suggested that non-stationarity exists in the associations between TBR and determinants within metropolitan/nonmetropolitan groups. The spatial regime analysis indicated that the effect of socioeconomic disadvantage on TBR significantly varied by the metropolitan status of counties. CONCLUSIONS While the spatially varying relationships between the TBR and independent variables were found within each metropolitan status of counties, only the magnitude of the impact of the socioeconomic disadvantage index is significantly stronger among metropolitan counties than nonmetropolitan counties. Our findings suggested that place-specific policies for the disadvantaged groups in a county could be implemented to reduce TBR in the US.
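
    Geographically weighted regression, the method used above to detect spatially varying associations, amounts to a weighted least squares fit at each location with distance-decay weights. The sketch below uses a Gaussian kernel and entirely synthetic county coordinates, covariate and TBR values; the bandwidth is arbitrary.

        # Minimal GWR: local weighted least squares with Gaussian kernel weights.
        import numpy as np

        def gwr_coefficients(coords, X, y, bandwidth):
            """Return a local coefficient vector for every location."""
            betas = []
            for i in range(len(coords)):
                d = np.linalg.norm(coords - coords[i], axis=1)
                w = np.exp(-(d / bandwidth) ** 2)    # Gaussian distance decay
                XtW = X.T * w                        # == X.T @ diag(w)
                betas.append(np.linalg.solve(XtW @ X, XtW @ y))
            return np.array(betas)

        rng = np.random.default_rng(5)
        n = 300
        coords = rng.uniform(0, 100, (n, 2))         # made-up county centroids
        disadvantage = rng.normal(size=n)
        # assumed: the disadvantage effect drifts smoothly from west to east
        tbr = 30 + (2 + 0.03 * coords[:, 0]) * disadvantage + rng.normal(0, 2, n)
        X = np.column_stack([np.ones(n), disadvantage])
        local = gwr_coefficients(coords, X, tbr, bandwidth=25.0)
        print(local[:, 1].min(), local[:, 1].max())  # non-stationary slope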

  17. Do effects of common case-mix adjusters on patient experiences vary across patient groups?

    Directory of Open Access Journals (Sweden)

    Dolf de Boer

    2017-11-01

    Full Text Available Abstract Background Many survey studies in health care adjust for demographic characteristics such as age, gender, educational attainment and general health when performing statistical analyses. Whether the effects of these demographic characteristics are consistent between patient groups remains to be determined. This is important as the rationale for adjustment is often that demographic sub-groups differ in their so-called ‘response tendency’. This rationale may be less convincing if the effects of response tendencies vary across patient groups. The present paper examines whether the impact of these characteristics on patients’ global rating of care varies across patient groups. Methods Secondary analyses using multi-level regression models were performed on a dataset including 32 different patient groups and 145,578 observations. For each demographic variable, the 95% expected range of case-mix coefficients across patient groups is presented. In addition, we report whether the variance of coefficients for demographic variables across patient groups is significant. Results Overall, men, elderly, lower educated people and people in good health tend to give higher global ratings. However, these effects varied significantly across patient groups and included the possibility of no effect or an opposite effect in some patient groups. Conclusion The response tendency attributed to demographic characteristics – such as older respondents being milder, or higher educated respondents being more critical – is not general or universal. As such, the mechanism linking demographic characteristics to survey results on patient experiences with quality of care is more complicated than a general response tendency. It is possible that the response tendency interacts with patient group, but it is also possible that other mechanisms are at play.

  18. Breast cancer statistics, 2011.

    Science.gov (United States)

    DeSantis, Carol; Siegel, Rebecca; Bandi, Priti; Jemal, Ahmedin

    2011-01-01

    In this article, the American Cancer Society provides an overview of female breast cancer statistics in the United States, including trends in incidence, mortality, survival, and screening. Approximately 230,480 new cases of invasive breast cancer and 39,520 breast cancer deaths are expected to occur among US women in 2011. Breast cancer incidence rates were stable among all racial/ethnic groups from 2004 to 2008. Breast cancer death rates have been declining since the early 1990s for all women except American Indians/Alaska Natives, among whom rates have remained stable. Disparities in breast cancer death rates are evident by state, socioeconomic status, and race/ethnicity. While significant declines in mortality rates were observed for 36 states and the District of Columbia over the past 10 years, rates for 14 states remained level. Analyses by county-level poverty rates showed that the decrease in mortality rates began later and was slower among women residing in poor areas. As a result, the highest breast cancer death rates shifted from the affluent areas to the poor areas in the early 1990s. Screening rates continue to be lower in poor women compared with non-poor women, despite much progress in increasing mammography utilization. In 2008, 51.4% of poor women had undergone a screening mammogram in the past 2 years compared with 72.8% of non-poor women. Encouraging patients aged 40 years and older to have annual mammography and a clinical breast examination is the single most important step that clinicians can take to reduce suffering and death from breast cancer. Clinicians should also ensure that patients at high risk of breast cancer are identified and offered appropriate screening and follow-up. Continued progress in the control of breast cancer will require sustained and increased efforts to provide high-quality screening, diagnosis, and treatment to all segments of the population. Copyright © 2011 American Cancer Society, Inc.

  19. Elements of statistical thermodynamics

    CERN Document Server

    Nash, Leonard K

    2006-01-01

    Encompassing essentially all aspects of statistical mechanics that appear in undergraduate texts, this concise, elementary treatment shows how an atomic-molecular perspective yields new insights into macroscopic thermodynamics. 1974 edition.

  20. VA PTSD Statistics

    Data.gov (United States)

    Department of Veterans Affairs — National-level, VISN-level, and/or VAMC-level statistics on the numbers and percentages of users of VHA care from the Northeast Program Evaluation Center (NEPEC)...

  1. Statistical nuclear reactions

    International Nuclear Information System (INIS)

    Hilaire, S.

    2001-01-01

    A review of the statistical model of nuclear reactions is presented. The main relations are described, together with the ingredients necessary to perform practical calculations. In addition, a substantial overview of the width fluctuation correction factor is given. (author)

  2. Plague Maps and Statistics

    Science.gov (United States)

    Plague in the United States: Plague was first introduced ... them at higher risk. Reported Cases of Human Plague - United States, 1970-2016: Since the mid-20th ...

  3. Statistical Measures of Marksmanship

    National Research Council Canada - National Science Library

    Johnson, Richard

    2001-01-01

    .... This report describes objective statistical procedures to measure both rifle marksmanship accuracy, the proximity of an array of shots to the center of mass of a target, and marksmanship precision...

  4. Titanic: A Statistical Exploration.

    Science.gov (United States)

    Takis, Sandra L.

    1999-01-01

    Uses the available data about the Titanic's passengers to interest students in exploring categorical data and the chi-square distribution. Describes activities incorporated into a statistics class and gives additional resources for collecting information about the Titanic. (ASK)

  5. Data and Statistics

    Science.gov (United States)

    Sickle cell ... 1999 through 2002. This drop coincided with the introduction in 2000 of a vaccine that protects against ...

  6. CDC WONDER: Cancer Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The United States Cancer Statistics (USCS) online databases in WONDER provide cancer incidence and mortality data for the United States for the years since 1999, by...

  7. Probability and Statistical Inference

    OpenAIRE

    Prosper, Harrison B.

    2006-01-01

    These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.

  8. On quantum statistical inference

    NARCIS (Netherlands)

    Barndorff-Nielsen, O.E.; Gill, R.D.; Jupp, P.E.

    2003-01-01

    Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, developments in the theory of quantum measurements have

  9. CMS Statistics Reference Booklet

    Data.gov (United States)

    U.S. Department of Health & Human Services — The annual CMS Statistics reference booklet provides a quick reference for summary information about health expenditures and the Medicare and Medicaid health...

  10. Statistical mechanics of superconductivity

    CERN Document Server

    Kita, Takafumi

    2015-01-01

    This book provides a theoretical, step-by-step comprehensive explanation of superconductivity for undergraduate and graduate students who have completed elementary courses on thermodynamics and quantum mechanics. To this end, it adopts the unique approach of starting with the statistical mechanics of quantum ideal gases and successively adding and clarifying elements and techniques indispensible for understanding it. They include the spin-statistics theorem, second quantization, density matrices, the Bloch–De Dominicis theorem, the variational principle in statistical mechanics, attractive interaction, and bound states. Ample examples of their usage are also provided in terms of topics from advanced statistical mechanics such as two-particle correlations of quantum ideal gases, derivation of the Hartree–Fock equations, and Landau’s Fermi-liquid theory, among others. With these preliminaries, the fundamental mean-field equations of superconductivity are derived with maximum mathematical clarity based on ...

  11. Statistical electromagnetics: Complex cavities

    NARCIS (Netherlands)

    Naus, H.W.L.

    2008-01-01

    A selection of the literature on the statistical description of electromagnetic fields and complex cavities is concisely reviewed. Some essential concepts, for example, the application of the central limit theorem and the maximum entropy principle, are scrutinized. Implicit assumptions, biased

  12. Statistics of Extremes

    KAUST Repository

    Davison, Anthony C.; Huser, Raphaël

    2015-01-01

    Statistics of extremes concerns inference for rare events. Often the events have never yet been observed, and their probabilities must therefore be estimated by extrapolation of tail models fitted to available data. Because data concerning the event

  13. Infant Statistical Learning

    Science.gov (United States)

    Saffran, Jenny R.; Kirkham, Natasha Z.

    2017-01-01

    Perception involves making sense of a dynamic, multimodal environment. In the absence of mechanisms capable of exploiting the statistical patterns in the natural world, infants would face an insurmountable computational problem. Infant statistical learning mechanisms facilitate the detection of structure. These abilities allow the infant to compute across elements in their environmental input, extracting patterns for further processing and subsequent learning. In this selective review, we summarize findings that show that statistical learning is both a broad and flexible mechanism (supporting learning from different modalities across many different content areas) and input specific (shifting computations depending on the type of input and goal of learning). We suggest that statistical learning not only provides a framework for studying language development and object knowledge in constrained laboratory settings, but also allows researchers to tackle real-world problems, such as multilingualism, the role of ever-changing learning environments, and differential developmental trajectories. PMID:28793812

  14. Transport statistics 1996

    CSIR Research Space (South Africa)

    Shepperson, L

    1997-12-01

    Full Text Available This publication contains transport and related statistics on roads, vehicles, infrastructure, passengers, freight, rail, air, maritime and road traffic, and international comparisons. The information compiled in this publication has been gathered...

  15. Visuanimation in statistics

    KAUST Repository

    Genton, Marc G.; Castruccio, Stefano; Crippa, Paola; Dutta, Subhajit; Huser, Raphaël; Sun, Ying; Vettori, Sabrina

    2015-01-01

    This paper explores the use of visualization through animations, coined visuanimation, in the field of statistics. In particular, it illustrates the embedding of animations in the paper itself and the storage of larger movies in the online

  16. Playing at Statistical Mechanics

    Science.gov (United States)

    Clark, Paul M.; And Others

    1974-01-01

    Discussed are the applications of counting techniques of a sorting game to distributions and concepts in statistical mechanics. Included are the following distributions: Fermi-Dirac, Bose-Einstein, and most probable. (RH)

  17. Illinois travel statistics, 2008

    Science.gov (United States)

    2009-01-01

    The 2008 Illinois Travel Statistics publication is assembled to provide detailed traffic : information to the different users of traffic data. While most users of traffic data at this level : of detail are within the Illinois Department of Transporta...

  18. Illinois travel statistics, 2009

    Science.gov (United States)

    2010-01-01

    The 2009 Illinois Travel Statistics publication is assembled to provide detailed traffic : information to the different users of traffic data. While most users of traffic data at this level : of detail are within the Illinois Department of Transporta...

  19. Cholesterol Facts and Statistics

    Science.gov (United States)

    High Cholesterol Statistics and Maps; High Cholesterol Facts; High Cholesterol Maps ... Deo R, et al. Heart disease and stroke statistics—2017 update: a report from the American Heart ...

  20. Illinois travel statistics, 2010

    Science.gov (United States)

    2011-01-01

    The 2010 Illinois Travel Statistics publication is assembled to provide detailed traffic : information to the different users of traffic data. While most users of traffic data at this level : of detail are within the Illinois Department of Transporta...

  1. EDI Performance Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — This section contains statistical information and reports related to the percentage of electronic transactions being sent to Medicare contractors in the formats...

  2. Information theory and statistics

    CERN Document Server

    Kullback, Solomon

    1968-01-01

    Highly useful text studies logarithmic measures of information and their application to testing statistical hypotheses. Includes numerous worked examples and problems. References. Glossary. Appendix. 1968 2nd, revised edition.

  3. Boating Accident Statistics

    Data.gov (United States)

    Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...

  4. Medicaid Drug Claims Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicaid Drug Claims Statistics CD is a useful tool that conveniently breaks up Medicaid claim counts and separates them by quarter and includes an annual count.

  5. Statistical theory of heat

    CERN Document Server

    Scheck, Florian

    2016-01-01

    Scheck’s textbook starts with a concise introduction to classical thermodynamics, including geometrical aspects. Then a short introduction to probabilities and statistics lays the basis for the statistical interpretation of thermodynamics. Phase transitions, discrete models and the stability of matter are explained in great detail. Thermodynamics has a special role in theoretical physics. Due to the general approach of thermodynamics the field has a bridging function between several areas like the theory of condensed matter, elementary particle physics, astrophysics and cosmology. The classical thermodynamics describes predominantly averaged properties of matter, reaching from few particle systems and state of matter to stellar objects. Statistical Thermodynamics covers the same fields, but explores them in greater depth and unifies classical statistical mechanics with quantum theory of multiple particle systems. The content is presented as two tracks: the fast track for master students, providing the essen...

  6. Record Statistics and Dynamics

    DEFF Research Database (Denmark)

    Sibani, Paolo; Jensen, Henrik J.

    2009-01-01

    with independent random increments. The term record dynamics covers the rather new idea that records may, in special situations, have measurable dynamical consequences. The approach applies to the aging dynamics of glasses and other systems with multiple metastable states. The basic idea is that record sizes ... fluctuations of e.g. the energy are able to push the system past some sort of 'edge of stability', inducing irreversible configurational changes, whose statistics then closely follow the statistics of record fluctuations ...
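
    The baseline for record statistics is the i.i.d. case alluded to above: in a sequence of n independent draws, the expected number of records is the harmonic number H_n = 1 + 1/2 + ... + 1/n, which grows only logarithmically. A quick simulation, with made-up Gaussian noise, checks this.

        # Count records (new running maxima) in i.i.d. sequences.
        import numpy as np

        rng = np.random.default_rng(6)
        n, trials = 1000, 2000
        counts = []
        for _ in range(trials):
            x = rng.normal(size=n)
            running_max = np.maximum.accumulate(x)
            counts.append(np.count_nonzero(x == running_max))  # record positions

        harmonic = np.sum(1.0 / np.arange(1, n + 1))
        print(np.mean(counts), harmonic)   # both close to ~7.49 for n = 1000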

  7. Introductory statistical inference

    CERN Document Server

    Mukhopadhyay, Nitis

    2014-01-01

    This gracefully organized text reveals the rigorous theory of probability and statistical inference in the style of a tutorial, using worked examples, exercises, figures, tables, and computer simulations to develop and illustrate concepts. Drills and boxed summaries emphasize and reinforce important ideas and special techniques.Beginning with a review of the basic concepts and methods in probability theory, moments, and moment generating functions, the author moves to more intricate topics. Introductory Statistical Inference studies multivariate random variables, exponential families of dist

  8. Statistical mechanics rigorous results

    CERN Document Server

    Ruelle, David

    1999-01-01

    This classic book marks the beginning of an era of vigorous mathematical progress in equilibrium statistical mechanics. Its treatment of the infinite system limit has not been superseded, and the discussion of thermodynamic functions and states remains basic for more recent work. The conceptual foundation provided by the Rigorous Results remains invaluable for the study of the spectacular developments of statistical mechanics in the second half of the 20th century.

  9. Statistical mechanics of anyons

    International Nuclear Information System (INIS)

    Arovas, D.P.

    1985-01-01

    We study the statistical mechanics of a two-dimensional gas of free anyons - particles which interpolate between Bose-Einstein and Fermi-Dirac character. Thermodynamic quantities are discussed in the low-density regime. In particular, the second virial coefficient is evaluated by two different methods and is found to exhibit a simple, periodic, but nonanalytic behavior as a function of the statistics determining parameter. (orig.)

  10. Fundamentals of statistics

    CERN Document Server

    Mulholland, Henry

    1968-01-01

    Fundamentals of Statistics covers topics on the introduction, fundamentals, and science of statistics. The book discusses the collection, organization and representation of numerical data; elementary probability; the binomial and Poisson distributions; and the measures of central tendency. The text describes measures of dispersion for measuring the spread of a distribution; continuous distributions for measuring on a continuous scale; the properties and use of the normal distribution; and tests involving the normal or Student's t distributions. The use of control charts for sample means; the ranges

  11. Business statistics I essentials

    CERN Document Server

    Clark, Louise

    2014-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Business Statistics I includes descriptive statistics, introduction to probability, probability distributions, sampling and sampling distributions, interval estimation, and hypothesis t

  12. Varied line-space gratings and applications

    International Nuclear Information System (INIS)

    McKinney, W.R.

    1991-01-01

    This paper presents a straightforward analytical and numerical method for the design of a specific type of varied line-space grating system. The mathematical development will assume plane or nearly-plane spherical gratings which are illuminated by convergent light, which covers many interesting cases for synchrotron radiation. The gratings discussed will have straight grooves whose spacing varies across the principal plane of the grating. Focal relationships and formulae for the optical grating-pole-to-exit-slit distance and grating radius previously presented by other authors will be derived with a symbolic algebra system. It is intended to provide the optical designer with the tools necessary to design such a system properly. Finally, some possible advantages and disadvantages for application to synchrotron radiation beamlines will be discussed

  13. The Thermal Collector With Varied Glass Covers

    International Nuclear Information System (INIS)

    Luminosu, I.; Pop, N.

    2010-01-01

    The thermal collector with varied glass covers represents an innovation realized in order to build a collector able to reach the desired temperature by collecting the solar radiation from the smallest surface, with the highest efficiency. In the case of the thermal collector with varied cover glasses, the number of glass plates covering the absorber increases together with the length of the circulation pipe for the working fluid. The thermal collector with varied glass covers meets user requirements better than the conventional collector because, for the same temperature increase, it has a smaller collecting area; for the same collection area, it achieves the highest temperature increase and has the highest efficiency. This work is addressed to researchers in solar energy and to engineers responsible for the design of air-conditioning systems or the drying of industrial and agricultural products

  14. Breakthroughs in statistics

    CERN Document Server

    Johnson, Norman

    This is the third volume of a collection of seminal papers in the statistical sciences written during the past 110 years. These papers have each had an outstanding influence on the development of statistical theory and practice over the last century. Each paper is preceded by an introduction written by an authority in the field providing background information and assessing its influence. Volume III concentrates on articles from the 1980's while including some earlier articles not included in Volumes I and II. Samuel Kotz is Professor of Statistics in the College of Business and Management at the University of Maryland. Norman L. Johnson is Professor Emeritus of Statistics at the University of North Carolina. Also available: Breakthroughs in Statistics Volume I: Foundations and Basic Theory Samuel Kotz and Norman L. Johnson, Editors 1993. 631 pp. Softcover. ISBN 0-387-94037-5 Breakthroughs in Statistics Volume II: Methodology and Distribution Samuel Kotz and Norman L. Johnson, Edi...

  15. Practical Statistics for Particle Physicists

    CERN Document Server

    Lista, Luca

    2017-01-01

    These three lectures provide an introduction to the main concepts of statistical data analysis useful for precision measurements and searches for new signals in High Energy Physics. The frequentist and Bayesian approaches to probability theory will be introduced and, for both approaches, inference methods will be presented. Hypothesis tests will be discussed, then significance and upper limit evaluation will be presented with an overview of the modern and most advanced techniques adopted for data analysis at the Large Hadron Collider.

  16. Spatially varying dispersion to model breakthrough curves.

    Science.gov (United States)

    Li, Guangquan

    2011-01-01

    Often the water flowing in a karst conduit is a combination of contaminated water entering at a sinkhole and cleaner water released from the limestone matrix. Transport processes in the conduit are controlled by advection, mixing (dilution and dispersion), and retention-release. In this article, a karst transport model considering advection, spatially varying dispersion, and dilution (from matrix seepage) is developed. Two approximate Green's functions are obtained using transformation of variables, respectively, for the initial-value problem and for the boundary-value problem. A numerical example illustrates that mixing associated with strong spatially varying conduit dispersion can cause strong skewness and long tailing in spring breakthrough curves. Comparison of the predicted breakthrough curve against that measured from a dye-tracing experiment between Ames Sink and Indian Spring, Northwest Florida, shows that the conduit dispersivity can be as large as 400 m. Such a large number is believed to imply strong solute interaction between the conduit and the matrix and/or multiple flow paths in a conduit network. It is concluded that Taylor dispersion is not dominant in transport in a karst conduit, and the complicated retention-release process between mobile- and immobile waters may be described by strong spatially varying conduit dispersion. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.

  17. New varying speed of light theories

    CERN Document Server

    Magueijo, J

    2003-01-01

    We review recent work on the possibility of a varying speed of light (VSL). We start by discussing the physical meaning of a varying $c$, dispelling the myth that the constancy of $c$ is a matter of logical consistency. We then summarize the main VSL mechanisms proposed so far: hard breaking of Lorentz invariance; bimetric theories (where the speeds of gravity and light are not the same); locally Lorentz invariant VSL theories; theories exhibiting a color dependent speed of light; varying $c$ induced by extra dimensions (e.g. in the brane-world scenario); and field theories where VSL results from vacuum polarization or CPT violation. We show how VSL scenarios may solve the cosmological problems usually tackled by inflation, and also how they may produce a scale-invariant spectrum of Gaussian fluctuations, capable of explaining the WMAP data. We then review the connection between VSL and theories of quantum gravity, showing how "doubly special" relativity has emerged as a VSL effective model of quantum space...

  18. Probability and statistics with integrated software routines

    CERN Document Server

    Deep, Ronald

    2005-01-01

    Probability & Statistics with Integrated Software Routines is a calculus-based treatment of probability concurrent with and integrated with statistics through interactive, tailored software applications designed to enhance the phenomena of probability and statistics. The software programs make the book unique.The book comes with a CD containing the interactive software leading to the Statistical Genie. The student can issue commands repeatedly while making parameter changes to observe the effects. Computer programming is an excellent skill for problem solvers, involving design, prototyping, data gathering, testing, redesign, validating, etc, all wrapped up in the scientific method.See also: CD to accompany Probability and Stats with Integrated Software Routines (0123694698)* Incorporates more than 1,000 engaging problems with answers* Includes more than 300 solved examples* Uses varied problem solving methods

  19. UN Data- Environmental Statistics: Waste

    Data.gov (United States)

    World Wide Human Geography Data Working Group — The Environment Statistics Database contains selected water and waste statistics by country. Statistics on water and waste are based on official statistics supplied...

  20. UN Data: Environment Statistics: Waste

    Data.gov (United States)

    World Wide Human Geography Data Working Group — The Environment Statistics Database contains selected water and waste statistics by country. Statistics on water and waste are based on official statistics supplied...

  1. Encounter Probability of Significant Wave Height

    DEFF Research Database (Denmark)

    Liu, Z.; Burcharth, H. F.

    The determination of the design wave height (often given as the significant wave height) is usually based on statistical analysis of long-term extreme wave height measurement or hindcast. The result of such extreme wave height analysis is often given as the design wave height corresponding to a c...

  2. Anisotropically varying conductivity in irreversible electroporation simulations.

    Science.gov (United States)

    Labarbera, Nicholas; Drapaca, Corina

    2017-11-01

    One recent area of cancer research is irreversible electroporation (IRE). Irreversible electroporation is a minimally invasive procedure where needle electrodes are inserted into the body to ablate tumor cells with electricity. The aim of this paper is to propose a mathematical model that incorporates a tissue's conductivity increasing more in the direction of the electrical field, as this has been shown to occur in experiments. It was necessary to mathematically derive a valid form of the conductivity tensor such that it is dependent on the electrical field direction and can be easily implemented into numerical software. The derivation of a conductivity tensor that can take arbitrary functions for the conductivity in the directions tangent and normal to the electrical field is the main contribution of this paper. Numerical simulations were performed for isotropic-varying and anisotropic-varying conductivities to evaluate the importance of including the electrical field's direction in the formulation for conductivity. By starting from previously published experimental results, this paper derived a general formulation for an anisotropic-varying tensor for implementation into irreversible electroporation modeling software. The anisotropic-varying tensor formulation allows the conductivity to take into consideration both electrical field direction and magnitude, as opposed to previously published works that only took into account electrical field magnitude. The anisotropic formulation predicts roughly a five percent decrease in ablation size for the monopolar simulation and approximately a ten percent decrease in ablation size for the bipolar simulations. This is a positive result, as previously reported results found the isotropic formulation to overpredict ablation size for both monopolar and bipolar simulations. Furthermore, it was also reported that the isotropic formulation overpredicts the ablation size more for the bipolar case than the monopolar case. Thus, our
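
    The paper's central object, a conductivity tensor with separate behavior tangent and normal to the local field, can be assembled from a projector onto the field direction. In the sketch below the two scalar conductivity functions of |E| are placeholders, not the paper's fitted forms.

        # Field-direction-dependent conductivity: sigma(E) = s_par e e^T + s_perp (I - e e^T).
        import numpy as np

        def conductivity_tensor(E, s_par, s_perp):
            e = E / np.linalg.norm(E)
            P = np.outer(e, e)                  # projector onto the field direction
            return s_par * P + s_perp * (np.eye(3) - P)

        E = np.array([100.0, 50.0, 0.0])        # local field in V/m (made up)
        magE = np.linalg.norm(E)
        s_par = 0.2 * (1 + 1e-4 * magE)         # assumed: stronger rise along the field
        s_perp = 0.2 * (1 + 0.5e-4 * magE)      # assumed: weaker rise normal to it
        sigma = conductivity_tensor(E, s_par, s_perp)
        print(sigma @ E)                        # current density J = sigma E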

  3. Neopuff T-piece resuscitator mask ventilation: Does mask leak vary with different peak inspiratory pressures in a manikin model?

    Science.gov (United States)

    Maheshwari, Rajesh; Tracy, Mark; Hinder, Murray; Wright, Audrey

    2017-08-01

    The aim of this study was to compare mask leak with three different peak inspiratory pressure (PIP) settings during T-piece resuscitator (TPR; Neopuff) mask ventilation on a neonatal manikin model. Participants were neonatal unit staff members. They were instructed to provide mask ventilation with a TPR with three PIP settings (20, 30, 40 cm H2O) chosen in a random order. Each episode lasted 2 min with a 2-min rest period. Flow rate and positive end-expiratory pressure (PEEP) were kept constant. Airway pressure, inspiratory and expiratory tidal volumes, mask leak, respiratory rate and inspiratory time were recorded. Repeated measures analysis of variance was used for statistical analysis. A total of 12 749 inflations delivered by 40 participants were analysed. There were no statistically significant differences (P > 0.05) in the mask leak with the three PIP settings. No statistically significant differences were seen in respiratory rate and inspiratory time with the three PIP settings. There was a significant rise in PEEP as the PIP increased. Failure to achieve the desired PIP was observed, especially at the higher settings. In a neonatal manikin model, the mask leak does not vary as a function of the PIP when the flow rate is constant. With a fixed rate and inspiratory time, there seems to be a rise in PEEP with increasing PIP. © 2017 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  4. The catalytic combustion of methane: statistical study of preparation and pretreatment conditions of palladium supported catalysts and their relationship with catalytic activity

    Directory of Open Access Journals (Sweden)

    Maria da Graça Carneiro da Rocha

    2001-04-01

    Full Text Available The catalytic combustion of methane on alumina-supported palladium catalysts was studied. It has been reported that the activity of the catalyst increases with its time on line, despite an increase in the palladium particle size. However, different preparation, pretreatment and testing conditions can be the reason for the different observed results. An experimental design, which makes it possible to verify the influence of several parameters at the same time with good statistical quality, was used. A Plackett-Burman design was selected for the screening of the variables which have an effect on the increase of the catalyst activity.
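
    Two-level screening designs of the Plackett-Burman family are easy to generate. For run counts that are powers of two, the design can be read off a Sylvester Hadamard matrix, as in the sketch below; 12-run Plackett-Burman designs use a cyclic generator row instead. The factor names are invented for illustration.

        # 8-run, 7-factor two-level screening design from a Hadamard matrix.
        import numpy as np
        from scipy.linalg import hadamard

        H = hadamard(8)            # 8x8 matrix of +/-1, first column constant
        design = H[:, 1:]          # drop the constant column: 7 factors
        factors = ["calcination T", "Pd loading", "precursor", "pH",
                   "drying time", "reduction T", "aging"]   # hypothetical
        for run in design:
            print(dict(zip(factors, run)))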

  5. Residential segregation and the health of African-American infants: does the effect vary by prevalence?

    Science.gov (United States)

    Nyarko, Kwame A; Wehby, George L

    2012-10-01

    Segregation effects may vary between areas (e.g., counties) of low and high low birth weight (LBW) and preterm birth (PTB) rates due to interactions with area differences in risks and resources. We assess whether the effects of residential segregation on county-level LBW and PTB rates for African-American infants vary by the prevalence of these conditions. The study sample includes 368 counties of 100,000 or more residents and at least 50 African-American live births in 2000. Residentially segregated counties are identified alternatively by county-level dissimilarity and isolation indices. Quantile regression is used to assess how residential segregation affects the entire distributions of county-level LBW and PTB rates (i.e. by prevalence). Residential segregation increases LBW and PTB rates significantly in areas of low prevalence, but has no such effects for areas of high prevalence. As a sensitivity analysis, we use metropolitan statistical area level data and obtain similar results. Our findings suggest that residential segregation has adverse effects mainly in areas of low prevalence of LBW and preterm birth, which are expected overall to have fewer risk factors and more resources for infant health, but not in high prevalence areas, which are expected to have more risk factors and fewer resources. Residential policies aimed at area resource improvements may be more effective.

  6. Conformity and statistical tolerancing

    Science.gov (United States)

    Leblond, Laurent; Pillet, Maurice

    2018-02-01

    Statistical tolerancing was first proposed by Shewhart (Economic Control of Quality of Manufactured Product, 1931; reprinted 1980 by ASQC); in spite of this long history, its use remains moderate. One probable reason for this low utilization is the difficulty designers have in anticipating the risks of this approach. The arithmetic (worst-case) tolerance allows a simple interpretation: conformity is defined by the presence of the characteristic in an interval. Statistical tolerancing is more complex in its definition: an interval is not sufficient to define conformance. To justify the statistical tolerancing formula used by designers, a tolerance interval should be interpreted as the interval where most of the parts produced should probably be located. This tolerance is justified by considering a conformity criterion for the parts that guarantees low offsets on the latter characteristics. Unlike traditional arithmetic tolerancing, statistical tolerancing requires a sustained exchange of information between design and manufacture to be used safely. This paper proposes a formal definition of conformity, which we apply successively to quadratic and arithmetic tolerancing. We introduce a concept of concavity, which helps us to demonstrate the link between the tolerancing approach and conformity. We use this concept to demonstrate the various acceptable propositions of statistical tolerancing (in the decentring-dispersion space).
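
    The practical contrast between the two tolerancing rules discussed above shows up in a linear stack of independent characteristics: worst-case (arithmetic) tolerances add, while statistical (quadratic) tolerances add in quadrature. The four component tolerances below are made up.

        # Worst-case vs statistical (RSS) tolerance stack-up.
        import math

        tolerances = [0.10, 0.05, 0.08, 0.12]                   # +/- per component
        worst_case = sum(tolerances)                            # arithmetic stack
        statistical = math.sqrt(sum(t**2 for t in tolerances))  # quadratic stack
        print(f"worst case: +/-{worst_case:.3f}, statistical: +/-{statistical:.3f}")
        # 0.350 vs ~0.183: the statistical rule permits wider component tolerances
        # for the same assembly requirement, at the price of a small,
        # explicitly probabilistic, nonconformance risk.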

  7. Intuitive introductory statistics

    CERN Document Server

    Wolfe, Douglas A

    2017-01-01

    This textbook is designed to give an engaging introduction to statistics and the art of data analysis. The unique scope includes, but also goes beyond, classical methodology associated with the normal distribution. What if the normal model is not valid for a particular data set? This cutting-edge approach provides the alternatives. It is an introduction to the world and possibilities of statistics that uses exercises, computer analyses, and simulations throughout the core lessons. These elementary statistical methods are intuitive. Counting and ranking features prominently in the text. Nonparametric methods, for instance, are often based on counts and ranks and are very easy to integrate into an introductory course. The ease of computation with advanced calculators and statistical software, both of which factor into this text, allows important techniques to be introduced earlier in the study of statistics. This book's novel scope also includes measuring symmetry with Walsh averages, finding a nonp...

  8. Wind energy statistics

    International Nuclear Information System (INIS)

    Holttinen, H.; Tammelin, B.; Hyvoenen, R.

    1997-01-01

    The recording, analyzing and publishing of statistics of wind energy production has been reorganized in cooperation between VTT Energy, the Finnish Meteorological Institute (FMI Energy) and the Finnish Wind Energy Association (STY), supported by the Ministry of Trade and Industry (KTM). VTT Energy has developed a database that contains both monthly data and information on the wind turbines, sites and operators involved. The monthly production figures together with component failure statistics are collected from the operators by VTT Energy, which produces the final wind energy statistics to be published in Tuulensilmä and reported to energy statistics in Finland and abroad (Statistics Finland, Eurostat, IEA). To be able to verify the annual and monthly wind energy potential against the average wind energy climate, a production index is adopted. The index gives the expected wind energy production at various areas in Finland, calculated using real wind speed observations, air density and a power curve for a typical 500 kW wind turbine. FMI Energy has produced the average figures for four weather stations using data from 1985-1996, and produces the monthly figures. (orig.)

  9. Orthoptic parameters and asthenopic symptoms analysis after 3D viewing at varying distances

    Directory of Open Access Journals (Sweden)

    Oleeviya Joseph

    2018-05-01

    Full Text Available AIM: To analyse visual modifications such as amplitude of accommodation, near point of convergence (NPC), stereopsis and near phoria associated with asthenopic symptoms after 3D viewing at varying distances. METHODS: A prospective study. Thirty young adults were randomly selected. Each individual was exposed to 3D viewing thrice in a day for a fixed distance, and the distance was varied on three consecutive days. The same video of equal duration and different screen sizes were used for every distance. The cyclic 3D mode of the K-multimedia player (KMplayer) was used for projecting the 3D video. Different variables like stereopsis, amplitude of accommodation, near point of convergence, near phoria and asthenopic symptoms were recorded immediately after 3D video viewing. Stereopsis was measured with the "Toegepast Natuurwetenschappelijk Onderzoek" or "Netherlands Organisation for Applied Scientific Research" (TNO) test, amplitude of accommodation and NPC were measured using the RAF ruler, near phoria was measured using a prism bar, and a closed-ended sample questionnaire was used to record the occurrence of asthenopic symptoms. Statistical analyses were performed using descriptive statistics, the paired t-test, etc. Qualitative data were analyzed using the Chi-square test. RESULTS: For the distances of 40 cm, 3 m and 6 m, the amplitude of accommodation was significantly reduced by 0.66 D, 1.12 D and 1.44 D, the NPC receded significantly by 0.63 cm, 0.93 cm and 1.23 cm, and the near phoria increased significantly by 0.87 and 2.2 prism dioptres (PD) base-in respectively. It was found that most of the subjects experienced pain around the eyes, headache and irritation at each viewing distance. This study also revealed that 3D video viewing in theaters may increase the symptoms of headache, watering and irritation. Symptoms like headache, watering, fatigue, irritation and nausea may increase considerably in a home environment, and symptoms such as headache and watering may cause significant discomfort by 3D
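
    The before/after comparisons reported above are paired: each subject is measured twice. A paired t-test is the matching analysis; the sketch below runs one on simulated accommodation amplitudes, with an assumed mean drop of about 1.1 D, not the study's data.

        # Paired t-test on simulated before/after accommodation amplitudes.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        before = rng.normal(10.0, 1.0, 30)          # amplitude of accommodation (D)
        after = before - rng.normal(1.1, 0.4, 30)   # assumed drop after 3D viewing

        t, p = stats.ttest_rel(before, after)
        print(f"mean change = {np.mean(before - after):.2f} D, t = {t:.2f}, p = {p:.4g}")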

  10. MCBS Sites of Biodiversity Significance

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data layer represents areas with varying levels of native biodiversity that may contain high quality native plant communities, rare plants, rare animals, and/or...

  11. Bayesian approach to inverse statistical mechanics

    Science.gov (United States)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
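
    The flavour of the approach can be conveyed with the simplest inverse problem mentioned, estimating a temperature. For a toy system small enough to enumerate, the partition function is exact, and a plain random-walk Metropolis sampler (not the paper's sequential Monte Carlo algorithm) draws the inverse temperature from its posterior:

        import numpy as np

        rng = np.random.default_rng(1)
        energies = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])  # toy states

        def log_posterior(beta, observed):
            # Flat prior on beta > 0; Boltzmann likelihood with exact log Z.
            if beta <= 0:
                return -np.inf
            log_z = np.log(np.sum(np.exp(-beta * energies)))
            return -beta * observed.sum() - observed.size * log_z

        # Simulate observed state energies at a known temperature, then recover it.
        beta_true = 1.3
        p = np.exp(-beta_true * energies); p /= p.sum()
        observed = rng.choice(energies, size=200, p=p)

        beta, samples = 1.0, []
        for _ in range(5000):  # random-walk Metropolis over beta
            prop = beta + rng.normal(0.0, 0.1)
            delta = log_posterior(prop, observed) - log_posterior(beta, observed)
            if np.log(rng.uniform()) < delta:
                beta = prop
            samples.append(beta)
        print(f"posterior mean beta = {np.mean(samples[1000:]):.2f} (true {beta_true})")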

  12. Progress on MEVVA source VARIS at GSI

    Science.gov (United States)

    Adonin, A.; Hollinger, R.

    2018-05-01

    For the last few years, the development of VARIS (vacuum arc ion source) has concentrated on several aspects. One of them is the production of high current ion beams of heavy metals such as Au, Pb, and Bi. The requested ion charge state for these ion species is 4+. This is quite challenging to produce in vacuum arc driven sources for a reasonable beam pulse length (>120 µs) due to the physical properties of these elements. However, the situation can be dramatically improved by using composite materials or alloys with enhanced physical properties as cathodes. Another aspect is an increase of the beam brilliance for intense U4+ beams by optimization of the geometry of the extraction system. A new 7-hole triode extraction system allows an increase of the extraction voltage from 30 kV to 40 kV and also reduces the outer aperture of the extracted ion beam. Thus, a record beam brilliance for the U4+ beam in front of the RFQ (Radio-Frequency Quadrupole) has been achieved, exceeding the RFQ space charge limit for an ion current of 15 mA. Several new projectiles in the middle-heavy region have been successfully developed from VARIS to fulfill the requirements of the future FAIR (Facility for Antiproton and Ion Research) programs. The influence of an auxiliary gas on the production performance of certain ion charge states as well as on operation stability has been investigated. The optimization of the ion source parameters for maximum production efficiency and highest particle current in front of the RFQ has been performed. The next important aspect of the development will be the increase of the operation repetition rate of VARIS for all elements, especially for uranium, to 2.7 Hz in order to provide the maximum availability of high current ion beams for future FAIR experiments.

  13. New varying speed of light theories

    International Nuclear Information System (INIS)

    Magueijo, Joao

    2003-01-01

    We review recent work on the possibility of a varying speed of light (VSL). We start by discussing the physical meaning of a varying c, dispelling the myth that the constancy of c is a matter of logical consistency. We then summarize the main VSL mechanisms proposed so far: hard breaking of Lorentz invariance; bimetric theories (where the speeds of gravity and light are not the same); locally Lorentz invariant VSL theories; theories exhibiting a colour-dependent speed of light; varying c induced by extra dimensions (e.g. in the brane-world scenario); and field theories where VSL results from vacuum polarization or CPT violation. We show how VSL scenarios may solve the cosmological problems usually tackled by inflation, and also how they may produce a scale-invariant spectrum of Gaussian fluctuations, capable of explaining the WMAP data. We then review the connection between VSL and theories of quantum gravity, showing how 'doubly special' relativity has emerged as a VSL effective model of quantum space-time, with observational implications for ultra-high energy cosmic rays (UHECRs) and gamma ray bursts. Some recent work on the physics of black holes and other compact objects in VSL theories is also described, highlighting phenomena associated with spatial (as opposed to temporal) variations in c. Finally, we describe the observational status of the theory. The evidence is currently slim: redshift dependence in the atomic fine structure, anomalies with UHECRs, and (to a much lesser extent) the acceleration of the universe and the WMAP data. The constraints (e.g. those arising from nucleosynthesis or geological bounds) are tight but not insurmountable. We conclude with the observational predictions of the theory and the prospects for its refutation or vindication.

  14. Conceptual Modeling of Time-Varying Information

    DEFF Research Database (Denmark)

    Gregersen, Heidi; Jensen, Christian S.

    2004-01-01

    A wide range of database applications manage information that varies over time. Many of the underlying database schemas of these were designed using the Entity-Relationship (ER) model. In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini-world are important, but difficult to capture using the ER model. Several enhancements to the ER model have been proposed in an attempt to support the modeling of temporal aspects of information. Common to the existing temporally extended ER models, few or no specific requirements to the models were given...

  15. A time-varying magnetic flux concentrator

    International Nuclear Information System (INIS)

    Kibret, B; Premaratne, M; Lewis, P M; Thomson, R; Fitzgerald, P B

    2016-01-01

    It is known that diverse technological applications require the use of focused magnetic fields. This has driven the quest for controlling the magnetic field. Recently, the principles in transformation optics and metamaterials have allowed the realization of practical static magnetic flux concentrators. Extending such progress, here, we propose a time-varying magnetic flux concentrator cylindrical shell that uses electric conductors and ferromagnetic materials to guide magnetic flux to its center. Its performance is discussed based on finite-element simulation results. Our proposed design has potential applications in magnetic sensors, medical devices, wireless power transfer, and near-field wireless communications. (paper)

  16. Linear Parameter Varying Control of Induction Motors

    DEFF Research Database (Denmark)

    Trangbæk, Klaus

    The subject of this thesis is the development of linear parameter varying (LPV) controllers and observers for control of induction motors. The induction motor is one of the most common machines in industrial applications. Being a highly nonlinear system, it poses challenging control problems...... for high performance applications. This thesis demonstrates how LPV control theory provides a systematic way to achieve good performance for these problems. The main contributions of this thesis are the application of the LPV control theory to induction motor control as well as various contributions...

  17. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...

  18. Multivariate Statistical Process Control

    DEFF Research Database (Denmark)

    Kulahci, Murat

    2013-01-01

    As sensor and computer technology continues to improve, it has become a normal occurrence that we are confronted with high dimensional data sets. As in many areas of industrial statistics, this brings forth various challenges in statistical process control (SPC) and monitoring for which the aim...... is to identify the “out-of-control” state of a process using control charts in order to reduce the excessive variation caused by so-called assignable causes. In practice, the most common method of monitoring multivariate data is through a statistic akin to the Hotelling’s T2. For high dimensional data with an excessive...... amount of cross correlation, practitioners are often recommended to use latent structure methods such as Principal Component Analysis to summarize the data in only a few linear combinations of the original variables that capture most of the variation in the data. Applications of these control charts...
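
    As a concrete illustration of the monitoring recipe (not the chapter's own code), a T²-type statistic can be computed on a few principal components estimated from in-control reference data:

        import numpy as np

        rng = np.random.default_rng(2)
        X_ref = rng.normal(size=(200, 10))  # in-control reference observations
        k = 3                               # retained principal components

        # PCA via SVD of the centered reference data.
        mu = X_ref.mean(axis=0)
        _, s, vt = np.linalg.svd(X_ref - mu, full_matrices=False)
        comps = vt[:k]                          # principal directions
        pc_var = s[:k] ** 2 / (len(X_ref) - 1)  # variances of the PC scores

        def t2(x):
            # Hotelling-type T^2 in the PCA subspace:
            # sum of squared, variance-scaled scores.
            scores = comps @ (x - mu)
            return float(np.sum(scores ** 2 / pc_var))

        x_new = rng.normal(size=10)
        x_new[0] += 3.0                     # simulated out-of-control shift
        print(f"T2 = {t2(x_new):.2f}")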

  19. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...... that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical...... and mathematical techniques, including linear and nonlinear time series analysis, stochastic calculus models, stochastic differential equations, Itō’s formula, the Black–Scholes model, the generalized method-of-moments, and the Kalman filter. They explain how these tools are used to price financial derivatives...
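
    Of the tools listed, the Black–Scholes formula is the most compact to reproduce. The function below is the standard European call price; the parameter values in the example are arbitrary:

        from math import exp, log, sqrt
        from statistics import NormalDist

        def black_scholes_call(S, K, T, r, sigma):
            # S: spot price, K: strike, T: years to expiry,
            # r: risk-free rate, sigma: volatility.
            d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
            d2 = d1 - sigma * sqrt(T)
            N = NormalDist().cdf
            return S * N(d1) - K * exp(-r * T) * N(d2)

        print(f"call price = {black_scholes_call(S=100, K=105, T=0.5, r=0.02, sigma=0.25):.2f}")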

  20. 1992 Energy statistics Yearbook

    International Nuclear Information System (INIS)

    1994-01-01

    The principal objective of the Yearbook is to provide a global framework of comparable data on long-term trends in the supply of mainly commercial primary and secondary forms of energy. Data for each type of fuel and aggregate data for the total mix of commercial fuels are shown for individual countries and areas and are summarized into regional and world totals. The data are compiled primarily from annual questionnaires distributed by the United Nations Statistical Division and supplemented by official national statistical publications. Where official data are not available or are inconsistent, estimates are made by the Statistical Division based on governmental, professional or commercial materials. Estimates include, but are not limited to, extrapolated data based on partial year information, use of annual trends, trade data based on partner country reports, breakdowns of aggregated data as well as analysis of current energy events and activities

  1. Energy statistics manual

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2010-07-01

    Detailed, complete, timely and reliable statistics are essential to monitor the energy situation at a country level as well as at an international level. Energy statistics on supply, trade, stocks, transformation and demand are indeed the basis for any sound energy policy decision. For instance, the market of oil -- which is the largest traded commodity worldwide -- needs to be closely monitored in order for all market players to know at any time what is produced, traded, stocked and consumed and by whom. In view of the role and importance of energy in world development, one would expect basic energy information to be readily available and reliable. This is not always the case, and one can even observe a decline in the quality, coverage and timeliness of energy statistics over the last few years.

  2. Statistical Engine Knock Control

    DEFF Research Database (Denmark)

    Stotsky, Alexander A.

    2008-01-01

    A new statistical concept of the knock control of a spark ignition automotive engine is proposed. The control aim is associated with the statistical hypothesis test which compares the threshold value to the average value of the maximal amplitude of the knock sensor signal at a given frequency....... The control algorithm used for minimization of the regulation error realizes a simple count-up-count-down logic. A new adaptation algorithm for the knock detection threshold is also developed. The confidence interval method is used as the basis for adaptation. A simple statistical model...... which includes generation of the amplitude signals, a threshold value determination and a knock sound model is developed for evaluation of the control concept....
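
    The count-up-count-down logic is simple enough to sketch. The fragment below is a schematic reconstruction from the abstract, with retard/advance steps and limits invented for illustration; it is not the author's algorithm:

        def update_spark_advance(advance, knock_amplitude, threshold,
                                 retard_step=1.0, advance_step=0.25,
                                 min_adv=0.0, max_adv=30.0):
            # Count down (retard) quickly when the knock amplitude exceeds the
            # threshold; count up (advance) slowly otherwise. This asymmetry is
            # the essence of count-up-count-down knock control.
            if knock_amplitude > threshold:
                advance -= retard_step
            else:
                advance += advance_step
            return min(max(advance, min_adv), max_adv)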

  3. Philosophy of statistics

    CERN Document Server

    Forster, Malcolm R

    2011-01-01

    Statisticians and philosophers of science have many common interests but restricted communication with each other. This volume aims to remedy these shortcomings. It provides state-of-the-art research in the area of philosophy of statistics by encouraging numerous experts to communicate with one another without feeling "restricted" by their disciplines or thinking "piecemeal" in their treatment of issues. A second goal of this book is to present work in the field without bias toward any particular statistical paradigm. Broadly speaking, the essays in this Handbook are concerned with problems of induction, statistics and probability. For centuries, foundational problems like induction have been among philosophers' favorite topics; recently, however, non-philosophers have increasingly taken a keen interest in these issues. This volume accordingly contains papers by both philosophers and non-philosophers, including scholars from nine academic disciplines.

  4. Perception in statistical graphics

    Science.gov (United States)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  5. READING STATISTICS AND RESEARCH

    Directory of Open Access Journals (Sweden)

    Reviewed by Yavuz Akbulut

    2008-10-01

    Full Text Available The book demonstrates the best and most conservative ways to decipher and critique research reports, particularly for social science researchers. In addition, new editions of the book are always better organized, effectively structured and meticulously updated in line with the developments in the field of research statistics. Even the most trivial issues are revisited and updated in new editions. For instance, purchasers of the previous editions might check the interpretation of skewness and kurtosis indices in the third edition (p. 34) and in the fifth edition (p. 29) to see how the author revisits every single detail. Theory and practice always go hand in hand in all editions of the book. Re-reading previous editions (e.g. the third edition) before reading the fifth edition gives the impression that the author never stops ameliorating his instructional text writing methods. In brief, "Reading Statistics and Research" is among the best sources showing research consumers how to understand and critically assess the statistical information and research results contained in technical research reports. In this respect, the review written by Mirko Savić in Panoeconomicus (2008, 2, pp. 249-252) will help the readers to get a more detailed overview of each chapter. I cordially urge the beginning researchers to pick a highlighter to conduct a detailed reading of the book. A thorough reading of the source will make the researchers quite selective in appreciating the harmony between the data analysis, results and discussion sections of typical journal articles. If interested, beginning researchers might begin with this book to grasp the basics of research statistics, and prop up their critical research reading skills with some statistics package applications through the help of Dr. Andy Field's book, Discovering Statistics Using SPSS (second edition published by Sage in 2005).

  6. Statistics for business

    CERN Document Server

    Waller, Derek L

    2008-01-01

    Statistical analysis is essential to business decision-making and management, but the underlying theory of data collection, organization and analysis is one of the most challenging topics for business students and practitioners. This user-friendly text and CD-ROM package will help you to develop strong skills in presenting and interpreting statistical information in a business or management environment. Based entirely on using Microsoft Excel rather than more complicated applications, it includes a clear guide to using Excel with the key functions employed in the book, a glossary of terms and

  7. Statistics As Principled Argument

    CERN Document Server

    Abelson, Robert P

    2012-01-01

    In this illuminating volume, Robert P. Abelson delves into the too-often dismissed problems of interpreting quantitative data and then presenting them in the context of a coherent story about one's research. Unlike too many books on statistics, this is a remarkably engaging read, filled with fascinating real-life (and real-research) examples rather than with recipes for analysis. It will be of true interest and lasting value to beginning graduate students and seasoned researchers alike. The focus of the book is that the purpose of statistics is to organize a useful argument from quantitative

  8. 1997 statistical yearbook

    International Nuclear Information System (INIS)

    1998-01-01

    The international office of energy information and studies (Enerdata) has published the second edition of its 1997 statistical yearbook, which includes consolidated 1996 data with respect to the previous version from June 1997. The CD-ROM comprises the annual worldwide petroleum, natural gas, coal and electricity statistics from 1991 to 1996, with information about production, external trade, consumption, market shares, sectoral distribution of consumption and energy balance sheets. The world is divided into 12 zones (52 countries available). It also contains energy indicators: production and consumption tendencies, supply and production structures, safety of supplies, energy efficiency, and CO2 emissions. (J.S.)

  9. Einstein's statistical mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Baracca, A; Rechtman S, R

    1985-08-01

    The foundations of equilibrium classical statistical mechanics were laid down in 1902 independently by Gibbs and Einstein. The latter's contribution, developed in three papers published between 1902 and 1904, is usually forgotten and, when not, rapidly dismissed as equivalent to Gibbs's. We review in detail Einstein's ideas on the foundations of statistical mechanics and show that they constitute the beginning of a research program that led Einstein to quantum theory. We also show how these ideas may be used as a starting point for an introductory course on the subject.

  10. Einstein's statistical mechanics

    International Nuclear Information System (INIS)

    Baracca, A.; Rechtman S, R.

    1985-01-01

    The foundations of equilibrium classical statistical mechanics were laid down in 1902 independently by Gibbs and Einstein. The latter's contribution, developed in three papers published between 1902 and 1904, is usually forgotten and, when not, rapidly dismissed as equivalent to Gibbs's. We review in detail Einstein's ideas on the foundations of statistical mechanics and show that they constitute the beginning of a research program that led Einstein to quantum theory. We also show how these ideas may be used as a starting point for an introductory course on the subject. (author)

  11. Elementary Statistics Tables

    CERN Document Server

    Neave, Henry R

    2012-01-01

    This book, designed for students taking a basic introductory course in statistical analysis, is far more than just a book of tables. Each table is accompanied by a careful but concise explanation and useful worked examples. Requiring little mathematical background, Elementary Statistics Tables is thus not just a reference book but a positive and user-friendly teaching and learning aid. The new edition contains a new and comprehensive "teach-yourself" section on a simple but powerful approach, now well-known in parts of industry but less so in academia, to analysing and interpreting process dat

  12. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs.
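
    At the heart of such counting statistics is the Poisson model: N recorded counts carry a standard uncertainty of √N, which propagates in quadrature to the net count rate. A minimal worked example with illustrative numbers:

        from math import sqrt

        # Gross counts over t_g seconds; background counts over t_b seconds.
        N_gross, t_g = 10500, 600.0
        N_bkg, t_b = 1200, 600.0

        rate_net = N_gross / t_g - N_bkg / t_b
        # Var(N) = N for Poisson counts, so the rate variances add in quadrature.
        sigma_net = sqrt(N_gross / t_g**2 + N_bkg / t_b**2)
        print(f"net rate = {rate_net:.3f} +/- {sigma_net:.3f} counts/s")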

  13. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K. [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiments. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. 11 refs., 6 figs., 8 tabs. (Author)

  14. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel

  15. Statistics II essentials

    CERN Document Server

    Milewski, Emil G

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Statistics II discusses sampling theory, statistical inference, independent and dependent variables, correlation theory, experimental design, count data, chi-square test, and time se

  16. Search Databases and Statistics

    DEFF Research Database (Denmark)

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

    having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database...... searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here....

  17. Environmental accounting and statistics

    International Nuclear Information System (INIS)

    Bartelmus, P.L.P.

    1992-01-01

    The objective of sustainable development is to integrate environmental concerns with mainstream socio-economic policies. Integrated policies need to be supported by integrated data. Environmental accounting achieves this integration by incorporating environmental costs and benefits into conventional national accounts. Modified accounting aggregates can thus be used in defining and measuring environmentally sound and sustainable economic growth. Further development objectives need to be assessed by more comprehensive, though necessarily less integrative, systems of environmental statistics and indicators. Integrative frameworks for the different statistical systems in the fields of economy, environment and population would facilitate the provision of comparable data for the analysis of integrated development. (author). 19 refs, 2 figs, 2 tabs

  18. Radiation counting statistics

    International Nuclear Information System (INIS)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H.

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs

  19. Brown Dwarf Variability: What's Varying and Why?

    Science.gov (United States)

    Marley, Mark Scott

    2014-01-01

    Surveys by ground-based telescopes, HST, and Spitzer have revealed that brown dwarfs of most spectral classes exhibit variability. The spectral and temporal signatures of the variability are complex and apparently defy simplistic classification, which complicates efforts to model the changes. Important questions include understanding if clearings are forming in an otherwise uniform cloud deck or if thermal perturbations, perhaps associated with breaking gravity waves, are responsible. If clouds are responsible, how long does it take for the atmospheric thermal profile to relax from a hot cloudy to a cooler cloudless state? If thermal perturbations are responsible, then what atmospheric layers are varying? How do the observed variability timescales compare to atmospheric radiative, chemical, and dynamical timescales? I will address such questions by presenting modeling results for time-varying partly cloudy atmospheres and explore the importance of various atmospheric processes over the relevant timescales for brown dwarfs of a range of effective temperatures. Regardless of the origin of the observed variability, the complexity seen in the atmospheres of the field dwarfs hints at the variability that we may encounter in the next few years in directly imaged young Jupiters. Thus understanding the nature of variability in the field dwarfs, including sensitivity to gravity and metallicity, is of particular importance for exoplanet characterization.

  20. Varying hemin concentrations affect Porphyromonas gingivalis strains differently.

    Science.gov (United States)

    Ohya, Manabu; Cueno, Marni E; Tamura, Muneaki; Ochiai, Kuniyasu

    2016-05-01

    Porphyromonas gingivalis requires heme to grow; however, heme availability and concentration in the periodontal pockets vary. Fluctuations in heme concentration may affect each P. gingivalis strain differently, but this has never been fully demonstrated. Here, we elucidated the effects of varying hemin concentrations in representative P. gingivalis strains. Throughout this study, representative P. gingivalis strains [FDC381 (type I), MPWIb-01 (type Ib), TDC60 (type II), ATCC49417 (type III), W83 (type IV), and HNA99 (type V)] were used and grown for 24 h in growth media under varying hemin concentrations (5×, 1×, 0.5×, 0.1×). Samples were lysed and protein standardized. Arg-gingipain (Rgp), H2O2, and superoxide dismutase (SOD) levels were subsequently measured. We focused our study on 24 h-grown strains, which excluded MPWIb-01 and HNA99. Rgp activity among the 4 remaining strains varied, with Rgp peaking at: 1× for FDC381, 5× for TDC60, 0.5× for ATCC49417, 5× and 0.5× for W83. With regards to H2O2 and SOD amounts: FDC381 had similar H2O2 amounts at all hemin concentrations while SOD levels varied; TDC60 had the lowest H2O2 amount at 1× while SOD levels became higher in relation to hemin concentration; ATCC49417 also had similar H2O2 amounts at all hemin concentrations while SOD levels were higher at 1× and 0.5×; and W83 had statistically similar H2O2 and SOD amounts regardless of hemin concentration. Our results show that variations in hemin concentration affect each P. gingivalis strain differently. Published by Elsevier Ltd.

  1. Statistical mechanics for a system with imperfections: pt. 1

    International Nuclear Information System (INIS)

    Choh, S.T.; Kahng, W.H.; Um, C.I.

    1982-01-01

    Statistical mechanics is extended to treat a system where parts of the Hamiltonian are randomly varying. As the starting point of the theory, the statistical correlation among energy levels is neglected, allowing use of the central limit theorem of the probability theory. (Author)

  2. Detection of significant protein coevolution.

    Science.gov (United States)

    Ochoa, David; Juan, David; Valencia, Alfonso; Pazos, Florencio

    2015-07-01

    The evolution of proteins cannot be fully understood without taking into account the coevolutionary linkages entangling them. From a practical point of view, coevolution between protein families has been used as a way of detecting protein interactions and functional relationships from genomic information. The most common approach to inferring protein coevolution involves the quantification of phylogenetic tree similarity using a family of methodologies termed mirrortree. In spite of their success, a fundamental problem of these approaches is the lack of an adequate statistical framework to assess the significance of a given coevolutionary score (tree similarity). As a consequence, a number of ad hoc filters and arbitrary thresholds are required in an attempt to obtain a final set of confident coevolutionary signals. In this work, we developed a method for associating confidence estimators (P values) to the tree-similarity scores, using a null model specifically designed for the tree comparison problem. We show how this approach largely improves the quality and coverage (number of pairs that can be evaluated) of the detected coevolution in all the stages of the mirrortree workflow, independently of the starting genomic information. This not only leads to a better understanding of protein coevolution and its biological implications, but also to obtain a highly reliable and comprehensive network of predicted interactions, as well as information on the substructure of macromolecular complexes using only genomic information. The software and datasets used in this work are freely available at: http://csbg.cnb.csic.es/pMT/. pazos@cnb.csic.es Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
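
    The core mirrortree score is a correlation between the inter-species distance matrices of two protein families; the paper's contribution is a proper null model for attaching a P value to that score. The sketch below pairs the score with a naive leaf-relabeling permutation null, which is simpler than the tree-specific null model the authors develop:

        import numpy as np

        def mirrortree_score(d1, d2):
            # Pearson correlation of the upper triangles of two distance matrices.
            iu = np.triu_indices_from(d1, k=1)
            return np.corrcoef(d1[iu], d2[iu])[0, 1]

        def permutation_pvalue(d1, d2, n_perm=1000, seed=0):
            # Null: randomly relabel the species of the second family.
            rng = np.random.default_rng(seed)
            observed = mirrortree_score(d1, d2)
            hits = 0
            for _ in range(n_perm):
                perm = rng.permutation(len(d2))
                if mirrortree_score(d1, d2[np.ix_(perm, perm)]) >= observed:
                    hits += 1
            return (hits + 1) / (n_perm + 1)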

  3. On the statistical assessment of classifiers using DNA microarray data

    Directory of Open Access Journals (Sweden)

    Carella M

    2006-08-01

    Full Text Available Abstract Background In this paper we present a method for the statistical assessment of cancer predictors which make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected in Casa Sollievo della Sofferenza Hospital, Foggia, Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to give answers to some questions which are relevant for the automatic diagnosis of cancer, such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. Results We estimate the generalization error, evaluated through the Leave-K-Out Cross Validation error, for three different classification schemes by varying the number of training examples and the number of genes used. The statistical significance of the error rate is measured by using a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing e = 21% (p = 0.045) as an error rate. This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machines (SVM) classifiers can learn with only 15 training examples, with error rates of e = 19% (p = 0.035) and e = 18% (p = 0.037), respectively. Moreover, the error rate
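
    The permutation test used to attach significance to an error rate can be sketched in a few lines: re-estimate the cross-validated error after shuffling the class labels, and report the fraction of shuffles that do at least as well. The nearest-centroid classifier below is a stand-in for the WVA, RLS, and SVM schemes of the paper:

        import numpy as np

        def loo_error(X, y):
            # Leave-one-out error of a nearest-centroid classifier.
            errors = 0
            for i in range(len(y)):
                mask = np.arange(len(y)) != i
                Xt, yt = X[mask], y[mask]
                centroids = {c: Xt[yt == c].mean(axis=0) for c in np.unique(yt)}
                x = X[i]
                pred = min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                errors += int(pred != y[i])
            return errors / len(y)

        def permutation_p(X, y, n_perm=200, seed=0):
            # Fraction of label shuffles achieving an error at least as low.
            rng = np.random.default_rng(seed)
            e_obs = loo_error(X, y)
            hits = sum(loo_error(X, rng.permutation(y)) <= e_obs
                       for _ in range(n_perm))
            return e_obs, (hits + 1) / (n_perm + 1)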

  4. Statistical inference for template aging

    Science.gov (United States)

    Schuckers, Michael E.

    2006-04-01

    A change in classification error rates for a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first of these is the use of a generalized linear model to determine if these error rates change linearly over time. This approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimation, not the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1. The results of these applications are discussed.
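
    The first approach amounts to a binomial regression of match outcomes on elapsed time; the Wald test on the time coefficient answers whether the error rate drifts. A sketch with statsmodels on hypothetical data (the drift magnitude and variable names are invented):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        days = rng.integers(0, 365, size=500).astype(float)  # time since enrollment
        # Hypothetical: the false non-match probability drifts upward with time.
        p_err = 1.0 / (1.0 + np.exp(-(-3.0 + 0.004 * days)))
        error = rng.binomial(1, p_err)                       # 1 = failed comparison

        X = sm.add_constant(days)
        fit = sm.GLM(error, X, family=sm.families.Binomial()).fit()
        print(fit.summary())  # the z-test on the time coefficient tests aging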

  5. Statistical learning and selective inference.

    Science.gov (United States)

    Taylor, Jonathan; Tibshirani, Robert J

    2015-06-23

    We describe the problem of "selective inference." This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have "cherry-picked"--searched for the strongest associations--means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.
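
    The need for the higher bar is easy to see by simulation: among many pure-noise features, the one most correlated with the response typically clears the naive per-test threshold. This illustration shows the cherry-picking effect itself; it is not one of the paper's selective-inference procedures:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        n, m = 50, 200                       # 50 samples, 200 candidate features
        y = rng.normal(size=n)
        X = rng.normal(size=(n, m))          # every feature is pure noise

        # Cherry-pick the most correlated feature, then test it naively.
        r = np.array([stats.pearsonr(X[:, j], y)[0] for j in range(m)])
        best = int(np.argmax(np.abs(r)))
        print("naive p-value of the selected feature:",
              stats.pearsonr(X[:, best], y)[1])  # usually well below 0.05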

  6. Statistical uncertainties and unrecognized relationships

    International Nuclear Information System (INIS)

    Rankin, J.P.

    1985-01-01

    Hidden relationships in specific designs directly contribute to inaccuracies in reliability assessments. Uncertainty factors at the system level may sometimes be applied in attempts to compensate for the impact of such unrecognized relationships. Often uncertainty bands are used to relegate unknowns to a miscellaneous category of low-probability occurrences. However, experience and modern analytical methods indicate that perhaps the dominant, most probable and significant events are sometimes overlooked in statistical reliability assurances. The author discusses the utility of two unique methods of identifying the otherwise often unforeseeable system interdependencies for statistical evaluations. These methods are sneak circuit analysis and a checklist form of common cause failure analysis. Unless these techniques (or a suitable equivalent) are also employed along with the more widely-known assurance tools, high reliability of complex systems may not be adequately assured. This concern is indicated by specific illustrations. 8 references, 5 figures

  7. A varying coefficient model to measure the effectiveness of mass media anti-smoking campaigns in generating calls to a Quitline.

    Science.gov (United States)

    Bui, Quang M; Huggins, Richard M; Hwang, Wen-Han; White, Victoria; Erbas, Bircan

    2010-01-01

    Anti-smoking advertisements are an effective population-based smoking reduction strategy. The Quitline telephone service provides a first point of contact for adults considering quitting. Because of data complexity, the relationship between anti-smoking advertising placement, intensity, and time trends in total call volume is poorly understood. In this study we use a recently developed semi-varying coefficient model to elucidate this relationship. Semi-varying coefficient models comprise parametric and nonparametric components. The model is fitted to the daily number of calls to Quitline in Victoria, Australia to estimate a nonparametric long-term trend and parametric terms for day-of-the-week effects, and to clarify the relationship with target audience rating points (TARPs) for the Quit and nicotine replacement advertising campaigns. The number of calls to Quitline increased with the TARP value of both the Quit and other smoking cessation advertisements; the TARP values associated with the Quit program were almost twice as effective. The varying coefficient term was statistically significant for peak periods with little or no advertising. Semi-varying coefficient models are useful for modeling public health data when there is little or no information on other factors related to the at-risk population. These models are well suited to modeling call volume to Quitline, because the varying coefficient allows the underlying time trend to depend on fixed covariates that also vary with time, thereby explaining more of the variation in the call model.
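
    A rough approximation to the model structure, parametric day-of-week and TARP terms plus a nonparametric long-term trend, can be sketched by combining ordinary least squares with a lowess smooth of the residuals. This illustrates the decomposition only, not the authors' estimator, and all data below are simulated:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        t = np.arange(730.0)                              # two years, daily
        dow = (t % 7).astype(int)
        tarps = rng.gamma(2.0, 20.0, size=t.size)         # hypothetical TARPs
        calls = (100 + 10 * np.sin(2 * np.pi * t / 365)   # slow seasonal trend
                 + 5 * (dow < 5) + 0.3 * tarps + rng.normal(0, 5, t.size))

        # Parametric part: day-of-week dummies (Sunday as baseline) and TARPs.
        D = np.column_stack([(dow == d).astype(float) for d in range(6)])
        X = sm.add_constant(np.column_stack([D, tarps]))
        fit = sm.OLS(calls, X).fit()

        # Nonparametric part: smooth the residuals to recover the trend.
        trend = sm.nonparametric.lowess(fit.resid, t, frac=0.15,
                                        return_sorted=False)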

  8. Fisher's Contributions to Statistics

    Indian Academy of Sciences (India)

    T Krishnan. General Article. Resonance – Journal of Science Education, Volume 2, Issue 9, September 1997, pp. 32-37. Permanent link: https://www.ias.ac.in/article/fulltext/reso/002/09/0032-0037

  9. ASURV: Astronomical SURVival Statistics

    Science.gov (United States)

    Feigelson, E. D.; Nelson, P. I.; Isobe, T.; LaValley, M.

    2014-06-01

    ASURV (Astronomical SURVival Statistics) provides astronomy survival analysis for right- and left-censored data including the maximum-likelihood Kaplan-Meier estimator and several univariate two-sample tests, bivariate correlation measures, and linear regressions. ASURV is written in FORTRAN 77, and is stand-alone and does not call any specialized libraries.

  10. Elementary statistical physics

    CERN Document Server

    Kittel, C

    1965-01-01

    This book is intended to help physics students attain a modest working knowledge of several areas of statistical mechanics, including stochastic processes and transport theory. The areas discussed are among those forming a useful part of the intellectual background of a physicist.

  11. SAPS, Crime statistics

    African Journals Online (AJOL)

    incidents' refer to 'incidents such as labour disputes and dissatisfaction with service delivery in which violence erupted and SAPS action was required to restore peace and order'.26. It is apparent from both the SAPS statistics and those provided by the Municipal IQ Hotspots. Monitor, that public protests and gatherings are.

  12. Statistical core design

    International Nuclear Information System (INIS)

    Oelkers, E.; Heller, A.S.; Farnsworth, D.A.; Kearfott, K.J.

    1978-01-01

    The report describes the statistical analysis of the DNBR thermal-hydraulic margin of a 3800 MWt, 205-FA core under design overpower conditions. The analysis used LYNX-generated data at predetermined values of the input variables whose uncertainties were to be statistically combined. LYNX data were used to construct an efficient response surface model in the region of interest; the statistical analysis was accomplished through the evaluation of core reliability, utilizing propagation of the uncertainty distributions of the inputs. The response surface model was implemented in both the analytical error propagation and Monte Carlo techniques. The basic structural units relating to the acceptance criteria are fuel pins. Therefore, the statistical population of pins with minimum DNBR values smaller than specified values is determined. The specified values are designated relative to the most probable and maximum design DNBR values on the power-limiting pin used in present design analysis, so that gains over the present design criteria could be assessed for specified probabilistic acceptance criteria. The results are equivalent to gains ranging from 1.2 to 4.8 percent of rated power, depending on the acceptance criterion. The corresponding acceptance criteria range from 95 percent confidence that no pin will be in DNB to 99.9 percent of the pins, which are expected to avoid DNB
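
    The Monte Carlo implementation reduces to a short loop: sample the input uncertainties, push each draw through the response surface for minimum DNBR, and count how often the limit is violated. The linear surface and the input distributions below are placeholders, not the LYNX-derived model:

        import numpy as np

        rng = np.random.default_rng(6)

        def min_dnbr(power, flow, t_inlet):
            # Placeholder first-order response surface around the nominal point
            # (inputs are normalized deviations from nominal conditions).
            return 2.0 - 10.0 * power + 5.0 * flow - 3.0 * t_inlet

        n = 100_000
        power = rng.normal(0.0, 0.05, n)   # core power uncertainty
        flow = rng.normal(0.0, 0.03, n)    # coolant flow uncertainty
        t_in = rng.normal(0.0, 0.01, n)    # inlet temperature uncertainty

        dnbr = min_dnbr(power, flow, t_in)
        limit = 1.3
        print(f"P(min DNBR < {limit}) = {np.mean(dnbr < limit):.3f}")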

  13. Illinois forest statistics, 1985.

    Science.gov (United States)

    Jerold T. Hahn

    1987-01-01

    The third inventory of the timber resource of Illinois shows a 1% increase in commercial forest area and a 40% gain in growing-stock volume between 1962 and 1985. Presented are highlights and statistics on area, volume, growth, mortality, removals, utilization, and biomass.

  14. Air Carrier Traffic Statistics.

    Science.gov (United States)

    2013-11-01

    This report contains airline operating statistics for large certificated air carriers based on data reported to U.S. Department of Transportation (DOT) by carriers that hold a certificate issued under Section 401 of the Federal Aviation Act of 1958 a...

  15. Introduction to statistics

    CERN Multimedia

    CERN. Geneva

    2005-01-01

    The three lectures will present an introduction to statistical methods as used in High Energy Physics. As the time will be very limited, the course will seek mainly to define the important issues and to introduce the most widely used tools. Topics will include the interpretation and use of probability, estimation of parameters and testing of hypotheses.

  16. Introduction to Statistics course

    CERN Multimedia

    CERN. Geneva HR-RFA

    2006-01-01

    The four lectures will present an introduction to statistical methods as used in High Energy Physics. As the time will be very limited, the course will seek mainly to define the important issues and to introduce the most widely used tools. Topics will include the interpretation and use of probability, estimation of parameters and testing of hypotheses.

  17. Fisher's Contributions to Statistics

    Indian Academy of Sciences (India)

    of statistics are multifarious, profound and long-lasting. In fact, he can be ... that it is not even possible to mention them all in this short article. ... At that time the term 'likelihood' as oppo- .... Dedicated to the memory of Fisher soon after his death.

  18. Michigan forest statistics, 1980.

    Science.gov (United States)

    Gerhard K. Raile; W. Brad Smith

    1983-01-01

    The fourth inventory of the timber resource of Michigan shows a 7% decline in commercial forest area and a 27% gain in growing-stock volume between 1966 and 1980. Highlights and statistics are presented on area, volume, growth, mortality, removals, utilization, and biomass.

  19. Air Carrier Traffic Statistics.

    Science.gov (United States)

    2012-07-01

    This report contains airline operating statistics for large certificated air carriers based on data reported to U.S. Department of Transportation (DOT) by carriers that hold a certificate issued under Section 401 of the Federal Aviation Act of 1958 a...

  20. Geometric statistical inference

    International Nuclear Information System (INIS)

    Periwal, Vipul

    1999-01-01

    A reparametrization-covariant formulation of the inverse problem of probability is explicitly solved for finite sample sizes. The inferred distribution is explicitly continuous for finite sample size. A geometric solution of the statistical inference problem in higher dimensions is outlined